arXiv:1611.04732v1 [math.AC] 15 Nov 2016
BETTI NUMBERS OF CERTAIN SUM IDEALS
JOYDIP SAHA, INDRANATH SENGUPTA, AND GAURAB TRIPATHI
ABSTRACT. In this paper we compute the Betti numbers for ideals of the form I_1(XY) + J, where X and Y are matrices and J is the ideal generated by the 2 × 2 minors of the matrix consisting of any two rows of X.
1. INTRODUCTION
Let K be a field and let {x_{ij} : 1 ≤ i, j ≤ n}, {y_j : 1 ≤ j ≤ n} be indeterminates over K, with n ≥ 2. Let R := K[x_{ij}, y_j] denote the polynomial algebra over K. Let X denote an n × n matrix whose entries belong to the ideal ⟨{x_{ij} : 1 ≤ i ≤ n, 1 ≤ j ≤ n}⟩. Let Y = (y_j)_{n×1} be the n × 1 column matrix. Let I_1(XY) denote the ideal generated by the polynomials g_j = Σ_{i=1}^{n} x_{ji} y_i, j = 1, . . . , n, which are the 1 × 1 minors, or entries, of the n × 1 matrix XY. The primality, primary decomposition and Betti numbers of ideals of the form I_1(XY) have been studied in [9] and [10], with the help of Gröbner bases for I_1(XY).
Ideals of the form I1 (XY ) + J are particularly interesting because they
occur in several geometric considerations like linkage and generic residual intersection of polynomial ideals, especially in the context of syzygies.
Bruns-Kustin-Miller [1] resolved the ideal I1 (XY )+Imin(m,n) (X), where X
is a generic m×n matrix and Y is a generic n×1 matrix. Johnson-McLoud
[5] proved certain properties for the ideals of the form I1 (XY ) + I2 (X),
where X is a generic symmetric matrix and Y is either generic or generic
alternating.
We say that I and J intersect transversally if I ∩ J = IJ. Suppose that F_• resolves R/I and G_• resolves R/J minimally. It is interesting to note that if I and J intersect transversally, then the tensor product complex F_• ⊗_R G_• resolves R/(I + J) minimally; see Lemma 3.7. Therefore, it is useful to know if two ideals intersect transversally, especially when one is trying to compute minimal free resolutions and Betti numbers for ideals of the form I + J through iterated techniques; see [4].

2010 Mathematics Subject Classification. Primary 13D02; Secondary 13C40, 13P10, 13D07.
Key words and phrases. Gröbner basis, Betti numbers, transversal intersection, mapping cone.
The first author thanks UGC for the Senior Research Fellowship. The second author is the corresponding author and is supported by the research project EMR/2015/000776 sponsored by the SERB, Government of India. The third author thanks CSIR for the Senior Research Fellowship.
2. NOTATION AND THE MAIN THEOREM
• If X is generic and i < j, let X̃_{ij} denote the 2 × n matrix
  X̃_{ij} = ( x_{i1}  x_{i2}  · · ·  x_{in}
             x_{j1}  x_{j2}  · · ·  x_{jn} ).
• If X is generic symmetric and i < j, let
  X̃_{ij} = ( x_{1i}  · · ·  x_{ii}  · · ·  x_{ij}  · · ·  x_{in}
             x_{1j}  · · ·  x_{ij}  · · ·  x_{jj}  · · ·  x_{jn} ).
• Let G_{ij} denote the set of all 2 × 2 minors of X̃_{ij}.
• Let I_2(X̃_{ij}) denote the ideal generated by G_{ij}.
Our aim in this paper is to prove the following theorem:
Theorem 2.1. Let X = (x_{ij}) be either the generic or the generic symmetric matrix of order n. Let 1 ≤ i < j ≤ n.
(1) The total Betti numbers for the ideal I_2(X̃_{ij}) + ⟨g_i, g_j⟩ are given by b_0 = 1, b_1 = \binom{n}{2} + 2, b_2 = 2\binom{n}{3} + n, b_{i+1} = (i+1)\binom{n}{i+2} + (i−1)\binom{n}{i+1} for 2 ≤ i ≤ n − 2, and b_n = n − 2.
(2) Let 1 ≤ k ≤ n − 2. Let β_{k,p} denote the p-th total Betti number for the ideal I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}, . . . , g_{l_k}⟩, such that 1 ≤ l_1 < . . . < l_k ≤ n and l_t is the smallest element of the set {1, 2, . . . , n} \ {i, j, l_1, . . . , l_{t−1}}, for every 1 ≤ t ≤ k. These are given by β_{k,0} = 1, β_{k,p} = β_{k−1,p−1} + β_{k−1,p} for 1 ≤ p ≤ n + k − 1, and β_{k,n+k} = n − 2.
In particular, the total Betti numbers for the ideal I_1(XY) + I_2(X̃_{ij}) are β_{n−2,0}, β_{n−2,1}, . . . , β_{n−2,2n−2}.
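The formulas in Theorem 2.1 are straightforward to tabulate. The following short Python sketch (not part of the original paper, and assuming the reading of the binomial coefficients given above) computes the total Betti numbers of part (1) and iterates the Pascal-type recursion of part (2); for n = 4 and n = 5 it reproduces the last three, respectively four, rows of the tables in the Example at the end of Section 5.

```python
from math import comb

def betti_base(n):
    # Part (1): total Betti numbers of I_2(X~_ij) + <g_i, g_j>.
    b = [1, comb(n, 2) + 2, 2 * comb(n, 3) + n]
    b += [(i + 1) * comb(n, i + 2) + (i - 1) * comb(n, i + 1) for i in range(2, n - 1)]
    b.append(n - 2)
    return b

def betti_stage(n, k):
    # Part (2): add k further linear forms g_{l_1}, ..., g_{l_k} one at a time;
    # each step applies the recursion beta_{k,p} = beta_{k-1,p-1} + beta_{k-1,p}.
    b = betti_base(n)
    for _ in range(k):
        b = [1] + [b[p - 1] + b[p] for p in range(1, len(b))] + [n - 2]
    return b

if __name__ == "__main__":
    for n in (4, 5):
        for k in range(n - 1):   # k = 0, ..., n - 2
            print(n, k, betti_stage(n, k))
```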
3. PRELIMINARIES
3.1. Determinantal Ideals. We recall some useful results on determinantal ideals pertaining to our work. We refer to [2], [3], [8] for detailed discussions on these.
Theorem 3.1. Let K be a field and let x_{ij}, 1 ≤ i ≤ m, 1 ≤ j ≤ n, be indeterminates over K. Let A = (x_{ij}) be the m × n matrix of indeterminates and let I_m(A) denote the ideal generated by the maximal minors of A. The set of maximal minors of A is a universal Gröbner basis for the ideal I_m(A).
Proof. See [2].
The Eagon-Northcott Complex. We present the relevant portion from the book [3] here. Let F = R^f and G = R^g be free modules of finite rank over the polynomial ring R. The Eagon-Northcott complex of a map α : F → G (or that of a matrix A representing α) is a complex
EN(α) : 0 → (Sym_{f−g} G)^* ⊗ ∧^f F --d_{f−g}--> (Sym_{f−g−1} G)^* ⊗ ∧^{f−1} F --d_{f−g−1}--> · · · --d_3--> (Sym_2 G)^* ⊗ ∧^{g+2} F --d_2--> G^* ⊗ ∧^{g+1} F --d_1--> ∧^g F --∧^g α--> ∧^g G.
Here Sym_k G is the k-th symmetric power of G and M^* = Hom_R(M, R). The maps d_j are defined as follows. First we define a diagonal map
(Sym_k G)^* → G^* ⊗ (Sym_{k−1} G)^*,   u ↦ Σ_i u′_i ⊗ u″_i,
as the dual of the multiplication map G ⊗ Sym_{k−1} G → Sym_k G in the symmetric algebra of G. Next we define an analogous diagonal map
∧^k F → F ⊗ ∧^{k−1} F,   v ↦ Σ_i v′_i ⊗ v″_i,
as the dual of the multiplication in the exterior algebra of F^*.
Theorem 3.2 (Eagon-Northcott). The Eagon-Northcott complex is a free resolution of R/I_g(α) if and only if grade(I_g(α)) = f − g + 1, where I_g(α) denotes the ideal of g × g minors of the matrix A representing α.
Proof. See [3].
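In this paper the Eagon-Northcott complex is only used for the 2 × n matrix X̃_{ij}, where g = 2 and f = n, so its free modules have ranks 1, \binom{n}{2}, 2\binom{n}{3}, . . . , (n−1)\binom{n}{n}. A one-line check (not from the paper) that these are the numbers appearing as the first rows of the tables in the Example of Section 5:

```python
from math import comb

def eagon_northcott_ranks(n):
    # Ranks of EN(alpha) for a generic 2 x n matrix: E_0 = R and E_k = R^(k*C(n, k+1)).
    return [1] + [k * comb(n, k + 1) for k in range(1, n)]

print(eagon_northcott_ranks(4))  # [1, 6, 8, 3]
print(eagon_northcott_ranks(5))  # [1, 10, 20, 15, 4]
```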
3.2. Mapping Cone. We present the relevant portion from the book [8] here. Let R be the polynomial ring. Let φ_• : (U′_•, d′_•) → (U_•, d_•) be a map of complexes of finitely generated R-modules. The mapping cone of φ_• is the complex W_• with differential δ_• defined as follows. Let W_i = U′_{i−1} ⊕ U_i, with δ|_{U′_{i−1}} = −d′ + φ : U′_{i−1} → U′_{i−2} ⊕ U_{i−1} and δ|_{U_i} = d : U_i → U_{i−1} for each i.
Theorem 3.3. Let M be an ideal minimally generated by the polynomials f_1, . . . , f_r. Set M_i = ⟨f_1, . . . , f_i⟩, for 1 ≤ i ≤ r; thus M = M_r. For each i ≥ 1, we have the short exact sequence
0 → S/(M_i : f_{i+1}) --·f_{i+1}--> S/M_i → S/M_{i+1} → 0.
If resolutions of S/M_i and S/(M_i : f_{i+1}) are known, then we can construct a resolution of S/M_{i+1} by the mapping cone construction.
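In terms of ranks, the mapping cone of a comparison map F_• → G_•, where G_• resolves S/M_i and F_• resolves S/(M_i : f_{i+1}), has in homological degree p the free module G_p ⊕ F_{p−1}; its ranks therefore bound the Betti numbers of S/M_{i+1} from above, with equality only after cancelling unit entries (Lemma 3.12). The following sketch of this bookkeeping is not part of the paper; the sample input is the case n = 4 of Section 5.1, where the cone ranks [1, 8, 18, 17, 7, 1] reduce to the minimal total Betti numbers 1, 8, 12, 7, 2 of Theorem 2.1 after cancellation.

```python
def mapping_cone_ranks(res_quotient, res_colon):
    # res_quotient: ranks of a resolution of S/M_i; res_colon: ranks of a
    # resolution of S/(M_i : f_{i+1}).  Degree p of the cone is G_p (+) F_{p-1}.
    length = max(len(res_quotient), len(res_colon) + 1)
    return [(res_quotient[p] if p < len(res_quotient) else 0)
            + (res_colon[p - 1] if 0 <= p - 1 < len(res_colon) else 0)
            for p in range(length)]

# n = 4: C resolves R/(I_2(X~_12) + <g_1>), the Koszul complex resolves the colon ideal.
print(mapping_cone_ranks([1, 7, 14, 11, 3], [1, 4, 6, 4, 1]))  # [1, 8, 18, 17, 7, 1]
```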
Proof. See Construction 27.3 in [8].
3.3. Gröbner basis and transversal intersection of Ideals.
Lemma 3.4. Let h_1, h_2, . . . , h_n ∈ R be such that, with respect to a suitable monomial order on R, their leading terms are mutually coprime. Then h_1, h_2, . . . , h_n is a regular sequence in R.
Proof. See Lemma 4.3 in [9].
Lemma 3.5. Suppose that X is either generic or generic symmetric. The set G_{ij} is a Gröbner basis for the ideal I_2(X̃_{ij}), with respect to a suitable monomial order.
Proof. We choose the lexicographic monomial order given by the following ordering among the variables: x_{st} > x_{s′t′} if (s′, t′) > (s, t), and y_n > y_{n−1} > · · · > y_1 > x_{st} for all s, t. We now apply Lemma 4.2 in [9] to the matrix X^t and for k = 2.
Definition 1. Let T ⊂ R be a set of monomials. We define
supp(T ) = {(i, j, 0) | xij divides m for some m ∈ T } ∪
{(0, 0, k) | yk divides m for some m ∈ T }.
If T = {m}, then we write supp(m) instead of supp({m}).
Lemma 3.6. Let > be a monomial ordering on R. Let I and J be ideals in R, and let m(I) and m(J) denote the unique minimal generating sets of their leading ideals Lt(I) and Lt(J) respectively. Then I ∩ J = IJ if supp(m(I)) ∩ supp(m(J)) = ∅. In other words, the ideals I and J intersect transversally if the set of variables occurring in m(I) is disjoint from the set of variables occurring in m(J).
Proof. Let f ∈ I ∩ J; we show that f ∈ IJ. Now f ∈ I ∩ J implies that f ∈ I and therefore Lt(f) ∈ Lt(I). Hence, there exists m_i ∈ m(I) such that m_i | Lt(f). Similarly, there exists a monomial m_j ∈ m(J) such that m_j | Lt(f). Given that m_i and m_j have disjoint supports, we have m_i m_j | Lt(f), and this proves that Lt(f) ∈ Lt(IJ). We replace f by f − Lt(f)/(m_i m_j); the proof now follows by induction, since Lt(f − Lt(f)/(m_i m_j)) < Lt(f).
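The criterion of Lemma 3.6 is easy to test on small examples in a computer algebra system. The following sketch is not part of the paper: it takes two ideals generated by monomials in disjoint sets of variables (the simplest situation covered by the lemma, with variable names chosen purely for illustration) and verifies I ∩ J = IJ, computing the intersection by the standard elimination trick I ∩ J = (tI + (1 − t)J) ∩ K[x, y].

```python
from sympy import symbols, groebner

x11, x12, y1, y2, t = symbols('x11 x12 y1 y2 t')

# I = <x11*x12> and J = <y1*y2>: the supports of their (leading) generators are
# disjoint, so Lemma 3.6 predicts I ∩ J = I·J = <x11*x12*y1*y2>.
G = groebner([t * x11 * x12, (1 - t) * y1 * y2], t, x11, x12, y1, y2, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]   # eliminate t
print(intersection)   # expected: [x11*x12*y1*y2]
```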
3.4. Homological Lemmas.
Lemma 3.7. Let I and J be graded ideals in a graded ring R, such that I ∩ J = I · J. Suppose that F and G are minimal free resolutions of I and J respectively. Then F ⊗ G is a minimal free resolution for the graded ideal I + J.
Proof. Consider the short exact sequence 0 → I → R → R/I → 0 and tensor it with R/J over R. We get the exact sequence
0 → Tor_1(R/I, R/J) → I/I·J → R/J → R/(I + J) → 0.
The terms on the left are 0 since R is a flat R-module. Moreover, the kernel of the map I/I·J → R/J is (I ∩ J)/I·J. Therefore Tor_1(R/I, R/J) = 0 if and only if I ∩ J = I·J. By Corollary 1 to Theorem 3 of [6], Tor_1(R/I, R/J) = 0 implies that Tor_i(R/I, R/J) = 0 for all i ≥ 1. Therefore, H_i(F ⊗ G) ≅ Tor_i(R/I, R/J) ≅ 0 for all i ≥ 1 and H_0(F ⊗ G) ≅ R/(I + J). This proves that F ⊗ G resolves I + J. The resolution is minimal since F and G are minimal.
Lemma 3.8. Let
R^{a_1} --A_1--> R^{a_2} --A_2--> R^{a_3}
be an exact sequence of free modules. Let Q_1, Q_2, Q_3 be invertible matrices of sizes a_1, a_2, a_3 respectively. Then,
R^{a_1} --Q_2^{-1} A_1 Q_1--> R^{a_2} --Q_3^{-1} A_2 Q_2--> R^{a_3}
is also an exact sequence of free modules.
Proof. There is a commutative diagram of free modules whose rows are the two sequences
R^{a_1} --A_1--> R^{a_2} --A_2--> R^{a_3}
R^{a_1} --Q_2^{-1} A_1 Q_1--> R^{a_2} --Q_3^{-1} A_2 Q_2--> R^{a_3},
and whose vertical maps Q_1, Q_2, Q_3 from the second row to the first are isomorphisms. Therefore, R^{a_1} --Q_2^{-1} A_1 Q_1--> R^{a_2} --Q_3^{-1} A_2 Q_2--> R^{a_3} is exact since R^{a_1} --A_1--> R^{a_2} --A_2--> R^{a_3} is exact.
Corollary 3.9. Let
R^{a_1} --C--> R^{a_2} --B--> R^{a_3} --A--> R^{a_4}
be an exact sequence of free modules. Let P_1, P_2, P_3 be invertible matrices of sizes a_1, a_2, a_3 respectively. Then,
R^{a_1} --P_2^{-1} C P_1--> R^{a_2} --P_3 B P_2--> R^{a_3} --A P_3^{-1}--> R^{a_4}
is also an exact sequence of free modules.
Proof. Consider the sequence R^{a_1} --C--> R^{a_2} --B--> R^{a_3}. If we take Q_1 = P_1, Q_2 = P_2 and Q_3 = I and apply Lemma 3.8, we get that the sequence R^{a_1} --P_2^{-1} C P_1--> R^{a_2} --B P_2--> R^{a_3} is exact. We further note that the entire sequence R^{a_1} --P_2^{-1} C P_1--> R^{a_2} --B P_2--> R^{a_3} --A--> R^{a_4} is exact as well, since Im(B) = Im(BP_2) and P_2 is invertible. Let us now consider the sequence R^{a_2} --B P_2--> R^{a_3} --A--> R^{a_4}. We take Q_1 = Q_3 = I, Q_2 = P_3^{-1} and apply Lemma 3.8 to arrive at our conclusion.
Lemma 3.10. Let
· · · → R^{β_{n+1}} --A_{n+1}--> R^{β_n} --A_n--> R^{β_{n−1}} --A_{n−1}--> R^{β_{n−2}} → · · ·
be an exact sequence of free R-modules. Let a_{ij} denote the (i, j)-th entry of A_n. Suppose that a_{lm} = ±1 for some l and m, a_{li} = 0 for i ≠ m and a_{jm} = 0 for j ≠ l. Let A′_{n+1} be the matrix obtained by deleting the m-th row from A_{n+1}, A′_{n−1} the matrix obtained by deleting the l-th column from A_{n−1}, and A′_n the matrix obtained by deleting the l-th row and the m-th column from A_n. Then, the sequence
· · · → R^{β_{n+1}} --A′_{n+1}--> R^{β_n − 1} --A′_n--> R^{β_{n−1} − 1} --A′_{n−1}--> R^{β_{n−2}} → · · ·
is exact.
Proof. The fact that the latter sequence is a complex is self-evident; we need to prove its exactness. By the previous lemma we may assume that l = m = 1, for we can choose elementary matrices to permute rows and columns, and these matrices are always invertible. Now, due to exactness of the first complex we have A_{n−1} A_n = 0. This implies that the first column of A_{n−1} is 0, which implies that Im(A′_{n−1}) = Im(A_{n−1}); therefore exactness is preserved at R^{β_{n−2}}. By a similar argument, exactness is preserved at R^{β_{n−1} − 1}.
Let (x) denote a tuple with entries from R. If (x) ∈ ker(A′_n), then (0, x) ∈ ker(A_n). There exists y ∈ R^{β_{n+1}} such that A_{n+1} y = (0, x). It follows that A′_{n+1} y = (x), proving exactness at R^{β_n − 1}. Finally, since a_{11} = ±1, one checks that ker(A′_{n+1}) = ker(A_{n+1}), so exactness at R^{β_{n+1}} is preserved as well.
Lemma 3.11. Let A be a q × p matrix over R with A_{ij} = ±1, for some i and j. Let C be a p × s matrix and B an r × q matrix over R. There exist an invertible q × q matrix X and an invertible p × p matrix Y such that
(i) (XAY)_{kj} = δ_{ki} and (XAY)_{ik} = δ_{jk}; that is, the i-th row and the j-th column of XAY vanish except for a 1 at the (i, j)-th spot;
(ii) (Y^{−1}C)_{kl} = C_{kl} for k ≠ j and (Y^{−1}C)_{jl} = C_{jl} + Σ_{t≠j} a_{it} C_{tl};
(iii) (BX^{−1})_{kl} = B_{kl} for l ≠ i and (BX^{−1})_{ki} = B_{ki} + Σ_{t≠i} a_{tj} B_{kt}.
Proof. (i) We prove the case a_{ij} = 1; the other case is similar. We take Y = Π_{k≠j} E_{jk}(−a_{ik}) and X = Π_{k≠i} E_{ki}(−a_{kj}), where E_{kl}(α) denotes the matrix E with E_{kl} = α, E_{tt} = 1 and E_{ut} = 0 for u ≠ t and (u, t) ≠ (k, l).
(ii) and (iii) are easy to verify.
Lemma 3.12. Let A be a q × p matrix, C a p × s matrix and B an r × q matrix over R. The matrices A, B and C satisfy property P_{ij} if they satisfy
the following conditions:
• Aij = 1 , Aik ∈ m for k 6= j and Akj ∈ m for k 6= i;
• Bki ∈ m, for 1 ≤ k ≤ r;
• Cjl ∈ m, for 1 ≤ l ≤ s.
The matrices XAY , BX −1 and Y −1 C satisfy property Pij , if A, B, C
satisfy property Pij .
Proof. This follows from the above lemma since aik and akj belong to m.
4. MINIMAL FREE RESOLUTION OF I_2(X̃_{ij}) + ⟨g_i, g_j⟩
Lemma 4.1. Let X be generic or generic symmetric. Let i < j.
(i) ht(I_2(X̃_{ij})) = n − 1.
(ii) The Eagon-Northcott complex minimally resolves the ideal I_2(X̃_{ij}).
Proof. (i) We show that f_1, . . . , f_{n−1}, given by f_k = x_{ik} x_{j,k+1} − x_{jk} x_{i,k+1}, 1 ≤ k ≤ n − 1, form a regular sequence.
Let us first assume that X is generic. We take the lexicographic monomial order induced by the following ordering among the variables: x_{i1} > x_{i2} > · · · > x_{in} > x_{j2} > x_{j3} > · · · > x_{jn} > x_{j1} > the variables not appearing in X̃_{ij}, with the variables y_k smaller than x_{j1}. Then Lt(f_k) = x_{ik} x_{j,k+1} and hence gcd(Lt(f_k), Lt(f_l)) = 1 for every k ≠ l. Therefore, f_1, . . . , f_{n−1} is a regular sequence by Lemma 3.4 and hence ht(I_2(X̃_{ij})) ≥ n − 1. On the other hand, ht(I_2(X̃_{ij})) ≤ n − 1 by Theorem 13.10 in [7]. Hence, ht(I_2(X̃_{ij})) = n − 1.
If X is generic symmetric, we have to choose the lexicographic monomial order induced by x_{ii} > x_{ij} > x_{1i} > x_{2i} > · · · > x_{i−1,i} > x_{i,i+1} > · · · > x_{in} > x_{jj} > · · · > x̂_{ij} > · · · > x_{j−1,j} > x_{j,j+1} > · · · > x_{jn} > the variables x_{kl} not appearing in X̃_{ij}, with the variables y_p smaller than x_{jn}.
(ii) The height of I_2(X̃_{ij}) is n − 1, which is the maximum possible. Hence, the Eagon-Northcott complex minimally resolves the ideal I_2(X̃_{ij}).
Lemma 4.2. Let X be generic or generic symmetric. Let i < j. Then I_2(X̃_{ij}) ∩ ⟨g_i⟩ = I_2(X̃_{ij}) · ⟨g_i⟩, that is, the ideals I_2(X̃_{ij}) and ⟨g_i⟩ intersect transversally.
Proof. Let X be generic. We choose the lexicographic monomial order given by the following ordering among the variables: x_{st} > x_{s′t′} if (s′, t′) > (s, t), and y_n > y_{n−1} > · · · > y_1 > x_{st} for all s, t. Then, by Lemma 3.5 the set of all 2 × 2 minors forms a Gröbner basis for the ideal I_2(X̃_{ij}). Clearly, the minimal generating set m(Lt(I_2(X̃_{ij}))) does not involve the indeterminates x_{in} and y_n, whereas Lt(g_i) = x_{in} y_n. Hence, the supports of m(Lt(I_2(X̃_{ij}))) and m(Lt(g_i)) are disjoint. Therefore, by Lemma 3.6 we are done.
Let X be generic symmetric. Once we choose the correct monomial order, the rest of the proof is similar to the generic case. Suppose that (i, j) = (n − 1, n). We choose the lexicographic monomial order given by the following ordering among the variables:
y_1 > y_n > y_{n−1} > · · · > y_2 > x_{n−1,n−1} > x_{n−1,n} > x_{1,n−1} > x_{2,n−1} > · · · > x_{n−2,n−1} > x_{nn} > x_{1n} > · · · > x_{n−2,n} > x_{st} for all other s, t.
Suppose that (i, j) ≠ (n − 1, n). We choose the lexicographic monomial order given by the following ordering among the variables:
y_n > y_{n−1} > · · · > y_1 > x_{ii} > x_{ij} > x_{1i} > x_{2i} > · · · > x_{i−1,i} > x_{i,i+1} > · · · > x_{in} > x_{jj} > · · · > x_{j−1,j} > x_{j,j+1} > · · · > x_{jn} > x_{st} for all other s, t.
Lemma 4.3. Let X be generic and i < j. Then (I_2(X̃_{ij}) + ⟨g_i⟩ : g_j) = ⟨x_{i1}, . . . , x_{in}⟩. If X is generic symmetric and i < j, then (I_2(X̃_{ij}) + ⟨g_i⟩ : g_j) = ⟨x_{1i}, . . . , x_{i−1,i}, x_{ii}, . . . , x_{in}⟩.
Proof. Let X be generic. We have x_{it} g_j = x_{jt} g_i + Σ_{k=1}^{n} (x_{it} x_{jk} − x_{ik} x_{jt}) y_k. Hence, ⟨x_{i1}, · · · , x_{in}⟩ ⊆ (I_2(X̃_{ij}) + ⟨g_i⟩ : g_j). Moreover, I_2(X̃_{ij}) + ⟨g_i⟩ ⊆ ⟨x_{i1}, · · · , x_{in}⟩ and g_j ∉ ⟨x_{i1}, · · · , x_{in}⟩. The ideal ⟨x_{i1}, · · · , x_{in}⟩ being a prime ideal, it follows that ⟨x_{i1}, · · · , x_{in}⟩ ⊇ (I_2(X̃_{ij}) + ⟨g_i⟩ : g_j). The proof for the generic symmetric case is similar.
5. RESOLUTION OF THE SUM IDEALS
Our aim is to construct a minimal free resolution for the ideal I_2(X̃_{ij}) + ⟨g_1, . . . , g_n⟩. We have proved that the ideals I_2(X̃_{ij}) and ⟨g_i⟩ intersect transversally; see Lemma 4.2. The ideal I_2(X̃_{ij}) + ⟨g_i⟩ can therefore be resolved minimally by Lemma 3.7. We have also proved that the ideals I_2(X̃_{ij}) + ⟨g_i⟩ and ⟨g_j⟩ have linear quotient; see Lemma 4.3. Therefore, the ideal I_2(X̃_{ij}) + ⟨g_i, g_j⟩ can be resolved by the mapping cone construction. A minimal free resolution can then be extracted from this resolution by applying Lemma 3.12. Next, we will show that the ideal I_2(X̃_{ij}) + ⟨g_i, g_j⟩ intersects transversally with the ideal ⟨g_{l_1}⟩, if l_1 is the minimum of the set {1, 2, . . . , n} \ {i, j}; see Lemma 5.4. Therefore, the ideal I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}⟩ can be resolved minimally by Lemma 3.7. Proceeding in this manner, we will be able to show that the ideals I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}, . . . , g_{l_k}⟩ and ⟨g_{l_{k+1}}⟩ intersect transversally, if 1 ≤ l_1 < . . . < l_k < l_{k+1} ≤ n and l_{k+1} is the smallest element of the set {1, 2, . . . , n} \ {i, j, l_1, . . . , l_k}; see Lemma 5.4. This finally gives us a minimal free resolution for the ideal I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}, . . . , g_{l_{n−2}}⟩, with 1 ≤ l_1 < . . . < l_{n−2} ≤ n and l_t ∉ {i, j} for every t.
Let us assume that X is generic and i = 1 and j = 2. The proofs for general i and j, with i < j, would be similar according to the aforesaid scheme. The proofs in the case when X is generic symmetric would be similar as well. Comments for general i < j and the symmetric case have been made whenever necessary.
5.1. A minimal free resolution for I_2(X̃_{12}) + ⟨g_1, g_2⟩. The minimal free resolution of I_2(X̃_{12}) is given by the Eagon-Northcott complex, which is the following:
E : 0 → E_{n−1} → · · · → E_k --δ_k--> E_{k−1} → · · · → E_1 → E_0 → R/I_2(X̃_{12}) → 0,
where E_0 ≅ R^1, E_k = R^{k\binom{n}{k+1}}, and the map δ_k : E_k → E_{k−1} is defined as
δ_k((e_{i_1} ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_2^{k−1}) = Σ_{s=1}^{k+1} x_{2s} (e_{i_1} ∧ · · · ∧ ê_s ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_2^{k−2},
δ_k((e_{i_1} ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^{k−1}) = Σ_{s=1}^{k+1} (−1)^{s+1} x_{1s} (e_{i_1} ∧ · · · ∧ ê_s ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^{k−2},
δ_k((e_{i_1} ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^j v_2^{k−j−1}) = Σ_{s=1}^{k+1} (−1)^{s+1} x_{1s} (e_{i_1} ∧ · · · ∧ ê_s ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^{j−1} v_2^{k−j−1} + Σ_{s=1}^{k+1} x_{2s} (e_{i_1} ∧ · · · ∧ ê_s ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^j v_2^{k−j−2},
for every ordered (k + 1)-tuple (i_1, i_2, · · · , i_{k+1}), with 1 ≤ i_1 < · · · < i_{k+1} ≤ n, and for every j = 1, 2, · · · , k − 2.
A minimal resolution of ⟨g_1⟩ is given by
G : 0 → R --g_1--> R → R/⟨g_1⟩ → 0.
The ideals I_2(X̃_{12}) and ⟨g_1⟩ intersect transversally by Lemma 4.2. Therefore, by Lemma 3.7, a minimal free resolution for I_2(X̃_{12}) + ⟨g_1⟩ is given by the tensor product complex
E ⊗ G : 0 → E_{n−1} → · · · → E_{k+1} ⊕ E_k --ψ_{k+1}--> E_k ⊕ E_{k−1} → · · · → E_0 → R/(I_2(X̃_{12}) + ⟨g_1⟩) → 0,
such that ψ_k : E_k ⊕ E_{k−1} → E_{k−1} ⊕ E_{k−2} is the map defined as
ψ_k((e_{i_1} ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^j v_2^{k−j−1}) = δ_k((e_{i_1} ∧ · · · ∧ e_{i_{k+1}}) ⊗ v_1^j v_2^{k−j−1}),
ψ_k((e_{i_1} ∧ · · · ∧ e_{i_k}) ⊗ v_1^j v_2^{k−j−2}) = (−1)^{k−1} g_1 (e_{i_1} ∧ · · · ∧ e_{i_k}) ⊗ v_1^j v_2^{k−j−2} + δ_{k−1}((e_{i_1} ∧ · · · ∧ e_{i_k}) ⊗ v_1^j v_2^{k−j−2}).
Now we find a minimal free resolution for I_2(X̃_{12}) + ⟨g_1, g_2⟩ by mapping cone. Let C_k := (E_• ⊗ G_•)_k. We have proved in Lemma 4.3 that (I_2(X̃_{12}) + ⟨g_1⟩ : g_2) = ⟨x_{11}, x_{12}, · · · , x_{1n}⟩, which is minimally resolved by the Koszul complex. Let us denote the Koszul complex by (F_•; σ_k), where σ_k is the k-th differential. We first construct the connecting map τ_• : F_• → E_• ⊗ G_•. Let us write F_k := R^{\binom{n}{k}} and C_k := R^{k\binom{n}{k+1}} ⊕ R^{(k−1)\binom{n}{k}}. The map τ_k : F_k → C_k is defined as
τ_k(e_{i_1} ∧ · · · ∧ e_{i_k}) = Σ_j y_j (e_{i_1} ∧ · · · ∧ e_{i_k} ∧ e_j) ⊗ v_1^{k−1} − (e_{i_1} ∧ · · · ∧ e_{i_k}) ⊗ v_1^{k−2}.
Let us choose the lexicographic ordering among the k-tuples (i_1, . . . , i_k), with 1 ≤ i_1 < · · · < i_k ≤ n, in order to write an ordered basis for R^{\binom{n}{k}}. We define the lexicographic ordering among the tuples (i_1, . . . , i_{k+1}, k − j, j), for j = 0, . . . , k and k = 1, . . . , n − 1, to order the basis elements of R^{k\binom{n}{k+1}}. Moreover, in the free module C_k = R^{k\binom{n}{k+1}} ⊕ R^{(k−1)\binom{n}{k}}, we order the basis elements in such a way that those of R^{k\binom{n}{k+1}} appear first. The matrix representation of τ_k with respect to the chosen ordered bases is the block column
[ A_{k\binom{n}{k+1} × \binom{n}{k}} ]
[ −I_{\binom{n}{k} × \binom{n}{k}} ]
[ 0_{(k−2)\binom{n}{k} × \binom{n}{k}} ].
Theorem 5.1. The following diagram commutes for every k = 1, . . . , n − 1:
F_{k+1} --τ_{k+1}--> C_{k+1}
   | σ_{k+1}            | ψ_{k+1}
F_k --τ_k--> C_k
that is, τ_k ∘ σ_{k+1} = ψ_{k+1} ∘ τ_{k+1}.
Proof. It suffices to prove the statement for a basis element e_{i_1} ∧ · · · ∧ e_{i_{k+1}} of F_{k+1}. Without loss of generality we consider e_1 ∧ · · · ∧ e_{k+1}. We first compute (τ_k ∘ σ_{k+1})(e_1 ∧ · · · ∧ e_{k+1}):
σ_{k+1}(e_1 ∧ · · · ∧ e_{k+1}) = Σ_{j=1}^{k+1} (−1)^{j+1} x_{1j} (e_1 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1}),
and therefore
(τ_k ∘ σ_{k+1})(e_1 ∧ · · · ∧ e_{k+1}) = Σ_{j=1}^{k+1} (−1)^{j+1} x_{1j} [ Σ_{s=1}^{n} y_s (e_1 ∧ e_2 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1} ∧ e_s) ⊗ v_1^{k−1} ] − Σ_{j=1}^{k+1} (−1)^{j+1} x_{1j} (e_1 ∧ e_2 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−2}.
We now compute (ψ_{k+1} ∘ τ_{k+1})(e_1 ∧ · · · ∧ e_{k+1}). First,
τ_{k+1}(e_1 ∧ · · · ∧ e_{k+1}) = Σ_{s=1}^{n} y_s (e_1 ∧ · · · ∧ e_{k+1} ∧ e_s) ⊗ v_1^k − (e_1 ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−1}.
Applying ψ_{k+1} and expanding according to the definitions of δ_{k+1} and ψ_{k+1}, the first summand contributes the terms
Σ_{j=1}^{k+1} Σ_{s=1}^{n} (−1)^{j+1} x_{1j} y_s (e_1 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1} ∧ e_s) ⊗ v_1^{k−1},
together with the terms in which the last factor e_s itself is removed; after reindexing, the latter add up to (−1)^k g_1 (e_1 ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−1}. The second summand contributes
−(−1)^k g_1 (e_1 ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−1} − Σ_{j=1}^{k+1} (−1)^{j+1} x_{1j} (e_1 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−2}.
The two g_1-terms cancel and we are left with
Σ_{j=1}^{k+1} Σ_{s=1}^{n} (−1)^{j+1} x_{1j} y_s (e_1 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1} ∧ e_s) ⊗ v_1^{k−1} − Σ_{j=1}^{k+1} (−1)^{j+1} x_{1j} (e_1 ∧ · · · ∧ ê_j ∧ · · · ∧ e_{k+1}) ⊗ v_1^{k−2},
which is precisely (τ_k ∘ σ_{k+1})(e_1 ∧ · · · ∧ e_{k+1}) computed above.
Hence the mapping cone M(E_• ⊗ G_•; F_•) gives us a resolution for I_2(X̃_{12}) + ⟨g_1, g_2⟩, as described in Section 3.2. However, this resolution is not minimal. We now construct a minimal free resolution from M(E_• ⊗ G_•; F_•).
The free resolution of the ideal I_2(X̃_{12}) + ⟨g_1, g_2⟩ constructed above is
0 → D_{n+2} --d_{n+2}--> D_{n+1} → · · · → D_k --d_k--> D_{k−1} → · · · → D_1 --d_1--> D_0 → 0,
such that D_k = F_{k−1} ⊕ C_k = R^{\binom{n}{k−1}} ⊕ (R^{k\binom{n}{k+1}} ⊕ R^{(k−1)\binom{n}{k}}) and d_k = (−σ_{k−1} + τ_{k−1}, ψ_k). Let us recall that the map ψ is the differential in the free resolution of I_2(X̃_{12}) + ⟨g_1⟩, the map σ is the differential in the Koszul resolution of ⟨x_{11}, x_{12}, . . . , x_{1n}⟩ and τ is the connecting homomorphism between the complexes defined above. Let us order bases for F_{k−1} and C_k with respect to the lexicographic ordering. Finally, we order a basis for D_k in such a way that the basis elements of F_{k−1} appear first, followed by the basis elements of C_k. Therefore, the matrix representation of the differential map d_k is given by
d_k = [ −σ_{k−1}   0  ]
      [  τ_{k−1}  ψ_k ],   where   τ_{k−1} = [ A ; −I ; 0 ].
The entries of the matrices representing σ_{k−1} and ψ_k can only belong to the maximal ideal ⟨x_{ij}, y_j⟩, since both are differentials of minimal free resolutions. The block matrix A also has its entries in the maximal ideal ⟨x_{ij}, y_j⟩. The only block with entries outside the maximal ideal ⟨x_{ij}, y_j⟩ is the identity block appearing in τ_{k−1}. Therefore, it is clear from the matrix representation of the map d_k that we can apply Lemma 3.12 repeatedly to get rid of the non-minimality. Hence, we get a minimal free resolution, and the total Betti numbers for the ideal I_2(X̃_{12}) + ⟨g_1, g_2⟩ are
b_0 = 1,
b_1 = \binom{n}{2} + 2,
b_2 = 2·\binom{n}{3} + n,
b_{k+1} = (k+1)\binom{n}{k+2} + k\binom{n}{k+1} + \binom{n}{k} − \binom{n}{k} − \binom{n}{k+1} = (k+1)\binom{n}{k+2} + (k−1)\binom{n}{k+1}, for 2 ≤ k ≤ n − 1,
b_n = n − 2.
5.2. A minimal free resolution for I_2(X̃_{ij}) + ⟨g_1, . . . , g_n⟩.
Lemma 5.2. Let G_k = G_{12} ∪ {g_1, g_2, . . . , g_k}, 1 ≤ k ≤ n, where G_{12} is the set of all 2 × 2 minors of X̃_{12} defined in the list of notations in Section 2. The set G_k is a Gröbner basis for the ideal I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩ with respect to a suitable monomial order.
Proof. We take the lexicographic monomial ordering on R induced by the following ordering among the indeterminates:
x_{nn} > · · · > x_{tt} > · · · > x_{33} > y_1 > · · · > y_n > x_{11} > · · · > x_{1n} > x_{21} > · · · > x_{2n} > x_{st} for all other s, t.
Then we observe that, for every s ≥ 3, Lt(g_s) is coprime to Lt(g_t) for every 1 ≤ t ≤ k, t ≠ s, and also coprime to Lt(h) for every h ∈ G_{12}. Moreover, by Lemma 3.5, G_{12} is a Gröbner basis for I_2(X̃_{12}). Therefore, we only have to test the S-polynomials S(g_1, g_2), S(g_1, h) and S(g_2, h), for h ∈ G_{12}.
We can write S(g_1, g_2) = Σ_{k=1}^{n} [12|1k] y_k and note that Lt([12|1k]) ≤ Lt(S(g_1, g_2)) for every 1 ≤ k ≤ n. Hence, S(g_1, g_2) →_{G_k} 0. We note that, if s ≠ 1, then the leading terms of g_1 and [12|st] are mutually coprime and therefore S(g_1, [12|st]) →_{G_k} 0. Next, the expression S(g_1, [12|1t]) = x_{1t} g_2 + Σ_{s≠t} [12|st] y_s shows that S(g_1, [12|1t]) →_{G_k} 0. Similarly, if s ≠ 1, then the leading terms of g_2 and [12|st] are mutually coprime and therefore S(g_2, [12|st]) →_{G_k} 0. The proof for S(g_2, [12|1t]) is similar to that of S(g_1, [12|1t]).
Remark. The corresponding result for general i < j would be the following:
Lemma 5.3. Let G_{i,j,k} = G_{ij} ∪ {g_i, g_j, g_{l_1}, . . . , g_{l_{k−2}}}, 1 ≤ k ≤ n, where 1 ≤ l_1 < · · · < l_{k−2} ≤ n and l_t is the smallest element of the set {1, 2, . . . , n} \ {i, j, l_1, . . . , l_{t−1}}; here G_{ij} denotes the set of all 2 × 2 minors of X̃_{ij} defined in the list of notations in Section 2. The set G_{i,j,k} is a Gröbner basis for the ideal I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}, . . . , g_{l_{k−2}}⟩ with respect to a suitable monomial order.
Proof. While proving this statement with i < j arbitrary, we have to choose the following monomial orders; the rest of the proof remains similar.
If X is generic, we choose the lexicographic monomial ordering on R induced by the following ordering among the indeterminates:
x_{nn} > · · · > x̂_{jj} > · · · > x̂_{ii} > · · · > x_{11} > y_1 > · · · > y_n > x_{i1} > · · · > x_{in} > x_{j1} > · · · > x_{jn} > x_{st} for all other s, t.
If X is generic symmetric, we choose the lexicographic monomial ordering on R induced by the following ordering among the indeterminates:
x_{nn} > · · · > x̂_{jj} > · · · > x̂_{ii} > · · · > x_{11} > y_1 > · · · > y_n > x_{ii} > x_{ij} > x_{1i} > x_{2i} > · · · > x_{i−1,i} > x_{i,i+1} > · · · > x_{in} > x_{jj} > · · · > x_{j−1,j} > x_{j,j+1} > · · · > x_{jn} > x_{st} for all other s, t.
Lemma 5.4. The ideals I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩ and ⟨g_{k+1}⟩ intersect transversally, for every 2 ≤ k ≤ n − 1.
Proof. Suppose not; then there exists h_{k+1} ∉ I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩ such that h_{k+1} g_{k+1} ∈ I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩. Let us choose the same monomial order on R as defined in Lemma 5.2. Upon division by elements of G_k, we may further assume that Lt(h) ∤ Lt(h_{k+1}) for every h ∈ G_k, since G_k is a Gröbner basis for the ideal I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩ by Lemma 5.2. On the other hand, h_{k+1} g_{k+1} ∈ I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩ and therefore Lt(h) | Lt(h_{k+1}) for some h ∈ G_k, since Lt(h) and Lt(g_{k+1}) are mutually coprime, a contradiction.
Remark. The corresponding result for general i < j would be the following: the ideals I_2(X̃_{ij}) + ⟨g_i, g_j, g_{l_1}, . . . , g_{l_k}⟩ and ⟨g_{l_{k+1}}⟩ intersect transversally, if 1 ≤ l_1 < . . . < l_k < l_{k+1} ≤ n and l_{k+1} is the smallest element of the set {1, 2, . . . , n} \ {i, j, l_1, . . . , l_k}, for every 1 ≤ k ≤ n − 3. The proof is essentially the same as above, after we use Lemma 5.3.
Proof of Theorem 2.1. Part (1) of the theorem has been proved in Section 5.1. We now prove part (2) under the assumption i = 1, j = 2. Let the minimal free resolution of I_2(X̃_{12}) + ⟨g_1, g_2, · · · , g_k⟩ be (L_•, λ_•). By Lemma 5.4 and Lemma 3.7, the minimal free resolution of I_2(X̃_{12}) + ⟨g_1, . . . , g_{k+1}⟩ is given by the tensor product of (L_•, λ_•) and 0 → R --g_{k+1}--> R → 0, and that is precisely (K_•, ∆_•), with K_p = L_p ⊕ L_{p−1} and ∆_p = (λ_p, (−1)^p g_{k+1} + λ_{p−1}).
Let β_{k,p}, 0 ≤ p ≤ n + k, denote the p-th total Betti number of the ideal I_2(X̃_{12}) + ⟨g_1, . . . , g_k⟩. Then, the total Betti numbers β_{k+1,p}, 0 ≤ p ≤ n + k + 1, of the ideal I_2(X̃_{12}) + ⟨g_1, . . . , g_{k+1}⟩ are given by β_{k+1,0} = 1, β_{k+1,p} = β_{k,p−1} + β_{k,p} for 1 ≤ p ≤ n + k, and β_{k+1,n+k+1} = n − 2. The proof for general i < j follows similarly, according to the strategy discussed at the beginning of Section 5.
In particular, the total Betti numbers β_{n−2,p} for the ideal I_2(X̃_{ij}) + ⟨g_1, . . . , g_n⟩ are given by β_{n−2,0} = 1, β_{n−2,p} = β_{n−3,p−1} + β_{n−3,p} for 1 ≤ p ≤ 2n − 3, and β_{n−2,2n−2} = n − 2.
Example. We show the Betti numbers at each stage for n = 4 and n = 5.
n = 4:
1   6    8    3
1   7   14   11    3
1   8   12    7    2
1   9   20   19    9    2
1  10   29   39   28   11    2
n = 5:
1  10   20   15    4
1  11   30   35   19    4
1  12   25   25   14    3
1  13   37   50   39   17    3
1  14   50   87   89   56   20    3
1  15   64  137  176  145   76   23    3
REFERENCES
[1] W. Bruns, A. R. Kustin, M. Miller, The Resolution of the Generic Residual Intersection of a Complete Intersection, Journal of Algebra 128 (1990) 214-239.
[2] A. Conca, E. De Negri, E. Gorla, Universal Gröbner Bases for Maximal Minors, International Mathematics Research Notices 11 (2015) 3245-3262.
[3] D. Eisenbud, Geometry of Syzygies, Springer-Verlag, NY, 2005.
[4] P. Gimenez, I. Sengupta, H. Srinivasan, Minimal graded free resolution for monomial curves defined by arithmetic sequences, Journal of Algebra 388 (2013) 294-310.
[5] M. R. Johnson, J. McLoud-Mann, On equations defining Veronese rings, Arch. Math. (Basel) 86(3) (2006) 205-210.
[6] S. Lichtenbaum, On the vanishing of Tor in regular local rings, Illinois J. Math. 10 (1966) 220-226.
[7] H. Matsumura, Commutative Ring Theory, Cambridge University Press, NY, 1986.
[8] I. Peeva, Graded Syzygies, Springer-Verlag London Limited, 2011.
[9] J. Saha, I. Sengupta, G. Tripathi, Ideals of the form I_1(XY), arXiv:1609.02765 [math.AC], 2016.
[10] J. Saha, I. Sengupta, G. Tripathi, Primality of certain determinantal ideals, arXiv:1610.00926 [math.AC], 2016.
Department of Mathematics, RKM Vivekananda University, Belur Math, Howrah
711202, India.
E-mail address: [email protected]
Discipline of Mathematics, IIT Gandhinagar, Palaj, Gandhinagar, Gujarat 382355,
INDIA.
E-mail address: [email protected]
Department of Mathematics, Jadavpur University, Kolkata, WB 700 032, India.
E-mail address: [email protected]
arXiv:1711.07737v1 [math.GT] 21 Nov 2017
SUPERRIGIDITY OF ACTIONS ON FINITE RANK
MEDIAN SPACES
ELIA FIORAVANTI
Abstract. Finite rank median spaces are a simultaneous generalisation
of finite dimensional CAT(0) cube complexes and real trees. If Γ is an
irreducible lattice in a product of rank one simple Lie groups, we show
that every action of Γ on a complete, finite rank median space has
a global fixed point. This is in sharp contrast with the behaviour of
actions on infinite rank median spaces.
The fixed point property is obtained as corollary to a superrigidity
result; the latter holds for irreducible lattices in arbitrary products of
compactly generated groups.
We exploit Roller compactifications of median spaces; these were introduced in [Fio17a] and generalise a well-known construction in the
case of cube complexes. We provide a reduced 1-cohomology class that
detects group actions with a finite orbit in the Roller compactification.
Even for CAT(0) cube complexes, only second bounded cohomology
classes were known with this property, due to [CFI16]. As a corollary,
we observe that, in Gromov's density model, random groups at low density do not have Shalom's property H_{FD}.
Contents
1. Introduction.
2. Preliminaries.
2.1. Median spaces and median algebras.
2.2. Bridges.
2.3. The Haagerup class.
3. Haagerup class and elementarity of actions.
3.1. The main statement.
3.2. Elementarity and Shalom's property H_{FD}.
4. Superrigidity.
4.1. The superrigidity result.
4.2. Homomorphisms to coarse median groups.
Appendix A. Structure of UBS's.
References
1. Introduction.
A metric space X is median if, for any three points x_1, x_2, x_3 ∈ X, there exists a unique point m ∈ X such that d(x_i, m) + d(m, x_j) = d(x_i, x_j) for all 1 ≤ i < j ≤ 3. Simple examples are provided by real trees and R^n with the ℓ^1 metric.
Under a finite dimensionality assumption, to each connected median space X corresponds a canonical CAT(0) space X̂. The spaces X and X̂ are bi-Lipschitz equivalent and isometries of X induce isometries of X̂ [Bow16]. For instance, to R^n with the ℓ^1 metric we associate R^n with its euclidean distance.
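In the model example of R^n with the ℓ^1 metric, the median of three points is simply the coordinatewise median. The following sketch (not part of the paper) illustrates this and checks the defining betweenness identity numerically; the function names are chosen only for illustration.

```python
def l1(p, q):
    # The l^1 distance on R^n.
    return sum(abs(a - b) for a, b in zip(p, q))

def median_l1(x, y, z):
    # In (R^n, l^1) the median point is the coordinatewise median.
    return tuple(sorted(t)[1] for t in zip(x, y, z))

x, y, z = (0.0, 2.0, 1.0), (3.0, 0.0, 1.5), (1.0, 1.0, -2.0)
m = median_l1(x, y, z)
# m lies between each pair, i.e. d(p, m) + d(m, q) = d(p, q):
for p, q in [(x, y), (y, z), (z, x)]:
    assert abs(l1(p, m) + l1(m, q) - l1(p, q)) < 1e-12
print(m)  # (1.0, 1.0, 1.0)
```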
More elaborate examples of median spaces are provided by simply connected cube complexes satisfying Gromov's link condition; in this case we obtain a median space X by endowing each cube with the ℓ^1 metric, and X̂ is the corresponding CAT(0) cube complex. Median spaces generally display wilder features than cube complexes, as, like real trees, they can be essentially non-discrete objects. Note in this regard that the class of median spaces is closed under ultralimits; these also preserve a notion of dimension, usually called rank (see Section 2.1 for a precise definition).
Despite non-discreteness, finite rank median spaces retain many of the
good combinatorial properties of cube complexes. In addition to the CAT(0)
metric, there is a notion of boundary compatible with the median property
[Fio17a] and many groups of isometries contain free non-abelian subgroups
[CS11, Fio17b]. We would expect many known results for CAT(0) cube
complexes to extend to finite rank median spaces without significant complications, for instance [BCG+ 09, GH10, NS13, Fer15, CFI16, KS16] to name
a few. There is however a notable exception to this pattern: general group
actions on median spaces do not have a clear connection with the existence
of codimension one subgroups [Sag95, Ger97, Fio17b].
Such close similarities between cube complexes and general median spaces
can be ascribed to the existence of a collection W of walls. These encode the
geometry of the space in the same way as hyperplanes do in CAT(0) cube
complexes. The set W should not be thought of as discrete and needs to
be endowed with a measure µ that encodes the “thickness” of sets of walls
[CDH10, Fio17a]. Indeed, the concept of median space is in a certain sense
dual to the notion of space with measured walls [CMV04, dCTV08, CDH10];
these extend spaces with walls [HP98], which are the dual viewpoint on
CAT(0) cube complexes.
Our main theorem is a superrigidity result for irreducible lattices Γ in
products G = G1 × ... × Gℓ of locally compact, compactly generated groups.
Namely, under weak assumptions of non-elementarity, every action of Γ on
a finite rank median space X essentially arises from continuous actions
Gi y Yi on median spaces of lower rank. For cube complexes, this was
known due to [CFI16].
In the more general context of CAT(0) spaces, similar results were obtained long ago in [Mon06, CM09]. Unfortunately, applying these to a median space X only provides actions of the factors Gi on CAT(0) subspaces Zi ⊆ X̂; these subspaces might bear no relation to the median structure on X. This might seem like an irrelevant subtlety, but it is, on the contrary, key to the fixed point properties that this paper provides.
As an illustration of this, consider an irreducible lattice Γ < SL2(R) × SL2(R). It acts properly and cocompactly on the CAT(0) space H^2 × H^2. In particular, it is quasi-isometric to a product of surface groups, which can easily be recognised as cubulated. It was shown in [CD17] that Γ acts properly and coboundedly on an infinite rank median space. The group Γ is moreover coarse median in the sense of [Bow13a]. Thus, it should appear particularly striking that every action of Γ on a connected finite rank median space fixes a point; this follows from our superrigidity result, see Corollary D below.
The proof of our superrigidity theorem follows a very similar outline to
Monod’s [Mon06]. This is mostly hidden in our application of Theorem 4.1
from [Sha00], thus we believe it is important to highlight the analogy here.
We have already mentioned that, to each finite rank median space X, one can associate a finite dimensional CAT(0) space X̂. However, one can also consider the infinite dimensional CAT(0) spaces L^2(W, µ) and L^2(H, ν̂); here (H, ν̂) is a close relative of (W, µ) and will be introduced below.
Retracing the proof of Monod Superrigidity in our context, we would first induce a continuous action of G on the infinite dimensional CAT(0) space L^2(G/Γ, X̂) (see [Mon06] for a definition). Then, we would prove that a subspace of L^2(G/Γ, X̂) splits as a product Z1 × ... × Zk, where the action of G on the i-th factor only depends on the projection to Gi. Finally, we would carry back to Γ y X̂ the information gained about Γ y L^2(G/Γ, X̂).
Our application of Shalom's machinery instead constructs a continuous action of G on the infinite dimensional CAT(0) space L^2(G/Γ, L^2(H, ν̂)). Again, one then proves a splitting theorem for this action and carries the gained insight back to Γ y L^2(H, ν̂). Our main contribution lies in transferring information back and forth between the actions Γ y L^2(H, ν̂) and Γ y X. Indeed, Shalom's machinery can only be set in motion once we have a nonvanishing reduced cohomology class for Γ y L^2(H, ν̂); similarly, once we have a superrigidity statement for Γ y L^2(H, ν̂), we need to translate it back to Γ y X.
We now describe our results in greater detail.
1.1. A cohomological characterisation of elementary actions. Each median space X has a distinguished collection H of subsets called halfspaces; every wall gives rise to two halfspaces and each halfspace arises from a wall. The collection H is equipped with a measure ν̂, see [Fio17a]. In the case of cube complexes, one recovers the usual notion of halfspace and ν̂ is simply the counting measure.
Given a topological group G and an isometric action G y X, one naturally obtains a unitary representation ρ : G → U(L^2(H, ν̂)) and a cocycle b : G → L^2(H, ν̂), which will be referred to as the Haagerup cocycle. This construction is well-known and appears for instance in [CMV04, dCTV08, CDH10, FV16]. If G y X has continuous orbits, ρ and b are continuous; thus b induces a reduced continuous cohomology class [b] ∈ H^1_c(G, ρ).
In [Fio17b], we introduced a notion of elementarity for actions on median spaces; namely, we say that G y X is Roller elementary if G has at least one finite orbit within a certain compactification X̄, the Roller compactification. Roller elementarity implies the existence of a finite orbit in the visual compactification of the CAT(0) space X̂.
If G y X is an isometric action with continuous orbits, Roller elementarity can be described in terms of the reduced cohomology class [b].
Theorem A. Let X be a complete, finite rank median space. The Haagerup class [b] ∈ H^1_c(G, ρ) vanishes if and only if G y X is Roller elementary.
it appears in [FV16]. For CAT(0) cube complexes, the implication “Roller
nonelementary ⇒ [b] ≠ 0" is implicit in [DP16]. In [CFI16], the authors construct a family of bounded cohomology classes detecting Roller elementarity
in CAT(0) cube complexes.
We remark that Theorem A equally holds if we replace L^2(H, ν̂) with any L^p(H, ν̂), 1 ≤ p < +∞, although it is slightly simpler to exploit the richer structure of Hilbert spaces in its proof.
Our superrigidity result only relies on the implication of Theorem A that
yields [b] 6= 0, but we believe the full statement of Theorem A to be of
independent interest. The proof of the other implication turns out to be
quite technical and requires a careful study of the structure of UBS’s in median spaces; these are a generalisation of the simplices in Hagen’s simplicial
boundary of a CAT(0) cube complex [Hag13]. Most of these details will be
relegated to the appendix.
1.2. Superrigidity of actions. Once we have a nontrivial reduced cohomology class, as provided by Theorem A, we can apply well-established
machinery (namely Theorem 4.1 in [Sha00]) to obtain superrigidity results.
Let X be a complete, finite rank median space. Its Roller compactification X̄ is partitioned into components [Fio17a]. The subset X ⊆ X̄ forms a full component; every other component is a complete median space of strictly lower rank. This aspect of X̄ shares some similarities with refined boundaries of CAT(0) spaces [Lee00, Cap09].
Given a component Z ⊆ X̄, a median subalgebra of Z is a subset Y ⊆ Z that is itself a median space with the restriction of the median metric of Z; equivalently, the median map m : Z^3 → Z takes Y^3 into Y.
We are now ready to state our main superrigidity result.
Theorem B. Let X be a complete, finite rank median space. Let Γ be a uniform, irreducible lattice in a product G = G1 × ... × Gℓ of compactly generated, locally compact groups with ℓ ≥ 2. Suppose Γ y X is a Roller nonelementary action. There exist a finite index subgroup Γ0 ≤ Γ, a Γ0-invariant component Z ⊆ X̄ and a Γ0-invariant closed median subalgebra Y ⊆ Z where the action Γ0 y Y extends to a continuous action G0 y Y, for some open finite index subgroup G0 ≤ G.
Remarks.
(1) Theorem B also applies to nonuniform lattices, as long
as they are square-integrable; this is a well-known technical condition
that implies finite generation and ensures that Theorem 4.1 in [Sha00]
still holds.
Irreducible lattices in G1 × ... × Gℓ are square-integrable if each Gi is the group of ki-rational points of a semisimple, almost ki-simple, ki-isotropic linear algebraic group defined over some local field ki [Sha00]. Further examples of nonuniform square-integrable
lattices include minimal Kac-Moody groups over sufficiently large finite ground fields; these can be regarded as irreducible lattices in
the product of the closed automorphism groups of the associated
buildings [Rém99, Rém05].
(2) Theorem B should be compared to Shalom’s superrigidity result for
actions on simplicial trees, Theorem 0.7 in [Sha00]. If X is a simplicial tree, Y is always a subcomplex of X and Γ0 = Γ, G0 = G.
The complications in the statement of Theorem B reflect phenomena
that do not happen in the world of trees.
However, as soon as we leave the context of rank one median
spaces, our result is optimal even if one restricts to CAT(0) square
complexes; see Examples 4.7 and 4.8. We remark that, when X is a
general CAT(0) square complex, the median algebra Y might not be
a subcomplex of X or Z.
(3) We can take Γ0 = Γ, G0 = G and Z = X as long as X does not split as a nontrivial product of median spaces and G has no finite orbits in the visual compactification of X̂; see Theorem 4.4 below.
However, even in this case, the action in general extends only to a
proper median subalgebra of X.
(4) For CAT(0) cube complexes, the superrigidity result of [CFI16] is
slightly more general than Theorem B as it applies to all nonuniform lattices. This is due to the use of bounded cohomology (namely
Theorem 16 in [BM02]) rather than reduced cohomology. Our strategy of proof was hinted at on page 9 of [Sha00].
1.3. Fixed point properties for irreducible lattices. Unlike automorphism groups of CAT(0) cube complexes, the isometry group of a median
space need not be totally disconnected. Still, it is possible to exploit Theorem B to derive a fixed point property for irreducible lattices in connected
groups.
Given a locally compact topological group Q, we denote the connected component of the identity by Q0. We say that Q satisfies condition (∗) if Q/Q0 is amenable or has Shalom's property H_{FD} (see Section 1.5 for a definition). In particular, all almost-connected and connected-by-(T) groups satisfy condition (∗).
Theorem C. Let X be a complete, finite rank median space. Let Γ be a square-integrable, irreducible lattice in a product G1 × ... × Gℓ with ℓ ≥ 2. Suppose that every Gi is compactly generated and satisfies condition (∗).
(1) Every action Γ y X is Roller elementary.
(2) If Γ does not virtually map onto Z, every action Γ y X has a finite orbit within X̄. If moreover X is connected, every action Γ y X has a global fixed point.
When X is a real tree, Theorem C also follows from Theorem 6 in [Mon06].
We remark that every group that virtually maps onto Z admits a Roller
elementary action on Rn with unbounded orbits, for some n ≥ 1.
Corollary D. Let X be a complete, connected, finite rank median space. Let
Γ be an irreducible lattice in a connected, higher rank, semisimple Lie group
G. Every action Γ y X fixes a point.
An analogous result for CAT(0) cube complexes was proved in [CFI16]. If
each simple factor of G has rank at least two, then Γ has property (T) and
Corollary D follows from Theorem 1.2 in [CDH10].
The assumption that X have finite rank is essential for Corollary D to
hold. If at least one simple factor Gi < G is locally isomorphic to O(n, 1)
or U (n, 1), n ≥ 2, the lattice Γ admits an action on an infinite rank median
space with unbounded orbits [CDH10]. Moreover, if all Gi ’s are locally
isomorphic to O(ni , 1), ni ≥ 2, then Γ even admits a proper and cobounded
action on an infinite rank median space [CD17].
1.4. Homomorphisms to coarse median groups. Coarse median spaces
were introduced in [Bow13a] as an attempt to formulate a coarse notion
of nonpositive curvature. They have recently received a lot of attention
[Bow13b, Hae16a, Zei16, SW17, ANWZ17, NWZ17] and proved instrumental
to striking results such as [Hae16b, BHS17b].
A group is said to be coarse median if its Cayley graphs are coarse median
spaces. Examples of finite rank coarse median groups include hyperbolic
groups, cubulated groups [HS16], fundamental groups of closed irreducible
3-manifolds not modelled on Nil or Sol, mapping class groups and, more
generally, all groups that are HHS [Bow15, BHS17a, BHS15].
We will be mainly interested in equivariantly coarse median groups. If we
view coarse median groups as a generalisation of groups that are HHS, equivariantly coarse median groups generalise hierarchically hyperbolic groups
(HHG). In particular, hyperbolic groups, cubulated groups and mapping
class groups also are equivariantly coarse median of finite rank.
More precisely, we say that a group H is equivariantly coarse median if it is equipped with a finite generating set S ⊆ H and a coarse median µ : H^3 → H [Bow13a] such that d_S(µ(ha, hb, hc), hµ(a, b, c)) ≤ C < +∞ for all elements h, a, b, c ∈ H; here d_S denotes the word metric induced by S. Note that this definition does not depend on the choice of S. Equivariantly coarse median groups have already been considered in [Zei16] under a different name.
If H is a coarse median group of finite rank, every asymptotic cone of
H is endowed with a bi-Lipschitz equivalent median metric [Zei16]. As an
example, in asymptotic cones of mapping class groups the median geodesics
are limits of hierarchy paths [BDS11b, BDS11a]. When H is equivariantly
coarse median, the median metric on each asymptotic cone is preserved by
the action of the ultrapower of H.
Given a group Γ and an infinite sequence of pairwise non-conjugate homomorphisms Γ → H, we can apply the Bestvina-Paulin construction [Bes88,
Pau91]. The result is an isometric action on a median space X with unbounded orbits; this is obtained as the canonical median space bi-Lipschitz
equivalent to an asymptotic cone of H. Along with Theorem C, this implies
the following.
Corollary E. Let H be an equivariantly coarse median group of finite rank.
Let Γ be as in the second part of Theorem C. There exist only finitely many
pairwise non-conjugate homomorphisms Γ → H.
Corollary E applies in particular to the case when Γ is an irreducible lattice
in a connected, higher rank, semisimple Lie group. When Γ has property
(T), this result already appears in [Zei16] and, if in addition H is HHG, a
much stronger statement is provided by Corollary D of [Hae16b]. Note that
Haettel’s method cannot be applied in our context as lattices in products
of rank one groups do admit nonelementary actions on hyperbolic spaces.
Also compare Theorem 4.1 in [BF14] for the case of hyperbolic H and an
arbitrary product G of locally compact, second countable groups.
We remark that a stronger conclusion can be reached if H acts freely on
a complete, finite rank median space. Indeed, the following is an immediate
consequence of Theorem F in [Fio17b].
Proposition F. Let H be a group admitting a free action on a complete,
finite rank median space X. Suppose that every action Γ y X is Roller
elementary. Then every homomorphism Γ → H factors through a virtually
abelian subgroup of H.
Proposition F applies for instance to the case when Γ has no non-abelian
free subgroups, has property H_{FD} or satisfies the hypotheses of the first part
of Theorem C. In particular, if Γ is an irreducible lattice in a connected,
higher rank, semisimple Lie group, every homomorphism Γ → H has finite
image.
This should motivate a certain interest in groups acting freely on complete,
finite rank median spaces. If a group acts freely on a finite dimensional
CAT(0) cube complex, it clearly falls into this class; however, it is unclear
at this stage whether these are the only finitely generated examples. See
[CRK15] for partial results in this direction.
Note that the infinitely generated group Q even admits a proper action on
a rank two median space, which splits as a product of a simplicial tree and the
real line; see Example II.7.13 in [BH99]. However, since Q is a divisible group,
all its elements must act elliptically on any (possibly infinite-dimensional)
CAT(0) cube complex [Hag07].
Even within finitely generated groups, actions on median spaces tend to be more flexible than actions on CAT(0) cube complexes. For every group H, we can consider dim_{fm} H, i.e. the minimum rank of a complete median space X admitting a free action of H; if H does not act freely on any complete median space, we set dim_{fm} H := −1. Restricting to CAT(0) cube complexes, we can similarly define dim_{fc} H and, if we only consider (metrically) proper actions, we obtain dim_{pm} H and dim_{pc} H. Thus, dim_{pc} Q = dim_{fc} Q = −1, while dim_{pm} Q = 2 and dim_{fm} Q = 1.
We remark that 1 ≤ dim_{fm} H < dim_{fc} H and 1 ≤ dim_{pm} H < dim_{pc} H for many finitely generated groups H. For instance, dim_{fc} H = 1 if and only if H is free. On the other hand, by work of E. Rips, dim_{fm} H = 1 if and only if H is a free product of free abelian and surface groups (excluding a few nonorientable surfaces); see e.g. Theorem 9.8 in [BF95]. One can use the same observation to construct free actions of various RAAGs on median spaces of rank strictly lower than the dimension of the Salvetti complex.
Considering more general actions, we mention that there exist finitely
generated groups admitting actions on real trees with unbounded orbits, but
whose actions on simplicial trees (and in fact even finite dimensional CAT(0)
cube complexes) must have a global fixed point [Min16].
1.5. Shalom’s property HF D and random groups. Theorem A also
allows us to prove that various (non-amenable) groups do not have property
HF D . The latter was introduced in [Sha04]: a topological group G has
property HF D if every unitary representation π with Hc1 (G, π) 6= 0 has a
finite dimensional subrepresentation.
Property HF D is trivially satisfied by every locally compact group with
property (T) [Del77], but also, at the opposite end of the universe of groups,
by a large class of amenable groups. This includes polycyclic groups, lamplighter groups and all connected, locally compact, amenable groups [Sha04,
Mar06]. An example of an amenable group without HF D is provided by the
wreath product Z o Z [Sha04].
We prove the following; see Proposition 3.7 below for a more general result.
Corollary G. Let Γ be a discrete group with property H_{FD}. If Γ acts freely and cocompactly on a CAT(0) cube complex X, then Γ is virtually abelian.
Property H_{FD} has been studied almost exclusively within the class of amenable groups, where it happens to be a quasi-isometry invariant [Sha04]. It was a key ingredient (implicitly or explicitly) in recent more elementary proofs of Gromov's theorem on groups of polynomial growth [Kle10, Oza16]. It has moreover interesting applications to the study of quasi-isometric embeddings into Hilbert spaces [dCTV07].
Property H_{FD} is inherited by uniform lattices and it is stable under direct products and central extensions [Sha04]. Being satisfied by groups that fall into two extremely different classes, namely amenable and Kazhdan groups, it is reasonable to expect a wide variety of groups with property H_{FD}. However, it seems that no answer is known to the following question.
Question. Does every finitely generated group with property H_{FD} virtually split as a direct product of an amenable group and finitely many groups with property (T)? Does every word hyperbolic group with property H_{FD} also satisfy property (T)?
Corollary G and the results of [OW11] imply that random groups at low density do not satisfy H_{FD}.
Corollary H. With overwhelming probability, random groups at density d < 1/6 in Gromov's density model do not have property H_{FD}.
Note however that, at density d > 1/3, random groups are Kazhdan [Ż03, KK13], hence satisfy property H_{FD}.
Acknowledgements. The author warmly thanks Brian Bowditch, Pierre-Emmanuel Caprace, Indira Chatterji, Yves Cornulier, Thomas Delzant,
Mark Hagen, Masato Mimura, Narutaka Ozawa, Romain Tessera, Pierre
Pansu, Alain Valette for helpful conversations. The author expresses special
gratitude to Cornelia Druţu and Talia Fernós for contributing many of the
ideas of this paper.
This work was undertaken at the Mathematical Sciences Research Institute in Berkeley during the Fall 2016 program in Geometric Group Theory,
where the author was supported by the National Science Foundation under
Grant no. DMS-1440140 and by the GEAR Network. Part of this work was
also carried out at the Isaac Newton Institute for Mathematical Sciences,
Cambridge, during the programme “Non-positive curvature, group actions
and cohomology” and was supported by EPSRC grant no. EP/K032208/1.
The author was also supported by the Clarendon Fund and the Merton
Moussouris Scholarship.
2. Preliminaries.
2.1. Median spaces and median algebras. Let X be a metric space.
Given points x, y ∈ X, the interval I(x, y) is the set of points z ∈ X that lie
between x and y, i.e. that satisfy d(x, y) = d(x, z)+d(z, y). We say that X is
a median space if for all x, y, z ∈ X there exists a unique point m(x, y, z) that
lies in I(x, y)∩I(y, z)∩I(z, x). The median map m : X 3 → X that we obtain
this way endows X with a structure of median algebra. Most definitions in
the theory of median spaces can also be given for arbitrary median algebras;
we will follow this approach in introducing the necessary notions. The reader
can consult e.g. [Rol98, Nic08, CDH10, Bow13a, Bow16, Fio17a, Fio17b] for
more background on median spaces and algebras.
In a median space, I(x, y) = {z ∈ X | z = m(x, y, z)}; this can be taken as a definition of intervals in general median algebras. If (M, m) is
a median algebra, we say that a subset C ⊆ M is convex if I(x, y) ⊆ C
whenever x, y ∈ C. The intersection of a finite family of pairwise intersecting convex sets is always nonempty; this is known as Helly’s Theorem, see
Theorem 2.2 in [Rol98].
A subset h ⊆ M is a halfspace if both h and h∗ := M \ h are convex;
we will denote the set of halfspaces of M by H (M ), or simply by H when
there is no ambiguity. Halfspaces h, k are said to be transverse if no two
distinct elements of the set {h, h∗ , k, k∗ } are comparable in the poset (H , ⊆).
Equivalently, the intersections h ∩ k, h ∩ k∗ , h∗ ∩ k, h∗ ∩ k∗ are all nonempty.
Given A ⊆ H , we write A∗ for {h ∈ H | h∗ ∈ A}. A subset σ ⊆ H is
said to be an ultrafilter if any two halfspaces in σ intersect and H = σ t σ ∗ .
For instance, for each x ∈ M the set σx := {h ∈ H | x ∈ h} is an ultrafilter.
Given subsets A, B ⊆ M , we write H (A|B) := {h ∈ H | B ⊆ h, A ⊆ h∗ }
and σA := H (∅|A); we refer to sets of the form H (x|y), x, y ∈ M , as
halfspace intervals. If C, C 0 ⊆ M are disjoint and convex, the set H (C|C 0 )
is nonempty, see Theorem 2.7 in [Rol98]. In particular σx = σy if and only
if the points x, y ∈ M coincide.
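To see these notions in the simplest nontrivial example (again ours, for illustration only), consider the median algebra {0, 1}^n, the vertex set of an n-cube. Its halfspaces are exactly the 2n coordinate halfspaces {x | xi = 0} and {x | xi = 1}, and σx consists of the n halfspaces containing x. The Python sketch below verifies the ultrafilter axioms for σx and the fact, noted above, that σx = σy if and only if x = y.

    from itertools import product

    n = 4
    cube = list(product([0, 1], repeat=n))

    # Halfspaces of the median algebra {0,1}^n: the 2n coordinate halfspaces.
    halfspaces = [frozenset(x for x in cube if x[i] == b) for i in range(n) for b in (0, 1)]

    def complement(h):
        return frozenset(cube) - h

    def sigma(x):
        # sigma_x = set of halfspaces containing x
        return {h for h in halfspaces if x in h}

    for x in cube:
        s = sigma(x)
        # any two halfspaces in sigma_x intersect ...
        assert all(h & k for h in s for k in s)
        # ... and H decomposes as sigma_x together with its set of complements
        assert {complement(h) for h in s} == set(halfspaces) - s

    # sigma_x = sigma_y if and only if x = y
    assert all((sigma(x) == sigma(y)) == (x == y) for x in cube for y in cube)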
A subset Ω ⊆ H is inseparable if, whenever j ∈ H satisfies h ⊆ j ⊆ k for
h, k ∈ Ω, we have j ∈ Ω. Given a subset A ⊆ H , its inseparable closure is
the smallest inseparable subset of H that contains A; it coincides with the
union of the sets H (k∗ |h), for h, k ∈ A.
A wall is a set of the form w = {h, h∗ }, with h ∈ H ; we say that h and h∗ are the sides of w. The wall w separates subsets A, B ⊆ M if either h
or h∗ lies in H (A|B); we denote by W (A|B) = W (B|A) the set of walls
separating A and B and by W (M ), or simply W , the set of all walls of the
median algebra M . A wall is contained in a halfspace k if one of its sides is;
a wall w is contained in disjoint halfspaces k1 , k2 if and only if k2 = k∗1 and
w = {k1 , k2 }. If a side of the wall w1 is transverse to a side of the wall w2 ,
we say that w1 and w2 are transverse.
The rank of the median algebra M is the maximum cardinality of a set of
pairwise transverse walls; various alternative (and equivalent) definitions of
the rank can be found in Proposition 6.2 of [Bow13a]. We remark that M
has rank zero if and only if it consists of a single point.
If X is a median space of finite rank r, the topological dimension of every
locally compact subset of X is bounded above by r, see Theorem 2.2 and
Lemma 7.6 in [Bow13a]. If moreover X is complete and connected, X is
bi-Lipschitz equivalent to a canonical CAT(0) space Xb [Bow16]. The visual boundary of Xb is finite dimensional by Proposition 2.1 in [CL10].
Every isometry of X extends to an isometry of Xb, yielding a homomorphism Isom X ,→ Isom Xb. Every convex subset of X is also convex in Xb; the converse is not true: the euclidean convex hull of the points (1, 0, 0), (0, 1, 0) and (1, 1, 1) in [0, 1]3 is not even a median subalgebra.
Halfspaces in finite rank median spaces are fairly well-behaved [Fio17a]:
Proposition 2.1. Let X be a complete median space of finite rank r. Every
halfspace is either open or closed (possibly both). Moreover, if h1 ) ... ) hk
is a chain of halfspaces with h∗1 ∩ hk 6= ∅, we have k ≤ 2r.
The following is a simple but extremely useful observation: given ultrafilters σ1 , σ2 ⊆ H (M ) and h, k ∈ σ1 \ σ2 , we either have h ⊆ k, or k ⊆ h, or h
and k are transverse. Along with Dilworth’s Theorem [Dil50] this yields the
following.
Lemma 2.2. Let M be a median algebra of finite rank r and let σ1 , σ2 ⊆ H
be ultrafilters.
(1) We can decompose σ1 \ σ2 = C1 t ... t Ck , where k ≤ r and each Ci is
nonempty and totally ordered by inclusion.
(2) Every infinite subset of σ1 \ σ2 contains an infinite subset that is
totally ordered by inclusion.
If C ⊆ M is a subset and x ∈ M , a gate for (x, C) is a point y ∈ C such
that y ∈ I(x, z) for every z ∈ C; gates are unique when they exist. If a gate
exists for every point of M , we say that C is gate-convex ; in this case we
can define a gate-projection πC : M → C by associating to each point of M
the unique gate. The gate-projection to C is always a morphism of median
algebras and satisfies W (x|C) = W (x|πC (x)) for every x ∈ M .
Gate-convex subsets are always convex, but the converse is not always
true. For every y, z ∈ M , the interval I(y, z) is gate-convex with gate-projection x 7→ m(x, y, z). A proof of the following statements can be found
in [Fio17a].
Proposition 2.3. Let C, C 0 ⊆ M be gate-convex.
(1) The sets {h ∈ H (M ) | h∩C 6= ∅, h∗ ∩C 6= ∅}, {πC−1 (h) | h ∈ H (C)}
and H (C) are all naturally in bijection.
(2) There exists a pair of gates, i.e. a pair (x, x0 ) of points x ∈ C and
x0 ∈ C 0 such that πC (x0 ) = x and πC 0 (x) = x0 . In particular, we have
H (x|x0 ) = H (C|C 0 ).
(3) The set πC (C 0 ) is gate-convex with gate-projection πC ◦ πC 0 . Moreover, πC ◦ πC 0 ◦ πC = πC ◦ πC 0 .
(4) If C ∩ C 0 6= ∅, we have πC (C 0 ) = C ∩ C 0 and πC ◦ πC 0 = πC 0 ◦ πC . In
particular, if C 0 ⊆ C, we have πC 0 = πC 0 ◦ πC .
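For a hands-on example of gates (ours; the function names are ad hoc and the claim below is only verified numerically), consider the median space R^n with the ℓ1 metric: a product of closed intervals C = [a1, b1] × ... × [an, bn] is gate-convex and the gate of a point x is obtained by clamping each coordinate of x into the corresponding interval.

    import random

    def d(x, y):
        return sum(abs(a - b) for a, b in zip(x, y))

    def gate(x, lo, hi):
        # candidate gate-projection of x onto the box prod_i [lo_i, hi_i] in (R^n, l^1):
        # clamp each coordinate into its interval
        return tuple(min(max(c, l), h) for c, l, h in zip(x, lo, hi))

    random.seed(1)
    lo, hi = (0.0, 0.0, 0.0), (1.0, 2.0, 3.0)
    for _ in range(1000):
        x = tuple(random.uniform(-5, 5) for _ in range(3))
        y = gate(x, lo, hi)
        z = tuple(random.uniform(l, h) for l, h in zip(lo, hi))   # an arbitrary point of C
        # defining property of a gate: y lies in I(x, z) for every z in C
        assert abs(d(x, z) - (d(x, y) + d(y, z))) < 1e-9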
A median algebra (M, m) endowed with a Hausdorff topology is said to be
a topological median algebra if the median map m : M 3 → M is continuous;
here we equip M 3 with the product topology. Median spaces always provide
topological median algebras; indeed, the median map m is 1-Lipschitz in
that case.
In compact median algebras and complete median spaces, a subset is gate-convex if and only if it is closed and convex; moreover, gate-projections are
continuous. In median spaces, gate-projections are even 1-Lipschitz.
Let X be a complete, finite rank median space. In [Fio17a] we endowed the set H with a σ-algebra Bb and a measure νbX (usually denoted just νb). Unlike [Fio17a], here we simply refer to the elements of Bb as measurable sets.
The map ∗ : H → H sending each halfspace to its complement is measure
preserving. Every inseparable subset of H is measurable; in particular, all
ultrafilters are measurable and, for all x, y ∈ X, we have νb(σx 4σy ) = d(x, y).
Almost every halfspace h ∈ H is thick, i.e. both h and h∗ have nonempty interior. Note that in general Bb and νb differ from their counterparts in [CDH10].
Proposition 2.4. Let X be a complete, finite rank median space and σ ⊆ H
an ultrafilter such that νb(σ4σx ) < +∞ for some x ∈ X. There exists y ∈ X
such that νb(σ4σy ) = 0.
Thus X can be equivalently described as the collection of all ultrafilters
on H that satisfy νb(σ4σx ) < +∞ for some x ∈ X; we identify ultrafilters
whose symmetric difference is νb-null. Considering the space of all ultrafilters
on H we obtain a set X in which X embeds. A structure of median algebra
can be defined on X by setting
m(σ1 , σ2 , σ3 ) := (σ1 ∩ σ2 ) ∪ (σ2 ∩ σ3 ) ∪ (σ3 ∩ σ1 ).
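In the finite example of the hypercube {0, 1}^n used earlier (again only an illustration of ours), this formula is a majority vote: a halfspace lies in m(σ1, σ2, σ3) exactly when it lies in at least two of the three ultrafilters, and for σx, σy, σz this recovers the ultrafilter of the coordinatewise median point.

    from itertools import product

    n = 3
    cube = list(product([0, 1], repeat=n))
    halfspaces = [frozenset(p for p in cube if p[i] == b) for i in range(n) for b in (0, 1)]

    def sigma(x):
        return frozenset(h for h in halfspaces if x in h)

    def median_ultrafilter(s1, s2, s3):
        # m(s1, s2, s3) := (s1 ∩ s2) ∪ (s2 ∩ s3) ∪ (s3 ∩ s1) -- the majority vote
        return (s1 & s2) | (s2 & s3) | (s3 & s1)

    def median_point(x, y, z):
        return tuple(sorted(t)[1] for t in zip(x, y, z))

    for x in cube:
        for y in cube:
            for z in cube:
                assert median_ultrafilter(sigma(x), sigma(y), sigma(z)) == sigma(median_point(x, y, z))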
We endow X with a topology such that ultrafilters σn ⊆ H converge to
σ ⊆ H if and only if lim sup(σn 4σ) is νb-null. We refer to X as the Roller
compactification of X [Fio17a].
Proposition 2.5. The Roller compactification X is a compact topological
median algebra. The inclusion X ,→ X is a continuous morphism of median
algebras with dense, convex image.
In general, X is not open in X and the inclusion X ,→ X is not a homeomorphism onto its image. The Roller boundary is defined as ∂X := X \ X.
A point of X can in general be represented by several distinct ultrafilters with null symmetric differences. However, for each ξ ∈ X there is a
unique preferred ultrafilter σξ representing ξ [Fio17a]; this should be seen as
a generalisation of the ultrafilters σx when x ∈ X.
We can extend each halfspace h of X to a halfspace e h of X such that e h ∩ X = h; indeed, it suffices to define e h := {ξ ∈ X | h ∈ σξ }. When
ξ, η ∈ X, we save the notation H (ξ|η) for the set ση \ σξ ⊆ H (X), instead
of the analogous subset of H (X).
If Y ⊆ X is a closed median subalgebra, the restriction of the metric of X
turns Y into a complete median space with rank(Y ) ≤ rank(X); moreover:
Lemma 2.6. There is a canonical morphism of median algebras ιY : Y ,→ X.
Proof. We write HY := {h ∈ H (X) | h ∩ Y 6= ∅, h∗ ∩ Y 6= ∅}; intersecting
with Y gives a map p : HY → H (Y ). Lemma 6.5 in [Bow13a] implies that
p is surjective. Thus, for every ultrafilter σ ⊆ H (Y ), there is a unique
ultrafilter σ 0 ⊆ H (X) such that σY ⊆ σ 0 and p(σ 0 ∩ HY ) = σ. Applying
this to canonical ultrafilters yields the required embedding.
Given ultrafilters σ1 , σ2 ⊆ H , we set d(σ1 , σ2 ) := νb(σ1 4σ2 ). We refer to
d as the extended metric on X as it satisfies all the axioms of a metric, even
though the value +∞ is allowed. Note that for points of X this is the same
as the original median metric on X.
A component Z ⊆ X is a maximal set of points having finite pairwise
distances; components are convex subsets of X. One component always
coincides with X ⊆ X; all other components are contained in ∂X. The
following appears in [Fio17a].
Proposition 2.7. Let X be a complete median space of finite rank r. Let
Z ⊆ ∂X be a component and let d denote the extended metric on X.
(1) The metric space (Z, d) is a complete median space of rank at most
r − 1.
(2) Every thick halfspace of Z is of the form e h ∩ Z for a unique h ∈ H .
If C ⊆ X is closed and convex, the closure of C in X is gate-convex and
naturally identified with the Roller compactification of C; thus the notation
C is not ambiguous. We denote by πC : X → C the corresponding gate-projection; it extends the usual gate-projection X → C. If σ ⊆ H (X) is an
ultrafilter representing the point ξ ∈ X, the set σ ∩ H (C) is an ultrafilter
on H (C) and represents πC (ξ).
Similarly, if Z ⊆ ∂X is a component, the closure of Z in X is gate-convex and naturally identified with the Roller compactification Z. The
gate-projection πZ : X → Z satisfies πZ (X) ⊆ Z. In terms of ultrafilters, πZ
takes the point of X represented by σ ⊆ H (X) to the point of Z represented
by σ ∩ H (Z) ⊆ H (Z). The intersection makes sense as, by part 2 of
Proposition 2.7, almost every halfspace of Z arises from a halfspace of X.
Let Γ be a group. An isometric action Γ y X is said to be Roller elementary if there exists a finite orbit within X. The action is Roller minimal
if rank(X) ≥ 1 and Γ does not preserve any proper, closed, convex subset
C ⊆ X.
Roller elementarity implies – but is in general much stronger than – the
existence of a finite orbit in the visual compactification of Xb. When X is a CAT(0) cube complex, an action is Roller minimal if and only if, in the terminology of [CS11], it is essential and does not fix any point in the visual boundary of Xb.
Neither Roller elementarity, nor Roller minimality implies the other one.
However, Roller minimal actions naturally arise from Roller nonelementary
ones [Fio17b]:
Proposition 2.8. Let X be a complete, finite rank median space with an
isometric action Γ y X. Either Γ y X fixes a point or there exist a Γ-invariant component Z ⊆ X and a Γ-invariant, closed, convex subset C ⊆ Z
such that Γ y C is Roller minimal.
A Γ-invariant, closed, convex subset C ⊆ Z ⊆ X always gives rise to a
measurable decomposition H = HC t (σC ∪ σC∗ ); here we introduced the sets HC := {h ∈ H | C ∩ e h 6= ∅, C ∩ e h∗ 6= ∅} and σC := {h ∈ H | C ⊆ e h}.
Note that, by part 2 of Proposition 2.7, the measure spaces (HC , νbX ) and
(H (C), νbC ) are isomorphic.
We say that an action Γ y X is without wall inversions if there do not
exist g ∈ Γ and h ∈ H such that gh = h∗ . By Proposition 2.1, any action on a connected, complete, finite rank median space is without wall inversions.
The following was proved in [Fio17b]; compare with [CS11].
Proposition 2.9. Let X be a complete, finite rank median space with thick
halfspaces h ⊆ k. Let Γ y X be a Roller minimal action without wall
inversions.
(1) There exists g ∈ Γ such that gh∗ ( h and d (gh∗ , h∗ ) > 0.
(2) There exists g ∈ Γ such that gk ( h ⊆ k and d(gk, h∗ ) > 0.
When G is a topological group, all isometric actions G y X will be
implicitly required to have continuous orbit maps. Equivalently, the homomorphism G → Isom X is continuous, where we endow Isom X with the
topology of pointwise convergence. We remark that Isom X is a Hausdorff,
sequentially complete topological group as soon as X is complete.
If (M1 , m1 ) and (M2 , m2 ) are median algebras, the product median algebra
is defined as (M1 × M2 , m), where m = (m1 ◦ p1 , m2 ◦ p2 ); here pi denotes the
projection onto the i-th factor. If (X1 , d1 ) and (X2 , d2 ) are median spaces,
we endow the product X1 ×X2 with the `1 metric, namely d = d1 ◦p1 +d2 ◦p2 .
The median algebra associated to the median space (X1 × X2 , d) is just the
product median algebra arising from X1 and X2 . A median space X is said
to be irreducible if it is not isometric to any nontrivial product X1 × X2 .
Proposition 2.10. Let X1 , ..., Xk be irreducible, complete, finite rank median spaces; consider the product X = X1 × ... × Xk .
(1) We have a measurable partition H (X) = H1 t ... t Hk , where each
Hi is canonically identified with H (Xi ). If h ∈ Hi and k ∈ Hj with
i 6= j, the halfspaces h and k are transverse.
(2) Every isometry of X permutes the members of the partition. The
product Isom X1 × ... × Isom Xk sits inside Isom X as an open,
finite index subgroup.
(3) Every closed, convex subset C ⊆ X is of the form C1 × ... × Ck , where
each Ci is a closed convex subset of Xi .
(4) The Roller compactification X is naturally identified with the product
median algebra X1 × ... × Xk with the product topology.
Proof. For part 1 and the first half of part 2, see Corollary 2.9 and Proposition 2.11 in [Fio17b]. We conclude the proof of part 2 by showing that
Isom X1 × ... × Isom Xk is open in Isom X. Choose points xi , yi ∈ Xi for
each i and a real number ε > 0 such that 2ε < d(xi , yi ) for all i; denote by
pi : X → Xi the projection onto the i-th factor. Let x ∈ X be the point with
coordinates xi ; we also consider the points zi ∈ X such that pj (zi ) = xj for
all i 6= j and pi (zi ) = yi .
Suppose that F ∈ Isom X is such that d(F (x), x) < ε and d(F (zi ), zi ) < ε
for all i. We claim that F (Hi ) = Hi for all i; this implies that the product of
the isometry groups of the factors is open in Isom X. Suppose for the sake of
contradiction that, for indices i 6= j, we have F (Hi ) = Hj . This induces an
isometry f : Xi → Xj such that pj ◦ F = f ◦ pi , see [Fio17a]. In particular,
we have d(xj , f (xi )) ≤ d(x, F (x)) < ε and d(xj , f (yi )) ≤ d(zi , F (zi )) < ε; thus, d(xi , yi ) = d(f (xi ), f (yi )) < 2ε, a contradiction.
Irreducibility of the factors plays no role in part 3, so it suffices to consider
the case k = 2. Let C1 and C2 be the projections of C to X1 and X2 . If
x ∈ C1 and y ∈ C2 , there exist u ∈ X1 and v ∈ X2 such that the points
x := (x, v) and y := (u, y) lie in C. It is immediate to observe that the point
(x, y) lies in I(x, y) ⊆ C. Finally, part 4 is Lemma 2.10 in [Fio17b].
Halfspaces h1 , ..., hn form a facing n-tuple, n ≥ 2, if they are pairwise
disjoint. We say that h, k ∈ H are strongly separated if h ∩ k = ∅ and no
j ∈ H is transverse to both h and k. See [Fio17b] for the following.
Proposition 2.11. Let X be an irreducible, complete, finite rank median
space; let h be a thick halfspace.
(1) If X admits a Roller minimal action without wall inversions, there
exist thick halfspaces h0 ⊆ h ⊆ h00 such that h0 and h00∗ are strongly
separated.
(2) If X admits a Roller nonelementary, Roller minimal action without
wall inversions, h is part of a facing n-tuple of thick halfspaces for
every n ≥ 2.
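Two simple examples (ours) may clarify the role of irreducibility here. In a simplicial tree, which has rank one, no two walls are transverse, so any two disjoint halfspaces are strongly separated. At the opposite extreme, in the ℓ1 plane R × R no two halfspaces are ever strongly separated: by part 1 of Proposition 2.10, two disjoint halfspaces must come from the same factor, and every halfspace of the other factor is then transverse to both.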
Every complete, finite rank median space can be isometrically embedded
into its barycentric subdivision X 0 . This is a complete median space of the
same rank, see [Fio17b]; when X is the 0-skeleton of a CAT(0) cube complex
with the `1 metric, the space X 0 is given by the 0-skeleton of the customary
barycentric subdivision.
We have a natural homomorphism Isom X ,→ Isom X 0 . Given any isometric action Γ y X, the induced action Γ y X 0 is without wall inversions.
We write (H 0 , ν 0 ) instead of (H (X 0 ), νX 0 ). There is an inclusion preserving
map p : H 0 → H ; it is surjective, (Isom X)-equivariant and its fibres have
cardinality at most two. We have #(p−1 (h)) = 2 if and only if {h} is an
atom for νb; in this case, we refer to each element of p−1 (h) as a hemiatom.
See [Fio17b] for the following lemma.
Lemma 2.12. Let X be a complete, finite rank median space. An action
Γ y X is Roller elementary if and only if the induced action Γ y X 0 is.
The sets {−1, 1} and {−1, 0, 1} inherit a median algebra structure from the
median space R; in particular, we can consider the product median algebras
{−1, 1}k and {−1, 0, 1}k for every k ≥ 0. For every point x ∈ X 0 \ X,
there exists a canonical, gate-convex subset Cb(x) ⊆ X 0 ; it is isomorphic to {−1, 0, 1}k , for some k ≥ 1, via an isomorphism that takes x to the centre (0, ..., 0). The intersection C(x) := Cb(x) ∩ X is gate-convex in X and corresponds to the subset {−1, 1}k ⊆ {−1, 0, 1}k . For x ∈ X, we set Cb(x) = C(x) = {x}. See [Fio17b] for more details.
Lemma 2.13. Let X be a complete, finite rank median space. Every infinite,
convex subset C ⊆ X 0 intersects X.
Proof. Let x ∈ C be a point minimising R := rank(C(x)); if R = 0 we have x ∈ C ∩ X. Otherwise, there exists a point y ∈ C that does not lie in Cb(x); the gate-projection z of y to Cb(x) lies in Cb(x) \ {x}. In particular C(z) is contained in a face of C(x) and has strictly lower rank, a contradiction since z ∈ I(x, y) ⊆ C.
In particular, we can obtain the following extension of Lemma 2.12.
Lemma 2.14. If Γ y X is Roller nonelementary and Roller minimal, so is
the action Γ y X 0 .
Proof. Suppose for the sake of contradiction that C ⊆ X 0 is a nonempty,
Γ-invariant, closed, convex subset. By Corollary 4.31 in [Fio17a] there exists
a Γ-invariant component W ⊆ X 0 with W ∩ C 6= ∅; note that C ∩ W is
unbounded by Corollary 2.16 in [Fio17b], since Γ y X 0 is Roller nonelementary by Lemma 2.12. The component W is the barycentric subdivision of a
component Z ⊆ X and Lemma 2.13 implies that C ∩ Z 6= ∅. Since Γ y X
is Roller minimal, we must have Z = X and C ∩ X = X; hence X 0 ⊆ C by
part 2 of Proposition 2.14 in [Fio17b]. We conclude that C = X 0 .
Let us now fix a basepoint x ∈ X, where X is a complete finite rank
median space; the following discussion is independent of our choice of x. A
diverging chain of halfspaces is a sequence (hn )n≥0 such that d(x, hn ) → +∞
and hn+1 ⊆ hn for each n ≥ 0; we use the same terminology for the set
{hn | n ≥ 0}. Given ξ ∈ ∂X, a UBS for ξ is an inseparable subset Ω ⊆ σξ \σx
containing a diverging chain of halfspaces.
Given UBS’s Ω1 , Ω2 ⊆ σξ \ σx , we say that Ω1 is almost contained in Ω2
if the halfspaces in Ω1 \ Ω2 lie at uniformly bounded distance from x; this
is denoted by Ω1 ≼ Ω2 . If Ω1 ≼ Ω2 and Ω2 ≼ Ω1 , the UBS's are equivalent
and we write Ω1 ∼ Ω2 . We denote the equivalence class of Ω ⊆ H by [Ω]
and the set of all equivalence classes of UBS's for ξ by U(ξ); the relation ≼ descends to a partial order on U(ξ). A UBS Ω is said to be minimal if
[Ω] is a minimal element of (U(ξ), ≼). A minimal UBS is equivalent to the
inseparable closure of any diverging chain that it contains.
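As a simple illustration of these notions (ours, not drawn from [Fio17b]), let X = R2 with the ℓ1 metric, let x be the origin and let ξ ∈ ∂X be the point of the Roller boundary represented by the ultrafilter containing all halfspaces of the form {s1 > t} and {s2 > t}, t ∈ R. Then, up to a null set, σξ \ σx consists of the halfspaces {s1 > t} and {s2 > t} with t ≥ 0 and splits as the disjoint union of the 'vertical' and the 'horizontal' ones. Each of the two pieces is inseparable, contains a diverging chain and, since every UBS contained in it differs from it only by halfspaces at bounded distance from x, it is a minimal UBS; the whole of σξ \ σx is a UBS that is not minimal.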
We define a directed graph G(ξ) as follows [Hag17, Fio17b]. The vertex
set of G(ξ) is identified with the set of minimal elements of (U(ξ), ). Given
diverging chains (hm )m≥0 and (kn )n≥0 in Ω1 and Ω2 respectively, we draw
an oriented edge from [Ω1 ] to [Ω2 ] if almost every hm is transverse to almost
every kn , but not vice versa; this is independent of the choices involved. A
subset A ⊆ G(ξ)(0) is said to be inseparable if every directed path between
vertices in A only crosses vertices in A. The following can be found in
[Fio17b].
Proposition 2.15. Let X be a complete median space of finite rank r and
ξ ∈ ∂X.
(1) The graph G(ξ) has at most r vertices and contains no directed cycles.
(2) The poset (U(ξ), ≼) is isomorphic to the poset of inseparable subsets
of G(ξ)(0) , ordered by inclusion. The isomorphism maps [Ω] ∈ U(ξ)
to the set of equivalence classes of minimal UBS’s almost contained
in Ω. In particular, the set U(ξ) is finite.
(3) Given a UBS Ω and a set {Ω1 , ..., Ωk } of representatives of all equivalence classes of minimal UBS’s almost contained in Ω, we have
sup{d(x, h) | h ∈ Ω4(Ω1 ∪ ... ∪ Ωk )} < +∞.
If Ω is a minimal UBS, we denote by Ω0 the subset of halfspaces k ∈ Ω
that are not transverse to any diverging chain of halfspaces in Ω. We say
that Ω is reduced if Ω = Ω0 . We say that Ω is strongly reduced if we can
write Ω = C1 t ... t Ck for some k ≥ 1, where each Ci is totally ordered by
inclusion and contains a diverging chain of halfspaces.
Consider the median spaces in Figures 1 and 2; both are subsets of R2
with the restriction of the `1 metric. In both cases, the UBS Ω = σξ \ σx is
minimal. Figure 1 shows that Ω can be reduced, but not strongly reduced. In
Figure 2 the UBS Ω is strongly reduced and exhibits how the decomposition
Ω = C1 t ... t Ck can require k ≥ 2.
Lemma 2.16. Let X be a complete, finite rank median space. Consider
x ∈ X and ξ ∈ ∂X; let Ω ⊆ σξ \ σx be a minimal UBS.
(1) The subset Ω0 is a reduced UBS equivalent to Ω.
(2) There exists a strongly reduced UBS contained in Ω; all its sub-UBS’s
are strongly reduced.
(3) If Ω is strongly reduced, it is reduced.
Proof. If h ⊆ k lie in σξ \ σx and k is transverse to a diverging chain of
halfspaces, then h is transverse to an infinite subchain. This implies that Ω0
is inseparable; moreover, Ω \ Ω0 cannot contain a diverging chain or Ω would
contain two inequivalent UBS’s. This proves part 1.
To obtain part 2, we decompose Ω = C1 t ... t Ck as in Lemma 2.2;
let A be the union of the sets Ci that do not contain a diverging chain.
Figure 1
Figure 2
There exists D < +∞ such that d(x, h) ≤ D for every h ∈ A. The set
{h ∈ Ω | d(x, h) > D} is a strongly reduced UBS and so are its sub-UBS’s.
Regarding part 3, decompose Ω = C1 t ... t Ck , where each Ci is totally ordered by inclusion and contains a diverging chain. If there existed k ∈ Ω \ Ω0 ,
we would have k ∈ Ci for some i; in particular, k would not be transverse to
a diverging chain in Ci . Since Ω is minimal, k would not be transverse to any
diverging chain in Ω, a contradiction.
Given ξ ∈ ∂X, we denote by Isomξ X the group of isometries that fix ξ.
Let Kξ be the kernel of the action of Isomξ X on U(ξ); it is a finite-index
subgroup of Isomξ X.
If Ω ⊆ σξ \ σx is a UBS, we denote by KΩ the subgroup of Isomξ X
that fixes [Ω]. We can define
a transfer character χΩ : KΩ → R by the formula χΩ (g) = νb(g −1 Ω \ Ω) − νb(Ω \ g −1 Ω). This is a homomorphism and
only depends on the equivalence class [Ω]. If Ω1 , ..., Ωk are a set of representatives for G(ξ)(0) , we can also consider the full transfer homomorphism
χξ = (χΩ1 , ..., χΩk ) : Kξ → Rk .
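As an elementary worked example (ours, for illustration only), take X = R, x = 0 and let ξ ∈ ∂X be the end +∞, so that Ω := σξ \ σx consists, up to a null set, of the halfspaces (t, +∞) with t ≥ 0 and is a minimal UBS. An isometry g fixing ξ is a translation g(s) = s + a. If a ≥ 0, then g −1 Ω \ Ω = {(t, +∞) | −a ≤ t < 0} and Ω \ g −1 Ω = ∅, so χΩ (g) = νb(H (g −1 x|x)) ≥ 0, with equality only for a = 0; for a ≤ 0 the two sets exchange roles and χΩ (g) ≤ 0. The transfer character thus records how far, measured by νb, the isometry translates towards ξ.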
Proposition 2.17. Let X be a complete, finite rank median space; consider
ξ ∈ ∂X. The subgroup Kξ is open in Isomξ X and the full transfer homomorphism χξ : Kξ → Rk is continuous. Every finitely generated subgroup
of ker χξ has a finite orbit in X; if X is connected, every finitely generated
subgroup of ker χξ fixes a point.
Before proving the proposition, we will need to obtain the following lemma.
Note that, for every point ξ ∈ ∂X and every halfspace h ∈ σξ , the set
H (h∗ |ξ) = {k ∈ σξ | k ⊆ h} is a UBS.
Lemma 2.18. For every thick halfspace h ∈ σξ and every ε > 0, there exists a neighbourhood U of the identity in Isomξ X such that H (h∗ |ξ) ∼ H (gh∗ |ξ) and νb (H (h∗ |ξ)4H (gh∗ |ξ)) < ε for all g ∈ U .
Proof. Pick a point x ∈ X with d(x, h) > 0; in a neighbourhood of the
identity of Isomξ X, we have x ∈ gh∗ . If k ∈ H (h∗ |ξ) \ H (gh∗ |ξ) and y is
the gate-projection of x to k, we have y ∈ gh∗ by part 4 of Proposition 2.3,
since k ∩ gh∗ 6= ∅ and x ∈ gh∗ ; thus d(y, g −1 y) ≥ d(k, h∗ ). We conclude that,
for every k ∈ H (h∗ |ξ) with d(k, h∗ ) > 0, there exists a neighbourhood Vk of
the identity in Isomξ X such that k ∈ H (gh∗ |ξ) for all g ∈ Vk .
Decompose H (h∗ |ξ) = C1 t ... t Ck as in Lemma 2.2. Let ki be the union of
all k ∈ Ci with d(h∗ , k) ≥ ε/2k; if nonempty, it is a halfspace and d(h∗ , ki ) ≥ ε/2k.
Elements of H (h∗ |ξ) not contained in any ki form a subset of measure at most Σi νb(H (h∗ |ki )) ≤ ε/2. Let V be the intersection of the Vki for ki 6= ∅; if g ∈ V , the set H (h∗ |ξ) \ H (gh∗ |ξ) has measure at most ε/2 and consists
of halfspaces at uniformly bounded distance from x ∈ X. It now suffices to
consider U := V ∩ V −1 .
Proof of Proposition 2.17. We only need to prove that Kξ is open and that χξ
is continuous; the rest of the statement is contained in Theorem F of [Fio17b].
For every v ∈ G(ξ)(0) , there exists hv such that the vertices w ∈ G(ξ)(0) with
w ≼ [H (h∗v |ξ)] are precisely v and those that are at the other end of an
incoming edge at v. Indeed, given a diverging chain in a UBS representing
v, almost every halfspace in the chain can be chosen as hv .
Let Ai be the set of vertices v ∈ G(ξ)(0) such that there exists no directed
path of length ≥ i in G(ξ) that ends at v. Note that, by Proposition 2.15,
we have ∅ = A0 ( A1 ( ... ( Ak = G(ξ)(0) for some k ≤ r. We will show
that the subgroup Ki ≤ Isomξ X that fixes Ai ⊆ G(ξ)(0) pointwise is open
in Isomξ X and that, for every [Ω] ∈ Ai , the homomorphism χΩ : Ki → R is
continuous. We proceed by induction on i, setting K0 := Isomξ X.
The base step is trivial. If i ≥ 1, let Ai \ Ai−1 = {v1 , ..., vs } and consider
the halfspaces hv1 , ..., hvs . Setting Ξj := H (h∗vj |ξ), Lemma 2.18 provides a
neighbourhood U of id in Isomξ X such that gΞj ∼ Ξj for all j and all g ∈ U .
A minimal UBS almost contained in Ξj projects to an element of Ai−1 or to
vj ; hence, we have gvj = vj for every g ∈ U ∩ Ki−1 . Since, by the inductive
hypothesis, Ki−1 is open, so is Ki . Continuity of the transfer characters is
obtained with a similar argument.
2.2. Bridges. Let a median algebra (M, m) and two gate-convex subsets C1 ,
C2 be fixed throughout this section. All the following results have analogues
in Section 2.G of [CFI16].
Denote by πi : M → Ci the gate-projection to Ci . We will refer to the sets
S1 := {x1 ∈ C1 | ∃x2 ∈ C2 s.t. (x1 , x2 ) are gates for (C1 , C2 )} ,
S2 := {x2 ∈ C2 | ∃x1 ∈ C1 s.t. (x1 , x2 ) are gates for (C1 , C2 )} ,
as the shores of C1 and C2 , respectively. By part 3 of Proposition 2.3, these
coincide with π1 (C2 ) and π2 (C1 ), hence they are gate-convex. The map
π2 |S1 : S1 → S2 is a bijection with inverse π1 |S2 ; if M arises from a median
space X, this is an isometry as gate-projections are 1-Lipschitz. The bridge
is the set
(∗)  B := ⊔x1 ∈S1 I (x1 , π2 (x1 )) = ⊔x2 ∈S2 I (π1 (x2 ), x2 ).
The union is disjoint because, if (x1 , x2 ) is a pair of gates for (C1 , C2 ), we have
πi (I(x1 , x2 )) = {xi } for i = 1, 2; this follows from part 4 of Proposition 2.3
and the observation that I(x1 , x2 ) ∩ Ci = {xi }.
Proposition 2.19. The bridge B is gate-convex and
W (B) = (W (C1 ) ∩ W (C2 )) t W (C1 |C2 ).
For any pair of gates (x1 , x2 ), the bridge is canonically isomorphic to the
product S1 × I(x1 , x2 ); this is an isometry if M arises from a median space.
Proof. Pick a pair of gates (x1 , x2 ), set I := I(x1 , x2 ) and consider the morphism of median algebras φ := πS1 × πI : M → S1 × I. If (x01 , x02 ) is another
pair of gates, the projection πI provides an isomorphism I(x01 , x02 ) → I mapping each x0i to xi . This observation and the decomposition (∗) above imply
that the restriction φ|B is bijective.
The map πB = (φ|B )−1 ◦φ : M → B is surjective and it is a gate-projection
by Proposition 2.1 in [Fio17a]. By part 1 of Proposition 2.3 and the discussion above, every wall of B arises either from a wall of M cutting S1 or
from a wall of M cutting I; the latter correspond to W (C1 |C2 ) by part 2 of
Proposition 2.3, so we are left to show that W (S1 ) = W (C1 ) ∩ W (C2 ). This
follows from the fact that S1 = π1 (C2 ).
When M arises from a median space X, the measure νb induces a measure µb on the set W [Fio17a]. In this case, the fact that φ|B is an isometry follows from the decomposition of W (B) above and the observation that µb(W (x|y)) = d(x, y) for all x, y ∈ X.
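To make shores and bridges concrete, consider the following example of ours in the ℓ1 plane, with gate-projections onto boxes computed by coordinatewise clamping as in the earlier sketches. For C1 = [0, 1] × [0, 1] and C2 = [2, 3] × [0.5, 1.5] one finds S1 = {1} × [0.5, 1], S2 = {2} × [0.5, 1] and B = [1, 2] × [0.5, 1] ≅ S1 × I(x1 , x2 ) for any pair of gates (x1 , x2 ). The Python sketch below checks the first of these claims numerically.

    import random

    def clamp_box(x, lo, hi):
        # gate-projection onto the box prod_i [lo_i, hi_i] in (R^2, l^1)
        return tuple(min(max(c, l), h) for c, l, h in zip(x, lo, hi))

    C1 = ((0.0, 0.0), (1.0, 1.0))     # [0,1] x [0,1]
    C2 = ((2.0, 0.5), (3.0, 1.5))     # [2,3] x [0.5,1.5]

    random.seed(2)
    for _ in range(1000):
        p = (random.uniform(2.0, 3.0), random.uniform(0.5, 1.5))   # a point of C2
        q = clamp_box(p, *C1)                                      # its gate in C1
        # the shore S1 = pi_1(C2) is the segment {1} x [0.5, 1] ...
        assert abs(q[0] - 1.0) < 1e-9 and 0.5 - 1e-9 <= q[1] <= 1.0 + 1e-9
        # ... and (q, pi_2(q)) is a pair of gates: projecting back and forth returns q
        r = clamp_box(q, *C2)
        assert clamp_box(r, *C1) == q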
We can extend the notion of strong separation to arbitrary gate-convex
subsets of median algebras. We say that C1 and C2 are strongly separated
if they are disjoint and W (C1 ) ∩ W (C2 ) = ∅. Note that the condition
W (C1 ) ∩ W (C2 ) = ∅ alone already implies that C1 ∩ C2 consists of at most
one point. In a median space, two halfspaces are strongly separated in the
sense of Section 2.1 if and only if their closures are strongly separated according to the definition above; see Lemma 2.21 below for a stronger result.
Proposition 2.19 implies that two disjoint, gate-convex sets are strongly
separated if and only if their shores are singletons; this yields the following
result.
Corollary 2.20. Let C1 , C2 ⊆ M be strongly separated. There exists a
unique pair of gates (x1 , x2 ) and π1 (C2 ) = {x1 }, π2 (C1 ) = {x2 }.
We will also need the following:
Lemma 2.21. Let h, k ∈ H be strongly separated as halfspaces. The closures H, K of e h, ek in X are strongly separated as subsets of X.
Proof. Since h, k have disjoint closures in X, the sets H and K are disjoint
(see e.g. the proof of Theorem 5.1 in [Fio17b]). By Proposition 2.19, it then
suffices to prove that the shore S ⊆ H is a singleton. Suppose for the sake
of contradiction that S contains distinct points ξ, η and let ξ 0 , η 0 ∈ K be
their projections to K; in particular, ση \ σξ = ση0 \ σξ0 . Given j ∈ ση \ σξ ,
the argument at the beginning of the proof shows that the closures of j and
h inside X intersect nontrivially; by Lemma 3.6 in [Fio17a], almost every
j ∈ ση \ σξ intersects h. Considering complements, we conclude that almost
every j ∈ ση \σξ is transverse to h and, similarly, to k. Since d(ξ, η) > 0, there
exists such a j, contradicting the fact that h, k are strongly separated.
2.3. The Haagerup class. Let X be a median space, G a topological group
and p ∈ [1, +∞). Given a Banach space E, we denote by U (E) the group
of linear isometries of E.
An isometric action G y X corresponds to a measure preserving action
G y (H , νb). We obtain a continuous representation ρp : G → U (Lp (H , νb));
when p = 2, we simply write ρ := ρ2 . We will use interchangeably the notations Hc1 (G, ρp ) and Hc1 (G, Lp (H , νb)) to denote continuous cohomology.
Given x ∈ X, we can consider the continuous 1-cocycle bx : G → Lp (H , νb)
defined by bx (g) := g · 1σx − 1σx ; it satisfies kbx (g)kp = (2 · d(x, gx))1/p . We
will refer to bx as a Haagerup cocycle. The cohomology class [bx ] ∈ Hc1 (G, ρp )
does not depend on the point x and we will simply denote it by [b].
The action G y X has bounded orbits if and only if the affine action
G y Lp (H , νb) induced by bx fixes a point; this follows for instance from
the Ryll-Nardzewski Theorem for p > 1 and from Theorem A in [BGM12]
in the case p = 1. Thus, we have [b] = 0 if and only if the action G y X
has bounded orbits.
We can also consider the projection of [b] to reduced continuous cohomology, which carries more interesting geometrical information (see Theorem A in the introduction). We will refer to this reduced class, still denoted [b] ∈ Hc1 (G, ρ), as the Haagerup
class of G y X. The choice of p = 2 here is not particularly relevant and
the same discussion could be equally carried out for any other p ∈ [1, +∞)
with few complications; see Remark 3.6.
We conclude this section by collecting a few straightforward lemmata for
later use. Let H be a Hilbert space and G → U (H) a continuous unitary
representation.
Lemma 2.22. If H ≤ G is open and finite-index, functoriality induces an
injective map Hc1 (G, H) ,→ Hc1 (H, H).
Lemma 2.23. Given a G-invariant decomposition H = H1 ⊥ H2 , the projections onto the two factors induce Hc1 (G, H) ' Hc1 (G, H1 ) ⊕ Hc1 (G, H2 ).
Recall that we denote by X 0 the barycentric subdivision of X and by
i : X ,→ X 0 the standard inclusion; we write (H 0 , ν 0 ) instead of (H (X 0 ), νbX 0 ).
Every isometric action G y X also induces a continuous representation
ρ0 : G → U (L2 (H 0 , ν 0 )).
Lemma 2.24. Let X be complete and finite rank and let G y X be an isometric action. The projection p : H 0 → H induces an isometric embedding
p∗ : L2 (H , νb) ,→ L2 (H 0 , ν 0 ) and a monomorphism
p∗ : Hc1 (G, ρ) ,→ Hc1 (G, ρ0 )
taking the Haagerup class of G y X to the Haagerup class of G y X 0 .
Proof. The fact that p∗ is an isometric embedding follows from the observation that p∗ ν 0 = νb. Injectivity of p∗ follows from Lemma 2.23 applied
to L2 (H , νb) and its orthogonal complement. Finally, if x ∈ X and bx is
the corresponding Haagerup cocycle for G y X, the cocycle p∗ ◦ bx is the
Haagerup cocycle for G y X 0 relative to the point i(x) ∈ X 0 .
3. Haagerup class and elementarity of actions.
3.1. The main statement. Let X be a complete, finite rank median space
and let G y X be an isometric action of a topological group G. The goal of
this section is to prove Theorem A.
By Lemmata 2.12 and 2.24, it suffices to consider the case when G y X
is without wall inversions; this will be a standing assumption throughout the
rest of the section.
Lemma 3.1. Suppose that X is irreducible and that G y X is Roller minimal and Roller nonelementary. There exists a non-abelian free subgroup
H ≤ G such that ρ has no H-almost-invariant vectors and H y X has unbounded orbits.
Proof. Part 2 of Proposition 6.4 in [Fio17b] provides a non-abelian free subgroup H ≤ G and a measurable, ∗-invariant partition H = ⊔h∈H Hh with
gHh = Hgh for all g, h ∈ H. It is immediate from the construction of H that
it acts on X with unbounded orbits. If there existed a sequence of almost
invariant vectors (Fn )n≥0 in L2 (H , νb), say with kFn k2 = 1, we could define
functions fn ∈ `2 (H) by fn (h) := kFn 1Hh k2 . It is immediate to check that
kfn k2 = 1 for every n ≥ 0 and, for every g ∈ H,
kgfn − fn k22 = Σh∈H (kFn 1Hg−1 h k2 − kFn 1Hh k2 )2 = Σh∈H (k(gFn ) 1Hh k2 − kFn 1Hh k2 )2 ≤ Σh∈H k(gFn ) 1Hh − Fn 1Hh k22 = kgFn − Fn k22 → 0 as n → +∞.
Thus, the regular representation of H would contain almost-invariant vectors, implying amenability of H (see e.g. Theorem G.3.2 in [BdlHV08]). This
is a contradiction.
We can already prove the “only if” half of Theorem A.
Proposition 3.2. If G y X is Roller nonelementary, we have [b] 6= 0.
Proof. Note that, by functoriality of reduced cohomology, it suffices to consider the case when G has the discrete topology; thus, we do not need to
worry about continuity issues. We proceed by induction on r = rank(X).
When r = 0, all actions are Roller elementary; assume for the rest of the
proof that r ≥ 1.
We can also assume that G y X is Roller minimal. Indeed, if C ⊆ Z ⊆ X
are the subsets provided by Proposition 2.8, the action G y C is again without wall inversions and rank(C) ≤ r, by Proposition 2.7. The G-equivariant,
measurable partition H = HC t (σC ∪ σC∗ ) induces an orthogonal decomposition of L2 (H , νb) and a G-equivariant splitting Hc1 (G, L2 (H , νb)) = Hc1 (G, L2 (H (C), νbC )) ⊕ Hc1 (G, L2 (σC ∪ σC∗ , νb)).
If p1 and p2 are the orthogonal projections of L2 (H , νb) onto the two direct
summands, we can write [bx ] = [p1 bx ] + [p2 bx ] for every x ∈ X. Note that
the gate-projection πC : X → C maps x to a point ξ ∈ C. The cocycle p1 bx
is precisely the Haagerup cocycle bξ for the action G y C, by part 2 of
Proposition 2.7. In particular, if [bξ ] 6= 0 we can conclude that [b] 6= 0.
We thus assume in the rest of the proof that X = C. If X is irreducible,
Lemma 3.1 provides i : H ,→ G such that H y X has unbounded orbits
and ρ has no H-almost-invariant vectors. The first condition implies that
i∗ [b] 6= 0 in Hc1 (H, ρ); the second condition and Theorem 1 in [Gui72] then show that i∗ [b] remains nonzero in reduced cohomology.
In particular, [b] 6= 0.
If instead X splits as a nontrivial product X1 × X2 and j : G0 ,→ G is
the finite-index subgroup preserving this decomposition, it suffices to show
that j ∗ [b] 6= 0. Writing νbi instead of νbXi , Proposition 2.10 and Lemma 2.23
imply:
Hc1 (G0 , L2 (H , νb)) = Hc1 (G0 , L2 (H (X1 ), νb1 )) ⊕ Hc1 (G0 , L2 (H (X2 ), νb2 ));
hence, if x = (x1 , x2 ), we have [bx ] = [bx1 ] + [bx2 ], where bxi denotes the Haagerup cocycle of G0 y Xi at the point xi . The action G0 y X is
Roller nonelementary, since G0 is finite-index in G; thus, up to exchanging
the two factors, G0 y X1 is Roller nonelementary. Since rank(X1 ) < r,
the inductive hypothesis guarantees that [bx1 ] 6= 0 and this concludes the
proof.
Before proving the rest of Theorem A we need to obtain a few more results.
Lemma 3.3. If the G-orbit of ξ ∈ X is finite, the G-stabiliser of ξ is open.
Proof. Suppose that Gξ = {ξ} t {ξ1 , ..., ξk } and d(ξ, ξi ) ≥ ε > 0 for all i,
where d is the extended metric on X. By Proposition 4.24 in [Fio17a], we
can find xi , yi ∈ X such that, denoting by πi the projection to Ii := I(xi , yi ),
we have d(πi (ξ), πi (ξi )) > 3ε/4. In a neighbourhood U ⊆ G of the identity element we have d(gxi , xi ) < ε/4, d(gyi , yi ) < ε/4 and d(gπi (ξ), πi (ξ)) < ε/4,
for all i.
If g ∈ U , we have d(πgIi (ξi ), πi (ξi )) ≤ d(gxi , xi ) + d(gyi , yi ) < ε/2; if in
addition we had gξ = ξi , we would have πgIi (ξi ) = gπi (ξ). As a consequence,
d(πi (ξ), πi (ξi )) ≤ d(πi (ξ), gπi (ξ)) + d(gπi (ξ), πi (ξi )) < 3ε/4, which would contradict our choice of Ii . We conclude that U is contained in the stabiliser of
ξ, which must be open in G.
The proof of the following fact is rather lengthy and technical; it will be
carried out in Appendix A.
Proposition 3.4. Let ξ ∈ ∂X and K ⊆ Isomξ X be a compact set of isometries acting trivially on U(ξ). There exists a point xK ∈ X such that σξ \σxK
coincides, up to a null set, with ΩK := Ω1K t ... t ΩkK , where
• each ΩiK is a strongly reduced, minimal UBS;
• if g ∈ K we have gΩiK ⊆ ΩiK whenever χΩiK (g) ≥ 0 and gΩiK ⊇ ΩiK whenever χΩiK (g) ≤ 0;
• if i 6= j and g ∈ K, we have ΩiK ∩ gΩjK = ∅.
Given points ξ ∈ ∂X, x ∈ X and a UBS Ω ⊆ σξ \ σx , we can define a
function αΩ : Ω → R as αΩ (h) := νb (H (x|h) ∩ Ω). The dependence on the
point x is not particularly relevant, so we do not record it in our notation.
We can consider the sets Ωc := {h ∈ Ω | αΩ (h) ≤ c}. In Appendix A we will
obtain the following result (see Lemma A.9).
Proposition 3.5. Suppose that Ω is minimal and strongly reduced. Let
K ⊆ Isomξ X be a compact set of isometries such that gΩ ⊆ Ω for all g ∈ K.
As c → +∞, the functions
(g − id) · [−(1 − αΩ /c) · 1Ωc ]
converge to 1Ω\gΩ in L2 (H , νb), uniformly in g ∈ K. If instead gΩ ⊇ Ω for
all g ∈ K, they converge to the function −1gΩ\Ω .
We are finally ready to complete the proof of Theorem A.
Proof of Theorem A. By Proposition 3.2, it suffices to consider the case when
G has a finite orbit in X and, by Lemmata 2.22 and 3.3, we can actually
assume that G fixes a point ξ ∈ X. If ξ ∈ X, we have [b] = 0; suppose
instead that ξ ∈ ∂X. By Proposition 2.17, an open, finite-index subgroup
G0 ≤ G acts trivially on U(ξ); by Lemma 2.22, it suffices to consider the
case G = G0 .
Fix x ∈ X. For every ε > 0 and every compact subset K ⊆ G, we need to construct a function ψ ∈ L2 (H , νb) such that kbx (g) − (gψ − ψ)k2 < ε for all g ∈ K. Considering the point xK ∈ X provided by Proposition 3.4, it suffices to find a function φ ∈ L2 (H , νb) such that kbxK (g) − (gφ − φ)k2 < ε for all g ∈ K and then set ψ := φ + 1σx − 1σxK . If g ∈ K, considering all
equalities up to null sets, we have
σgxK \ σxK = [(σgxK \ σxK ) ∩ σξ ] t (σgxK \ σxK ) ∩ σξ∗ =
= [(σξ \ σxK ) \ (σξ \ σgxK )] t [(σξ \ σgxK ) \ (σξ \ σxK )]∗ =
= (ΩK \ gΩK ) t (gΩK \ ΩK )∗ .
In particular, since by construction ΩiK ∩ gΩjK = ∅ whenever i 6= j,
σgxK \ σxK = ⊔i=1,...,k [(ΩiK \ gΩiK ) t (gΩiK \ ΩiK )∗ ].
Introducing the notation 2A := 1A − 1A∗ for subsets A ⊆ H , we can rewrite
bxK (g) = 2σgxK \σxK = Σ{i : χΩiK (g)>0} 2ΩiK \gΩiK − Σ{i : χΩiK (g)<0} 2gΩiK \ΩiK .
For c > 0, consider the function
Gc := Σi=1,...,k −(1 − αΩiK /c) · 2(ΩiK )c .
Proposition 3.5 shows that it suffices to take φ = Gc for large c.
Remark 3.6. Theorem A also holds for the analogous class in Hc1 (G, ρp ),
for every p ∈ [1, +∞). Indeed, Lemma 2.23 applies to any decomposition
of a Banach space into closed subspaces. In the proof of Lemma 2.24, a
closed complement to Lp (H , νb) within Lp (H 0 , ν 0 ) is always provided by
the subspace of functions on H 0 that take opposite values on hemiatoms.
Theorem 1 in [Gui72] holds for representations in general Banach spaces.
Finally, if F is a non-abelian free group, `p (F ) has no F -almost-invariant
vectors for every p ∈ [1, +∞).
The value of p also has little importance for most of the material in Appendix A. Note however that Proposition 3.5 fails for p = 1; one needs to
consider functions with a quicker decay in that case.
3.2. Elementarity and Shalom’s property HF D . Let X be a complete,
finite rank median space and let G be a topological group. Our main result
on property HF D is the following.
Proposition 3.7. If G has property HF D , every isometric action G y X
is Roller elementary.
We will need the following lemma.
Lemma 3.8. Suppose that X is irreducible and let G y X be a Roller
nonelementary, Roller minimal action. Let E ⊆ H be a measurable subset
such that νb(gE4E) = 0 for all g ∈ G. Then νb(E) is either 0 or +∞.
Proof. Without loss of generality we can assume that G y X is without wall
inversions, as Lemma 2.14 allows us to pass to the barycentric subdivision X 0
if necessary. Now, suppose that E is such a set and 0 < νb(E) < +∞. Since
X has finite rank, we can find a thick halfspace h such that, replacing E with
E ∗ if necessary, the set Eh := E ∩ {k ∈ H | k ⊆ h} satisfies a := νb(Eh ) > 0.
By part 2 of Proposition 2.11, the halfspace h is part of a facing n-tuple
with n > (1/a) · νb(E). By Proposition 2.9, there exist g2 , ..., gn ∈ G such that
h, g2 h, ..., gn h are a facing n-tuple. The sets Eh , g2 Eh , ..., gn Eh are pairwise
disjoint and contained in E, up to null sets. However, their union has measure na > νb(E), a contradiction.
Proof of Proposition 3.7. Suppose for the sake of contradiction that G y X
is Roller nonelementary. Without loss of generality, we can assume that X
has minimal rank r among complete median spaces admitting Roller nonelementary actions of G. In particular, X must be irreducible, see the proof
of Proposition 3.2. By Proposition 2.8, we can also assume that G y X
is Roller minimal. Theorem A guarantees that Hc1 (G, ρ) 6= {0} and, since
G has property HF D , there exists a finite dimensional subrepresentation
V < L2 (H , νb). We will construct a measurable G-invariant subset of H
with positive, finite measure, thus violating Lemma 3.8.
Let f1 , ..., fk be measurable functions on H whose equivalence classes in
L2 (H , νb) form an orthonormal basis of V . Define, for c > 0,
Ec := {h ∈ H | ∃α = (α1 , ..., αk ) ∈ Sk−1 s.t. |(α1 f1 + ... + αk fk )(h)| > c}.
Since in the definition of Ec it suffices to look at α’s lying in a countable
dense subset of Sk−1 , we conclude that Ec is measurable. If h ∈ Ec , we must
have |fi (h)| > c/k for some i, hence νb(Ec ) < +∞; if c is sufficiently small, we have νb(Ec ) > 0. Given g ∈ G, there exist real numbers αij , 1 ≤ i, j ≤ k, such that, outside a measure zero set, we have fi = Σj αij (gfj ) for every i. If h ∈ Ec \ gEc , we must have fi (h) 6= Σj αij (gfj )(h) for some i; we conclude
that νb(gEc 4Ec ) = 0 for all g ∈ G.
Corollary 3.9. Let Γ be a discrete group with property HF D . If Γ acts freely
and cocompactly on a CAT(0) cube complex X, then Γ is virtually abelian.
Proof. Cocompactness of the action implies that X is finite dimensional. By
Propositions 2.17 and 3.7, there exists a finite-index subgroup Γ0 ≤ Γ and a
normal subgroup N ◁ Γ0 consisting of elliptic elements, such that Γ0 /N is
abelian. Since Γ acts freely, N is trivial.
Recall that, in Gromov’s density model, random groups at density d < 12
are nonelementary hyperbolic with overwhelming probability [Gro93, Oll04].
Together with Theorem 10.4 in [OW11], Corollary 3.9 then immediately
implies the following result.
Corollary 3.10. With overwhelming probability, random groups at density
d < 61 in Gromov’s density model do not have property HF D .
4. Superrigidity.
4.1. The superrigidity result. Let X be a complete median space of finite
rank r and Γ y X an action by isometries of a discrete group Γ.
Lemma 4.1. Suppose h1 , h2 , h3 ∈ H form a facing triple; let ki ∈ H and
h∗i be strongly separated for i = 1, 2, 3. There exists a point z ∈ X such that
m(ξ1 , ξ2 , ξ3 ) = z whenever ξi ∈ eki .
Proof. Let C be the intersection of the closures of e h∗i inside X; it is nonempty, closed and convex. Given points ξi ∈ eki , set m := m(ξ1 , ξ2 , ξ3 ). By convexity we have I(ξ2 , ξ3 ) ⊆ e h∗1 , hence m ∈ e h∗1 ; permuting the indices, we obtain
m ∈ C. In particular, denoting by πC the gate projection X → C, we have
m = πC m(ξ1 , ξ2 , ξ3 ) = m(πC ξ1 , πC ξ2 , πC ξ3 ).
The closures of e h∗i and eki in X are strongly separated by Lemma 2.21;
let {xi } be the shore of h∗i and set z := m(x1 , x2 , x3 ). By Corollary 2.20,
we have πC (ξi ) = xi , hence m = z no matter which points ξi ∈ eki we have
chosen.
Lemma 4.2. Suppose that X is irreducible and assume that Γ y X is Roller
nonelementary and Roller minimal. Given 0 6= f ∈ L2 (H , νb), consider
S (f ) := {(gn ) ∈ ΓN | gn g · f → g · f as n → +∞, ∀g ∈ Γ}.
There exists z ∈ X such that, for every (gn ) ∈ S (f ), there exists N ≥ 0
such that gn · z = z for all n ≥ N .
Proof. By part 1 of Proposition 2.10, the barycentric subdivision X 0 of X is
an irreducible median space of the same rank. The action Γ y X 0 is without
wall inversions, Roller nonelementary and Roller minimal by Lemma 2.14.
As usual, we write (H 0 , ν 0 ) for (H (X 0 ), νbX 0 ).
The function f ∈ L2 (H , νb) induces f 0 ∈ L2 (H 0 , ν 0 ) with S (f ) ⊆ S (f 0 ).
We approximate f 0 by a linear combination F of characteristic functions of
halfspace intervals H 0 (x1 |y1 ), ..., H 0 (xk |yk ) such that kF − f 0 k < (1/3)kf 0 k.
Proposition 2.9 implies that ν 0 (H 0 ) = +∞; all halfspace intervals have
finite measure, so there exists h0 ∈ H 0 such that xi , yi ∈ h0 for all i. Propositions 2.9 and 2.11 provide a thick halfspace h ∈ H 0 such that h∗ and
h0 are strongly separated; in particular, h contains every wall in the set
W 0 (x1 |y1 ) ∪ ... ∪ W 0 (xk |yk ).
Propositions 2.9 and 2.11 also provide γ1 ∈ Γ such that h and γ1 h are
strongly separated and γ2 ∈ Γ such that γ1 h∗ and γ2 h are strongly separated. We can assume without loss of generality that d(γ1 h∗ , γ2 h) ≥ 1, as
by Proposition 2.1 the quantity d(γ1 h∗ , γ2n h) diverges as n goes to infinity.
Thus, we can choose elements γm ∈ Γ such that h∗ ) γ1 h ) γ2 h ) ... and,
for all i ≥ 1, the halfspaces γi h∗ and γi+1 h are strongly separated and at
distance at least 1.
We denote by S ⊆ H 0 the support of the function F and by P the set
of all points xi , yi . Let D be the maximum distance from h∗ of a point
of P and d := ⌈D + 2⌉. Let us fix an integer m > d and g ∈ Γ such that kgγm f 0 − γm f 0 k < (1/3)kf 0 k and kgf 0 − f 0 k < (1/3)kf 0 k. We prove that
gγm h ⊆ γm−d h.
A straightforward repeated application of the triangle inequality yields
kgF − F k < 2kF k and kgγm F − γm F k < 2kF k; thus, νb(S ∩ gS) > 0
and νb(γm S ∩ gγm S) > 0. Let w be a wall corresponding to a halfspace in
γm S ∩ gγm S. Since w is contained in both γm h and gγm h and Γ y X 0
is without wall inversions, we conclude that γm h ∩ gγm h 6= ∅. A similar
argument shows that h ∩ gh 6= ∅.
Let u be the wall corresponding to gγm h. If u is contained in γm−1 h,
we either have gγm h ⊆ γm−1 h or γm−1 h∗ ∩ gγm h∗ = ∅. The former case
immediately yields gγm h ⊆ γm−d h, while the latter leads to a contradiction
as h ⊆ γm−1 h∗ and gh ⊆ gγm h∗ intersect.
If instead u is not contained in γm−1 h, it is contained in γm h∗ , by strong
separation. Let 1 ≤ l ≤ m be minimum such that γl h∗ contains u. We have
γl h ⊆ gγm h, since γl h ∩ gγm h ⊇ γm h ∩ gγm h 6= ∅.
Let k be the side of w that is contained in γm h. Since either k or k∗ lies in
gγm S, there exists q ∈ P such that gγm q ∈ k ⊆ γm h. Hence,
m − l ≤ d(γm h, γl h∗ ) ≤ d(gγm q, gγm h∗ ) = d(q, h∗ ) ≤ D
and m − l + 2 ≤ D + 2 ≤ d. By strong separation and minimality of l, the
wall u is contained in γl−2 h ⊆ γm−d h. Hence gγm h ⊆ γm−d h, since otherwise
gγm h∗ ∩ γm−d h∗ = ∅ would again violate the fact that h ∩ gh 6= ∅.
Now consider the intersection C of the closures in X 0 of the halfspaces
γm e h. It consists of a single point ξ since any j ∈ H 0 with ej ∩ C 6= ∅ and
ej∗ ∩ C 6= ∅ would have to be transverse to almost all γm h, violating strong
separation. Strong separation also implies that ξ actually lies in X ⊆ X 0 .
Given (gn ) ∈ S (f 0 ) we can assume, removing a finite number of elements
if necessary, that kgn f 0 − f 0 k < (1/3)kf 0 k for all n. Let N (m) be a natural number such that kgn γm f 0 − γm f 0 k < (1/3)kf 0 k for all n ≥ N (m). When n ≥ N (m), we have shown that gn γm h ⊆ γm−d h; thus we have gn ξ ∈ gn γm e h ⊆ γm−d e h.
In this case, strong separation implies that σgn ξ 4σξ consists only of halfspaces whose corresponding walls are contained in γm−d−1 h. This shows that
lim sup σgn ξ 4σξ = ∅; we conclude that gn ξ → ξ in the topology of X, for
every (gn ) ∈ S (f 0 ).
We finally construct the point z ∈ X. Let j, m be thick halfspaces of X 0
so that m∗ and j are strongly separated and ξ ∈ ej. Part 2 of Proposition 2.11
provides a facing triple consisting of m, m1 , m2 . We choose thick halfspaces
j1 , j2 ∈ H 0 such that m∗i and ji are strongly separated for i = 1, 2; by
Proposition 2.9 we can find hi ∈ Γ such that hi j ⊆ ji . Let z ∈ X 0 be the
point provided by Lemma 4.1 applied to j, j1 , j2 and m, m1 , m2 ; in particular,
we have z = m(ξ, h1 ξ, h2 ξ), hence z ∈ X.
Since the set S (f 0 ) is closed under conjugation by elements of Γ, we have
gn hi ξ → hi ξ for all (gn ) ∈ S (f 0 ). Hence, given (gn ) ∈ S (f 0 ), there exists
N ≥ 0 such that for every n ≥ N we have gn ξ ∈ ej and gn hi ξ ∈ eji , for i = 1, 2.
Thus,
gn z = gn m(ξ, h1 ξ, h2 ξ) = m(gn ξ, gn h1 ξ, gn h2 ξ) = z.
In the rest of the section, we consider a locally compact group G and a
lattice Γ < G. Any Borel fundamental domain U ⊆ G defines a cocycle
α : G × U → Γ such that gu ∈ α(g, u) · U . We say that Γ is square-integrable
if Γ is finitely generated and U can be chosen so that
∫U |α(g, u)|2S du < +∞, ∀g ∈ G;
here du is the Haar measure on U and | · |S denotes the word length with
respect to a finite generating set S ⊆ Γ. Integrability does not depend
on the choice of S. Uniform lattices are always square-integrable and a
few nonuniform examples were mentioned in the introduction; see [Sha00,
Rém99, Rém05, CR09, CR10] for more details and examples.
We assume that G splits as a product G1 × ... × G` , where each Gi is
compactly generated and ` ≥ 2. We also require the lattice Γ < G to be
irreducible, i.e. to project densely into each factor Gi .
Consider a unitary representation π : Γ → U (H); we denote by H0 the
subspace of invariant vectors. Let c : Γ → H be a 1-cocycle for π. We will
make use of the following result of Y. Shalom in an essential way; see page 14
and Theorem 4.1 in [Sha00] for a proof.
Theorem 4.3. Suppose that Γ is square-integrable and that H0 = {0}. There
exist Γ-invariant closed subspaces Hi ≤ H, i = 1, ..., `, where the restriction
πi : Γ → U (Hi ) extends to a continuous representation πi : G → U (Hi ) that
factors through the projection pi : G → Gi . Furthermore, there are cocycles
ci : Γ → Hi , i = 1, ..., `, such that c and c1 + ... + c` represent the same class
in H 1 (Γ, π).
The following is a version of our Theorem B under stronger hypotheses.
Theorem 4.4. Suppose that Γ is square-integrable and X is irreducible;
let Γ y X be Roller nonelementary and Roller minimal. There exists a
Γ-invariant, closed median subalgebra Y ⊆ X where Γ y Y extends to a
continuous action G y Y . Moreover, G y Y factors through a projection
p i : G → Gi .
Proof. We have H 1 (Γ, ρ) 6= {0} by Theorem A. Lemma 3.8 implies that ρ
has no nonzero invariant vectors; thus, Theorem 4.3 provides a Γ-invariant
subspace {0} 6= Hi ⊆ L2 (H , νb) where the action of Γ extends to a continuous
action of G factoring through a projection pi : G → Gi .
Pick any 0 6= f ∈ Hi and consider the set S (f ) introduced in Lemma 4.2.
Any sequence (gn ) ∈ ΓN with pi (gn ) → id lies in S (f ). Thus, Lemma 4.2
implies that the Γ-invariant set
Y := {x ∈ X | ∀(gn ) ∈ ΓN s.t. pi (gn ) → id, we have gn x → x}
is nonempty. Note that Y is a median subalgebra of X, thus the restriction
of the metric of X gives Y a structure of median space. Since Y is a closed
subset of X, it is a complete median space. Finally, Proposition 4.3 in [Sha00]
provides a continuous extension to G of Γ y Y and it factors through pi .
The assumption that Γ y X be Roller minimal and Roller nonelementary
can be replaced with the (stronger) requirement that Γ have no finite orbit in
the visual boundary of the CAT(0) space Xb; see Proposition 5.2 in [Fio17b].
The homomorphism φ : G → Isom Y provided by Theorem 4.4 is continuous with respect to the topology of pointwise convergence. We remark
however that φ remains continuous even if we endow Isom Y with the topology mentioned in Remark 4.5 below; this will be a key point in our proof of
Theorem C.
Remark 4.5. In the proof of Theorem 4.4, Lemma 4.2 actually yields that
the smaller set
Y0 := {x ∈ X | ∀(gn ) ∈ ΓN s.t. pi (gn ) → id, ∃N ≥ 0 s.t. gn x = x, ∀n ≥ N }
is nonempty. Thus, φ : G → Isom Y is continuous with respect to the topology on Isom Y that is generated by stabilisers of points of Y0 . In the statement of Theorem 4.4, we can always take Y to be the closure of Y0 in X.
This topology on Isom Y might seem a lot finer than the topology of
pointwise convergence. To clarify this phenomenon, we mention the following
fact, without proof. Let W be an irreducible, complete, finite rank median
space admitting a Roller nonelementary, Roller minimal action; then there
exists a dense, convex subset C ⊆ W such that, for every x ∈ C, the stabiliser
of x is open for the topology of pointwise convergence on Isom W . This
essentially follows from Lemma 4.1.
Relaxing the hypotheses of Theorem 4.4, we obtain Theorem B for all
square-integrable lattices:
Corollary 4.6. Suppose that Γ is square-integrable; let Γ y X be Roller
nonelementary. There exist a finite index subgroup Γ0 ≤ Γ, a Γ0 -invariant
component Z ⊆ X and a Γ0 -invariant closed median subalgebra Y ⊆ Z where
the action Γ0 y Y extends to a continuous action G0 y Y , for an open finite
index subgroup G0 ≤ G.
Proof. We proceed by induction on rank(X); when the rank is zero there
is nothing to prove, so we assume that the statement holds for all median
spaces of rank at most r − 1. By Proposition 2.8, there exists a Γ-invariant,
closed, convex subset D of a component W ⊆ X such that Γ y D is Roller
minimal and Roller nonelementary. If W ⊆ ∂X, we have rank(D) < r and
we conclude by the inductive hypothesis; thus we can assume that W = X.
Let D = D1 × ... × Dk be the splitting of D into irreducible factors; if
k = 1, the result follows from Theorem 4.4. If k ≥ 2, let Γ1 ≤ Γ be a finite
index subgroup preserving the splitting of D; up to permuting the factors,
we can assume that Γ1 y Di is Roller nonelementary for 1 ≤ i ≤ s and
Roller elementary for i > s. A further finite index subgroup Γ2 ≤ Γ1 fixes a
point ξi ∈ Di for each i > s; we denote by Zi ⊆ Di the component containing
ξi . Note that Γ2 is a square-integrable, irreducible lattice in an open, finite
index subgroup of G.
Since rank(Di ) < r, for each i ≤ s the inductive hypothesis yields a finite
index subgroup Γ0i ≤ Γ2 , an open finite index subgroup G0i ≤ G and a
Γ0i -invariant, closed median subalgebra Yi of a component Zi ⊆ Di where
the action of Γ0i extends to a continuous action of G0i . Let Γ0 be the
intersection of all Γ0i and G0 be the intersection of all G0i , for i ≤ s. The
set Y := Y1 × ... × Ys × {ξs+1 } × ... × {ξk } ⊆ D is a closed median subalgebra
of Z1 × ... × Zk , which is a component of D; in particular, Z1 × ... × Zk is a
closed, convex subset of a component Z ⊆ X. The action Γ0 y Y trivially
extends to a continuous action G0 y Y .
Figure 3
We now describe two examples that illustrate how:
• in Theorem 4.4 the space Y cannot be taken to coincide with X, nor
with a convex subset (Example 4.7);
• in Corollary 4.6 it cannot be avoided to pass to the finite index
subgroup Γ0 , even when the action is Roller minimal (Example 4.8).
The actions that we consider are actually on CAT(0) square complexes. Since
Burger-Mozes groups play an important role in the construction of the two
examples, we briefly recall a few facts regarding their construction.
Given an integer n ≥ 3, we denote by Tn the n-regular tree and by An
the group of even permutations on n elements. We fix a legal colouring
on Tn , i.e. a way of associating an integer in {1, ..., n} to every edge of Tn
so that we see all n integers around each vertex; in particular, we have a
bijection iv : lk(v) → {1, ..., n} for every vertex v. Let U (An ) ≤ Isom Tn be
the subgroup of isometries g such that i_{gv} ◦ g ◦ i_v^{-1} ∈ An for every vertex v
of Tn ; we denote by U (An )+ the intersection of U (An ) with the subgroup of
Isom Tn generated by edge stabilisers. If n ≥ 4, the subgroup U (An )+ has
index 2 in U (An ), see Proposition 3.2.1 in [BM00a].
The subgroup U (An ) is closed in Isom Tn ; in particular, it is locally compact, second countable and compactly generated (e.g. by Theorem 4.C.5 and
Proposition 5.B.5 in [CdlH16]). By Theorem 6.3 in [BM00b], there exists a
uniform irreducible lattice Λ ≤ U (A2k ) × U (A2k ) for every integer k ≥ 19.
For the next two examples, we fix such a lattice Λ. Let p1, p2 : Λ → U (A2k ) be the projections into the two factors and set Λ0 := p1^{-1}(U (A2k )+); this is an irreducible lattice in the open, index 2 subgroup U (A2k )+ × U (A2k ) of U (A2k ) × U (A2k ). Let τ : Λ → Z/2Z be the homomorphism with kernel Λ0 .
Example 4.7. Given any tree T , we can blow up every edge to a square
as in Figure 3, thus obtaining a “tree of squares” T. Adjacent squares only
share a vertex; if T has no leaves, each square has a pair of opposite vertices
that are shared with other squares and a pair of opposite vertices that are
not shared. The space T is a complete, rank two median space in which T
embeds as a median subalgebra; edges of T correspond to diagonals joining
shared pairs of vertices of a square.
We can embed Isom T ,→ Isom T by extending each isometry of T so
that the restriction to each square is orientation preserving. Let σ ∈ Isom T
be the isometry that fixes pointwise the image of the embedding T ,→ T
and acts on each square as a reflection in a diagonal; we have σ2 = id and Isom T × ⟨σ⟩ ,→ Isom T. Viewing p2 : Λ → U (A2k ) as a homomorphism into Isom T2k we can define a homomorphism Λ → Isom T2k × ⟨σ⟩ by λ 7→ (p2 (λ), τ (λ)). We denote by ψ the composition of this map with the embedding Isom T2k × ⟨σ⟩ ,→ Isom T2k .
The action Λ y T2k induced by ψ is Roller nonelementary and Roller
minimal since the action Λ y T2k induced by p2 is. As T2k is irreducible,
Theorem 4.4 guarantees a continuous extension of Λ y Y to U (A2k ), for
some Λ-invariant median subalgebra of T2k . Indeed, one can take Y to be
the image of T2k ,→ T2k .
However, Y cannot be taken to be a convex subspace (or even a subcomplex) of T2k . Indeed, Y would be forced to be the whole T2k , as this is
the only Λ-invariant convex subset of T2k . The action Λ y T2k does not
extend to U (A2k ) × U (A2k ) by factoring via p1 ; this is because, whenever
elements gn ∈ Λ satisfy p1 (gn ) → id, the sequence (p2 (gn )) must diverge.
However, Λ y T2k also does not extend by factoring through p2 : we have
p2 (Λ0 ) = p2 (Λ) = U (A2k ), but ψ(Λ0 ) is contained in the closed subgroup
Isom T2k < Isom T2k and ψ(Λ) is not.
Example 4.8. Choose an element g ∈ Λ \ Λ0 and consider the action
Λ0 y T2k × T2k given by λ · (x, y) = (p2 (λ) · x, p2 (g −1 λg) · y). Since the
action U (A2k ) y T2k does not preserve any proper closed subtree, the same
holds for the action of p2 (Λ0 ). Part 3 of Proposition 2.10 then implies that Λ0
does not leave any proper, closed, convex subset of T2k × T2k invariant. Note
that no component of ∂(T2k × T2k ) = (∂T2k × T2k ) ∪ (T2k × ∂T2k ) is preserved
by Λ0 , as this would correspond to a fixed point for p2 (Λ0 ) y T2k , hence to
a fixed point for U (A2k ) y T2k . We conclude that Λ0 y T2k × T2k is Roller
minimal and the same argument also shows that it is Roller nonelementary.
One can easily check that Λ0 y T2k × T2k can be extended to an action
of the whole Λ by setting λg · (x, y) = (p2 (λg 2 ) · y, p2 (g −1 λg) · x) for all
λ ∈ Λ0 . This action also is Roller minimal and Roller nonelementary. We
will show, however, that there exists no Λ-equivariant isometric embedding
j : Y ,→ T2k × T2k of a median space Y such that the action on Y extends
continuously to U (A2k ) × U (A2k ) by factoring through one of the factors.
Let j : Y ,→ T2k × T2k be a Λ-equivariant embedding; note that j(Y ) is
entirely contained in a Λ-invariant component Z ⊆ T2k × T2k . In particular,
the previous discussion shows that Z = T2k × T2k .
By Lemma 6.5 in [Bow13a], each wall of j(Y ) arises from a wall of T2k ×T2k ,
i.e. a wall of one of the two factors, see part 1 of Proposition 2.10. Since the
two factors are exchanged by g ∈ Λ, we conclude that Y splits as Y1 × Y2 ,
with Λ0 y Y preserving this decomposition and g exchanging Y1 and Y2 .
Suppose for the sake of contradiction that Λ y Y extends to an action
of U (A2k ) × U (A2k ) by factoring through one of the two factors. As in
Example 4.7, we see that the extension cannot factor via p1 . However, since
p2 (Λ0 ) is dense in U (A2k ) and Λ0 preserves the splitting Y = Y1 × Y2 , part 2
of Proposition 2.10 implies that an extension factoring through p2 would
also preserve the splitting Y = Y1 × Y2 . This contradicts the fact that g
exchanges Y1 and Y2 .
We conclude the section by proving Theorem C.
Proof of Theorem C. We begin by observing that part 2 follows from part 1
and Proposition 2.17. Now, suppose for the sake of contradiction that Γ
admits a Roller nonelementary action on X. As in the proof of Proposition 3.7, we can assume that X is irreducible and that Γ y X is Roller
minimal. Theorem 4.4 then yields a factor Gi , a closed median subalgebra
Y ⊆ X and actions Gi y Y and Γ y Y . Without loss of generality, we can
assume that Y is the closure of Y0 inside X, as in Remark 4.5.
Stabilisers of points of Y0 are open in Gi , thus the identity component G0i
must fix Y0 pointwise. As Y0 is dense in Y , the entire action G0i y Y vanishes
and Gi y Y descends to an action of the group Gi /G0i . Since Gi satisfies
condition (∗), Proposition 3.7 above and Corollary 6.5 in [Fio17b] imply that
the action Gi /G0i y Y is Roller elementary. However, by Lemma 2.6, the
actions Γ y Y and Gi y Y are Roller nonelementary, a contradiction.
4.2. Homomorphisms to coarse median groups. We defined equivariantly coarse median groups in the introduction. Here we simply prove Corollary E.
Proof of Corollary E. Fix a non-principal ultrafilter ω on N and let Hω be
the corresponding ultrapower of H. We endow H with a word metric dS
arising from a finite generating set S ⊆ H. Given λ = (λn ) ∈ R^N_+, we denote
by Coneω (H, λ) the asymptotic cone obtained by taking all basepoints at
the identity and λ as sequence of scaling factors. Let d denote the metric
that dS induces on Coneω (H, λ); it is a geodesic metric and it is preserved
by the natural action Hω y Coneω (H, λ).
If λn → +∞, the coarse median on H induces a structure of finite rank
median algebra on Coneω (H, λ), see Section 9 in [Bow13a]; we denote by
m the corresponding median map. The action Hω y Coneω (H, λ) is by
automorphisms of the median algebra structure. By Propositions 3.3 and 5.1
in [Zei16], we can endow Coneω (H, λ) with a median metric dm that is biLipschitz equivalent to d and preserved by the Hω -action; furthermore, the
median algebra structure associated to dm is given by the map m.
Now suppose for the sake of contradiction that there exist pairwise nonconjugate homomorphisms φn : Γ → H, for n ≥ 0; these correspond to a
homomorphism Γ → Hω , hence to an action on every asymptotic cone of
H that preserves the median metric dm . The Bestvina-Paulin construction
[Bes88, Pau91] provides us with a sequence µn → +∞ such that, modifying each φn within its conjugacy class if necessary, the induced action
Γ y Coneω (H, µ) has no global fixed point. This, however, contradicts
Theorem C.
Appendix A. Structure of UBS’s.
Let X be a complete median space of finite rank r. We fix points x ∈ X
and ξ ∈ ∂X. Let Ω ⊆ σξ \ σx be a minimal, reduced UBS.
Lemma A.1. Let g ∈ Isomξ X be an isometry satisfying gΩ ∼ Ω. Consider the UBS Ω1 := ∩_{0≤i≤r} g^{-i} Ω ⊆ Ω.
(1) If χΩ (g) ≥ 0, we have g r! Ω1 ⊆ Ω1 and g r! h ⊆ h for all h ∈ Ω1 .
(2) If χΩ (g) = 0, we have g r! Ω1 = Ω1 and g r! h = h for all h ∈ Ω1 .
Proof. Observe that Ω1 ∼ Ω; we fix a diverging chain (kn )n≥0 in Ω1 . If
h ∈ Ω1 , the halfspaces g i h with 0 ≤ i ≤ r all lie in Ω ⊆ σξ \ σx and cannot
be pairwise transverse; thus, there exists 0 ≤ i ≤ r such that either g i h ⊆ h,
or g i h ⊇ h. Hence, for every h ∈ Ω1 we either have g r! h ⊆ h or g r! h ⊇ h.
Suppose that g r! h0 ⊊ h0 for some h0 ∈ Ω1 ; set hk := g kr! h0 .
k ≥ 0, a cofinite subchain of {g −kr! kn }n≥0 is a diverging chain in Ω1 , as
gΩ1 ∼ Ω1 . Hence, for each k ≥ 0 we have h0 ⊇ g −kr! kn if n is sufficiently
large, since Ω is reduced; in particular h0 ⊇ hk ⊇ kn . We conclude that each
hk lies in Ω1 ; Proposition 2.1 guarantees that (hk )k≥0 is a diverging chain.
Let Ξ ⊆ Ω1 be the inseparable closure of {hk }k≥0 ; it is a UBS equivalent
to Ω and it satisfies g r! Ξ ⊆ Ξ. Observe that, for each k ≥ 0,
kr! · χΩ (g) = χΩ (g kr! ) = χΞ (g kr! ) = νb(Ξ \ g kr! Ξ) ≥ d(hk , h∗0 );
since d(hk , h∗0 ) > 0 for some k ≥ 0, we conclude that χΩ (g) > 0. The same
argument applied to g −1 shows that χΩ (g) < 0 if there exists h0 ∈ Ω1 with g r! h0 ⊋ h0 .
This proves part 2 and shows that g r! h ⊆ h for all h ∈ Ω1 if χΩ (g) > 0. In
the latter case, for every h ∈ Ω1 we have h ⊇ g r! h ⊇ kn for sufficiently large
n, since Ω is reduced; thus, g r! h ∈ Ω1 and g r! Ω1 ⊆ Ω1 .
Corollary A.2. Let g ∈ Isomξ X be an isometry satisfying gΩ ∼ Ω. Define Ω1 as in the previous lemma and consider the UBS Ω2 := ∩_{1≤i≤r!} g^{i-1} Ω1 .
(1) If χΩ (g) ≥ 0, we have gΩ2 ⊆ Ω2 .
(2) If χΩ (g) = 0, we have gΩ2 = Ω2 .
In the rest of the appendix, we also consider a compact subset K ⊆ Isomξ X
such that gΩ ∼ Ω for every g ∈ K.
Lemma A.3.
(1) There exists a constant C1 = C1 (Ω, K) such that every h ∈ Ω with d(x, h) > C1 lies in gΩ for every g ∈ K.
(2) If Ξ ⊆ σξ \ σx is a minimal, reduced UBS such that Ξ 6∼ Ω and
gΞ ∼ Ξ for all g ∈ K, there exists a UBS Ω ⊆ Ω that is disjoint from
gΞ for all g ∈ K.
Proof. Let (hn )n≥0 be a diverging chain in Ω with h0 thick. For every g ∈ K,
a cofinite subchain of (g −1 hn )n≥0 is contained in Ω, as gΩ ∼ Ω; since Ω is
reduced, there exists n(g) ≥ 0 so that g −1 hn(g) ⊆ h0 . By Proposition 2.1 we
can assume that νb(H (gh∗0 |hn(g) )) > 0 and Lemma 2.18 provides a neighbourhood U (g) of g in K such that H (γh∗0 |ξ)∩H (gh∗0 |hn(g) ) 6= ∅ for all γ ∈ U (g);
in particular, hn(g) ⊆ γh0 for all γ ∈ U (g). There exist g1 , ..., gk ∈ K such
that K = U (g1 ) ∪ ... ∪ U (gk ); if N is the maximum of the n(gi ), we have
hN ⊆ gh0 for all g ∈ K. Let ΩN be the inseparable closure of {hn }n≥N .
If m ≥ N and g ∈ K, we have gh0 ⊇ hm ⊇ ghn for every sufficiently
large n, since Ω is reduced. This shows that ΩN is contained in gΩ for all
g ∈ K. Since ΩN ∼ Ω, there exists a constant C1 such that every h ∈ Ω
with d(x, h) > C1 lies in ΩN .
To prove part 2, let C be the supremum of distances d(x, h) for h ∈ Ω ∩ Ξ;
we have C < +∞ since Ω 6∼ Ξ. Let M be the maximum distance d(x, gx) for
g ∈ K and consider C 0 := max{C, C1 (Ξ, K −1 ) + M }; we define Ω to be the
set of h ∈ Ω with d(x, h) > C 0 . If there existed h ∈ Ω ∩ gΞ for some g ∈ K,
we would have g −1 h ∈ Ξ and d(x, g −1 h) > C1 (Ξ, K −1 ); thus, part 1 implies
that g −1 h ∈ g −1 Ξ, i.e. h ∈ Ξ, and this contradicts the fact that h ∈ Ω and
d(x, h) > C.
Recall that we have introduced the function αΩ : H → R, defined by the
formula αΩ (h) := νb (H (x|h) ∩ Ω) and the sets Ωc := {h ∈ Ω | αΩ (h) ≤ c}.
Observe that αΩ (k) ≤ αΩ (h) whenever h ⊆ k; in particular αΩ is measurable.
We have αΩ (h) ≤ νb(H (x|h)) = d(x, h) for all h ∈ H .
We say that Ω is small if νb(Ω) < +∞; otherwise, Ω is large. If X is a
CAT(0) cube complex, every UBS is large; an example of a small UBS in
a rank two median space appears in Figure 3 of [Fio17a]. If Ω is small, we
have χΩ (h) = 0 for every isometry h fixing [Ω]. Note that the supremum of
αΩ is precisely νb(Ω).
Lemma A.4. Let hn ∈ Ω be halfspaces with hn+1 ⊆ hn for all n ≥ 0. Then,
αΩ (hn ) → νb(Ω) if and only if (hn )n≥0 is a diverging chain of halfspaces.
Proof. The fact that αΩ (hn ) → νb(Ω) if (hn )n≥0 is a diverging chain follows
from the fact that Ω is reduced. For the other implication, let (km )m≥0 be a
diverging chain in Ω. Since αΩ (h) ≤ d(x, h), it suffices to consider the case
when Ω is small. For every m ≥ 0, the set {j ∈ Ω | j ⊆ km } has measure
am > 0 by Proposition 2.1. For large n we have αΩ (hn ) > νb(Ω) − am , hence
there exists j ⊆ km such that j ∈ H (x|hn ); in particular, hn ⊆ km . Since m
is arbitrary, this shows that (hn )n≥0 is a diverging chain.
Lemma A.5.
(1) For every 0 ≤ c < νb(Ω), the set Ω \ Ωc is a UBS.
(2) For all c ≥ 0, we have νb(Ωc ) ≤ rc.
Proof. Since Ω is reduced, any h ∈ Ω \ Ωc contains almost every halfspace in
any diverging chain in Ω; this provides a diverging chain in Ω \ Ωc . Inseparability follows from the monotonicity of αΩ .
To prove part 2, we decompose Ωc = C1 t ... t Ck as in Lemma 2.2. If
h, k ∈ Ci and h ⊆ k, we have c ≥ αΩ (h) ≥ νb (H (k∗ |h)). Hence, Lemma 2.27
in [Fio17a] implies that the inseparable closure of Ci has measure at most c.
We conclude that νb(Ωc ) ≤ kc ≤ rc.
Lemma A.6. Assume that gΩ ⊆ Ω for all g ∈ K.
(1) For all h ∈ Ω and g ∈ K, we have
−d(x, gx) − χΩ (g) ≤ αΩ (g −1 h) − αΩ (h) ≤ d(x, gx);
in particular kgαΩ − αΩ k∞ ≤ C2 for some constant C2 = C2 (Ω, K).
(2) For every c > 0, we have gΩc ⊆ Ωc+C2 .
Proof. Indeed,
αΩ (g−1h) ≤ νb(H (x|g−1x)) + νb(H (g−1x|g−1h) ∩ Ω) = d(x, g−1x) + νb(H (x|h) ∩ gΩ) ≤ d(x, gx) + αΩ (h),
and
αΩ (h) ≤ νb(H (x|gx)) + νb(H (gx|h) ∩ Ω) = d(x, gx) + νb(H (x|g−1h) ∩ g−1Ω) ≤ d(x, gx) + νb(H (x|g−1h) ∩ Ω) + νb(g−1Ω \ Ω) = d(x, gx) + αΩ (g−1h) + χΩ (g).
We then take C2 to be the maximum of d(x, gx) + χΩ (g) for g ∈ K; this
exists due to Proposition 2.17. Regarding part 2, observe that, if h ∈ Ωc ,
αΩ (gh) = αΩ (h) + (αΩ (gh) − gαΩ (gh)) ≤ c + C2 .
Lemma A.7. Assume that Ω is strongly reduced.
(1) For every d ≥ 0, there exists a constant C3 = C3 (Ω, d) such that
h ⊆ k for all h, k ∈ Ω with d(x, h) > C3 and d(x, k) ≤ d.
(2) If Ξ ⊆ Ω is a UBS and d(x, k) ≤ d for all k ∈ Ω \ Ξ, we have
αΩ (h) − αΞ (h) = νb(Ω \ Ξ) for all h ∈ Ω with d(x, h) > C3 (Ω, d).
Proof. Decompose Ω = C1 t ... t Ck , where each Ci is totally ordered by inclusion and contains a diverging chain. Pick halfspaces ki ∈ Ci with d(x, ki ) > d.
By part 3 of Lemma 2.16, the halfspace ki cannot be transverse to a diverging chain in Ω; thus, part 2 of Lemma 2.2 guarantees that the halfspaces of
Ω that are transverse to ki lie at uniformly bounded distance from x. We
conclude that there exists C3 such that every h ∈ Ω with d(x, h) > C3 is
contained in each ki , hence also in each k ∈ Ω with d(x, k) ≤ d. This proves
part 1; part 2 is an immediate consequence.
In the rest of the section, we assume that Ω is minimal and strongly
reduced.
Lemma A.8. Suppose that χΩ (g) ≥ 0 for all g ∈ K.
(1) There exists a constant C4 = C4 (Ω, K) such that, for every halfspace
h ∈ Ω with d(x, h) > C4 and every g ∈ K, we have αΩ (gh) ≥ αΩ (h).
(2) There exists a constant C5 = C5 (Ω, K) < νb(Ω) such that we have
g(Ω \ Ωc ) ⊆ Ω \ Ωc and Ωc \ gΩc = Ω \ gΩ whenever c > C5 and
g ∈ K.
Proof. We first observe that, given a minimal UBS Ξ ⊆ σξ \ σx and an
isometry g ∈ Isomξ X with gΞ ⊆ Ξ, we have
αΞ (gh) − αΞ (h) = νb(H (g−1x|h) ∩ g−1Ξ) − νb(H (x|h) ∩ Ξ)
= νb(H (g−1x|h) ∩ Ξ) − νb(H (x|h) ∩ Ξ) + νb(H (g−1x|h) ∩ (g−1Ξ \ Ξ))
= −νb(H (x|g−1x, h) ∩ Ξ) + νb(H (g−1x|h) ∩ (g−1Ξ \ Ξ)),
since H (g−1x|x) ∩ Ξ = ∅, as Ξ ⊆ σξ \ σx . Note that
H (x|g−1x) ∩ gΞ = g−1(H (gx|x) ∩ g2Ξ) ⊆ g−1(H (gx|x) ∩ Ξ) = ∅.
Thus αΞ (gh) − αΞ (h) equals
−νb(H (x|g−1x, h) ∩ (Ξ \ gΞ)) + νb(H (g−1x|h) ∩ (g−1Ξ \ Ξ)) ≥
≥ −νb(σx∗ ∩ (Ξ \ gΞ)) + νb(H (x|gh) ∩ (Ξ \ gΞ)) ≥
≥ −νb((σx∗ \ H (x|gh)) ∩ (Ξ \ gΞ)) ≥ −νb({k ∈ Ξ \ gΞ | gh 6⊆ k}).
Now, consider for each g ∈ K the set Ω2 (g) := ∩_{−r+1≤i≤r!} g^{i−1} Ω as in Corollary A.2; we have gΩ2 (g) ⊆ Ω2 (g) and, by part 1 of Lemma A.3, there exists
a constant C1 such that each h ∈ Ω with d(x, h) > C1 lies in gΩ2 (g) for every
g ∈ K. By part 1 of Lemma A.7, if d(x, gh) > C3 (Ω, C1 ) we have
αΩ2 (g) (gh) − αΩ2 (g) (h) ≥ −νb({k ∈ Ω2 (g) \ gΩ2 (g) | gh 6⊆ k}) = 0.
By part 2 of Lemma A.7, if d(x, h) > C3 (Ω, C1 ) and d(x, gh) > C3 (Ω, C1 ),
we have
αΩ (gh) − αΩ (h) = αΩ2 (g) (gh) − αΩ2 (g) (h) ≥ 0.
We conclude that αΩ (gh) ≥ αΩ (h) whenever d(x, h) > C4 := C3 (Ω, C1 ) + M ,
where M is the maximum of the distances d(x, gx) for g ∈ K.
We now prove part 2. Let C4 be as in part 1. Part 1 of Lemma A.3 provides
a constant C10 such that each h ∈ Ω with d(x, h) > C10 lies in gΩ ∩ g −1 Ω for
every g ∈ K. Let C5 be the supremum of the values αΩ (h) for h ∈ Ω with
d(x, h) ≤ max{C4 , C10 }; by Lemma A.4 we have C5 < νb(Ω). If c > C5 , any
h ∈ Ω \ Ωc satisfies d(x, h) > max{C4 , C10 }, hence αΩ (gh) ≥ αΩ (h) > c and
gh ∈ Ω; in particular g(Ω \ Ωc ) ⊆ Ω \ Ωc .
Observe that (Ω \ gΩ) ∩ gΩc ⊆ g(Ωc \ Ω) = ∅ and Ω \ gΩ ⊆ Ωc , by our
choice of the constant C10 ; thus, Ω \ gΩ ⊆ Ωc \ gΩc . Conversely, it is clear
that Ωc \ gΩc ⊆ Ω and
(Ωc \ gΩc ) ∩ gΩ = Ωc ∩ g (Ω \ Ωc ) ⊆ Ωc ∩ (Ω \ Ωc ) = ∅.
Consider now, for c > 0, the functions Fc,Ω := −(1 − αΩ /c) 1Ωc in L2 (H , νb).
Lemma A.9. Assume that gΩ ⊆ Ω for all g ∈ K. For every ε > 0, there exists a constant Cε = Cε (Ω, K) < +∞ such that k(g − id)Fc,Ω − 1Ω\gΩ k2 < ε for all g ∈ K and all c ≥ Cε . If instead gΩ ⊇ Ω for all g ∈ K, we have k(g − id)Fc,Ω + 1gΩ\Ω k2 < ε for c ≥ Cε .
Proof. Observe that
(g − id)Fc,Ω = −(1 − gαΩ /c) 1gΩc + (1 − αΩ /c) 1Ωc =
= −(1 − gαΩ /c) 1gΩc \Ωc + (1 − αΩ /c) 1Ωc \gΩc + ((gαΩ − αΩ )/c) 1gΩc ∩Ωc .
We will analyse the three summands separately. By Lemma A.6, we have
−2C2 /c ≤ 1 − (αΩ (h) + C2 )/c ≤ 1 − gαΩ (h)/c ≤ 1 − (αΩ (h) − C2 )/c < C2 /c,
for each h ∈ gΩc \ Ωc ⊆ Ωc+C2 . By part 2 of Lemma A.5,
k(1 − gαΩ /c) 1gΩc \Ωc k2 ≤ (2C2 /c) · νb(gΩc \ Ωc )1/2 ≤ 2C2 (r(c + C2 ))1/2 /c → 0 as c → +∞.
By part 1 of Lemma A.3, there exists a constant C1 such that each h ∈ Ω with d(x, h) > C1 lies in gΩ for every g ∈ K; in particular, if k ∈ Ω \ gΩ for some g ∈ K, then αΩ (k) ≤ d(x, k) ≤ C1 . By part 2 of Lemma A.8, if c ≥ C5 we have Ωc \ gΩc = Ω \ gΩ and
k(1 − αΩ /c) 1Ωc \gΩc − 1Ω\gΩ k2 = k1Ωc \gΩc − 1Ω\gΩ − (αΩ /c) 1Ωc \gΩc k2 = k(αΩ /c) 1Ω\gΩ k2 ≤ (C1 /c) · νb(Ω \ gΩ)1/2 ≤ C1 M 1/2 /c → 0 as c → +∞,
where M is the maximum of χΩ (g) for g ∈ K, which exists by Proposition 2.17. Finally, by part 2 of Lemma A.5 and part 1 of Lemma A.6,
k((gαΩ − αΩ )/c) 1gΩc ∩Ωc k2 ≤ (C2 /c) · νb(Ωc )1/2 ≤ C2 (cr)1/2 /c → 0 as c → +∞.
If instead gΩ ⊇ Ω for all g ∈ K, the previous discussion shows that, for large c, we have k(g −1 − id)Fc,Ω − 1Ω\g−1 Ω k2 < ε; the conclusion follows by applying g.
Lemma A.10. Let Ξ ⊆ σξ \ σx be a UBS and let Ξ1 , ..., Ξk ⊆ Ξ be pairwise inequivalent, reduced UBS’s representing all minimal equivalence classes of UBS’s almost contained in Ξ; for every i, set σi := νb(Ξi ). There exist increasing sequences (cn(i))n≥0 such that
• cn(i) → σi for all i = 1, ..., k;
• (Ξ1 \ Ξ1cn(1)) ∪ ... ∪ (Ξk \ Ξkcn(k)) is inseparable for all n ≥ 0.
Proof. We proceed by induction on k. If k = 1, the lemma is immediate. Suppose that k ≥ 2; without loss of generality, we can assume that Ξk corresponds to a vertex with no incoming edges in the full subgraph of G(ξ) with vertices [Ξ1 ], ..., [Ξk ]. Fix ε > 0; we will construct c(i) > σi − ε satisfying the inseparability condition.
Pick a diverging chain (hn(i))n≥0 in each Ξi ; up to replacing (hn(k))n≥0 with a cofinite subchain, we can assume that, for each i ≤ k − 1 and each m ≥ 0, the halfspace hm(k) is transverse to hn(i) for almost every n. By Lemma A.4, there exists c(k) > σk − ε such that Ξk \ Ξkc(k) is contained in the inseparable closure of {hn(k)}n≥0 . As a consequence, for every h ∈ Ξk \ Ξkc(k) and every i ≤ k − 1, the halfspaces h and hn(i) are transverse for almost every n.
Halfspaces lying in the inseparable closure of Ξ1 ∪ ... ∪ Ξk , but in none of the Ξi , are at uniformly bounded distance from x, by part 3 of Proposition 2.15; say that these distances are bounded above by M < +∞. Enlarge M so that all j ∈ Ξk with d(x, j) > M lie in Ξk \ Ξkc(k) . By part 4 of Proposition 2.15 there exists a UBS contained in Ξ such that [Ξ1 ], ..., [Ξk−1 ] are all the equivalence classes of minimal UBS’s almost contained in it. The inductive hypothesis and Lemma A.4 imply that we can find c(i) > σi − ε so that (Ξ1 \ Ξ1c(1)) ∪ ... ∪ (Ξk−1 \ Ξk−1c(k−1)) is inseparable and d(x, h) > M for all h ∈ Ξi \ Ξic(i) with i ≤ k − 1.
Now, if (Ξ1 \ Ξ1c(1)) ∪ ... ∪ (Ξk \ Ξkc(k)) were not inseparable, there would exist j ∈ H such that ku ⊆ j ⊆ kv , for halfspaces ku ∈ Ξu \ Ξuc(u) and kv ∈ Ξv \ Ξvc(v) , but j would not lie in any of the Ξi \ Ξic(i) . In particular, u 6= v and k ∈ {u, v}; observe that v 6= k, otherwise kv ∈ Ξk \ Ξkc(k) would not be transverse to diverging chains in Ξu . Thus, u = k; moreover, for all i ≤ k − 1, the halfspace j must be transverse to hn(i) for almost every n, since v ≤ k − 1 and j does not lie in (Ξ1 \ Ξ1c(1)) ∪ ... ∪ (Ξk−1 \ Ξk−1c(k−1)), which is inseparable. Since d(x, j) ≥ d(x, kv ) > M , we have j ∈ Ξ1 ∪ ... ∪ Ξk ; the fact that each Ξi is reduced implies that j ∈ Ξk . By our choice of M , we have j ∈ Ξk \ Ξkc(k) , a contradiction.
Lemma A.11. There exists a UBS Ξξ ⊆ σξ \σx such that every UBS Ξ ⊆ Ξξ
with Ξ ∼ Ξξ is of the form σξ \ σy for some y ∈ X, up to a null set.
Proof. Let Ξ1 , ..., Ξk ⊆ σξ \σx be pairwise inequivalent minimal UBS’s representing all minimal elements of U(ξ); we can assume that they are all reduced
by Lemma 2.16. Halfspaces in σξ \σx that are transverse to a diverging chain
in each Ξi lie at uniformly bounded distance from x, by part 3 of Proposition 2.15; say that these distances are bounded above by M < +∞. Let Ξξ
consist of all h ∈ σξ \ σx with d(x, h) > M ; it is a UBS equivalent to σξ \ σx .
Observe that, if Ξ ⊆ Ξξ is a UBS equivalent to Ξξ and ξ ∈ h̃ for halfspaces
h ⊆ k ∈ Ξ, then h ∈ Ξ. Indeed, otherwise h would not contain any halfspace
of Ξ, by inseparability, and it would therefore be transverse to a diverging
chain in each of the Ξi ; hence d(x, k) ≤ d(x, h) ≤ M , a contradiction.
Now, given Ξ ⊆ Ξξ , consider the set σ := (σξ \ Ξ) t Ξ∗ ⊆ H . Observe
that σ is an ultrafilter. Indeed, since σξ \ Ξ ⊆ σξ and Ξ∗ ⊆ σx , it suffices to
check that h ∩ k 6= ∅ whenever h ∈ σξ \ Ξ and k ∈ Ξ∗ . If such halfspaces were
disjoint, we would have ξ ∈ e
h and h ⊆ k∗ ∈ Ξ, contradicting the observation
we made above since h 6∈ Ξ.
Finally, by Lemma 4.3 in [Fio17b], we can decompose σξ \ σx = Ξ t Σ for
some set Σ with νb(Σ) < +∞; in particular, σx \ σξ = Ξ∗ t Σ∗ . Since Ξ and
σx are disjoint, we have σx \ σ = σx \ (σξ ∪ Ξ∗ ) = Σ∗ , hence νb(σx △ σ) < +∞. Proposition 2.4 implies that there exists y ∈ X such that σ △ σy is null; thus,
up to measure zero, σξ \ σy = σξ \ σ = Ξ.
We are now ready to prove Proposition 3.4.
Proof of Proposition 3.4. Let Ξ1 , ..., Ξk ⊆ σξ \ σx be pairwise inequivalent UBS’s representing all minimal elements of the poset U(ξ); by Lemma 2.16, we can assume that each Ξi is strongly reduced. Up to replacing each Ξi with a smaller UBS, part 2 of Lemma A.3 guarantees that we can assume that Ξi ∩ Ξj = ∅ and Ξi ∩ gΞj = ∅ for all i 6= j and g ∈ K. By part 2
of Lemma A.8, we have g(Ξi \ Ξic ) ⊆ Ξi \ Ξic for all C5 (Ξi , K) ≤ c < νb(Ξi )
and all g ∈ K with χΞi (g) ≥ 0, while we have g(Ξi \ Ξic ) ⊇ Ξi \ Ξic if
χΞi (g) ≤ 0. Lemma A.10 provides constants
C5 (Ξi , K) ≤ ci < νb(Ξi ) such
that ΩK := Ξ1 \ Ξ1c1 ∪ ... ∪ Ξk \ Ξkck is inseparable. Thus ΩK is a UBS
equivalent to σξ \ σx by part 3 of Proposition 2.15. We conclude by Lemma A.11, enlarging the constants ci if necessary, so that ΩK ⊆ Ξξ ; this is
possible by Lemma A.4.
References
[ANWZ17] Goulnara Arzhantseva, Graham A. Niblo, Nick Wright, and Jiawen Zhang. A
characterization for asymptotic dimension growth. arXiv:1612.06638v2, 2017.
[BCG+ 09] Jacek Brodzki, Sarah J. Campbell, Erik Guentner, Graham A. Niblo, and
Nick J. Wright. Property A and CAT(0) cube complexes. J. Funct. Anal.,
256(5):1408–1431, 2009.
[BdlHV08] Bachir Bekka, Pierre de la Harpe, and Alain Valette. Kazhdan’s property (T),
volume 11 of New Mathematical Monographs. Cambridge University Press,
Cambridge, 2008.
[BDS11a] Jason Behrstock, Cornelia Druţu, and Mark Sapir. Addendum: Median structures on asymptotic cones and homomorphisms into mapping class groups
[mr2783135]. Proc. Lond. Math. Soc. (3), 102(3):555–562, 2011.
[BDS11b] Jason Behrstock, Cornelia Druţu, and Mark Sapir. Median structures on asymptotic cones and homomorphisms into mapping class groups. Proc. Lond.
Math. Soc. (3), 102(3):503–554, 2011.
[Bes88] Mladen Bestvina. Degenerations of the hyperbolic space. Duke Math. J., 56(1):143–161, 1988.
[BF95] Mladen Bestvina and Mark Feighn. Stable actions of groups on real trees. Invent. Math., 121(2):287–321, 1995.
[BF14] Uri Bader and Alex Furman. Boundaries, rigidity of representations, and Lyapunov exponents. arXiv:1404.5107v1, 2014.
[BGM12] Uri Bader, Tsachik Gelander, and Nicolas Monod. A fixed point theorem for
L1 spaces. Invent. Math., 189(1):143–148, 2012.
[BH99] Martin R. Bridson and André Haefliger. Metric spaces of non-positive curvature, volume 319 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.
[BHS15] Jason Behrstock, Mark F. Hagen, and Alessandro Sisto. Hierarchically hyperbolic spaces II: combination theorems and the distance formula. arXiv:1509.00632v3, 2015.
[BHS17a] Jason Behrstock, Mark F. Hagen, and Alessandro Sisto. Hierarchically hyperbolic spaces, I: Curve complexes for cubical groups. Geom. Topol., 21(3):1731–1804, 2017.
[BHS17b] Jason Behrstock, Mark F. Hagen, and Alessandro Sisto. Quasiflats in hierarchically hyperbolic spaces. arXiv:1704.04271v1, 2017.
[BM00a] Marc Burger and Shahar Mozes. Groups acting on trees: from local to global structure. Inst. Hautes Études Sci. Publ. Math., (92):113–150 (2001), 2000.
[BM00b] Marc Burger and Shahar Mozes. Lattices in product of trees. Inst. Hautes Études Sci. Publ. Math., (92):151–194 (2001), 2000.
[BM02] Marc Burger and Nicolas Monod. Continuous bounded cohomology and applications to rigidity theory. Geom. Funct. Anal., 12(2):219–280, 2002.
[Bow13a] Brian H. Bowditch. Coarse median spaces and groups. Pacific J. Math., 261(1):53–93, 2013.
[Bow13b] Brian H. Bowditch. Invariance of coarse median spaces under relative hyperbolicity. Math. Proc. Cambridge Philos. Soc., 154(1):85–95, 2013.
[Bow15] Brian H. Bowditch. Large-scale rigidity properties of the mapping class groups. Preprint, 2015.
[Bow16] Brian H. Bowditch. Some properties of median metric spaces. Groups Geom. Dyn., 10(1):279–317, 2016.
[Cap09] Pierre-Emmanuel Caprace. Amenable groups and Hadamard spaces with a totally disconnected isometry group. Comment. Math. Helv., 84(2):437–455, 2009.
[CD17] Indira Chatterji and Cornelia Druţu. Median geometry for spaces with measured walls and for groups. arXiv:1708.00254v1, 2017.
[CDH10] Indira Chatterji, Cornelia Druţu, and Frédéric Haglund. Kazhdan and Haagerup properties from the median viewpoint. Adv. Math., 225(2):882–921, 2010.
[CdlH16] Yves Cornulier and Pierre de la Harpe. Metric geometry of locally compact groups, volume 25 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich, 2016. Winner of the 2016 EMS Monograph Award.
[CFI16] Indira Chatterji, Talia Fernós, and Alessandra Iozzi. The median class and superrigidity of actions on CAT(0) cube complexes. J. Topol., 9(2):349–400, 2016. With an appendix by Pierre-Emmanuel Caprace.
[CL10] Pierre-Emmanuel Caprace and Alexander Lytchak. At infinity of finite-dimensional CAT(0) spaces. Math. Ann., 346(1):1–21, 2010.
[CM09] Pierre-Emmanuel Caprace and Nicolas Monod. Isometry groups of non-positively curved spaces: structure theory. J. Topol., 2(4):661–700, 2009.
[CMV04] Pierre-Alain Cherix, Florian Martin, and Alain Valette. Spaces with measured walls, the Haagerup property and property (T). Ergodic Theory Dynam. Systems, 24(6):1895–1908, 2004.
[CR09] Pierre-Emmanuel Caprace and Bertrand Rémy. Simplicity and superrigidity of twin building lattices. Invent. Math., 176(1):169–221, 2009.
[CR10] Pierre-Emmanuel Caprace and Bertrand Rémy. Non-distortion of twin building lattices. Geom. Dedicata, 147:397–408, 2010.
[CRK15] Montserrat Casals-Ruiz and Ilya Kazachkov. Limit groups over partially commutative groups and group actions on real cubings. Geom. Topol., 19(2):725–852, 2015.
[CS11] Pierre-Emmanuel Caprace and Michah Sageev. Rank rigidity for CAT(0) cube complexes. Geom. Funct. Anal., 21(4):851–891, 2011.
[dCTV07] Yves de Cornulier, Romain Tessera, and Alain Valette. Isometric group actions on Hilbert spaces: growth of cocycles. Geom. Funct. Anal., 17(3):770–792, 2007.
[dCTV08] Yves de Cornulier, Romain Tessera, and Alain Valette. Isometric group actions on Banach spaces and representations vanishing at infinity. Transform. Groups, 13(1):125–147, 2008.
[Del77] Patrick Delorme. 1-cohomologie des représentations unitaires des groupes de Lie semi-simples et résolubles. Produits tensoriels continus de représentations. Bull. Soc. Math. France, 105(3):281–336, 1977.
[Dil50] R. P. Dilworth. A decomposition theorem for partially ordered sets. Ann. of Math. (2), 51:161–166, 1950.
[DP16] Thomas Delzant and Pierre Py. Cubulable Kähler groups. arXiv:1609.08474v1, 2016.
[Fer15] Talia Fernós. The Furstenberg-Poisson boundary and CAT(0) cube complexes. arXiv:1507.05511v1, 2015.
[Fio17a] Elia Fioravanti. Roller boundaries for median spaces and algebras. arXiv:1708.01005v2, 2017.
[Fio17b] Elia Fioravanti. The Tits alternative for finite rank median spaces. arXiv:1708.01215v2, 2017.
[FV16] Talia Fernós and Alain Valette. The Mayer-Vietoris sequence for graphs of groups, property (T) and the first ℓ2-Betti number. arXiv:1412.3848v2, 2016.
[Ger97] Victor N. Gerasimov. Semi-splittings of groups and actions on cubings. In Algebra, geometry, analysis and mathematical physics (Russian) (Novosibirsk, 1996), pages 91–109, 190. Izdat. Ross. Akad. Nauk Sib. Otd. Inst. Mat., Novosibirsk, 1997.
[GH10] Erik Guentner and Nigel Higson. Weak amenability of CAT(0)-cubical groups. Geom. Dedicata, 148:137–156, 2010.
[Gro93] M. Gromov. Asymptotic invariants of infinite groups. In Geometric group theory, Vol. 2 (Sussex, 1991), volume 182 of London Math. Soc. Lecture Note Ser., pages 1–295. Cambridge Univ. Press, Cambridge, 1993.
[Gui72] Alain Guichardet. Sur la cohomologie des groupes topologiques. II. Bull. Sci. Math. (2), 96:305–332, 1972.
[Hae16a] Thomas Haettel. Higher rank lattices are not coarse median. Algebr. Geom. Topol., 16(5):2895–2910, 2016.
[Hae16b] Thomas Haettel. Hyperbolic rigidity of higher rank lattices. arXiv:1607.02004v2, 2016.
[Hag07] Frédéric Haglund. Isometries of CAT(0) cube complexes are semi-simple. arXiv:0705.3386v1, 2007.
[Hag13] Mark F. Hagen. The simplicial boundary of a CAT(0) cube complex. Algebr. Geom. Topol., 13(3):1299–1367, 2013.
[Hag17] Mark F. Hagen. Corrigendum to “The simplicial boundary of a CAT(0) cube complex”. 2017.
[HP98] Frédéric Haglund and Frédéric Paulin. Simplicité de groupes d’automorphismes d’espaces à courbure négative. In The Epstein birthday schrift, volume 1 of Geom. Topol. Monogr., pages 181–248. Geom. Topol. Publ., Coventry, 1998.
[HS16] Mark F. Hagen and Tim Susse. Hierarchical hyperbolicity of all cubical groups. arXiv:1609.01313v1, 2016.
[KK13] Marcin Kotowski and Michał Kotowski. Random groups and property (T): Żuk’s theorem revisited. J. Lond. Math. Soc. (2), 88(2):396–416, 2013.
[Kle10] Bruce Kleiner. A new proof of Gromov’s theorem on groups of polynomial growth. J. Amer. Math. Soc., 23(3):815–829, 2010.
[KS16] Aditi Kar and Michah Sageev. Ping pong on CAT(0) cube complexes. Comment. Math. Helv., 91(3):543–561, 2016.
[Lee00] Bernhard Leeb. A characterization of irreducible symmetric spaces and Euclidean buildings of higher rank by their asymptotic geometry, volume 326 of Bonner Mathematische Schriften [Bonn Mathematical Publications]. Universität Bonn, Mathematisches Institut, Bonn, 2000.
[Mar06] Florian Martin. Reduced 1-cohomology of connected locally compact groups and applications. J. Lie Theory, 16(2):311–328, 2006.
[Min16] Ashot Minasyan. New examples of groups acting on real trees. J. Topol., 9(1):192–214, 2016.
[Mon06] Nicolas Monod. Superrigidity for irreducible lattices and geometric splitting. J. Amer. Math. Soc., 19(4):781–814, 2006.
[Nic08] Bogdan Nica. Group actions on median spaces. arXiv:0809.4099v1, 2008.
[NS13] Amos Nevo and Michah Sageev. The Poisson boundary of CAT(0) cube complex groups. Groups Geom. Dyn., 7(3):653–695, 2013.
[NWZ17] Graham A. Niblo, Nick Wright, and Jiawen Zhang. A four point characterisation for coarse median spaces. arXiv:1708.06960v1, 2017.
[Oll04] Yann Ollivier. Sharp phase transition theorems for hyperbolicity of random groups. Geom. Funct. Anal., 14(3):595–679, 2004.
[OW11] Yann Ollivier and Daniel T. Wise. Cubulating random groups at density less than 1/6. Trans. Amer. Math. Soc., 363(9):4701–4733, 2011.
[Oza16] Narutaka Ozawa. A functional analysis proof of Gromov’s polynomial growth theorem. arXiv:1510.04223v3, 2016.
[Pau91] Frédéric Paulin. Outer automorphisms of hyperbolic groups and small actions on R-trees. In Arboreal group theory (Berkeley, CA, 1988), volume 19 of Math. Sci. Res. Inst. Publ., pages 331–343. Springer, New York, 1991.
[Rém99] Bertrand Rémy. Construction de réseaux en théorie de Kac-Moody. C. R. Acad. Sci. Paris Sér. I Math., 329(6):475–478, 1999.
[Rém05] Bertrand Rémy. Integrability of induction cocycles for Kac-Moody groups. Math. Ann., 333(1):29–43, 2005.
[Rol98] Martin A. Roller. Poc sets, median algebras and group actions. An extended study of Dunwoody’s construction and Sageev’s theorem. Preprint, University of Southampton, 1998.
[Sag95] Michah Sageev. Ends of group pairs and non-positively curved cube complexes. Proc. London Math. Soc. (3), 71(3):585–617, 1995.
[Sha00] Yehuda Shalom. Rigidity of commensurators and irreducible lattices. Invent. Math., 141(1):1–54, 2000.
[Sha04] Yehuda Shalom. Harmonic analysis, cohomology, and the large-scale geometry of amenable groups. Acta Math., 192(2):119–185, 2004.
[SW17] Jan Spakula and Nick Wright. Coarse medians and Property A. arXiv:1602.06084v2, 2017.
[Ż03] A. Żuk. Property (T) and Kazhdan constants for discrete groups. Geom. Funct. Anal., 13(3):643–670, 2003.
[Zei16] Rudolf Zeidler. Coarse median structures and homomorphisms from Kazhdan groups. Geom. Dedicata, 180:49–68, 2016.
Massive MIMO Performance Comparison of Beamforming and Multiplexing in the Terahertz Band
Sayed Amir Hoseini∗† , Ming Ding† and Mahbub Hassan∗†
∗ School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
† Data61, CSIRO, Sydney, Australia
Email: [email protected], [email protected], [email protected]
Abstract—In this paper, we compare the performance of two
main MIMO techniques, beamforming and multiplexing, in the
Terahertz (THz) band. The main problem with the THz band is
its huge propagation loss, which is caused by the tremendous
signal attenuation due to molecule absorption of the electromagnetic wave. To overcome the path loss issue, massive MIMO
has been suggested to be employed in the network and is expected
to provide Tbps for a distance within a few meters. In this
context, beamforming has recently been studied as the main technique to take advantage of MIMO in THz and overcome the very high path loss, with the assumption that the THz communication channel is Line-of-Sight (LoS) and there are no significant multipath rays. On the other hand, recent studies also showed that the energy absorbed by molecules, a well-known effect, can be re-radiated immediately at the same frequency. Such a re-radiated signal is correlated with the main signal and can provide rich
scattering paths for the communication channel. This means that
a significant MIMO multiplexing gain can be achieved even in a
LoS scenario for the THz band. Our simulation results reveal a
surprising observation that the MIMO multiplexing could be
a better choice than the MIMO beamforming under certain
conditions in THz communications.
I. INTRODUCTION
To respond to the hugely increasing demand for wireless data traffic, the terahertz (THz) band (0.1-10 THz) has recently been envisioned to make Tbps wireless links feasible [1]. In spite of the wide unused bandwidth in this spectrum, the high propagation loss is the main issue of using such a spectrum. Thus, the potential applications of the THz link are limited to short range communications such as nanosensors [2], wireless on-chip communications and wireless personal area networks [3]. Moreover, part of the radio signal attenuation at the THz frequencies is due to molecular absorption, which is frequency selective and increases the total loss to more than 200 dB for some frequencies at a 10-meter distance.
Basically, to overcome the very high path loss, the transmit power could be largely increased. Unfortunately, this is not feasible with current technology, as it is limited to a few mW [3]. Alternatively, the channel gain can be significantly improved by means of the multi-antenna beamforming technique. Indeed, due to the very small footprint of a large number of antennas at the THz band, beamforming using very large
scale Multiple Input Multiple Output (MIMO) systems has
been considered in the field as a practical solution which can
provide up to 55 dB channel gain at 1 THz [1].
However, beamforming comes at the cost of system complexity and signaling overhead, since the transmitter should receive the channel state information continuously and align the beam to the receiver. On the other hand, to achieve a significant MIMO beamforming gain in a high frequency spectrum, the beam becomes very narrow, which is sometimes described as a pencil beam. This makes beamforming vulnerable to any transmitter/receiver mobility, because it is difficult to perform beam re-alignment in a very short time interval.
Another approach to take advantage of MIMO is the MIMO
multiplexing technique. While the beamforming technique
strives to focus the transmission energy and achieve a large
channel gain in a specific direction, the multiplexing technique
builds its strength on creating parallel information channels.
However, the multiplexing gain is significant only when there
are enough non-negligible multipath signal components in a
rich scattering environment. Because of the huge path loss,
THz communication is usually assumed to operate in a Line-of-Sight (LoS) dominant channel and thus, the research
focus has been on beamforming rather than multiplexing.
However, recent studies show that in the channel medium,
molecules absorb and re-radiate the electromagnetic energy
in the THz band [3–6], which transforms the LoS channel into a rich-scattering environment. The re-radiation is usually considered as noise, but the theoretical model shows it is highly correlated with the main signal [6]. In this paper, we theoretically investigate the THz channel capacity for both cases of beamforming and multiplexing in a MIMO set-up. We find that the multiplexing technique can provide a considerable capacity gain in comparison with the beamforming technique under certain conditions. Also, in some other conditions where beamforming yields a higher capacity, the multiplexing technique is still a preferable choice due to its easier implementation. Note
that in this work we assume a multiplexing technique using
a blind precoding scheme without channel state information
(CSI). In contrast, the beamforming technique always requires
accurate CSI to smartly direct its energy in the spatial domain.
The rest of the paper is structured as follows. In Section II,
we present the molecular absorption model for the calculation of channel transfer function, Section III analyzes the
MIMO channel model considering the molecular re-radiation,
followed by simulation results in Section IV. Finally, we
conclude the paper in Section V.
II. CHANNEL MODEL AND MIMO CAPACITY
The molecular absorption model defines how different
species of molecules in a communication channel absorb energy from the electromagnetic signals and how they re-radiate
them back to the environment. This section first explains
the concept of absorption coefficient used to characterize the
absorption capacity of a given molecule species, followed by
the attenuation and re-radiation models that are built upon this
coefficient.
A. Molecular absorption coefficient
The medium absorption coefficient, k(f), at frequency f is a weighted sum of the molecular absorption coefficients in the medium [5], which can be formulated as

k(f) = Σ_{i=1}^{N} mi ki(f),    (1)

where ki(f) is the molecular absorption coefficient of species Si at temperature T and pressure P. ki(f) can be obtained from HITRAN [7]. In this work, to get the values of k(f), we will use some predefined standard atmosphere conditions and their corresponding ratios of molecules in the air, which are tabulated in [7].
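As a minimal illustration of (1), the sketch below forms k(f) as a weighted sum of per-species coefficients. The species list, mixing ratios and line shapes are placeholder assumptions, not HITRAN data; in practice ki(f) would be queried from HITRAN for the chosen temperature and pressure.

```python
import numpy as np

# Illustrative sketch of Eq. (1): k(f) = sum_i m_i * k_i(f).
freqs = np.linspace(0.1e12, 1.0e12, 901)          # frequency grid, 0.1-1 THz

# Hypothetical per-species absorption coefficients k_i(f) in 1/m (not HITRAN data):
species_spectra = {
    "H2O": 1e-3 + 5.0 * np.exp(-((freqs - 0.557e12) / 5e9) ** 2),  # fake water line
    "O2":  1e-4 * np.ones_like(freqs),
    "N2":  1e-6 * np.ones_like(freqs),
}

# Mixing ratios m_i (fraction of molecules of each species in the medium).
mixing_ratio = {"H2O": 0.0259, "O2": 0.209, "N2": 0.765}

def medium_absorption(spectra, ratios):
    """Weighted sum of per-species coefficients, Eq. (1)."""
    k = np.zeros_like(freqs)
    for name, k_i in spectra.items():
        k += ratios[name] * k_i
    return k

k_f = medium_absorption(species_spectra, mixing_ratio)
print("k(500 GHz) =", k_f[np.argmin(np.abs(freqs - 0.5e12))], "1/m")
```

With real HITRAN spectra substituted for the placeholders, the same weighted sum reproduces the medium coefficient used throughout the rest of the channel model.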
B. Attenuation of radio signal
The attenuation of the radio signal at the THz frequencies is due to spreading and molecular absorption. In more detail, the spreading attenuation is given by

Aspread(f, d) = (4πf d/c)^2,    (2)

where c is the speed of light. The attenuation due to molecular absorption is characterized as

Aabs(f, d) = e^{k(f) d},    (3)

where k(f) is the absorption coefficient of the medium at frequency f. Thus, the line-of-sight (LoS) received power at the receiver becomes

Pr,LoS(f, d) = Pt(f) × (c/(4πf d))^2 × e^{−k(f) d}.

C. Molecular re-radiation

The existing molecules in the communication medium will be excited by electromagnetic waves at specific frequencies. The excitement is temporary: the vibrational-rotational energy level of the molecules will come back to a steady state and the absorbed energy will be re-radiated at the same frequency. These re-radiated waves are usually considered as noise in the literature [3]. Molecular absorption is not white and its power spectral density (PSD) is not flat because of the different resonant frequencies of various species of molecules. The PSD of the molecular absorption noise that affects the transmission of a signal, SNabs, is contributed by the atmospheric noise SN^B and the self-induced noise SN^X as addressed in [5]:

SNabs(f, d) = SN^B(f, d) + SN^X(f, d),    (4)

SN^B(f, d) = lim_{d→∞} (kB T0 (1 − e^{−k(f) d})) (c/(√(4π) f))^2,    (5)

SN^X(f, d) = Pt(f) (1 − e^{−k(f) d}) (c/(4πf d))^2,    (6)

where k(f) is the absorption coefficient of the medium at frequency f, T0 is the reference temperature (296 K), kB is the Boltzmann constant, Pt(f) is the power spectral density of the transmitted signal and c is the speed of light. The first term in (4), which is called sky noise and defined in (5), is independent of the signal wave. However, the self-induced noise in (6) is highly correlated with the signal wave [6], and can be considered as a distorted copy of the signal wave. Thus, equation (6) can be revised as the received power of the re-radiated signal by molecules at the receiver:

Pr,a(f, d) = Pt(f) (1 − e^{−k(f) d}) (c/(4πf d))^2.    (7)

Since the phase of the re-radiated wave depends on the phase of molecular vibration, which varies from molecule to molecule [8], the received power in this case is affected by a large number of phase-independent re-radiated photons. Thus, we assume a uniformly distributed random phase for the received signal, with its power given by (7).

D. Channel Transfer Function

The channel transfer function for a single LoS channel is given by

h̃LoS(f, d) = sqrt((c/(4πf d))^2 e^{−k(f) d}) × e^{j2π d/λ} = (c/(4πf d)) e^{−k(f) d/2} × e^{j2π d/λ}.    (8)

Then, the partial channel transfer function resulting from the molecular absorption and excluding the LoS component can be represented by

h̃a(f, d) = sqrt((1 − e^{−k(f) d}) (c/(4πf d))^2) × e^{j2πβrandom} = (1 − e^{−k(f) d})^{1/2} (c/(4πf d)) × e^{j2πβrandom}.    (9)

Hence, the total channel transfer function is the superposition of the partial channel transfer functions, which is written as

h̃(f, d) = h̃LoS(f, d) + h̃a(f, d),    (10)

h̃(f, d) = (c/(4πf d)) e^{−k(f) d/2} e^{j2π d/λ} + (1 − e^{−k(f) d})^{1/2} (c/(4πf d)) e^{j2πβrandom}.    (11)
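To make the link budget and the transfer coefficient concrete, the following sketch evaluates (2), (3) and (11) for a single antenna pair. The frequency, distance and absorption coefficient used at the bottom are illustrative assumptions, not values taken from the paper's figures.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def spreading_attenuation(f, d):
    """A_spread(f, d) = (4*pi*f*d / c)^2, Eq. (2), linear scale."""
    return (4 * np.pi * f * d / C) ** 2

def molecular_attenuation(k, d):
    """A_abs(f, d) = exp(k(f) * d), Eq. (3), linear scale."""
    return np.exp(k * d)

def channel_coefficient(f, d, k, rng):
    """Total transfer coefficient of Eq. (11): LoS term with deterministic
    phase plus re-radiated term with a uniformly random phase."""
    lam = C / f
    amp = C / (4 * np.pi * f * d)
    h_los = amp * np.exp(-k * d / 2) * np.exp(1j * 2 * np.pi * d / lam)
    beta = rng.uniform(0.0, 1.0)
    h_rerad = amp * np.sqrt(1 - np.exp(-k * d)) * np.exp(1j * 2 * np.pi * beta)
    return h_los + h_rerad

# Illustrative numbers (assumptions): 500 GHz, 1 m, k(f) = 0.1 1/m.
f, d, k = 0.5e12, 1.0, 0.1
rng = np.random.default_rng(0)

loss_db = 10 * np.log10(spreading_attenuation(f, d) * molecular_attenuation(k, d))
print("total path loss: %.1f dB" % loss_db)
print("channel coefficient:", channel_coefficient(f, d, k, rng))
```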
E. MIMO channel model and capacity
In this paper, we consider a MIMO system that is consisted of nt transmitting antennas and nr receiving ones.
The received signal vector y at nr receiving antennas can be
formulated as
y = H̃x + n,
(12)
where x is the transmitted signal vector form nt transmitting
antennas, and n is an nr ×1 vector with zero-mean independent
noises with variance σ 2 . H̃ is the channel matrix
where each of its elements, h̃ij , is a complex value denoting
the transfer coefficient associated with the jth transmitter
antenna and the ith receiver antenna. Note that h̃ij can be
obtained from (11) for frequency f and distance dij .
The capacity of the MIMO channel can be written as

C = log2 det(Inr + (P/(nt σ2)) H̃H̃†),    (13)

where P is the total transmitting power, and I is the identity matrix. Since the determinant of (Inr + (P/(nt σ2)) H̃H̃†) can be computed as the product of the eigenvalues of that matrix, the MIMO capacity can thus be written in terms of the non-zero eigenvalues of H̃H̃† as [9]

C = Σ_{i=1}^{κ} log2(1 + P λi^2/(κσ2)),    (14)
where λi denotes the singular values of the matrix H̃, and hence the squared singular values λi^2 denote the eigenvalues of the matrix H̃H̃†. Each of the λi^2 characterizes an equivalent information channel, where P λi^2/(κσ2) is the corresponding signal-to-noise ratio (SNR) of the channel at the receiver. Note that κ denotes the number of non-zero λi^2, which for the beamforming technique is equal to one, while for the multiplexing technique it could be the rank of H̃ with κ ≤ min(nr, nt) [9]. However, because we use blind precoding and uniform power allocation for the multiplexing technique, κ = nt. Therefore, equation (14) is valid for uniform power allocation at the transmitter. Furthermore, the equivalent channel SNR, P λi^2/(κσ2), should meet a minimum receiver threshold to be reliably detectable by the receiver. In this paper, we assume 0 dB as the SNR threshold and uniform power allocation at the transmitter.
distribution. In more details, beamforming technique aims
to maximize λ1 to improve the channel SNR for a single
data stream while in the multiplexing technique, a uniform
eigenvalue distribution is preferable. In this way, multiplexing
technique can utilize parallel data streams through MIMO
and maximize the data rate. The complexity of beamforming
comes from eigenvalues tuning because it means the channel
state information (CSI) should be measured and sent back
to the transmitter periodically for optimum precoding. This
also results in a protocol overhead in the channel. On the
other hand, multiplexing gain can take advantage of eigenvalue
value distribution even with a blind precoding. This is more
beneficial when there is a rich scattering environment in the
channel. In next section, we will discuss how the re-radiation
can provide a rich scattering environment.
III. ANALYSIS ON THE CHANNEL WITH MOLECULAR ABSORPTION
To analyze the MIMO channel capacity and characterize the scattering richness of the channel quantitatively, let us decompose and normalize the channel transfer function H̃ as

H(f, d) = sqrt(K/(K + 1)) HLoS(f, d) + sqrt(1/(K + 1)) Ha(f, d),    (15)
where H, HLoS and Ha are normalized with the corresponding channel gain. Because of the uniformly distributed random phase of the received re-radiated signal, the elements of Ha are independent and identically distributed (i.i.d.) complex Gaussian random variables with zero mean and unit variance. K is the ratio of the powers of the LoS signal and the re-radiated components, and if we assume the channel distance is much longer than the antenna spacing, it can be obtained by

K = Pr,LoS(f, d) / Pr,a(f, d) = e^{−k(f) d} / (1 − e^{−k(f) d}).    (16)
This is the same as the well-known Rician channel model, where K is called the Rician K-factor. Equivalently, the K-factor shows how rich the channel is in terms of scattering and multipath rays. Equation (16) shows that K is a function of the absorption coefficient of the channel medium k(f) and the distance between transmitter and receiver d, so that a longer distance and a higher absorption result in a smaller K, as shown in Figure 1a.
The capacity of the MIMO channel considering the Rician K-factor has been studied in several works [10, 11]. The authors in [11] showed that the lower bound of the expected Rician channel capacity for a large number of antennas is the expected capacity of the channel considering only the NLoS component,

E(C(H)) ≥ E(C(sqrt(1/(K + 1)) Ha)),    (17)

⇒ E(C(H)) ≥ E(C(sqrt(1 − e^{−k(f) d}) Ha)),    (18)

where E(·) denotes the expectation. It is clear that the lower bound is an increasing function of the absorption coefficient: ∀f1, f2 such that k(f2) ≥ k(f1), Emin(C(f2)) ≥ Emin(C(f1)).
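The decomposition (15)-(16) can be sketched numerically as follows; the all-ones LoS matrix is a simplified placeholder for the true array response, so the output only illustrates how the singular-value spread flattens as the absorption coefficient grows.

```python
import numpy as np

def k_factor(k_abs, d):
    """Rician K-factor of Eq. (16): LoS power over re-radiated power."""
    return np.exp(-k_abs * d) / (1.0 - np.exp(-k_abs * d))

def rician_channel(n_r, n_t, K, rng):
    """Normalized channel of Eq. (15). The all-ones H_los is a rank-one
    placeholder for the true LoS array response; H_a has i.i.d. CN(0, 1)
    entries modelling the phase-independent re-radiated components."""
    H_los = np.ones((n_r, n_t), dtype=complex)
    H_a = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    return np.sqrt(K / (K + 1.0)) * H_los + np.sqrt(1.0 / (K + 1.0)) * H_a

rng = np.random.default_rng(2)
for k_abs in (1e-3, 1e-1, 10.0):          # illustrative absorption coefficients (1/m)
    K = k_factor(k_abs, d=1.0)            # 1 m link
    sv = np.linalg.svd(rician_channel(8, 8, K, rng), compute_uv=False)
    print("k=%g 1/m -> K=%.3g, lambda_1=%.2f, lambda_2=%.2f" % (k_abs, K, sv[0], sv[1]))
```

A large K leaves almost all the channel energy in the first singular value (the beamforming regime), while a small K spreads it over many singular values (the multiplexing regime).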
Fig. 1: (a) K-factor; (b) MIMO capacity using beamforming; (c) MIMO capacity using multiplexing. The K-factor is a decreasing function of distance and absorption coefficient. For both multiplexing and beamforming techniques, the performance gain is affected by the K-factor. The capacity is calculated for a 225×225 MIMO system.
IV. SIMULATION AND DISCUSSION
A. Simulation set-up
In this section, to evaluate the molecular absorption impact
on THz MIMO capacity, we consider a simple n × n MIMO system with square uniform arrays, where at both transmitter and receiver the inter-element spacing s is equal to half of the wavelength and the channel distance is d. Moreover, we consider uniform power allocation to the transmitter arrays operating in an open-space LoS scenario. The default values of the parameters are listed in Table I, and different values will be explained when necessary. Since we apply random phases on the NLoS components created by molecular re-radiation, we conduct the evaluation of the MIMO capacity with molecular re-radiation 1000 times and show the average result.
We use the online browsing and plotting tools1, which are based on the HITRAN database [7], to generate absorption coefficients for a single gas or for some predefined standard gas mixtures of the atmosphere at sea level, as shown in Table II. Since water molecules play the main role in a normal air environment at THz bands, we use the highest and lowest water ratios in Table II, i.e., the “USA model, high latitude, winter” and “USA model, tropics”. The corresponding absorption coefficients in THz bands are shown in Figure 2a for an ambient temperature of 273 K and a sea level pressure of 1 atm. For a tropic atmosphere, the water ratio is higher than that of the winter atmosphere, and thus we can see a significant increase in the absorption coefficient between these two gas mixtures.
In our simulation, we assume a constant transmit power over
the entire frequency spectrum and display the MIMO capacity
in bps/Hz for THz bands. We consider a MIMO set-up with
225 antennas at each side in a uniform square planar array.
Our aim is to compare the beamforming and multiplexing techniques in different channel conditions. First, we calculate the channel capacity for beamforming while the re-radiation is totally ignored in the channel. Next, the beamforming capacity is re-calculated when the re-radiation is taken into account. Finally, the multiplexing gain is calculated with and without the consideration of re-radiation. In all scenarios, the capacity is obtained by (14).

1 http://hitran.iao.ru/gasmixture/simlaunch

TABLE I: Simulation parameters
Transmitter and receiver distance (d): 0.1, 1, 10 m
Inter-element spacing (s): 0.5λ (wavelength)
Transmitter array angle (φ): 90°
Receiver array angle (θ): 90°
Number of arrays on each side (n): 225
Transmit power: 0, 10 dBm
Noise power: −80 dBm
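As a sketch of the simulated geometry, the code below places two facing 15 × 15 uniform square arrays with half-wavelength spacing at separation d and computes the element-pair distances dij that enter the transfer coefficients h̃ij of (11)-(12). The broadside orientation follows Table I; the carrier frequency and link distance chosen here are illustrative assumptions.

```python
import numpy as np

C = 3e8

def square_array(n_side, spacing, z):
    """Element coordinates of an n_side x n_side uniform planar array,
    centred on the z-axis and lying in the plane at height z (broadside set-up)."""
    idx = (np.arange(n_side) - (n_side - 1) / 2.0) * spacing
    xx, yy = np.meshgrid(idx, idx)
    return np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, z)])

f = 0.5e12                     # 500 GHz carrier (illustrative)
lam = C / f
n_side = 15                    # 15 x 15 = 225 elements per side, as in Table I
d = 1.0                        # link distance in metres (illustrative)

tx = square_array(n_side, 0.5 * lam, z=0.0)
rx = square_array(n_side, 0.5 * lam, z=d)

# d_ij: distance between receive element i and transmit element j.
diff = rx[:, None, :] - tx[None, :, :]
d_ij = np.linalg.norm(diff, axis=-1)

print("array aperture: %.2f mm" % (1e3 * (n_side - 1) * 0.5 * lam))
print("d_ij spread: %.6f m to %.6f m" % (d_ij.min(), d_ij.max()))
```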
In the first step, the simulation is run at 500 GHz with the practical range of absorption coefficients (10^−5 ∼ 10^+3) over the THz spectrum, as shown in Figure 1. It should be noted that the actual value of the absorption coefficient at 500 GHz is shown in Figure 2a. The beamforming and multiplexing capacities are calculated for a range of 0.1∼10 m distances and a 1 mW transmit power.
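A compact, hedged version of this first sweep is sketched below: for each absorption coefficient the channel is drawn from the Rician-style model of (15)-(16), the beamforming capacity is taken as a single stream on the largest eigenvalue, and the multiplexing capacity follows (14) with uniform power and the 0 dB threshold. The array response is collapsed into a common spreading gain and the array size is scaled down to 16 × 16, so the numbers only indicate the trend, not the values in Figure 1.

```python
import numpy as np

C = 3e8
rng = np.random.default_rng(3)

def channel(n, K, amp):
    """Rician-style channel of Eq. (15), scaled by a common amplitude gain.
    The all-ones LoS matrix is a placeholder for the true broadside response."""
    H_los = np.ones((n, n), dtype=complex)
    H_a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return amp * (np.sqrt(K / (K + 1.0)) * H_los + np.sqrt(1.0 / (K + 1.0)) * H_a)

def capacities(H, p, sigma2, thr=1.0):
    """Beamforming: single stream on the largest eigenvalue, full power.
    Multiplexing, Eq. (14): uniform power over n_t streams, streams below
    the 0 dB SNR threshold (thr = 1 in linear scale) discarded."""
    lam2 = np.linalg.svd(H, compute_uv=False) ** 2
    c_bf = np.log2(1.0 + p * lam2[0] / sigma2)
    snr = p * lam2 / (H.shape[1] * sigma2)
    c_mux = np.sum(np.log2(1.0 + snr[snr >= thr]))
    return c_bf, c_mux

# Illustrative scaled-down set-up: 16x16 arrays, 500 GHz, 10 cm, 0 dBm, -80 dBm noise.
f, d, n = 0.5e12, 0.1, 16
p, sigma2 = 1e-3, 1e-11
amp = C / (4 * np.pi * f * d)      # common spreading amplitude; the LoS/re-radiated
                                   # power split is carried entirely by K
for k_abs in np.logspace(-5, 3, 5):                            # absorption sweep (1/m)
    K = np.exp(-k_abs * d) / (1.0 - np.exp(-k_abs * d))        # Eq. (16)
    c_bf, c_mux = capacities(channel(n, K, amp), p, sigma2)
    print("k=%.0e 1/m: beamforming %5.1f, multiplexing %5.1f bps/Hz" % (k_abs, c_bf, c_mux))
```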
Secondly, the channel is simulated for two different transmit powers and three distances with realistic absorption coefficients. Our assumption on the transmit power is based on current technology [3] and a previous work on THz massive MIMO [1]. Furthermore, the distances have been chosen to cover various application scenarios. For example, THz nanosensors are considered to communicate over very short distances on the order of 0.1-10 cm or less, while THz communications are also nominated to provide a terabit-per-second ultra-high-rate video communication link at around 1 m distance for home entertainment devices like TVs or virtual reality (VR). In addition, longer distances up to a few meters characterize wireless personal or local networks. Simulation results are presented in Figure 2.
B. The MIMO Capacity vs. the K-factor
Figure 1 illustrates how the channel is transformed from a LoS dominant channel to a Rayleigh channel and how this affects the MIMO beamforming and multiplexing capacity gains. As can be seen in Figure 1b, the beamforming gain decreases when the absorption coefficient increases, because at very high absorption the channel is not LoS dominant anymore and there is a significant NLoS signal component generated by molecule re-radiation, or equivalently a lower K-factor. In contrast, Figure 1c shows that the multiplexing technique takes advantage of higher absorption to reach a huge data rate. However, the low SNR limits the multiplexing gain at longer distances, so that it drops sharply to zero beyond 2 m.
In Figure 2, more results for the THz spectrum with realistic
absorption coefficients will be presented.
TABLE II: Atmosphere standard gas mixture ratio in percentage for different climates [7]

USA model, mean latitude, summer, H=0: H2O 1.860000, CO2 0.033000, O3 0.000003, N2O 0.000032, CO 0.000015, CH4 0.000170, O2 20.900001, N2 77.206000
USA model, mean latitude, winter, H=0: H2O 0.432000, CO2 0.033000, O3 0.000003, N2O 0.000032, CO 0.000015, CH4 0.000170, O2 20.900001, N2 78.634779
USA model, high latitude, summer, H=0: H2O 1.190000, CO2 0.033000, O3 0.000002, N2O 0.000031, CO 0.000015, CH4 0.000170, O2 20.900001, N2 77.876781
USA model, high latitude, winter, H=0: H2O 0.141000, CO2 0.033000, O3 0.000002, N2O 0.000032, CO 0.000015, CH4 0.000170, O2 20.900001, N2 78.925780
USA model, tropics, H=0: H2O 2.590000, CO2 0.033000, O3 0.000003, N2O 0.000032, CO 0.000015, CH4 0.000170, O2 20.900001, N2 76.476779
C. The MIMO Capacity vs. the Transmit Power and Distance
The channel attenuation including molecular attenuation in
(3) and spreading attenuation in (2) is illustrated in Figure
2b. While the spreading attenuation increases with distance and frequency, the molecular attenuation also increases with distance but is frequency selective. For example, while the total loss at 10 m is 107 dB for 500 GHz, the total attenuation at 550 GHz is 86 dB at 1 m and it grows to 220 dB at 10 m, which is mostly because of the very high absorption of water molecules in the channel medium at this
frequency. Note that the channel atmosphere for this case is
from tropic data where the ratio of water molecules in the air
is more than 0.02, as shown in Table II.
Figures 2c and 2d illustrate the capacity of the investigated transmission techniques for a 10 cm distance. The transmit power is increased from 1 mW in Figure 2c to 10 mW in Figure 2d. It can be seen that a huge performance difference exists between multiplexing and beamforming, thanks to the tremendous multiplexing gain provided by the rich scattering environment due to molecule re-radiation. Furthermore, at very high absorption frequencies, which existing studies consider infeasible windows for THz communications, a significant capacity improvement can be observed. This is because more absorption leads to more re-radiation, which transforms a LoS dominant channel into a Rayleigh channel. The details can be found in Section III, where we discussed how the re-radiation decreases the K-factor and creates a rich scattering environment. To sum up, the re-radiation improves the multiplexing gain, which is fundamentally supported by a better eigenvalue distribution and channel matrix rank in the mathematical analysis.
In Figure 2e and 2f, the distance is increased to 1 m.
With a relatively large distance for THz communications, it
can be seen that the beamforming gain is comparable to the multiplexing gain. However, we can see that the multiplexing
gain in high absorption windows, such as 540-580 GHz,
is significantly higher than the rest of spectrum for a 10
mW transmit power. It is a different story for a 1 mW transmit power, where the capacity drops to zero in high absorption windows because the equivalent SNR, P λ_i^2/(k σ^2), of most parallel channels created by the multiplexing technique is less than 0 dB; in practice such parallel channels are useless because the receiver cannot reliably detect the received
signals. Such results are not surprising since it has been shown in several works on conventional communication bands [12] that the multiplexing performance drops dramatically at low SNR. However, considering the implementation challenges of beamforming, the multiplexing technique might still be a preferable choice for frequencies up to 1 THz. For example, it can be observed in Figure 2e that, at 0.9 THz, the capacity is 4 and
11.7 bps/Hz for the multiplexing and beamforming techniques,
respectively.
Finally, Figures 2g and 2h present the results for a 10 m
distance. For such a distance, path loss leads to a very low
reception SNR and thus the beamforming performance is significantly better than the multiplexing performance. It is well-known that the beamforming technique is not very effective where there are strong multipath rays [12]. Thus, it is observed that in very high absorption frequency windows, the beamforming performance drops sharply. This is not only because of receiving strong NLoS rays caused by molecular re-radiation but also because of LoS signal attenuation. Note that the multiplexing technique can take advantage of the same windows at high SNR, as we discussed above for Figure 2f.
discussed above for Figure 2f.
V. CONCLUSION
In this paper, we compared the beamforming and multiplexing techniques of MIMO in the terahertz band. We showed that at high SNR, i.e., high transmit power or short distance, the multiplexing technique can provide a considerable capacity gain compared with beamforming. However, beyond a few meters, such as at 10 m, enough transmit power must be available to use the multiplexing technique; otherwise its capacity drops to zero, whereas the beamforming technique can still provide useful spectral efficiency at the cost of complexity and protocol overhead. Our theoretical model also showed that molecular re-radiation in the THz band can help a massive MIMO system improve channel performance through multiplexing. The re-radiation can provide significantly strong multipath components to achieve a full spatial multiplexing gain as long as the receiver has sufficient SNR coverage. This means that some very high absorption frequency windows, which were formerly regarded as infeasible for communication, might be preferable choices for MIMO in certain applications.
R EFERENCES
[1] I. F. Akyildiz and J. M. Jornet, “Realizing ultra-massive MIMO (1024x1024) communication in the (0.06-10) terahertz band,” Nano Communication Networks, vol. 8, pp. 46-54, 2016.
[2] E. Zarepour, M. Hassan, C. T. Chou, and A. A. Adesina, “Semon: Sensorless event monitoring in self-powered wireless nanosensor networks,”
ACM Transactions on Sensor Networks (TOSN), vol. 13, no. 2, p. 15,
2017.
[3] I. F. Akyildiz, J. M. Jornet, and C. Han, “Terahertz band: Next frontier
for wireless communications,” Physical Communication, vol. 12, pp. 16
– 32, 2014.
[4] J. Kokkoniemi, J. Lehtomäki, and M. Juntti, “A discussion on molecular
absorption noise in the terahertz band,” Nano Communication Networks,
2015.
[5] J. M. Jornet and I. F. Akyildiz, “Femtosecond-Long Pulse-Based Modulation for Terahertz Band Communication in Nanonetworks,” IEEE
Transactions on Communications, vol. 62, no. 5, pp. 1742–1754, May
2014.
Fig. 2: 225x225 MIMO channel performance in tropic atmosphere. Panels: (a) absorption coefficient (m^-1) vs. frequency (GHz) at T = 273 K, P = 1 atm, for the US Model tropic (high H2O) and US Model high-latitude winter (low H2O); (b) signal attenuation; (c)-(h) capacity (bps/Hz) vs. frequency (GHz) for beamforming and multiplexing, each with and without re-radiation: (c) distance 0.1 m, transmit power 1 mW; (d) 0.1 m, 10 mW; (e) 1 m, 1 mW; (f) 1 m, 10 mW; (g) 10 m, 1 mW; (h) 10 m, 10 mW.
[6] J. M. Jornet Montana, “Fundamentals of electromagnetic nanonetworks
in the terahertz band,” Ph.D. dissertation, Georgia Institute of Technology, 2013.
[7] L. Rothman, I. Gordon, Y. Babikov, A. Barbe et al., “The HITRAN2012 molecular spectroscopic database,” Journal of Quantitative Spectroscopy and Radiative Transfer, vol. 130, pp. 4-50, 2013.
[8] L. D. Barron, Molecular light scattering and optical activity. Cambridge University Press, 2004.
[9] D. Tse and P. Viswanath, “Chapter 07: MIMO I : spatial multiplexing
and channel modeling,” Fundamentals of Wireless Communication, pp.
290–331, 2005.
[10] F. R. Farrokhi, G. J. Foschini, A. Lozano, and R. A. Valenzuela,
“Link-optimal space-time processing with multiple transmit and receive
antennas,” IEEE Communications Letters, vol. 5, no. 3, pp. 85–87, 2001.
[11] G. Lebrun, M. Faulkner, M. Shafi, and P. J. Smith, “Mimo ricean channel capacity: an asymptotic analysis,” IEEE Transactions on Wireless
Communications, vol. 5, no. 6, pp. 1343–1350, 2006.
[12] D. Gesbert, M. Shafi, D.-s. Shiu, P. J. Smith et al., “From theory to
practice: An overview of mimo space-time coded wireless systems,”
IEEE Journal on selected areas in Communications, vol. 21, no. 3, pp.
281–302, 2003.
| 7 |
An Emptiness Algorithm for Regular Types with
Set Operators
arXiv:cs/9811015v1 [cs.LO] 11 Nov 1998
Lunjin Lu and John G. Cleary
Department of Computer Science
University of Waikato
Hamilton, New Zealand
Phone: +64-838-4627/4378
{lunjin,jcleary}@cs.waikato.ac.nz
Abstract. An algorithm to decide the emptiness of a regular type expression with set operators given a set of parameterised type definitions
is presented. The algorithm can also be used to decide the equivalence
of two regular type expressions and the inclusion of one regular type expression in another. The algorithm strictly generalises previous work in
that tuple distributivity is not assumed and set operators are permitted
in type expressions.
Keywords: type, emptiness, prescriptive type
1
Introduction
Types play an important role in programming languages [6]. They
make programs easier to understand and help detect errors. Types
have been introduced into logic programming in the forms of type
checking and inference [5,9,12,26,32] or type analysis [25,33,17,19,13,22,7,23]
or typed languages [16,21,28,31]. Recent logic programming systems
allow the programmer to declare types for predicates and type errors
are then detected either at compile time or at run time. The reader
is referred to [27] for more details on types in logic programming.
A type is a possibly infinite set of ground terms with a finite
representation. An integral part of any type system is its type language that specifies which sets of ground terms are types. To be
useful, types should be closed under intersection, union and complement operations. The decision problems such as the emptiness of
a type, inclusion of a type in another and equivalence of two types
should be decidable. Regular term languages [14,8], called regular
types, satisfy these conditions and have been widely used as
types [29,25,33,9,17,21,28,31,12,32,19,13,22,7,23].
Most type systems use tuple distributive regular types which are
strictly less powerful than regular types [29,25,33,17,21,28,31,12,32,19,13,22,7,23].
Tuple distributive regular types are regular types closed under tuple
distributive closure. Intuitively, the tuple distributive closure of a set
of terms is the set of all terms constructed recursively by permuting
each argument position among all terms that have the same function
symbol [32].
This paper gives an algorithm to decide if a type expression denotes an empty set of terms. The correctness of the algorithm is
proved and its complexity is analysed. The algorithm works on prescriptive types [28]. By prescriptive types, we mean that the meaning of a type is determined by a given set of type definitions. We
allow parametric and overloading polymorphism in type definitions.
Prescriptive types are useful both in compilers and other program
manipulation tools such as debuggers because they are easy to understand for programmers. Type expressions may contain set operators
with their usual interpretations. Thus, the algorithm can be used to
decide the equivalence of two type expressions and the inclusion of
one type expression in another. The introduction of set operators
into type expressions allows concise and intuitive representation of
regular types.
Though using regular term languages as types allows us to make use of theoretical results in the field of tree automata [14], algorithms for testing the emptiness of tree automata cannot be applied directly as type definitions may be parameterised. For instance, in order to decide the emptiness of a type expression given a set of type definitions, it would be necessary to construct a tree automaton from the type expression and the set of type definitions before an algorithm for determining the emptiness of a tree automaton can be
used. When type definitions are parameterised, this would make it
necessary to construct a different automaton each time the emptiness of a type expression is tested. Thus, an algorithm that works
directly with type definitions is desirable as it avoids this repeated
construction of automata.
Attempts have been made in the past to find algorithms for regular types [25,12,32,33,31,10,9]. To our knowledge, Dart and Zobel’s
work [10] is the only one to present decision algorithms for emptiness
and inclusion problems for prescriptive regular types without the tuple distributive restriction. Unfortunately, their decision algorithm
for the inclusion problem is incorrect for regular types in general.
See [24] for a counterexample. Moreover, the type language of Dart
and Zobel is less expressive than that considered in this paper since
it doesn’t allow set operators and parameterised type definitions.
Set constraint solving has also been used in type checking and
type inference [3,2,20,18,11]. However, set constraint solving methods are intended to infer descriptive types [28] rather than for testing
emptiness of prescriptive types [28]. Therefore, they are useful in different settings from the algorithm presented in this paper. Moreover,
algorithms proposed for set constraint solving [3,4,2,1] are not applicable to the emptiness problem we considered in this paper as they
don’t take type definitions into account.
The remainder of this paper is organised as follows. Section 2
describes our language of type expressions and type definitions. Section 3 presents our algorithm for testing if a type expression denotes
an empty set of terms. Section 4 addresses the correctness of the algorithm.
Section 5 presents the complexity of the algorithm and section 6
concludes the paper. Some lemmas are presented in the appendix.
2
Type Language
Let Σ be a fixed ranked alphabet. Each symbol in Σ is called a
function symbol and has a fixed arity. It is assumed that Σ contains
at least one constant that is a function symbol of arity 0. The arity
of a symbol f is denoted as arity(f ). Σ may be considered as the set
of function symbols in a program. Let T (Φ) be the set of all terms
over Φ. T (Σ) is the set of all possible values that a program variable
can take. We shall use regular term languages over Σ as types.
A type is represented by a ground term constructed from another
ranked alphabet Π and {⊓, ⊔, ∼, 1, 0}, called type constructors. It is
assumed that (Π ∪ {⊓, ⊔, ∼, 1, 0}) ∩ Σ = ∅. Thus, a type expression
is a term in T (Π ∪ {⊓, ⊔, ∼, 1, 0}). The denotations of type constructors in Π are determined by type definitions whilst ⊓, ⊔, ∼, 1
and 0 have fixed denotations that will be given soon.
Several equivalent formalisms such as tree automata [14,8], regular term grammars [14,10,8] and regular unary logic programs [32]
have been used to define regular types. We define types by type
rules. A type rule is a production rule of the form c(ζ1 , · · · , ζm ) → τ
where c ∈ Π, ζ1 , · · · , ζm are different type parameters and τ ∈
T (Σ ∪ Π ∪ Ξm ) where Ξm = {ζ1 , · · · , ζm }. The restriction that every
type parameter in the righthand side of a type rule must occur in
the lefthand side of the type rule is often referred to as type preserving [30] and has been used in all the type definition formalisms.
Note that overloading of function symbols is permitted as a function
symbol can appear in the righthand sides of many type rules. We denote by ∆ the set of all type rules and define Ξ ≝ ∪c∈Π Ξarity(c). hΠ, Σ, ∆i is a restricted form of context-free term grammar.
Example 1. Let Σ = {0, s(), nil, cons(, )} and Π = {Nat, Even, List()}.
∆ defines natural numbers, even numbers, and lists where
∆ = { Nat → 0 | s(Nat),
      Even → 0 | s(s(Even)),
      List(ζ) → nil | cons(ζ, List(ζ)) }
where, for instance, Nat → 0 | s(Nat) is an abbreviation of two rules
Nat → 0 and Nat → s(Nat).
∆ is called simplified if τ in each production rule c(ζ1 , · · · , ζm ) → τ
is of the form f (τ1 , · · · , τn ) such that each τj , for 1 ≤ j ≤ n, is either
in Ξm or of the form d(ζ1′ , · · · , ζk′ ) and ζ1′ , · · · , ζk′ ∈ Ξm . We shall
assume that ∆ is simplified. There is no loss of generality to use
a simplified set of type rules since every set of type rules can be
simplified by introducing new type constructors and rewriting and
adding type rules in the spirit of [10].
Example 2. The following is the simplified version of the set of type
rules in example 1. Σ = {0, s(), nil, cons(, )}, Π = {Nat, Even, Odd, List()}
and
∆=
(
Nat → 0 | s(Nat), Even → 0 | s(Odd),
Odd → s(Even), List(ζ) → nil | cons(ζ, List(ζ))
)
A type valuation φ is a mapping from Ξ to T (Π ∪{⊓, ⊔, ∼, 1, 0}).
The instance φ(R) of a production rule R under φ is obtained by replacing each occurrence of each type parameter ζ in R with φ(ζ).
E.g., List(Nat⊓(∼Even)) → cons(Nat⊓(∼Even), List(Nat⊓(∼Even)))
is the instance of List(ζ) → cons(ζ, List(ζ)) under a type valuation
that maps ζ to Nat⊓(∼Even). Let
ground(∆) ≝ {φ(R) | R ∈ ∆ ∧ φ ∈ (Ξ → T (Π ∪ {⊓, ⊔, ∼, 1, 0}))} ∪ {1 → f (1, · · · , 1) | f ∈ Σ}.
ground(∆) is the set of all ground instances of grammar rules in ∆
plus rules of the form 1 → f (1, · · · , 1) for every f ∈ Σ.
Given a set ∆ of type definitions, the type denoted by a type
expression is determined by the following meaning function.
[1]]∆ ≝ T (Σ)
[0]]∆ ≝ ∅
[E1 ⊓E2]]∆ ≝ [E1]]∆ ∩ [E2]]∆
[E1 ⊔E2]]∆ ≝ [E1]]∆ ∪ [E2]]∆
[∼E]]∆ ≝ T (Σ) − [E]]∆
[ω]]∆ ≝ ∪(ω→f (E1 ,···,En ))∈ground(∆) {f (t1 , · · · , tn ) | ∀1 ≤ i ≤ n. ti ∈ [Ei]]∆ }
[·]]∆ gives fixed denotations to ⊓, ⊔, ∼, 1 and 0. ⊓, ⊔ and ∼ are
interpreted by [·]]∆ as set intersection, set union and set complement
with respect to T (Σ). 1 denotes T (Σ) and 0 the empty set.
Example 3. Let ∆ be that in example 2. We have
[Nat]]∆ = {0, s(0), s(s(0)), · · ·}
[Even]]∆ = {0, s(s(0)), s(s(s(s(0)))), · · ·}
[Nat⊓∼Even]]∆ = {s(0), s(s(s(0))), s(s(s(s(s(0))))), · · ·}
[List(Nat⊓∼Even)]]∆ = {cons(s(0), nil), cons(s(s(s(0))), nil), · · ·}
Lemma 5 in the appendix states that every type expression
denotes a regular term language, that is, a regular type.
We extend [·]]∆ to sequences θ of type expressions as follows.
[ǫ]]∆ ≝ {ǫ}
[hEi • θ′]]∆ ≝ [E]]∆ × [θ′]]∆
where ǫ is the empty sequence, • is the infix sequence concatenation
operator, hEi is the sequence consisting of the type expression E
and × is the Cartesian product operator. As a sequence of type
expressions, ǫ can be thought of as consisting of zero instances of 1. We use Λ to denote the sequence consisting of zero instances of 0 and define [Λ]]∆ ≝ ∅.
We shall call a sequence of type expressions simply a sequence.
A sequence expression is an expression consisting of sequences of
the same length and ⊓, ⊔ and ∼. The length of the sequences in a
sequence expression θ is called the dimension of θ and is denoted by
‖θ‖. Let θ, θ1 and θ2 be sequence expressions of the same length.
[θ1 ⊓θ2]]∆ ≝ [θ1]]∆ ∩ [θ2]]∆
[θ1 ⊔θ2]]∆ ≝ [θ1]]∆ ∪ [θ2]]∆
[∼θ]]∆ ≝ (T (Σ) × · · · × T (Σ)) − [θ]]∆, where the product contains ‖θ‖ copies of T (Σ).
A conjunctive sequence expression is a sequence expression of the form γ1 ∧ · · · ∧ γm where γi, for 1 ≤ i ≤ m, are sequences.
3
Emptiness Algorithm
This section presents an algorithm that decides if a type expression
denotes the empty set with respect to a given set of type definitions.
The algorithm can also be used to decide if (the denotation of) one
type expression is included in (the denotation of) another because
E1 is included in E2 iff E1 ⊓∼E2 is empty.
We first introduce some terminology and notations. A type atom
is a type expression of which the principal type constructor is not a
set operator. A type literal is either a type atom or the complement
of a type atom. A conjunctive type expression C is of the form ⊓i∈I li
with li being a type literal. Let α be a type atom. F (α) defined below
is the set of the principal function symbols of the terms in [α]]∆ :
F (α) ≝ {f ∈ Σ | ∃ζ1 · · · ζk .((α → f (ζ1 , · · · , ζk )) ∈ ground(∆))}
Let f ∈ Σ. Define
Afα ≝ {hα1 , · · · , αk i | (α → f (α1 , · · · , αk )) ∈ ground(∆)}
We have [Afα]]∆ = {ht1 , · · · , tk i | f (t1 , · · · , tk ) ∈ [α]]∆ }. Both F (α) and Afα are finite even though ground(∆) is usually not finite.
The algorithm repeatedly reduces the emptiness problem of a type
expression to the emptiness problems of sequence expressions and
then reduces the emptiness problem of a sequence expression to the
emptiness problems of type expressions. Tabulation is used to break
down any possible loop and to ensure termination. Let O be a type
expression or a sequence expression. Define empty(O) ≝ ([[O]]∆ = ∅).
3.1
Two Reduction Rules
We shall first sketch the two reduction rules and then add tabulation
to form an algorithm. Initially the algorithm is to decide the validity
of a formula of the form
empty(E)    (1)
where E is a type expression.
Reduction Rule One. The first reduction rule rewrites a formula of the form (1) into a conjunction of formulae of the following form:
empty(σ)    (2)
where σ is a sequence expression in which ∼ is applied to type expressions but not to any sequence expression.
It is obvious that a type expression has a unique (modulo equivalence of denotation) disjunctive normal form. Let DNF(E) be the disjunctive normal form of E. empty(E) can be written into ∧C∈DNF(E) empty(C).
Each C is a conjunctive type expression. We assume that C contains
at least one positive type literal. This doesn’t cause any loss of generality as [1⊓C]]∆ = [C]]∆ for any conjunctive type expression C. We
also assume that C doesn’t contain repeated occurrences of the same
type literal.
Let C = ⊓1≤i≤m ωi ⊓ ⊓1≤j≤n ∼τj where ωi and τj are type atoms. The set of positive type literals in C is denoted as pos(C) ≝ {ωi | 1 ≤ i ≤ m} while the set of complemented type atoms is denoted as neg(C) ≝ {τj | 1 ≤ j ≤ n}. lit(C) denotes the set of literals occurring in C. By lemma 3 in the appendix, empty(C) is equivalent to
∀f ∈ ∩α∈pos(C) F (α). empty((⊓ω∈pos(C) (⊔Afω ))⊓(⊓τ ∈neg(C) ∼(⊔Afτ )))    (3)
The intuition behind the equivalence is as follows. [C]]∆ is empty
iff, for every function symbol f , the set of the sequences ht1 , · · · , tk i
of terms such that f (t1 , · · · , tk ) ∈ [C]]∆ is empty. Only the function
symbols in ∩α∈pos(C) F (α) need to be considered.
We note the following two special cases of the formula (3).
(a) If ∩α∈pos(C) F (α) = ∅ then the formula (3) is true because ∧∅ =
true. In particular, F (0) = ∅. Thus, if 0 ∈ pos(C) then ∩α∈pos(C) F (α) =
∅ and hence the formula (3) is true.
(b) If Afτ = ∅ for some τ ∈ neg(C) then ⊔Afτ = h0, · · · , 0i and
∼(⊔Afτ ) = h1, · · · , 1i. Thus, τ has no effect on the subformula
for f when Afτ = ∅.
In order to get rid of complement operators over sequence subexpressions, the complement operator in ∼(⊔Afτ ) is pushed inwards
by the function push defined in the following.
push(∼(⊔i∈I γi )) ≝ ⊓i∈I push(∼γi )
push(∼hE1 , E2 , · · · , Ek i) ≝ ⊔1≤l≤k h1, · · · , 1, ∼El , 1, · · · , 1i, for k ≥ 1, where ∼El occurs in position l (preceded by l − 1 and followed by k − l copies of 1)
push(∼ǫ) ≝ Λ
It follows from De Morgan’s law and the definition of [·]]∆ that
[push(∼(⊔Afτ ))]]∆ = [∼(⊔Afτ )]]∆ . Substituting push(∼(⊔Afτ )) for ∼(⊔Afτ )
in the formula (3) gives rise to a formula of the form (2).
Reduction Rule Two. The second reduction rule rewrites a formula of the form (2) into a conjunction of disjunctions of formulae of the form (1). Formula (2) is written into a disjunction of formulae of the form
empty(Γ )
where Γ is a conjunctive sequence expression.
In the case ‖Γ‖ = 0, by lemma 4 in the appendix, empty(Γ ) can be decided without further reduction. If Λ ∈ Γ then empty(Γ ) is true because [Λ]]∆ = ∅. Otherwise, empty(Γ ) is false because [Γ ]]∆ = {ǫ}. In the case ‖Γ‖ ≠ 0, empty(Γ ) is equivalent to
∨1≤j≤‖Γ‖ empty(Γ↓j)
where, letting Γ = γ1 ⊓ · · · ⊓γk , Γ↓j ≝ ⊓1≤i≤k γi,j with γi,j being the j-th component of γi . Note that Γ↓j is a type expression and empty(Γ↓j) is of the form (1).
3.2
Algorithm
The two reduction rules in the previous section form the core of the
algorithm. However, they alone cannot be used as an algorithm as
a formula empty(E) may reduce to a formula containing empty(E)
as a sub-formula, leading to nontermination. Suppose Σ = {f (), a},
Π = {Null} and ∆ = {Null → f (Null)}. Clearly, empty(Null) is
true. However, by the first reduction rule, empty(Null) reduces to
empty(hNulli) which then reduces to empty(Null) by the second
reduction rule. This process will not terminate.
The solution, inspired by [10], is to remember in a table a particular kind of formulae of which truth is being tested. When a formula
of that kind is tested, the table is first looked up. If the formula is
implied by any formula in the table, then it is determined as true.
Otherwise, the formula is added into the table and then reduced by
a reduction rule.
The emptiness algorithm presented below remembers every conjunctive type expression of which emptiness is being tested. Thus
the table is a set of conjunctive type expressions. Let C1 and C2 be
conjunctive type expressions. We define C1 ≼ C2 ≝ (lit(C1 ) ⊇ lit(C2 )). Since Ci = ⊓l∈lit(Ci ) l, C1 ≼ C2 implies [C1]]∆ ⊆ [C2]]∆ and hence (C1 ≼ C2 ) ∧ empty(C2 ) implies empty(C1 ).
Adding tabulation to the two reduction rules, we obtain the following algorithm for testing the emptiness of prescriptive regular
types. Let
BCf = (⊓ω∈pos(C) (⊔Afω ))⊓(⊓τ ∈neg(C) push(∼(⊔Afτ ))).
etype(E) ≝ etype(E, ∅)    (4)
etype(E, Ψ ) ≝ ∀C ∈ DNF(E). etype conj(C, Ψ )    (5)
etype conj(C, Ψ ) ≝
    true, if pos(C) ∩ neg(C) ≠ ∅,
    true, if ∃C ′ ∈ Ψ. C ≼ C ′,
    ∀f ∈ ∩α∈pos(C) F(α). eseq(BCf , Ψ ∪ {C}), otherwise.    (6)
eseq(Θ, Ψ ) ≝ ∀Γ ∈ DNF(Θ). eseq conj(Γ, Ψ )    (7)
eseq conj(Γ, Ψ ) ≝
    true, if ‖Γ‖ = 0 ∧ Λ ∈ Γ,
    false, if ‖Γ‖ = 0 ∧ Λ ∉ Γ,
    ∃1 ≤ j ≤ ‖Γ‖. etype(Γ↓j, Ψ ), if ‖Γ‖ ≠ 0.    (8)
Equation 4 initialises the table to the empty set. Equations 5
and 6 implement the first reduction rule while equations 7 and 8 implement the second reduction rule. etype(·, ·) and etype conj(·, ·) test the emptiness of an arbitrary type expression and that of a conjunctive type expression respectively. eseq(·, ·) tests the emptiness of a sequence expression consisting of sequences and ⊓ and ⊔ operators while eseq conj(·, ·) tests the emptiness of a conjunctive sequence expression. The expression of which emptiness is to be tested is passed
as the first argument to these functions. The table is passed as the
second argument. It is used in etype conj(, ) to detect a conjunctive
type expression of which emptiness is implied by the emptiness of
a tabled conjunctive type expression. As we shall show later, this
ensures the termination of the algorithm. Each of the four binary
functions returns true iff the emptiness of the first argument is implied by the second argument and the set of type definitions.
Tabling any other kind of expressions such as arbitrary type expressions can also ensure termination. However, tabling conjunctive
type expressions makes it easier to detect the implication of the
emptiness of one expression by that of another because lit(C) can
be easily computed given a conjunctive type expression C. In an implementation, a conjunctive type expression C in the table can be
represented as lit(C).
The first two definitions for etype conj(C, Ψ ) in equation 6 terminate the algorithm when the emptiness of C can be decided by
C and Ψ without using type definitions. The first definition also excludes from the table any conjunctive type expression that contains
both a type atom and its complement.
3.3
Examples
We now illustrate the algorithm with some examples.
Example 4. Let type definitions be given as in example 2. The tree
in figure 1 depicts the evaluation of etype(Nat⊓∼Even⊓∼Odd) by
the algorithm. Nodes are labeled with function calls. We will identify
a node with its label. Arcs from a node to its children are labeled
with the number of the equation that is used to evaluate the node.
Abbreviations used in the labels are defined in the legend to the
right of the tree. Though [A]]∆ = [B]]∆ , A and B are syntactically
different type expressions. The evaluation returns true, verifying
[Nat⊓∼Even⊓∼Odd]]∆ = ∅. Consider etype conj(B, {A}). We have
B ≼ A as lit(A) = lit(B). Thus, by equation 6, etype conj(B, {A}) =
true.
Fig. 1. Evaluation of etype(Nat⊓∼Even⊓∼Odd). Legend: A = Nat⊓∼Even⊓∼Odd, B = Nat⊓∼Odd⊓∼Even, C = hNati⊓h∼Oddi⊓h∼Eveni. The evaluation tree has root etype(A) and all of its leaves return true.
Example 5. Let type definitions be given as in example 2. The tree
in figure 2 depicts the evaluation of etype(List(Even⊓∼Nat)) by the
algorithm. The evaluation returns false, verifying [List(Even⊓∼Nat)]]∆ ≠
∅. Indeed, [List(Even⊓∼Nat)]]∆ = {nil}. The rightmost node is not
evaluated as its sibling returns false, which is enough to establish
the falsity of their parent node.
Fig. 2. Evaluation of etype(List(Even⊓∼Nat)). Legend: A = List(Even⊓∼Nat), B = Even⊓∼Nat.
Example 6. The following is a simplified version of the type definitions that is used in [24] to show the incorrectness of the algorithm
by Dart and Zobel for testing inclusion of one regular type in another [10].
Let Π = {α, β, θ, σ, ω, ζ, η}, Σ = {a, b, g(), h(, )} and
∆ = { α → g(ω), β → g(θ) | g(σ), θ → a | h(θ, ζ), σ → b | h(σ, η),
      ω → a | b | h(ω, ζ) | h(ω, η), ζ → a, η → b }
Let t = g(h(h(a, b), a)). t ∈ [α]]∆ and t ∉ [β]]∆ ; see example 3 in [24] for more details. So, [α]]∆ ⊈ [β]]∆ . This is verified by our
algorithm as follows. Let Ψ1 = {α⊓∼β} and Ψ2 = Ψ1 ∪ {ω⊓∼θ⊓∼σ}.
By applying equations 4, 5, 6, 7, 8 and 5 in that order, we have
etype(α⊓∼β) = etype conj(ω⊓∼θ⊓∼σ, Ψ1 ). By equation 6, we have
etype(α⊓∼β) = eseq(ǫ⊓Λ⊓ǫ, Ψ2 ) ∧ eseq(ǫ⊓ǫ⊓Λ, Ψ2 ) ∧ eseq(Θ, Ψ2)
where Θ = (hω, ζi⊔hω, ηi)⊓(h∼θ, 1i⊔h1, ∼ζi)⊓(h∼σ, 1i⊔h1, ∼ηi). We
choose not to simplify expressions such as ǫ⊓ǫ⊓∼Λ so as to make
the example easy to follow. By applying equations 7 and 8, we
have both eseq(ǫ⊓Λ⊓ǫ, Ψ2 ) = true and eseq(ǫ⊓ǫ⊓Λ, Ψ2 ) = true. So,
etype(α⊓∼β) = eseq(Θ, Ψ2). Let Γ = hω, ζi⊓h∼θ, 1i⊓h1, ∼ηi. To
show etype(α⊓∼β) = false, it suffices to show eseq conj(Γ, Ψ2 ) =
false by equation 7 because Γ ∈ DNF(Θ) and etype(α⊓∼β) =
eseq(Θ, Ψ2).
Figure 3 depicts the evaluation of eseq conj(Γ, Ψ2). The node that
is linked to its parent by a dashed line is not evaluated because one of
its siblings returns false, which is sufficient to establish the falsity of
its parent. It is clear from the figure that etype conj(Θ, Ψ2) = false
and hence etype(α⊓∼β) = false.
4
Correctness
This section addresses the correctness of the algorithm. We shall
first show that tabulation ensures the termination of the algorithm
because the table can only be of finite size. We then establish the
partial correctness of the algorithm.
Fig. 3. Evaluation of etype conj(Γ, Ψ2 ). Legend: Θ1 = (hω, ζi⊔hω, ηi)⊓(h∼θ, 1i⊔h1, ∼ζi), Ψ3 = Ψ2 ∪ {ω⊓∼θ}, Ψ4 = Ψ2 ∪ {ζ⊓∼η}, Γ = hω, ζi⊓h∼θ, 1i⊓h1, ∼ηi.
4.1
Termination
Given a type expression E, a top-level type atom in E is a type
atom in E that is not a sub-term of any type atom in E. The
set of top-level type atoms in E is denoted by TLA(E). For instance, letting E = ∼List(Nat)⊔Tree(Nat⊓∼Even), TLA(E) = {List(Nat), Tree(Nat⊓∼Even)}. We extend TLA(·) to sequences by TLA(hE1 , E2 , · · · , Ek i) ≝ ∪1≤i≤k TLA(Ei ).
Given a type expression E0 , the evaluation tree for etype(E0 ) contains nodes of the form etype(E, Ψ ), etype conj(C, Ψ ), eseq(Θ, Ψ )
and eseq conj(Γ, Ψ ) in addition to the root that is etype(E0 ). Only
nodes of the form etype conj(C, Ψ ) add conjunctive type expressions to the table. Other forms of nodes only pass the table around.
Therefore, it suffices to show that the type atoms occurring in the
first argument of the nodes are from a finite set because any conjunctive type expression added into the table is the first argument
of a node of the form etype conj(C, Ψ ).
The set RTA(E0 ) of type atoms relevant to a type expression E0
is the smallest set of type atoms satisfying
– TLA(E0 ) ⊆ RTA(E0 ), and
– if τ is in RTA(E0 ) and τ → f (τ1 , τ2 , · · · , τk ) is in ground(∆) then
TLA(τi ) ⊆ RTA(E0 ) for 1 ≤ i ≤ k.
The height of τi is no more than that of τ for any τ → f (τ1 , τ2 , · · · , τk )
in ground(∆). Thus, the height of any type atom in RTA(E0 ) is
finite. There are only a finite number of type constructors in Π.
Thus, RTA(E0 ) is of finite size. It follows by examining the algorithm
that type atoms in the first argument of the nodes in the evaluation
tree for etype(E0 ) are from RTA(E0 ) which is finite. Therefore, the
algorithm terminates.
4.2
Partial Correctness
The partial correctness of the algorithm is established by showing
etype(E0 ) = true iff empty(E0 ). Let Ψ be a set of conjunctive type
def
expressions. Define ρΨ = ∧C∈Ψ empty(C). The following two lemmas
form the core of our proof of the partial correctness of the algorithm.
Lemma 1. Let Ψ be a set of conjunctive type expressions, E a type
expression, C a conjunctive type expression, Θ a sequence expression
and Γ a conjunctive sequence expression.
(a) If ρΨ |= empty(C) then etype conj(C, Ψ ) = true, and
(b) If ρΨ |= empty(E) then etype(E, Ψ ) = true, and
(c) If ρΨ |= empty(Γ ) then eseq conj(Γ, Ψ ) = true, and
(d) If ρΨ |= empty(Θ) then eseq(Θ, Ψ ) = true.
Proof. The proof is done by induction on the size of the complement
of Ψ with respect to the set of all possible conjunctive type expressions
in which type atoms are from RTA(E0 ) where E0 is a type expression.
Basis. The complement is empty. Ψ contains all possible conjunctive type expressions in which type atoms are from RTA(E0 ). We
have C ∈ Ψ and hence etype conj(C, Ψ ) = true by equation 6. Therefore, (a) holds. (b) follows from (a) and equation 5. (c) follows from
(b), equation 8 and lemma 4 in the appendix, and (d) follows from
(c) and equation 7.
Induction. By lemma 3 in the appendix, ρΨ |= empty(C) implies ρΨ |= empty(BCf ) for any f ∈ ∩α∈pos(C) F (α). Thus, ρΨ∪{C} |=
empty(BCf ). The complement of Ψ ∪ {C} is smaller than the complement of Ψ . By the induction hypothesis, we have eseq(BCf , Ψ ∪{C}) =
true. By equation 6, etype conj(C, Ψ ) = true. Therefore, (a) holds.
(b) follows from (a) and equation 5. (c) follows from (b), equation 8
and lemma 4 in the appendix and (d) follows from (c) and equation 7.
This completes the proof of the lemma.
Lemma 1 establishes the completeness of etype(, ), etype conj(, ),
eseq(, ) and eseq conj(, ) while the following lemma establishes their
soundness.
Lemma 2. Let Ψ be a set of conjunctive type expressions, E a type
expression, C a conjunctive type expression, Θ a sequence expression
and Γ a conjunctive sequence expression.
(a) ρΨ |= empty(C) if etype conj(C, Ψ ) = true, and
(b) ρΨ |= empty(E) if etype(E, Ψ ) = true, and
(c) ρΨ |= empty(Γ ) if eseq conj(Γ, Ψ ) = true, and
(d) ρΨ |= empty(Θ) if eseq(Θ, Ψ ) = true.
Proof. It suffices to prove (a) since (b),(c) and (d) follow from (a)
as in lemma 1. The proof is done by induction on dp(C, Ψ ) the depth
of the evaluation tree for etype conj(C, Ψ ).
Basis. dp(C, Ψ ) = 1. etype conj(C, Ψ ) = true implies either (i)
pos(C) ∩ neg(C) ≠ ∅ or (ii) ∃C ′ ∈ Ψ. C ≼ C ′ . In case (i), empty(C)
is true and ρΨ |= empty(C). Consider case (ii). By the definition of
and ρΨ , we have etype conj(C, Ψ ) = true implies ρΨ |= empty(C).
Induction. dp(C, Ψ ) > 1. Assume etype conj(C, Ψ ) = true and
ρΨ |= ¬empty(C). By lemma 3, there is f ∈ ∩α∈pos(C) F (α) such
that ρΨ |= ¬empty(BCf ). We have ρΨ∪{C} |= ¬empty(BCf ). dp(BCf , Ψ ∪
{C}) < dp(C, Ψ ). By the induction hypothesis, we have eseq(BCf , Ψ ∪ {C}) = false, for otherwise ρΨ∪{C} |= empty(BCf ). By equation 6, etype conj(C, Ψ ) =
false which contradicts etype conj(C, Ψ ) = true. So, ρΨ |= empty(C)
if etype conj(C, Ψ ) = true. This completes the induction and the
proof of the lemma.
The following theorem is a corollary of lemmas 1 and 2.
Theorem 1. For any type expression E, etype(E) = true iff empty(E).
Proof. By equation 4, etype(E) = etype(E, ∅). By lemma 1.(b) and
lemma 2.(b), we have etype(E, ∅) = true iff ρ∅ |= empty(E). The
result follows since ρ∅ = true.
5
Complexity
We now address the issue of complexity of the algorithm. We only
consider the worst-case time complexity of the algorithm. The time
spent on evaluating etype(E0 ) for a given type expression E0 can be
measured in terms of the number of nodes in the evaluation tree for
etype(E0 ).
The algorithm cycles through etype(, ), etype conj(, ), eseq(, ) and
eseq conj(, ). Thus, children of a node of the form etype(E, Ψ ) can
only be of the form etype conj(C, Ψ ), and so on.
Let |S| be the number of elements in a given set S. The largest
possible table in the evaluation of etype(E0 ) contains all the conjunctive type expressions of which type atoms are from RTA(E0 ).
Therefore, the table can contain at most 2|RTA(E0 )| conjunctive type
expressions. So, the height of the tree is bounded by O(2|RTA(E0 )| ).
We now show that the branching factor of the tree is also bounded
by O(2|RTA(E0 )| ). By equation 5, the number of children of etype(E, Ψ )
is bounded by two to the power of the number of type atoms in E
which is bounded by |RTA(E0 )| because E can only contain type
atoms from RTA(E0 ). By equation 6, the number of children of
etype conj(C, Ψ ) is bounded by |Σ|. The largest number of children of a node eseq(Θ, Ψ ) is bounded by two to the power of the
number of sequences in Θ where Θ = BCf . For each τ ∈ neg(C),
|push(∼(⊔Afτ ))| is O(arity(f )) and |C| < |RTA(E0 )|. Thus, the
number of sequences in Θ is O(arity(f ) ∗ |RTA(E0 )|) and hence the
number of children of eseq(Θ, Ψ ) is O(2|RTA(E0 )| ) since arity(f ) is a
constant. By equation 8, the number of children of eseq conj(Γ, Ψ )
is bounded by maxf ∈Σ arity(f ). Therefore, the branching factor of
the tree is bounded by O(2|RTA(E0 )| ).
The above discussion leads to the following conclusion.
Proposition 1. The time complexity of the algorithm is O(2|RTA(E0 )| )).
The fact that the algorithm is exponential in time is expected because the complexity coincides with the complexity of deciding the
emptiness of any tree automaton constructed from the type expression and the type definitions. A deterministic frontier-to-root tree
automaton recognising [E0]∆ will consist of 2|RTA(E0 )| states as observed in the proof of lemma 5. It is well-known that the decision
of the emptiness of the language of a deterministic frontier-to-root
tree automaton takes time polynomial in the number of the states
of the tree automaton. Therefore, the worst-case complexity of the
algorithm is the best we can expect from an algorithm for deciding
the emptiness of regular types that contain set operators.
6
Conclusion
We have presented an algorithm for deciding the emptiness of prescriptive regular types. Type expressions are constructed from type
constructors and set operators. Type definitions prescribe the meaning of type expressions.
The algorithm uses tabulation to ensure termination. Though the
tabulation is inspired by Dart and Zobel [10], the decision problem
we consider in this paper is more complex as type expressions may
contain set operators. For that reason, the algorithm can also be
used for inclusion and equivalence problems of regular types. The
way we use tabulation leads to a correct algorithm for regular types
while the Dart-Zobel algorithm has been proved incorrect for regular
types [24] in general. To the best of our knowledge, our algorithm is
the only correct algorithm for prescriptive regular types.
In addition to correctness, our algorithm generalises the work of
Dart and Zobel [10] in that type expressions can contain set operators and type definitions can be parameterised. Parameterised
type definitions are more natural than monomorphic type definitions [12,26,32] while set operators make type expressions concise.
The combination of these two features allows more natural type declarations. For instance, the type of the logic program append can be
declared or inferred as append(List(α), List(β), List(α⊔β)).
The algorithm is exponential in time. This coincides with deciding the emptiness of the language recognised by a tree automaton
constructed from the type expression and the type definitions. However, the algorithm avoids the construction of the tree automaton
which cannot be constructed a priori when type definitions are parameterised.
Another related field is set constraint solving [3,2,20,18,11]. However, set constraint solving methods are intended to infer descriptive
types [28] rather than for testing the emptiness of a prescriptive
type [28]. Therefore, they are useful in different settings from the al-
gorithm presented in this paper. In addition, algorithms proposed for
solving set constraints [3,4,2,1] are not applicable to the emptiness
problem we considered in this paper. Take for example the constructor rule in [3,2] which states that emptiness of f (E1 , E2 , · · · , Em ) is
equivalent to the emptiness of Ei for some 1 ≤ i ≤ m. However,
empty(List(0)) is not equivalent to empty(0). The latter is true
while the former is false since [List(0)]]∆ = {nil}. The constructor
rule doesn’t apply because it deals with function symbols only but
doesn’t take the type definitions into account.
References
1. A. Aiken, D. Kozen, M. Vardi, and E. Wimmers. The complexity of set constraints.
In Proceedings of 1993 Computer Science Logic Conference, pages 1–17, 1992.
2. A. Aiken and T.K. Lakshman. Directional type checking of logic programs. In
B. Le Charlier, editor, Proceedings of the First International Static Analysis Symposium, pages 43–60. Springer-Verlag, 1994.
3. A. Aiken and E. Wimmers. Solving systems of set constraints. In Proceedings of
the Seventh IEEE Symposium on Logic in Computer Science, pages 329–340. The
IEEE Computer Society Press, 1992.
4. A. Aiken and E. Wimmers. Type inclusion constraints and type inference. In
Proceedings of the 1993 Conference on Functional Programming Languages and
Computer Architecture, pages 31–41, Copenhagen, Denmark, June 1993.
5. C. Beierle. Type inferencing for polymorphic order-sorted logic programs. In
L. Sterling, editor, Proceedings of the Twelfth International Conference on Logic
Programming, pages 765–779. The MIT Press, 1995.
6. L. Cardelli and P. Wegner. On understanding types, data abstraction, and polymorphism. ACM computing surveys, 17(4):471–522, 1985.
7. M. Codish and V. Lagoon. Type dependencies for logic programs using aciunification. In Proceedings of the 1996 Israeli Symposium on Theory of Computing
and Systems, pages 136–145. IEEE Press, June 1996.
8. H. Comon, M. Dauchet, R. Gilleron, D. Lugiez, S. Tison, and M. Tommasi. Tree
Automata Techniques and Applications. Draft, 1998.
9. P.W. Dart and J. Zobel. Efficient run-time type checking of typed logic programs.
Journal of Logic Programming, 14(1-2):31–69, 1992.
10. P.W. Dart and J. Zobel. A regular type language for logic programs. In Frank
Pfenning, editor, Types in Logic Programming, pages 157–189. The MIT Press,
1992.
11. P. Devienne, J-M. Talbot, and S. Tison. Co-definite set constraints with membership expressions. In J. Jaffar, editor, Proceedings of the 1998 Joint Conference and
Symposium on Logic Programming, pages 25–39. The MIT Press, 1998.
12. T. Fruhwirth, E. Shapiro, M.Y. Vardi, and E. Yardeni. Logic programs as types
for logic programs. In Proceedings of Sixth Annual IEEE Symposium on Logic in
Computer Science, pages 300–309. The IEEE Computer Society Press, 1991.
13. J.P. Gallagher and D.A. de Waal. Fast and precise regular approximations of logic
programs. In M. Bruynooghe, editor, Proceedings of the Eleventh International
Conference on Logic Programming, pages 599–613. The MIT Press, 1994.
14. F. Gécseg and M. Steinby. Tree Automata. Akadémiai Kiadó, 1984.
15. F. Gécseg and M. Steinby. Tree languages. In G. Rozenberg and A. Salomma,
editors, Handbook of Formal Languages, pages 1–68. Springer-Verlag, 1996.
16. M. Hanus. Horn clause programs with polymorphic types: semantics and resolution.
Theoretical Computer Science, 89(1):63–106, 1991.
17. N. Heintze and J. Jaffar. A finite presentation theorem for approximating logic
programs. In Proceedings of the seventh Annual ACM Symposium on Principles of
Programming Languages, pages 197–209. The ACM Press, 1990.
18. N. Heintze and J. Jaffar. A decision procedure for a class of set constraints. Technical Report CMU-CS-91-110, Carnegie-Mellon University, February 1991. (Later
version of a paper in Proc. 5th IEEE Symposium on LICS).
19. N. Heintze and J. Jaffar. Semantic types for logic programs. In Frank Pfenning,
editor, Types in Logic Programming, pages 141–155. The MIT Press, 1992.
20. N. Heintze and J. Jaffar. Set constraints and set-based analysis. In Alan Borning,
editor, Principles and Practice of Constraint Programming, volume 874 of Lecture
Notes in Computer Science. Springer, May 1994. (PPCP’94: Second International
Workshop, Orcas Island, Seattle, USA).
21. D. Jacobs. Type declarations as subtype constraints in logic programming. SIGPLAN Notices, 25(6):165–73, 1990.
22. L. Lu. Type analysis of logic programs in the presence of type definitions. In
Proceedings of the 1995 ACM SIGPLAN Symposium on Partial Evaluation and
Semantics-Based program manipulation, pages 241–252. The ACM Press, 1995.
23. L. Lu. A polymorphic type analysis in logic programs by abstract interpretation.
Journal of Logic Programming, 36(1):1–54, 1998.
24. L. Lu and J. Cleary. On Dart-Zobel algorithm for testing regular type inclusion.
Technical report, Department of Computer Science, The University of Waikato,
October 1998. http://xxx.lanl.gov/ps/cs/9810001.
25. P. Mishra. Towards a theory of types in Prolog. In Proceedings of the IEEE international Symposium on Logic Programming, pages 289–298. The IEEE Computer
Society Press, 1984.
26. A. Mycroft and R.A. O’Keefe. A polymorphic type system for Prolog. Artificial
Intelligence, 23:295–307, 1984.
27. Frank Pfenning, editor. Types in logic programming. The MIT Press, Cambridge,
Massachusetts, 1992.
28. U.S. Reddy. Types for logic programs. In S. Debray and M. Hermenegildo, editors,
Logic Programming. Proceedings of the 1990 North American Conference, pages
836–40. The MIT Press, 1990.
29. M. Soloman. Type definitions with parameters. In Conference Record of the Fifth
ACM Symposium on Principles of Programming Languages, pages 31–38, 1978.
30. J. Tiuryn. Type inference problems: A survey. In B. Roven, editor, Proceedings of
the Fifteenth International Symposium on Mathematical Foundations of Computer
Science, pages 105–120. Springer-Verlag, 1990.
31. E. Yardeni, T. Fruehwirth, and E. Shapiro. Polymorphically typed logic programs.
In K. Furukawa, editor, Logic Programming. Proceedings of the Eighth International
Conference, pages 379–93. The MIT Press, 1991.
32. E. Yardeni and E. Shapiro. A type system for logic programs. Journal of Logic
Programming, 10(2):125–153, 1991.
33. J. Zobel. Derivation of polymorphic types for Prolog programs. In J.-L. Lassez, editor, Logic Programming: Proceedings of the fourth international conference, pages
817–838. The MIT Press, 1987.
Appendix
Lemma 3. Let C be a conjunctive type expression. empty(C) iff
∀f ∈ ∩α∈pos(C) F (α).
empty((⊓ω∈pos(C) (⊔Afω ))⊓(⊓τ ∈neg(C) ∼(⊔Afτ )))
Proof. Let t be a sequence of terms and f a function symbol. By
the definition of [·]]∆ , f (t) ∈ [C]]∆ iff f ∈ ∩α∈pos(C) F (α) and t ∈
[⊓ω∈pos(C) (⊔Afω ))]]∆ \ [(⊔τ ∈neg(C) (⊔Afτ ))]]∆ . t ∈ [⊓ω∈pos(C) (⊔Afω ))]]∆ \
[(⊔τ ∈neg(C) (⊔Afτ ))]]∆ iff t ∈ [(⊓ω∈pos(C) (⊔Afω ))⊓(⊓τ ∈neg(C) ∼(⊔Afτ ))]]∆ .
Thus, empty(C) iff empty((⊓ω∈pos(C) (⊔Afω ))⊓(⊓τ ∈neg(C) ∼(⊔Afτ ))) for
each f ∈ ∩α∈pos(C) F (α).
Lemma 4. Let Γ be a conjunctive sequence expression. Then
empty(Γ ) iff ∨1≤j≤‖Γ‖ empty(Γ↓j)
Proof. Let ‖Γ‖ = n and Γ = γ1 ⊓γ2 ⊓ · · · ⊓γm with γi = hγi,1 , γi,2 , · · · , γi,n i. We have [Γ ]]∆ = ∩1≤i≤m [γi]]∆ and Γ↓j = γ1,j ⊓γ2,j ⊓ · · · ⊓γm,j . Now ∃1 ≤ j ≤ n. empty(Γ↓j) iff ∃1 ≤ j ≤ n. ∩1≤i≤m [γi,j]]∆ = ∅ iff [Γ ]]∆ = ∅ iff empty(Γ ).
Lemma 5. [M]]∆ is a regular term language for any type expression
M.
Proof. The proof is done by constructing a regular term grammar
for M [14]. We first consider the case M ∈ T (Π ∪ {1, 0}). Let
R = hRTA(M), Σ, ∅, Υ, Mi with
Υ = {(α → f (α1 , · · · , αk )) ∈ ground(∆) | α ∈ RTA(M)}
R is a regular term grammar. It now suffices to prove that t ∈ [M]]∆
iff M ⇒∗R t.
– Sufficiency. Assume M ⇒∗R t. The proof is done by induction on
derivation steps in M ⇒∗R t.
• Basis. M ⇒R t. t must be a constant and M → t is in Υ
which implies M → t is in ground(∆). By the definition of
[·]]∆ . t ∈ [M]]∆ .
• Induction. Suppose M ⇒R f (M1 , · · · , Mk ) ⇒R^(n−1) t. Then t = f (t1 , · · · , tk ) and Mi ⇒R^(ni) ti with ni ≤ (n − 1). By the
induction hypothesis, ti ∈ [Mi]∆ and hence t ∈ [M]]∆ by the
definition of [·]]∆ .
– Necessity. Assume t ∈ [M]]∆ . The proof is done by the height of
t, denoted as height(t).
• height(t) = 0 implies that t is a constant. t ∈ [M]]∆ implies
that M → t is in ground(∆) and hence M → t is in Υ .
Therefore, M ⇒R t.
• Let height(t) = n. Then t = f (t1 , · · · , tk ). t ∈ [M]]∆ implies
that (M → f (M1, · · · , Mk )) ∈ ground(∆) and ti ∈ [Mi]∆ .
By the definition of Υ , we have (M → f (M1 , · · · , Mk )) ∈
Υ . By the definition of RTA(·), we have Mi ∈ RTA(M).
By the induction hypothesis, Mi ⇒∗R ti . Therefore, M ⇒R
f (M1 , · · · , Mk ) ⇒∗R f (t1 , · · · , tk ) = t.
Now consider the case M ∈ T (Π ∪ {⊓, ⊔, ∼, 1, 0}). We complete
the proof by induction on the height of M.
– height(M) = 0. Then M doesn’t contain set operator. We have
already proved that [M]]∆ is a regular term language.
– Now suppose height(M) = n. If M doesn’t contain set operator
then the lemma has already been proved. If the principal type constructor is one of set operators then the result follows immediately
as regular term languages are closed under union, intersection
and complement operators [14,15,8]. It now suffices to prove the
case M = c(M1 , · · · , Ml ) with c ∈ Π. Let N = c(X1 , · · · , Xl )
where each Xj is a different new type constructor of arity 0.
Let Π ′ = Π{X1, · · · , Xl }, Σ ′ = Σ ∪ {x1 , · · · , xl } and ∆′ = ∆ ∪
{Xj → xj |1 ≤ j ≤ l}. [N ]∆′ is a regular term language on
Σ ∪ {x1 , · · · , xl } because N doesn’t contain set operators. By the
induction hypothesis, [Mj]∆ is a regular term language. By the
definition of [·]]· , we have
[M]]∆ = [N ]∆′ [x1 := [M1]∆ , · · · , xl := [Ml]∆ ]
which is a regular term language [14,15,8]. S[y1 := Sy1 , · · · , ] is
the set of terms each of which is obtained from a term in S by
replacing each occurrence of yj with a (possibly different) term
from Syj . This completes the induction and the proof.
The proof also indicates that a non-deterministic frontier-to-root
tree automaton that recognises [M]]∆ has |RTA(M)| states and that
a deterministic frontier-to-root tree automaton that recognises [M]]∆
has O(2^|RTA(M)| ) states.
| 6 |
A Study of the Allan Variance for Constant-Mean
Non-Stationary Processes
arXiv:1702.07795v3 [math.ST] 22 Jun 2017
Haotian Xu, Stéphane Guerrier, Roberto Molinari & Yuming Zhang
Abstract—The Allan Variance (AV) is a widely used quantity
in areas focusing on error measurement as well as in the general
analysis of variance for autocorrelated processes in domains
such as engineering and, more specifically, metrology. The form
of this quantity is widely used to detect noise patterns and
indications of stability within signals. However, the properties of
this quantity are not known for commonly occurring processes
whose covariance structure is non-stationary and, in these cases,
an erroneous interpretation of the AV could lead to misleading
conclusions. This paper generalizes the theoretical form of the
AV to some non-stationary processes while at the same time
being valid also for weakly stationary processes. Some simulation
examples show how this new form can help to understand the
processes for which the AV is able to distinguish these from the
stationary cases and hence allow for a better interpretation of
this quantity in applied cases.
Index Terms—Metrology, Sensor Calibration, Bias-Instability,
Longitudinal Studies, Haar Wavelet Variance, Heteroscedasticity.
I. I NTRODUCTION
The Allan Variance (AV) is a widely used quantity in
areas going from engineering to physics where there is an
interest in studying the stochastic stability of error measurements from various instruments such as, among others,
clocks and oscillators. Its usefulness resides in the fact that it
provides an extremely informative summary on the variance
of time series or, more generally, of autocorrelated processes,
especially when these are non-stationary and with infinite
variance. Indeed, [8] underlined how the AV is a better
measure of uncertainty compared to standard methods (e.g.
moving average variance) for processes such as random walks
and non-stationary Fractional Autoregressive-Moving Average
(ARFIMA) models, while being considerably useful also for
stationary processes. For these processes the AV has a well
known form which can help detect the kind of process, for
example, from the log-log plot of the AV of an observed signal.
The behaviour and forms of the AV for stationary and some
non-stationary processes was studied in [8] where the AV is
used to detect and understand the process underlying a signal
issued from different voltage measurements. However, there
are many other applications for which the AV is of interest
H. Xu is a PhD student, Geneva School of Economics and Management,
University of Geneva, 1211, Switzerland (E-mail: [email protected]).
S. Guerrier is Assistant Professor, Department of Statistics, Pennsylvania
State University, PA, 16801, USA.
R. Molinari is Visiting Assistant Professor, Department of Statistics &
Applied Probability, University of California, Santa Barbara, CA, 93117,
USA.
Y. Zhang is a graduate student, Department of Statistics, Pennsylvania State
University, PA, 16801, USA.
such as the detection of noise terms characterising inertial
sensors [see 2] and many others [see 5, for an overview].
However, although the AV is extremely useful in the above
settings, it is not known how it behaves in the presence
of other types of processes and whether it is able to distinguish
between them. In this paper we intend to investigate the
form of the AV for a particular class of processes which
includes all those processes that have a constant mean but
have a time-varying variance-covariance structure such as, for
example, the Generalized AutoRegressive Conditional Heteroscedasticity (GARCH) models [see 1], while processes with
specific forms of mean or higher-order non-stationarity are
not considered since they can either be dealt with through
statistical regression techniques or simply cannot be detected
by the AV. In particular, we focus on those processes which
are characterized by a dependence structure by blocks since
they are common in settings such as longitudinal studies or
sensor calibration for navigation engineering. In the latter
cases, the AV is often approximated by that of other known
stationary processes such as, for example, the bias-instability
process whose AV is often approximated by that of a first-order autoregressive process [see, for example, 7]. Moreover,
it is not clear whether the AV can actually help to distinguish
between these processes and those processes for which its
form is currently known. The latter aspect is of particular
relevance since it could lead to an erroneous interpretation of
the observed process, for example assuming stationarity when
this is not the case and reaching false conclusions.
In order to deal with the above mentioned non-stationary
processes, this paper intends to study the theoretical form of
the AV when the covariance structure is non-stationary. The
consequent advantage of this study is that, by considering the
varying covariance structure in the AV definition, it extends
the applicability of those approaches that make use of the
AV and raises awareness on its limitations, and inappropriate
interpretation, in distinguishing and identifying these processes
from stationary ones. With this in mind, Section II briefly
defines the AV and describes its theoretical form for those
processes which have been considered up to now. Section III
introduces the new theoretical form of both the overlapping
and non-overlapping AV for processes whose covariance structure is non-stationary and shows how the form of the AV for
stationary processes is a special case of this new form. In the
same section, three case studies are presented which highlight
the importance of these findings in order to better interpret
processes through the AV. Finally Section IV concludes.
II. OVERVIEW OF THE ALLAN VARIANCE
To introduce the AV, let us first define (X_t)_{t=1,...,T} as a weakly stationary, discrete time and regularly spaced stochastic process with a constant mean µ (i.e. E[X_t] = µ) and an autocovariance function defined as follows
γ(h) = cov(X_{t+h}, X_t),
which depends solely on h, the distance between observations, with σ_X^2 = γ(0) being the process variance. We can consequently define the autocorrelation function as
ρ(h) = γ(h)/γ(0).
We consider the AV computed at dyadic scales (τ) starting from local averages of the process which can be denoted as
X̄_t^(n) ≡ (1/n) Σ_{i=1}^{n} X_{t−n+i},    (1)
where n ∈ {x ∈ N : 1 ≤ x < log2(T) − 1} therefore determines the number of consecutive observations considered for the average. If the process has constant mean µ, this implies that X̄_t^(n) also has the same mean and, based on these averages and following [6], the maximum-overlapping AV (MOAV) is defined as
AVar_n(X_t) ≡ (1/(2m*)) Σ_{k=2n}^{T} E[(X̄_k^(n) − X̄_{k−n}^(n))^2],    (2)
where m* = T − 2n + 1, and whose corresponding estimator is given by
ÂVar_n(X_t) = (1/(2m*)) Σ_{k=2n}^{T} (x̄_k^(n) − x̄_{k−n}^(n))^2,    (3)
where x̄_t^(n) denotes the sample equivalent of X̄_t^(n) based on a realization of the process (X_t). Another version of the AV is the non-overlapping AV (NOAV), whose estimator however is not statistically as efficient as that of the MOAV (see App. B for more details).
Keeping in mind the above definitions of the MOAV, [8] delivered a general theoretical form of this quantity when applied to weakly stationary processes which is given by
AVar_n(X_t) = (σ_X^2 / n^2) { n [1 − ρ(n)] + Σ_{i=1}^{n−1} i [2ρ(n − i) − ρ(i) − ρ(2n − i)] }.    (4)
Based on the above equation, the exact form of the AV for different stationary processes, such as the general class of AutoRegressive Moving Average (ARMA) models, can be derived. Moreover, [8] provided the theoretical AV for non-stationary processes such as the random walk and ARFIMA models for which the AV, as mentioned earlier, represents a better measure of uncertainty compared to other methods.
Using the known theoretical forms of the AV, it is therefore possible to detect and distinguish different processes based on the pattern of their AV. Due to this, this quantity (or similar quantities such as the Haar wavelet variance) can be used as a means to estimate different kinds of processes [see for example 4]. However, there are many commonly encountered processes whose AV is not known and it is unclear to what point these can actually be distinguished from stationary processes. The next section delivers a more general form of the AV which includes these processes and studies if this quantity can actually be helpful in detecting them.
III. ALLAN VARIANCE FOR CONSTANT-MEAN NON-STATIONARY PROCESSES
As underlined in the previous sections, the AV is particularly useful for measuring uncertainty in non-stationary processes, especially when these have infinite variance. Nevertheless, there are other forms of non-stationarity for which the properties of the AV are unknown and these consist in those processes (X_t) with a constant mean µ (independent of time) but a non-stationary covariance structure. This implies that the covariance function between observations at distance h is also a function of time t and can therefore be denoted as γ(h, t). This type of process is very common in different areas going from engineering [see 2] to economics [see 3].
To study the theoretical form of the AV for this class of processes, let us first define X_t^(n) as being the following vector of n consecutive observations starting at t − n + 1:
X_t^(n) ≡ [X_{t−n+1} · · · X_t]^T,
which contains the observations used to build the average in (1). Using the above vector, for t = n, ..., T, we can then define the matrix
Σ_t^(n) ≡ var(X_t^(n)),    (5)
and, for t = 2n, ..., T, we define the matrix
Γ_t^(n) ≡ cov(X_{t−n}^(n), X_t^(n)).    (6)
These matrices represent the covariance matrices of the observations contained in each consecutive average. Indeed, Σ_t^(n) represents the covariance matrix of the observations within the average X̄_t^(n) which is used in definitions (D-1) and (2) while Γ_t^(n) represents the cross-covariance between these two sets of observations. A visual representation of these quantities is given in App. A.
In this section we will only consider the non-stationary
MOAV while the form of the non-stationary NOAV is given
in App. B. Based on the above matrices, we can also define
different quantities according to the matrix of reference and
the lags between observations. More specifically, let us first
consider the case in which we are interested in lags h such
that 0 ≤ h < n. Because of the overlapping nature of the
AV, the observations at these lags can belong to the sets of
(n)
(n)
observations within both the matrix Σt and the matrix Γt
(n)
and, for these sets of observations within the matrix Σt , we
can define the following quantity
γ
e(h) ≡
T n−h−1
X
X
1
cov(Xt−n−s−h , Xt−n−s )
?
2m (n − h) t=2n s=0
+ cov(Xt−s−h , Xt−s ).
If, however, the observations at the considered lags are among the set of observations only within the matrix Γ_t^(n), we define the quantity below
γ̃*(h) ≡ (1/(m* h)) Σ_{t=2n}^{T} Σ_{s=1}^{h} cov(X_{t−n+s−h}, X_{t−n+s}).
Finally, when considering lags h such that n ≤ h ≤ 2n − 1, the set of observations at these lags can only be considered within the matrix Γ_t^(n) and, for this final case, we define the quantity
γ̃(h) ≡ (1/(m*(2n − h))) Σ_{t=2n}^{T} Σ_{s=0}^{2n−h−1} cov(X_{t−s−h}, X_{t−s}).
The above definitions can be seen as generalized definitions of the autocovariance which consists in the average autocovariance for a given lag h. Because of this, it must be underlined that these definitions are not at all equivalent to the covariance function γ(h, t) but correspond to an average of this function over all times t. Having specified this, we can now provide the following lemma.
LEMMA 1: The non-stationary MOAV is given by
AVar_n = (1/(2m* n²)) { 2nm* γ̃(0) + 2 [ Σ_{h=1}^{n−1} ( 2m*(n − h) γ̃(h) − m* h γ̃*(h) ) − Σ_{h=n}^{2n−1} m*(2n − h) γ̃(h) ] }.
The proof of this lemma is given in App. C. Considering this expression, an aspect that must be underlined is that the definitions of the functions γ̃(h) and γ̃*(h) given earlier simplify to the autocovariance function γ(h) when dealing with a weakly stationary process. In the latter case, the form of the non-stationary AV consequently reduces to the expression in (4) for which a more detailed discussion is given in App. ??. As a final note to this result it should also be highlighted that, in some of the considered non-stationary cases, the estimators of the MOAV defined in (3) and of the NOAV (see App. B) do not necessarily have the same expectation.
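To make the preceding definitions concrete, the following Python sketch (our own illustration, not code from the paper; function names are ours) computes the MOAV estimator of (3) from a realization and evaluates the expectation in (2) directly from a user-supplied T × T covariance matrix of a constant-mean process; by construction this value must agree with the closed form of Lemma 1.

```python
import numpy as np

def moav_estimate(x, n):
    """MOAV estimator of Eq. (3) at scale n for a realization x of length T."""
    T = len(x)
    xbar = np.convolve(x, np.ones(n) / n, mode="valid")  # running means of n points
    d = xbar[n:] - xbar[:-n]                              # differences at lag n (k = 2n, ..., T)
    m_star = T - 2 * n + 1
    return np.sum(d ** 2) / (2 * m_star)

def theoretical_moav(cov, n):
    """Non-stationary MOAV of Eq. (2), evaluated from a T x T covariance matrix."""
    T = cov.shape[0]
    m_star = T - 2 * n + 1
    acc = 0.0
    for k in range(2 * n, T + 1):            # k = 2n, ..., T (1-indexed)
        w = np.zeros(T)
        w[k - n:k] = 1.0 / n                 # weights of the average ending at time k
        w[k - 2 * n:k - n] = -1.0 / n        # minus the average ending at time k - n
        acc += w @ cov @ w                   # E[(Xbar_k - Xbar_{k-n})^2] for a constant-mean process
    return acc / (2 * m_star)
```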
Having underlined these points and with the general definition of the AV given in Lemma 1, we can now investigate its
properties when assuming the process of interest is within the
class of processes treated in this paper. The next sections report
some simulation studies regarding some of these processes in an attempt to also understand whether the AV is a useful quantity
to detect them and distinguish them from weakly stationary
processes. In all cases, the simulated process is of length
T = 1, 000 (except for the bias-instability process where
T = 2, 500) and is simulated 50 times. The estimated AV
is represented in plots along with the theoretical stationary
and non-stationary forms of the AV in order to understand its
behaviour under these different assumptions.
1) Non-Stationary White Noise: The first process we study is the non-stationary white noise, by which we mean, without loss of generality, all those zero-mean processes whose variance changes with time. The evolution of the variance in time can either be completely random or can follow a specific parametric model such as, for example, a GARCH process.

Fig. 1. Logarithm of the MOAV of the non-stationary white noise process for scales τ = 2^n. Estimated MOAV (light-blue lines); theoretical non-stationary MOAV (black line with dots) and theoretical stationary AV based on the average variance (red line with triangles).
The goal of studying the AV for these types of processes is
to understand whether it is able to detect such a structure in
a time series and if it can be distinguished from a stationary
white noise process. For this purpose, the true non-stationary
process considered in the simulation study is generated from
the following model
X_t ∼ N(0, σ_t²),
where σ_t² = t, with t = 1, . . . , T. The theoretical stationary form is based on the average of the variances used to simulate the processes (i.e. σ̄² = (1 + T)/2 in this example). Fig. 1
represents the estimated AVs along with the theoretical forms
(stationary and non-stationary). In this case, it can be seen
how both theoretical forms correspond and the estimated AVs
closely follow these quantities. This example confirms that
the AV is therefore unable to distinguish between a stationary
white noise process and a white noise process whose second-order behaviour is non-stationary.
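As an illustration of this first case study, the short sketch below (assumed parameters; it reuses the functions from the earlier sketch) simulates the non-stationary white noise with σ_t² = t and compares one estimated MOAV with the theoretical non-stationary value and with the stationary white-noise form based on the average variance.

```python
import numpy as np

T, n = 1000, 2 ** 4
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(np.arange(1.0, T + 1)))         # sigma_t^2 = t
print(moav_estimate(x, n))                                   # one estimated MOAV at scale n
print(theoretical_moav(np.diag(np.arange(1.0, T + 1)), n))   # non-stationary form (Lemma 1)
print(((1 + T) / 2) / n)                                     # stationary white-noise AV, sigma^2 = (1+T)/2
```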
2) Bias-Instability: The bias-instability process is a commonly known process in the engineering domain, specifically for inertial sensor calibration for navigation. The characteristic of this process is that it consists of different concatenated sequences (blocks) where, within each block, the realization of a random variable is repeated (i.e. constant). More formally, let b_i, i = 1, . . . , B, represent the set of time indices belonging to the ith block within the time series, and let C_i ~iid N(0, σ²). We can then define this process as
X_t = C_i   if t ∈ b_i.
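A minimal simulation of this process, together with the block-diagonal covariance matrix needed to evaluate its non-stationary MOAV with the earlier sketch, could look as follows (parameters taken from the experiment described below; variable names are ours).

```python
import numpy as np

B, block_len, sigma2 = 250, 10, 1.0
rng = np.random.default_rng(1)
x_bias = np.repeat(rng.normal(0.0, np.sqrt(sigma2), size=B), block_len)  # X_t = C_i on block b_i
# Covariance: sigma^2 within a block (the value is repeated), zero across blocks.
cov_bias = sigma2 * np.kron(np.eye(B), np.ones((block_len, block_len)))
```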
One realization of the bias-instability process is illustrated in the top panel of Fig. 2 where the length of block b_i is 10 for all i = 1, . . . , B, and B = 250. Since the theoretical form of the AV for this process is not known exactly, it is often approximated by the AV of a First-Order AutoRegressive (AR1) process. Although this approximation can be useful, it is nevertheless still an approximation and, using the form given in Lemma 1, we can now obtain a theoretical form for the AV of this process which is represented in the bottom panel of Fig. 2. Indeed the latter plot shows that the estimated AVs closely follow the theoretical non-stationary form given earlier. The red line represents the AV of a stationary AR1 process which is supposed to approximate the true AV of bias-instability. The latter is the result of the averaging of the theoretical AV for a stationary AR1 process estimated via maximum-likelihood on each of the simulated processes. It is clear how, although close over some scales, this approximation is not good enough when considering the logarithmic representation of the AV. Therefore, knowing the exact form of the AV for this process would allow us to better interpret the signals characterised by bias-instability.

Fig. 2. Top: Realization of the bias-instability process with σ² = 1 and the length of block b_i = 10, ∀i = 1, . . . , B, and B = 250. Bottom: Logarithm of the MOAV of the bias-instability process for scales τ = 2^n. Estimated MOAV (light-blue lines); theoretical non-stationary MOAV (black line with dots) and theoretical stationary MOAV of an AR1 approximating the bias-instability MOAV (red line with triangles).

Fig. 3. Top: Realization of the block-structure autoregressive process with φ = 0.9, σ² = 1 and the length of block b_i = 10, ∀i = 1, . . . , B, and B = 100. Bottom: Logarithm of the MOAV of the block-structure first-order autoregressive process for scales τ = 2^n. Estimated MOAV (light-blue lines); theoretical non-stationary MOAV (black line with dots) and theoretical stationary MOAV assuming no block structure (red line with triangles).

3) Block-Structure Autoregressive Processes: As a final example we consider a block-structure AR1 process. Similarly to the bias-instability process, within this paper, we define a block-structure process as a process whose parameters are fixed but which is made of concatenated time periods (blocks) where observations within each block are generated independently from those in the other blocks. An example is given by the settings of longitudinal studies in which each subject can be measured over time and, although the subjects are independent from each other, these measurements can be explained by an autocorrelated process within each subject. To define this process formally, let X_t^(i) ∼ F_θ denote the following AR1 process
X_t^(i) = φ X_{t−1}^(i) + ε_t,
with parameter vector θ = [φ σ²]^T where φ ∈ (−1, 1) and ε_t ~iid N(0, σ²). If again we let b_i denote the ith block, then the block-structure AR1 process can be defined as
X_t = X_t^(i)   if t ∈ b_i,
where X_t^(i) is independent of X_t^(j), ∀ i ≠ j. By defining φ = 0.9 and σ² = 1 for the simulation study, the top panel of Fig. 3 shows a realization of this process while the bottom panel of Fig. 3 illustrates the results of the simulations for this particular process. As for bias-instability, it can be observed how the stationary form of the AV (that does not consider the block structure) is not close to the estimated AVs while the non-stationary form provided in this paper adequately represents this process and can therefore allow us to distinguish between a stationary autoregressive process and a block-structure one.
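Under the additional assumption (ours, for illustration) that each block is started from the stationary AR1 distribution, the covariance matrix of this block-structure process is block diagonal with entries σ²φ^|s−t|/(1 − φ²), and its non-stationary MOAV can be evaluated with the earlier sketch:

```python
import numpy as np

phi, sigma2, B, block_len = 0.9, 1.0, 100, 10
idx = np.arange(block_len)
block = (sigma2 / (1 - phi ** 2)) * phi ** np.abs(idx[:, None] - idx[None, :])
cov_blocks = np.kron(np.eye(B), block)        # independence across blocks
print(theoretical_moav(cov_blocks, 2 ** 3))   # non-stationary MOAV at scale n = 8
```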
IV. CONCLUSIONS
Within this paper we wanted to underline an issue concerning the AV which had not yet been studied. Indeed, the
behaviour of the AV in commonly occurring settings where
the covariance structure of the processes is non-stationary was unknown and, in many cases, was either ignored or dealt with through approximations. The consequence of the latter approaches would probably be erroneous interpretations
and conclusions drawn from an AV analysis. For this reason,
this paper studied the form of the AV for this class of processes thereby generalizing its form also for weakly stationary
processes. Based on this, several examples were provided in
which the properties of the AV were studied, highlighting its
ability to detect these processes and to eventually distinguish
them from stationary ones, making researchers and practitioners more aware of issues related to the interpretation and use
of this quantity in more general and common settings.
REFERENCES
[1] Bollerslev, T.: Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31(3), 307–327 (1986)
[2] El-Sheimy, N., Hou, H., Niu, X.: Analysis and modeling of inertial sensors using Allan variance. IEEE Transactions on Instrumentation and Measurement 57(1), 140–149 (2008)
[3] Gallegati, M., Semmler, W.: Wavelet Applications in Economics and Finance, vol. 20. Springer (2014)
[4] Guerrier, S., Skaloud, J., Stebler, Y., Victoria-Feser, M.P.: Wavelet-variance-based estimation for composite stochastic processes. Journal of the American Statistical Association 108(503), 1021–1030 (2013)
[5] Percival, D.B.: A wavelet perspective on the Allan variance. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 63(4), 538–554 (2016)
[6] Percival, D.B., Guttorp, P.: Long-memory processes, the Allan variance and wavelets. Wavelets in Geophysics 4, 325–344 (1994)
[7] Unsal, D., Demirbas, K.: Estimation of deterministic and stochastic IMU error parameters. In: Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, pp. 862–868. IEEE (2012)
[8] Zhang, N.F.: Allan variance of time series models for measurement data. Metrologia 45(5), 549 (2008)
APPENDIX A
GRAPHICAL ILLUSTRATION OF MOAV
To graphically illustrate the quantities defined in Section III, Fig. 4 represents the true covariance matrix for a given process and highlights how the AV is related to this matrix by overlapping square matrices along the diagonal, each of which is composed of the quantities defined in Eq. (5) and Eq. (6).

Fig. 4. Graphical illustration of matrices Σ_t^(n) and Γ_t^(n) for the MOAV.

APPENDIX B
THEORETICAL FORM OF THE NOAV FOR NON-STATIONARY PROCESSES
The non-overlapping AV (NOAV) is defined as:
AVar_n(X_t) ≡ (1/(2m)) Σ_{k=1}^{m} E[(X̄_{2k}^(n) − X̄_{2k−1}^(n))²],   (D-1)
where m = ⌊T/(2n)⌋. The corresponding estimator for this quantity is given by
ÂVar_n(X_t) = (1/(2m)) Σ_{k=1}^{m} (x̄_{2k}^(n) − x̄_{2k−1}^(n))².
This estimator is less efficient than the MOAV, mainly because it is based on fewer averages and therefore on a smaller sample size. To define the theoretical form of the NOAV for the non-stationary processes of interest, we first define the vector X_j^(n) of n consecutive observations starting at (j − 1)n + 1, i.e.
X_j^(n) ≡ [X_{(j−1)n+1} ··· X_{jn}]^T.
Using the above, for k = 1, ..., m, we define the matrices Σ_{2k}^(n), Σ_{2k−1}^(n) and Γ_k^(n) as follows:
Σ_{2k}^(n) ≡ var(X_{2k}^(n)),   Σ_{2k−1}^(n) ≡ var(X_{2k−1}^(n))   and   Γ_k^(n) ≡ cov(X_{2k−1}^(n), X_{2k}^(n)).
As in Appendix A, the above matrices are graphically represented in Fig. 5 where, as opposed to the MOAV, these matrices do not overlap along the diagonal of the covariance matrix of the process. We then let σ̄_{2k}^(n), σ̄_{2k−1}^(n) and γ̄_k^(n) denote the averages of the matrices Σ_{2k}^(n), Σ_{2k−1}^(n) and Γ_k^(n), respectively, i.e.
σ̄_{2k}^(n) ≡ (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} [Σ_{2k}^(n)]_{i,j},   σ̄_{2k−1}^(n) ≡ (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} [Σ_{2k−1}^(n)]_{i,j}   and   γ̄_k^(n) ≡ (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} [Γ_k^(n)]_{i,j}.
We further define σ̄^(n) and γ̄^(n) as follows:
σ̄^(n) ≡ (1/(2m)) Σ_{k=1}^{m} [σ̄_{2k}^(n) + σ̄_{2k−1}^(n)]   and   γ̄^(n) ≡ (1/m) Σ_{k=1}^{m} γ̄_k^(n).
Based on the earlier defined matrices, as for the MOAV, we can also define different quantities according to the matrix of reference and the lags between observations. More specifically, let us first consider the case in which we are interested in lags h such that 0 ≤ h < n. The observations at these lags can belong to the sets of observations within the matrices Σ_{2k}^(n), Σ_{2k−1}^(n) and Γ_k^(n) and, for these sets of observations within matrices Σ_{2k}^(n) and Σ_{2k−1}^(n), we can define the following quantity
γ̃(h) ≡ (1/(2m(n − h))) Σ_{k=1}^{2m} Σ_{s=1}^{n−h} cov(X_{(k−1)n+s}, X_{(k−1)n+s+h}).
If, however, the observations at the considered lags are among the set of observations only within the matrix Γ_k^(n), we define the quantity below
γ̃*(h) ≡ (1/(mh)) Σ_{k=1}^{m} Σ_{s=1}^{h} cov(X_{(2k−1)n+s−h}, X_{(2k−1)n+s}).
Finally, when considering lags h such that n ≤ h ≤ 2n − 1, the set of observations at these lags can only be considered within the matrix Γ_k^(n) and, for this final case, we define the quantity
γ̃(h) ≡ (1/(m(2n − h))) Σ_{k=1}^{m} Σ_{s=1}^{2n−h} cov(X_{2(k−1)n+s}, X_{2(k−1)n+s+h}).
As for the MOAV case, the above definitions can be seen as generalized definitions of the autocovariance which consists in the average autocovariance for a given lag h. Using the above notations and definitions, we can provide the following result.

LEMMA 2: We define the Non-stationary NOAV as
AVar_n = (1/(2mn²)) { 2mn γ̃(0) + 2 [ Σ_{h=1}^{n−1} ( 2m(n − h) γ̃(h) − mh γ̃*(h) ) − Σ_{h=n}^{2n−1} m(2n − h) γ̃(h) ] }.

Fig. 5. Graphical illustration of matrices Σ_{2k}^(n), Σ_{2k−1}^(n) and Γ_k^(n) for the NOAV.

Proof of Lemma 2: The proof of Lemma 2 is direct from the above definitions. Indeed, we have
E[(X̄_{2k}^(n) − X̄_{2k−1}^(n))²] = var(X̄_{2k}^(n)) + var(X̄_{2k−1}^(n)) − 2 cov(X̄_{2k−1}^(n), X̄_{2k}^(n)) = σ̄_{2k}^(n) + σ̄_{2k−1}^(n) − 2γ̄_k^(n).
Then, using Eq. (D-1), we obtain
AVar_n = (1/(2m)) Σ_{k=1}^{m} E[(X̄_{2k}^(n) − X̄_{2k−1}^(n))²]
= (1/(2m)) Σ_{k=1}^{m} [σ̄_{2k}^(n) + σ̄_{2k−1}^(n) − 2γ̄_k^(n)]
= (1/(2m)) [2m σ̄^(n) − 2m γ̄^(n)]
= (1/(2mn²)) { 2mn γ̃(0) + 2 [ Σ_{h=1}^{n−1} ( 2m(n − h) γ̃(h) − mh γ̃*(h) ) − Σ_{h=n}^{2n−1} m(2n − h) γ̃(h) ] },
which concludes the proof.

APPENDIX C
Proof of Lemma 1: In order to prove Lemma 1 let σ̄_t^(n) and γ̄_t^(n) denote the averages of the matrices Σ_t^(n) and Γ_t^(n), respectively, i.e.
σ̄_t^(n) ≡ (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} [Σ_t^(n)]_{i,j}   and   γ̄_t^(n) ≡ (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} [Γ_t^(n)]_{i,j}.
Further, we define σ̄^(n) and γ̄^(n) as follows:
σ̄^(n) ≡ (1/(2(T − 2n + 1))) Σ_{t=2n}^{T} [σ̄_t^(n) + σ̄_{t−n}^(n)]   and   γ̄^(n) ≡ (1/(T − 2n + 1)) Σ_{t=2n}^{T} γ̄_t^(n).
Based on these definitions we have
E[(X̄_k^(n) − X̄_{k−n}^(n))²] = var(X̄_k^(n)) + var(X̄_{k−n}^(n)) − 2 cov(X̄_k^(n), X̄_{k−n}^(n)) = σ̄_k^(n) + σ̄_{k−n}^(n) − 2γ̄_k^(n).
Then, using Eq. (2), we obtain
AVar_n = (1/(2m*)) Σ_{k=2n}^{T} E[(X̄_k^(n) − X̄_{k−n}^(n))²]
= (1/(2m*)) Σ_{k=2n}^{T} [σ̄_k^(n) + σ̄_{k−n}^(n) − 2γ̄_k^(n)]
= (1/(2m*)) [2m* σ̄^(n) − 2m* γ̄^(n)]
= (1/(2m* n²)) { 2nm* γ̃(0) + 2 [ Σ_{h=1}^{n−1} ( 2m*(n − h) γ̃(h) − m* h γ̃*(h) ) − Σ_{h=n}^{2n−1} m*(2n − h) γ̃(h) ] },
which concludes the proof.
ACCEPTED BY IEEE TRANSACTIONS ON CYBERNETICS
Learning Domain-Invariant Subspace using
Domain Features and Independence Maximization
arXiv:1603.04535v2 [cs.CV] 22 Jun 2017
Ke Yan, Lu Kou, and David Zhang, Fellow, IEEE
Abstract—Domain adaptation algorithms are useful when the
distributions of the training and the test data are different. In
this paper, we focus on the problem of instrumental variation
and time-varying drift in the field of sensors and measurement,
which can be viewed as discrete and continuous distributional
change in the feature space. We propose maximum independence
domain adaptation (MIDA) and semi-supervised MIDA (SMIDA)
to address this problem. Domain features are first defined to
describe the background information of a sample, such as the
device label and acquisition time. Then, MIDA learns a subspace
which has maximum independence with the domain features,
so as to reduce the inter-domain discrepancy in distributions. A
feature augmentation strategy is also designed to project samples
according to their backgrounds so as to improve the adaptation.
The proposed algorithms are flexible and fast. Their effectiveness
is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They
can greatly enhance the practicability of sensor systems, as well
as extend the application scope of existing domain adaptation
algorithms by uniformly handling different kinds of distributional
change.
Index Terms—Dimensionality reduction, domain adaptation,
drift correction, Hilbert-Schmidt independence criterion, machine olfaction, transfer learning
I. INTRODUCTION
In many real-world machine learning problems, the labeled
training data are from a source domain and the test ones
are from a target domain. Samples of the two domains are
collected under different conditions, thus have different distributions. Labeling samples in the target domain to develop new
prediction models is often labor-intensive and time-consuming.
Therefore, domain adaptation or transfer learning is needed to
improve the performance in the target domain by leveraging
unlabeled (and maybe a few labeled) target samples [1]. This
topic is receiving increasing attention in recent years due to
its broad applications such as computer vision [2], [3], [4]
and text classification [5], [6]. It is also important in the field
of sensors and measurement. Because of the variations in the
fabrication of sensors and devices, the responses to the same signal source may not be identical for different instruments, which is known as instrumental variation. Furthermore, the sensing characteristics of the sensors, the operating condition, or even the signal source itself, can change over time, which leads to complex time-varying drift. As a result, the prediction model trained with the samples from the initial device in an earlier time period (source domain) is not suitable for new devices or in a latter time (target domains).

(The work is partially supported by the GRF fund from the HKSAR Government, the central fund from Hong Kong Polytechnic University, the NSFC fund (61332011, 61272292, 61271344), Shenzhen Fundamental Research fund (JCYJ20150403161923528, JCYJ20140508160910917), and Key Laboratory of Network Oriented Intelligent Computation, Shenzhen, China. K. Yan is with the Department of Electronic Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China (e-mail: [email protected]). L. Kou is with the Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong (e-mail: [email protected]). D. Zhang is with the Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China, and also with the Department of Computing, Biometrics Research Centre, The Hong Kong Polytechnic University, Kowloon, Hong Kong (e-mail: [email protected]).)
A typical application plagued by this problem is machine
olfaction, which uses electronic noses (e-noses) and pattern
recognition algorithms to predict the type and concentration of
odors [7]. The applications of machine olfaction range from
agriculture and food to environmental monitoring, robotics,
biometrics, and disease analysis [8], [9], [10], [11]. However,
owing to the nature of chemical sensors, many e-noses are
prone to instrumental variation and time-varying drift mentioned above [12], [13], which greatly hamper their usage
in real-world applications. Traditional methods dealing with
these two kinds of drift (“drift correction” methods hereinafter)
require a set of transfer samples, which are predefined gas
samples needed to be collected with each device and in each
time period [12], [10], [14], [15]. They are often used to learn
regression models to map the features in the target domain to
the source domain [10], [14]. Nevertheless, collecting transfer
samples repeatedly is a demanding job, especially for non-professional e-nose users.
In such cases, domain adaptation techniques with unlabeled
target samples are desirable. An intuitive idea is to reduce
the inter-domain discrepancy in the feature level, i.e. to learn
domain-invariant feature representation [5], [16], [17], [3],
[18], [19], [20], [21], [22]. For example, Pan et al. [5]
proposed transfer component analysis (TCA), which finds a
latent feature space that minimizes the distributional difference
of two domains in the sense of maximum mean discrepancy.
More related methods will be introduced in Section II-A.
When applied to drift correction, however, existing domain
adaptation algorithms are faced with two difficulties. First,
they are designed to handle discrete source and target domains.
In time-varying drift, however, samples come in a stream,
so the change in data distribution is often continuous. One
solution is to split data into several batches, but it will lose the
temporal order information. Second, because of the variation in
the sensitivity of chemical sensors, the same signal in different
conditions may indicate different concepts. In other words,
the conditional probability P (Y |X) may change for samples
with different backgrounds, where “background” means when
and with which device a sample was collected. Methods like
TCA project all samples to a common subspace, hence the
samples with similar appearance but different concepts cannot
be distinguished.
In this paper, we present a simple yet effective algorithm
called maximum independence domain adaptation (MIDA).
The algorithm first defines “domain features” for each sample
to describe its background. Then, it finds a latent feature space
in which the samples and their domain features are maximally
independent in the sense of Hilbert-Schmidt independence criterion (HSIC) [23]. Thus, the discrete and continuous change
in distribution can be handled uniformly. In order to project
samples according to their backgrounds, feature augmentation
is performed by concatenating the original feature vector with
the domain features. We also propose semi-supervised MIDA
(SMIDA) to exploit the label information with HSIC. MIDA
and SMIDA are both very flexible. (1) They can be applied in
situations with single or multiple source or target domains
thanks to the use of domain features. In fact, the notion
“domain” has been extended to “background” which is more
informative. (2) Although they are designed for unsupervised
domain adaptation problems (no labeled sample in target domains), the proposed methods naturally allow both unlabeled
and labeled samples in any domains, thus can be applied
in semi-supervised (both unlabeled and labeled samples in
target domains) and supervised (only labeled samples in target
domains) problems as well. (3) The label information can
be either discrete (binary- or multi-class classification) or
continuous (regression).
To illustrate the effect of our algorithms, we first evaluate
them on several synthetic datasets. Then, drift correction
experiments are performed on two e-nose datasets and one
spectroscopy dataset. Note that spectrometers suffer the same
instrumental variation problem as e-noses [24]. Finally, a
domain adaptation experiment is conducted on a well-known
object recognition benchmark: Office+Caltech [25]. Results
confirm the effectiveness of the proposed algorithms. The rest
of the paper is organized as follows. Related work on unsupervised domain adaptation and HSIC is briefly reviewed in
Section II. Section III describes domain features, MIDA, and
SMIDA in detail. The experimental configurations and results
are presented in Section IV, along with some discussions.
Section V concludes the paper.
II. RELATED WORK

A. Unsupervised Domain Adaptation
Two good surveys on domain adaptation can be found in [1] and [2]. In this section, we focus on typical methods that extract domain-invariant features. In order to reduce the inter-domain discrepancy while preserving useful information, researchers have developed many strategies. Some algorithms project all samples to a common latent space [5], [16], [19]. Transfer component analysis (TCA) [5] tries to learn transfer components across domains in a reproducing kernel Hilbert space (RKHS) using maximum mean discrepancy. It is further extended to semi-supervised TCA (SSTCA) to encode label information and preserve local geometry of the manifold. Shi et al. [16] measured domain difference by the mutual information between all samples and their binary domain labels, which can be viewed as a primitive version of the domain features used in this paper. They also minimized the negated mutual information between the target samples and their cluster labels to reduce the expected classification error. The low-rank transfer subspace learning (LTSL) algorithm presented in [19] is a reconstruction guided knowledge transfer method. It aligns source and target data by representing each target sample with some local combination of source samples in the projected subspace. The label and geometry information can be retained by embedding different subspace learning methods into LTSL.
Another class of methods first project the source and the target data into separate subspaces, and then build connections between them [17], [25], [26], [3]. Fernando et al. [17] utilized a transformation matrix to map the source subspace to the target one, where a subspace was represented by eigenvectors of PCA. The geodesic flow kernel (GFK) method [25] measures the geometric distance between two different domains in a Grassmann manifold by constructing a geodesic flow. An infinite number of subspaces are combined along the flow in order to model a smooth change from the source to the target domain. Liu et al. [26] adapted GFK to correct time-varying drift of e-noses. A sample stream is first split into batches according to the acquisition time. The first and the latest batches (domains) are then connected through every intermediate batch using GFK. Another improvement of GFK is domain adaptation by shifting covariance (DASC) [3]. Observing that modeling one domain as a subspace is not sufficient to represent the difference of distributions, DASC characterizes domains as covariance matrices and interpolates them along the geodesic to bridge the domains.

B. Hilbert-Schmidt Independence Criterion (HSIC)
HSIC is used as a convenient method to measure the dependence between two sample sets X and Y. Let k_x and k_y be two kernel functions associated with RKHSs F and G, respectively. p_xy is the joint distribution. HSIC is defined as the square of the Hilbert-Schmidt norm of the cross-covariance operator C_xy [23]:
HSIC(p_xy, F, G) = ‖C_xy‖²_HS = E_{xx'yy'}[k_x(x, x') k_y(y, y')] + E_{xx'}[k_x(x, x')] E_{yy'}[k_y(y, y')] − 2 E_{xy}[E_{x'}[k_x(x, x')] E_{y'}[k_y(y, y')]].
Here E_{xx'yy'} is the expectation over independent pairs (x, y) and (x', y') drawn from p_xy. It can be proved that with
characteristic kernels kx and ky , HSIC(pxy , F, G) is zero if
and only if x and y are independent [27]. A large HSIC
suggests strong dependence with respect to the choice of
kernels. HSIC has a biased empirical estimate. Suppose Z =
X × Y = {(x1 , y1 ), . . . , (xn , yn )}, Kx , Ky ∈ Rn×n are the
kernel matrices of X and Y , respectively, then [23]:
HSIC(Z, F, G) = (n − 1)^{−2} tr(K_x H K_y H),   (1)
where H = I − n^{−1} 1_n 1_n^T ∈ R^{n×n} is the centering matrix.
Due to its simplicity and power, HSIC has been adopted
for feature extraction [28], [5], [29] and feature selection
[27]. Researchers typically use it to maximize the dependence
between the extracted/selected features and the label. However,
to our knowledge, it has not been utilized in domain adaptation
to reduce the dependence between the extracted features and
the domain features.
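As a minimal illustration (our own sketch, not code from the paper), the biased empirical HSIC of Eq. (1) can be computed directly from two kernel matrices:

```python
import numpy as np

def hsic(Kx, Ky):
    """Biased empirical HSIC of Eq. (1) from two n x n kernel matrices."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(Kx @ H @ Ky @ H) / (n - 1) ** 2
```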
III. PROPOSED METHOD
A. Domain Feature
We aim to reduce the dependence between the extracted
features and the background information. A sample’s background information should (1) naturally exist, thus can be
easily obtained; (2) have different distributions in training and
test samples; (3) correlate with the distribution of the original
features. The domain label (i.e. which domain a sample belongs to) in common domain adaptation problems is an example of such information. Because of these characteristics, the information clearly interferes with the testing performance of a prediction
model. Thus, minimizing the aforementioned dependence is
desirable. First, a group of new features need to be designed to
describe the background information. The features are called
“domain features”. From the perspective of drift correction,
there are two main types of background information: the
device label (with which device the sample was collected)
and the acquisition time (when the sample was collected). We
can actually encode more information such as the place of
collection, the operation condition, and so on, which will be
useful in other domain adaptation problems.
Formally, if we only consider the instrumental variation, the
following one-hot coding scheme can be used. Suppose there
are ndev devices, which result in ndev different but related
domains. The domain feature vector is thus d ∈ Rndev , where
dp = 1 if the sample is from the pth device and 0 otherwise. If
the time-varying drift is also considered, the acquisition time
can be further added. If a sample was collected from the pth
device at time t, then d ∈ R2ndev , where
d_q = 1 if q = 2p − 1,   d_q = t if q = 2p,   and d_q = 0 otherwise.   (2)
According to (1), the kernel matrix Kd of the domain
features needs to be computed for HSIC. We apply the linear
kernel. Suppose D = [d1 , . . . , dn ] ∈ Rmd ×n , md is the
dimension of a domain feature vector. Then
K_d = D^T D.   (3)
Note that in traditional domain adaptation problems with
several discrete domains, the one-hot coding scheme can be
applied to construct domain features, because the problems are
similar to instrumental variation.
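For illustration, the two coding schemes above and the linear kernel (3) could be implemented as follows (a sketch with hypothetical helper names, assuming 0-indexed device labels in {0, ..., n_dev − 1} and acquisition times t):

```python
import numpy as np

def domain_features(device, time=None):
    """Columns of D in R^{m_d x n}: one-hot device coding, optionally interleaved with time as in (2)."""
    device = np.asarray(device)
    n_dev, n = device.max() + 1, len(device)
    if time is None:                              # instrumental variation only
        D = np.zeros((n_dev, n))
        D[device, np.arange(n)] = 1.0
    else:                                         # instrumental variation + time-varying drift
        D = np.zeros((2 * n_dev, n))
        D[2 * device, np.arange(n)] = 1.0         # rows 2p (0-indexed) <-> d_q = 1, q = 2p - 1
        D[2 * device + 1, np.arange(n)] = np.asarray(time, dtype=float)  # d_q = t, q = 2p
    return D

D = domain_features(device=[0, 0, 1], time=[0.1, 0.5, 0.3])
Kd = D.T @ D                                      # linear kernel of Eq. (3)
```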
B. Feature Augmentation
Feature augmentation is used in this paper to learn
background-specific subspaces. In [30], the author proposed a
feature augmentation strategy for domain adaptation by replicating the original features. However, this strategy requires
that data lie in discrete domains and cannot deal with timevarying drift. We propose a more general and efficient feature
augmentation strategy: concatenating the original features and
the domain features, i.e.
x̂ = [x^T d^T]^T ∈ R^{m+m_d}.   (4)
The role of this strategy can be demonstrated through a linear dimensionality reduction example. Suppose a projection matrix W ∈ R^{(m+m_d)×h} has been learned for the augmented feature vector. h is the dimension of the subspace. W has two parts: W = [W_x; W_d], W_x ∈ R^{m×h}, W_d ∈ R^{m_d×h}. The embedding of x̂ can be expressed as W^T x̂ = W_x^T x + W_d^T d ∈ R^h, which means that a background-specific bias (W_d^T d)_i has been added to each dimension i of the embedding. From
another perspective, the feature augmentation strategy maps
the samples to an augmented space with higher dimension
before projecting them to a subspace. It will be easier to find
a projection direction in the augmented space to align the
samples well in the subspace.
Taking machine olfaction as an example, there are situations
when the conditional probability P (Y |X) changes along with
the background. For instance, the sensitivity of chemical
sensors often decays over time. A signal that indicates low
concentration in an earlier time actually suggests high concentration in a later time. In such cases, feature augmentation
is important, because it allows samples with similar appearance but different concepts to be treated differently by the
background-specific bias. The strategy also helps to align the
domains better in each projected dimension. Its effect will be
illustrated on several synthetic datasets in Section IV-A and
further analyzed in the supplementary materials.
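The augmentation of Eq. (4) is then a single concatenation; a minimal sketch (ours) with samples stored as columns:

```python
import numpy as np

def augment(X, D):
    """Stack original features X (m x n) and domain features D (m_d x n) as in Eq. (4)."""
    return np.vstack([X, D])
```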
C. Maximum Independence Domain Adaptation (MIDA)
In this section, we introduce the formulation of MIDA in
detail. Suppose X ∈ Rm×n is the matrix of n samples.
The training and the test samples are pooled together. More
importantly, we do not have to explicitly differentiate which
domain a sample is from. The feature vectors have been
augmented, but we use the notations X and m instead of
X̂ and m + md for brevity. A linear or nonlinear mapping
function Φ can be used to map X to a new space. Based on
the kernel trick, we need not know the exact form of Φ, but
the inner product of Φ(X) can be represented by the kernel
matrix Kx = Φ(X)T Φ(X). Then, a projection matrix W̃ is
applied to project Φ(X) to a subspace with dimension h,
leading to the projected samples Z = W̃ T Φ(X) ∈ Rh×n .
Similar to other kernel dimensionality reduction algorithms
[31], [32], the key idea is to express each projection direction
as a linear combination of all samples in the space, namely
W̃ = Φ(X)W . W ∈ Rn×h is the projection matrix to be
actually learned. Thus, the projected samples are
Z = W^T Φ(X)^T Φ(X) = W^T K_x.   (5)
Intuitively, if the projected features are independent of the
domain features, then we cannot distinguish the background
of a sample by its projected features, suggesting that the inter-domain discrepancy is diminished in the subspace. Therefore,
after omitting the scaling factor in (1), we get the expression
to be minimized: tr(K_z H K_d H) = tr(K_x W W^T K_x H K_d H),
where Kz is the kernel matrix of Z.
In domain adaptation, the goal is not only minimizing
the difference of distributions, but also preserving important
properties of data, such as the variance [5]. It can be achieved
by maximizing the trace of the covariance matrix of the projected samples. The covariance matrix is
cov(Z) = cov(W^T K_x) = W^T K_x H K_x W,   (6)
where H = I − n^{−1} 1_n 1_n^T is the same as that in (1). An
orthonormal constraint is further added on W . The learning
problem then becomes
max_W   −tr(W^T K_x H K_d H K_x W) + µ tr(W^T K_x H K_x W),
s.t.   W^T W = I,   (7)
where µ > 0 is a trade-off hyper-parameter. Using the
Lagrangian multiplier method, we can find that W is the
eigenvectors of Kx (−HKd H + µH)Kx corresponding to the
h largest eigenvalues. Note that a conventional constraint is
requiring W̃ to be orthonormal as in [29], which will lead
to a generalized eigenvector problem. However, we find that
this strategy is inferior to the proposed one in both adaptation
accuracy and training speed in practice, so it is not used.
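A compact sketch of the resulting eigenproblem follows; it is our own illustration of the steps above (using a linear kernel for brevity), not the authors' released code.

```python
import numpy as np

def mida(X_aug, D, h=30, mu=1.0):
    """MIDA with a linear kernel on the augmented features X_aug ((m+m_d) x n)."""
    n = X_aug.shape[1]
    Kx = X_aug.T @ X_aug                  # kernel matrix of the (augmented) samples
    Kd = D.T @ D                          # kernel matrix of the domain features, Eq. (3)
    H = np.eye(n) - np.ones((n, n)) / n
    M = Kx @ (-H @ Kd @ H + mu * H) @ Kx  # matrix whose top eigenvectors solve Eq. (7)
    vals, vecs = np.linalg.eigh((M + M.T) / 2)      # symmetrize against round-off
    W = vecs[:, np.argsort(vals)[::-1][:h]]         # eigenvectors of the h largest eigenvalues
    return W.T @ Kx                       # projected samples Z = W^T K_x, Eq. (5)
```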
When computing Kx , a proper kernel function needs to be
selected. Common kernel functions include the linear kernel (k(x, y) = x^T y), the polynomial kernel (k(x, y) = (σ x^T y + 1)^d), the Gaussian radial basis function (RBF, k(x, y) = exp(−‖x − y‖²/(2σ²))), and so on.
Different kernels indicate different assumptions on the type
of dependence in using HSIC [27]. According to [27], the
polynomial and RBF kernels map the original features to a
higher or infinite dimensional space, thus are able to detect
more types of dependence. However, choosing a suitable
kernel width parameter (σ) is also important for these more
powerful kernels [27].
The maximum mean discrepancy (MMD) criterion is used
in TCA [5] to measure the difference of two distributions.
Song et al. [27] showed that when HSIC and MMD are
both applied to measure the dependence between features
and labels in a binary-class classification problem, they are
identical up to a constant factor if the label kernel matrix in
HSIC is properly designed. However, TCA is feasible only
when there are two discrete domains. On the other hand,
MIDA can deal with a variety of situations including multiple
domains and continuous distributional change. The stationary
subspace analysis (SSA) algorithm [33] is able to identify
temporally stationary components in multivariate time series.
However, SSA only ensures that the mean and covariance
of the components are stationary, while they may not be
suitable for preserving important properties in data. Concept
drift adaptation algorithms [34] are able to correct continuous
time-varying drift. However, most of them rely on newly
arrived labeled data to update the prediction models, while
MIDA works unsupervisedly.
D. Semi-supervised MIDA (SMIDA)
MIDA aligns samples with different backgrounds without
considering the label information. However, if the labels of
some samples are known, they can be incorporated into
the subspace learning process, which may be beneficial to
prediction. Therefore, we extend MIDA to semi-supervised
MIDA (SMIDA). Since we do not explicitly differentiate the
domain labels of the samples, both unlabeled and labeled
samples can exist in any domain. Similar to [28], [5], [29],
[27], HSIC is adopted to maximize the dependence between
the projected features and the labels. The biggest advantage of
this strategy is that all types of labels can be exploited, such
as the discrete labels in classification and the continuous ones
in regression.
The label matrix Y is defined as follows. For c-class
classification problems, the one-hot coding scheme can be
used, i.e. Y ∈ Rc×n , yi,j = 1 if xi is labeled and belongs to
the jth class; 0 otherwise. For regression problems, the target
values can be centered first. Then, Y ∈ R^{1×n}, y_i equals the target value of x_i if it is labeled and 0 otherwise. The linear
kernel function is chosen for the label kernel matrix, i.e.
K_y = Y^T Y.   (8)
The objective of SMIDA is
max_W   tr(W^T K_x (−H K_d H + µH + γ H K_y H) K_x W),
s.t.   W^T W = I,   (9)
where γ > 0 is a trade-off hyper-parameter. Its solution
is the eigenvectors of Kx (−HKd H + µH + γHKy H)Kx
corresponding to the h largest eigenvalues. The outline of
MIDA and SMIDA is summarized in Algorithm III.1. The
statements in brackets correspond to those specialized for
SMIDA.
Algorithm III.1 MIDA [or SMIDA]
Input: The matrix of all samples X and their background information; [the labels of some samples]; the kernel function for X; h, µ, [and γ].
Output: The projected samples Z.
1: Construct the domain features according to the background information, e.g. Section III-A.
2: Augment the original features with the domain features (4).
3: Compute the kernel matrices K_x, K_d (3), [and K_y (8)].
4: Obtain W, namely the eigenvectors of K_x(−H K_d H + µH)K_x [or K_x(−H K_d H + µH + γ H K_y H)K_x] corresponding to the h largest eigenvalues.
5: Z = W^T K_x.
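Following Algorithm III.1, SMIDA only changes the matrix that is eigendecomposed; a sketch in the same spirit as the MIDA sketch above (our illustration, linear kernels assumed):

```python
import numpy as np

def smida(X_aug, D, Y, h=30, mu=1.0, gamma=1.0):
    """SMIDA with linear kernels; Y is the c x n (or 1 x n) label matrix, zero columns for unlabeled samples."""
    n = X_aug.shape[1]
    Kx, Kd, Ky = X_aug.T @ X_aug, D.T @ D, Y.T @ Y
    H = np.eye(n) - np.ones((n, n)) / n
    M = Kx @ (-H @ Kd @ H + mu * H + gamma * H @ Ky @ H) @ Kx   # objective matrix of Eq. (9)
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    W = vecs[:, np.argsort(vals)[::-1][:h]]
    return W.T @ Kx
```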
Besides variance and label dependence, another useful property of data is the geometry structure, which can be preserved
by manifold regularization (MR) [35]. MR can be conveniently
incorporated into SMIDA. In our experiments, adding MR
generally increases the accuracy slightly with the cost of three
more hyper-parameters. Consequently, it is not adopted in this
paper.
IV. EXPERIMENTS
In this section, we first conduct experiments on some
synthetic datasets to verify the effect of the proposed methods.
Then, drift correction experiments are performed on two enose datasets and a spectroscopy dataset. To show the universality of the proposed methods, we further evaluate them on a
visual object recognition dataset. Comparison is made between
them and recent unsupervised domain adaptation algorithms
that learn domain-invariant features.
A. Synthetic Dataset
In Fig. 1, TCA [5] and MIDA are compared on a 2D
dataset with two discrete domains. The domain labels were
used to construct the domain features in MIDA according
to the one-hot coding scheme introduced in Section III-A.
A similar definition was used in synthetic datasets 3 and 4.
For both methods, the linear kernel was used on the original
features and the hyper-parameter µ was set to 1. In order to
quantitatively assess the effect of domain adaptation, logistic
regression models were trained on the labeled source data and
tested on the target data. The accuracies are displayed in the
caption, showing that the order of performance is MIDA >
TCA > original feature. TCA aligns the two domains only
on the first projected dimension. However, the two classes
have large overlap on that dimension, because the direction
for alignment is different from that for discrimination. Incorporating the label information of the source domain (SSTCA)
did not help. On the contrary, MIDA can align the two domains well in both projected dimensions, in which the domain-specific bias on the second dimension brought by feature
augmentation played a key role. A 3D explanation is included
in the supplementary materials. Thus, good accuracy can be
obtained by using the two dimensions for classification.
In Fig. 2, SSA [33] and MIDA are compared on a 2D dataset
with continuous distributional change, which resembles time-varying drift in machine olfaction. Samples in both classes
drift to the upper right. The chronological order of the samples
was used to construct the domain features in MIDA, i.e. d = 1
for the first sample, d = 2 for the second sample, etc. The
parameter setting of MIDA was the same with that in Fig. 1,
whereas the number of stationary components in SSA was set
to 1. The classification accuracies were obtained by training a
logistic regression model on the first halves of the data in both
classes, and testing them on the last halves. SSA succeeds in
finding a direction (z1 ) that is free from time-varying drift.
However, the two classes cannot be well separated in that
direction. In plot (c), the randomly scattered colors suggest
that the time-varying drift is totally removed in the subspace.
MIDA first mapped the 2D data into a 3D space with the
third dimension being time, then projected them to a 2D plane
orthogonal to the direction of drift in the 3D space.
No label information was used in the last two experiments.
If keeping the label dependence in the subspace is a priority,
SMIDA can be adopted instead of MIDA. In the 3D synthetic
dataset in Fig. 3, the best direction (x3 ) to align the two
domains also mixes the two classes, which results in the
output of MIDA in plot (b). The labels in the source domain
were used when learning the subspace. From plot (c), we
can observe that the classes are separated. In fact, class
separation can still be found in the third dimension of the space
learned by MIDA. However, for the purpose of dimensionality
reduction, we generally hope to keep the important information
in the first few dimensions.
Nonlinear kernels are often applied in machine learning
algorithms when data is not linearly separable. Besides, they
are also useful in domain adaptation when domains are not
linearly “alignable”, as shown in Fig. 4. In plot (a), the
inter-domain changes in distributions are different for the
two classes. Hence, it is difficult to find a linear projection
direction to align the two domains, even with the domainspecific biases of MIDA. Actually, domain-specific rotation
matrices are needed. Since the target labels are not available,
the rotation matrices cannot be obtained accurately. However,
a nonlinear kernel can be used to map the original features to
a space with higher dimensions, in which the domains may
be linearly alignable. We applied an RBF kernel with width
σ = 10. Although the domains are not perfectly aligned in
plot (c), the classification model trained in the source domain
can be better adapted to the target domain. A comparison
on different kernel and kernel parameters on two synthetic
datasets is included in the supplementary materials.
B. Gas Sensor Array Drift Dataset
The gas sensor array drift dataset1 collected by Vergara et
al. [36] is dedicated to research in drift correction. A total
of 13910 samples were collected by an e-nose with 16 gas
sensors over a course of 36 months. There are six different
kinds of gases at different concentrations. They were split into
10 batches by the authors according to their acquisition time.
Table A in the supplementary material details the dataset. We
aim to classify the type of gases, despite their concentrations.
Similar to [36], [26], we took the samples in batch 1 as
labeled training samples, whereas those in batches 2–10 are
unlabeled test ones. This evaluation strategy resembles the
situation in real-world applications. In the dataset, each sample
is represented by 128 features extracted from the sensors’
response curves [36]. Each feature was first normalized to
zero mean and unit variance within each batch. The timevarying drift of the preprocessed features across batches can
be visually inspected in Fig. 5. It is obvious that samples in
different batches have different distributions. Next, the labeled
samples in batch 1 were adopted as the source domain and
the unlabeled ones in batch b (b = 2, . . . , 10) as the target
domain. The proposed algorithms together with several recent
ones were used to learn domain-invariant features based on
these samples. Then, a logistic regression model was trained
on the source domain and tested on each target one. For multi-class classification, the one-vs-all strategy was utilized.
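For concreteness, this evaluation protocol can be sketched as follows (our simplified illustration, assuming the `mida`/`augment` sketches above and scikit-learn's logistic regression; per-batch normalization and the exact one-vs-all setup are omitted for brevity).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def evaluate_batch(X_src, y_src, X_tgt, y_tgt, D, h=60, mu=1.0):
    # Pool source and target samples (columns), learn a domain-invariant subspace,
    # then train on the source part and score on the target part.
    X = np.hstack([X_src, X_tgt])
    Z = mida(augment(X, D), D, h=h, mu=mu)        # or smida(...) when labels are exploited
    n_src = X_src.shape[1]
    clf = LogisticRegression(max_iter=1000).fit(Z[:, :n_src].T, y_src)
    return clf.score(Z[:, n_src:].T, y_tgt)       # classification accuracy on the target batch
```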
As displayed in Table I, the compared methods include
kernel PCA (KPCA), transfer component analysis (TCA),
semi-supervised TCA (SSTCA) [5], subspace alignment (SA)
1. http://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset+at+Different+Concentrations
Fig. 1. Comparison of TCA and MIDA in a 2D synthetic dataset. Plots (a)-(c) show data in the original space and projected spaces of TCA and MIDA,
respectively. The classification accuracies are 53%, 70% (only using the first projected dimension z1 ), and 88%.
Fig. 2. Comparison of SSA and MIDA in a 2D synthetic dataset. Plots (a)-(c) show data in the original space, projected spaces of SSA and MIDA, respectively.
The chronological order of a sample is indicated by color. The classification accuracies are 55%, 74% (only using the first projected dimension z1 ), and 90%.
Fig. 3. Comparison of MIDA and SMIDA in a 3D synthetic dataset. Plots (a)-(c) show data in the original space and projected spaces of MIDA and SMIDA,
respectively. The classification accuracies are 50%, 55%, and 82%.
Fig. 4. Comparison of different kernels in a 2D synthetic dataset. Plots (a)-(c) show data in the original space and projected spaces of MIDA with linear and
RBF kernels, respectively. The classification accuracies are 50%, 57%, and 87%.
Fig. 5. Scatter of ethanol (dots) and acetone (plus signs) samples in batches
1,3,5,7,9 in the gas sensor array drift dataset. Samples are projected to a 2D
subspace using PCA. Different colors indicate different batches.
[17], geodesic flow kernel (GFK) [25], manifold regularization with combination GFK (ML-comGFK) [26], information-theoretical learning (ITL) [16], structural correspondence
learning (SCL) [20], and marginalized stacked denoising autoencoder (mSDA) [21]. For all methods, the hyper-parameters
were tuned for the best accuracy. In KPCA, TCA, SSTCA,
and the proposed MIDA and SMIDA, the polynomial kernel
with degree 2 was used. KPCA learned a subspace based on
the union of source and target data. In TCA, SSTCA, MIDA,
and SMIDA, eigenvalue decomposition needs to be done on
kernel matrices. In order to reduce the computational burden,
we randomly chose at most nt samples in each target domain
when using these methods, with nt being twice the number of
the samples in the source domain. GFK used PCA to generate
the subspaces in both source and target domains. The subspace
dimension of GFK was determined according to the subspace
disagreement measure in [25]. The results of ML-comGFK are
copied from [26]. In SCL, the pivot features were binarized
before training pivot predictors using logistic regression.
We also compared several variants of our methods. In Table
I, the notation “(discrete)” means that two discrete domains
(source and target) were used in MIDA and SMIDA, which is
similar to other compared methods. The domain feature vector
of a sample was thus [1, 0]T if it was from the source domain
and [0, 1]T if it was from the target. However, this strategy
cannot make use of the samples in intermediate batches.
An intuitive assumption is that the distributions of adjacent
batches should be similar. When adapting the information
from batch 1 to b, taking samples from batches 2 to b − 1
into consideration may improve the generalization ability of
the learned subspace. Concretely, nt samples were randomly
selected from batches 2 to b instead of batch b alone. For
each sample, the domain feature was defined as its batch
index, which can be viewed as a proxy of its acquisition time.
MIDA and SMIDA then maximized the independence between
Fig. 6. Performance comparison on the gas sensor array drift dataset with
respect to the subspace dimension h.
the learned subspace and the batch indices. The results are
labeled as “(continuous)” in Table I. Besides, the accuracies
of continuous SMIDA without feature augmentation (no aug.)
are also shown.
From Table I, we can find that as the batch index increases, the accuracies of all methods generally degrade, which
confirms the influence of the time-varying drift. Continuous
SMIDA achieves the best average domain adaptation accuracy.
The continuous versions of MIDA and SMIDA outperform the
discrete versions, proving that the proposed methods can effectively exploit the chronological information of the samples.
They also surpass ML-comGFK which uses the samples in
intermediate batches to build connections between the source
and the target batches. Feature augmentation is important in
this dataset, since removing it in continuous SMIDA causes
a drop of four percentage points in average accuracy. In Fig.
6, the average classification accuracies with varying subspace
dimension are shown. MIDA and SMIDA are better than other
methods when more than 30 features are extracted.
C. Breath Analysis Dataset
As a noninvasive approach, disease screening and monitoring with e-noses is attracting more and more attention [8],
[11]. The concentration of some biomarkers in breath has
been proved to be related to certain diseases, which makes
it possible to analyze a person’s health state with an e-nose
conveniently. For example, the concentration of acetone in
diabetics’ breath is often higher than that in healthy people
[11]. However, the instrumental variation and time-varying
drift of e-noses hinder the popularization of this technology
in real-world applications. Unsupervised domain adaptation
algorithms can be applied to solve this problem.
We have collected a breath analysis dataset in years 2014–
2015 using two e-noses of the same model [11]. In this
paper, samples of five diseases were selected for experiments,
including diabetes, chronical kidney disease (CKD), cardiopathy, lung cancer, and breast cancer. They have been proved
to be related to certain breath biomarkers. We performed
TABLE I
CLASSIFICATION ACCURACY (%) ON THE GAS SENSOR ARRAY DRIFT DATASET. BOLD VALUES INDICATE THE BEST RESULTS.

Method | Batch 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Average
Original feature | 80.47 | 79.26 | 69.57 | 77.16 | 77.39 | 64.21 | 52.04 | 47.87 | 48.78 | 66.30
KPCA | 75.88 | 69.04 | 49.07 | 57.87 | 62.65 | 52.26 | 37.07 | 47.66 | 49.97 | 55.72
TCA [5] | 82.96 | 81.97 | 65.22 | 76.14 | 89.09 | 58.98 | 49.32 | 66.17 | 49.50 | 68.82
SSTCA [5] | 84.57 | 80.90 | 80.12 | 75.63 | 87.26 | 66.37 | 54.76 | 61.28 | 54.44 | 71.70
SA [17] | 80.79 | 80.01 | 71.43 | 75.63 | 78.35 | 64.68 | 52.04 | 48.51 | 49.58 | 66.78
GFK [25] | 77.41 | 80.26 | 71.43 | 76.14 | 77.65 | 64.99 | 36.39 | 47.45 | 48.72 | 64.49
ML-comGFK [26] | 80.25 | 74.99 | 78.79 | 67.41 | 77.82 | 71.68 | 49.96 | 50.79 | 53.79 | 67.28
ITL [16] | 76.85 | 79.45 | 59.63 | 96.45 | 78.00 | 60.95 | 49.32 | 77.02 | 48.58 | 69.58
SCL [20] | 77.57 | 82.03 | 68.32 | 82.74 | 77.22 | 65.18 | 53.74 | 48.51 | 48.08 | 67.04
mSDA [21] | 73.87 | 79.19 | 65.84 | 80.20 | 76.39 | 65.90 | 51.70 | 48.51 | 48.92 | 65.61
MIDA (discrete) | 81.03 | 85.62 | 60.25 | 75.63 | 87.61 | 62.44 | 48.30 | 67.87 | 48.36 | 68.57
SMIDA (discrete) | 80.47 | 87.07 | 65.22 | 75.63 | 90.04 | 59.20 | 50.00 | 62.77 | 44.81 | 68.36
MIDA (continuous) | 84.32 | 81.59 | 68.32 | 75.63 | 91.74 | 63.13 | 78.91 | 62.34 | 45.14 | 72.35
SMIDA (no aug.) | 82.23 | 83.17 | 67.70 | 75.13 | 85.22 | 61.67 | 51.02 | 61.49 | 54.61 | 69.14
SMIDA (continuous) | 83.68 | 82.28 | 73.91 | 75.63 | 93.00 | 63.49 | 79.25 | 62.34 | 45.50 | 73.23
five binary-class classification tasks to distinguish samples
with one disease from the healthy samples. Each sample was
represented by the steady state responses of nine gas sensors in
the e-nose. When a gas sensor is used to sense a gas sample, its
response will reach a steady state in a few minutes. The steady
state response has a close relationship with the concentration
of the measured gas. Therefore, the 9D feature vector contains
most information needed for disease screening.
To show the instrumental variation and time-varying drift in
the dataset, we draw the steady state responses of two sensors
of the CKD samples in Fig. 7. Each data point indicates a
breath sample. In plot (a), the sensitivity of the sensor in
both devices gradually decayed as time elapsed. In plot (b),
the aging effect was so significant that we had to replace the
sensors in the two devices with new ones on about day 200.
In this case, a signal at 0.3 V will suggest low concentration
on day 0 but high concentration on day 150. In addition, the
responses in different devices are different (e.g. plot (b), after
day 200).
The numbers of samples in the six classes (healthy and the
five diseases mentioned above) are 125, 431, 340, 97, 156,
and 215, respectively. We chose the first 50 samples collected
with device 1 in each class as labeled training samples. Among
the other samples, 10 samples were randomly selected in each
class for validation, the rest for testing. The hyper-parameters
were tuned on the validation sets. Logistic regression was
adopted as the classifier, with F-score as the accuracy criterion.
Results are compared in Table II.
In KPCA, TCA, SSTCA, MIDA, and SMIDA, the RBF kernel was used. Because methods other than stationary subspace
analysis (SSA) [33], MIDA, and SMIDA are not capable of
handling the chronological information, we simply regarded
each device as a discrete domain and learned device-invariant
features with them. The same strategy was used in discrete
MIDA and SMIDA. In continuous MIDA and SMIDA, the
Fig. 7. Illustration of the instrumental variation and time-varying drift in the
breath analysis dataset. Plots (a) and (b) show the steady state responses of
the CKD samples of sensors 2 and 7, respectively.
domain features were defined according to (4), where t was the exact acquisition time converted to years and the number of devices n_dev = 2. SSA naturally considers the chronological
information by treating the sample stream as a multivariate
time series and identifying temporally stationary components.
However, SSA cannot deal with time series with multiple
sources, such as the multi-device case in this dataset. Thus,
the samples were arranged in chronological order despite their
device labels.
From Table II, we find that the improvement made
by SSA is small, possibly because the stationary criterion is
not suitable for preserving important properties in data. For
example, the noise in data can also be stationary [5]. MIDA
and SMIDA achieved obviously better results than other
methods. They can address both instrumental variation and
TABLE II
CLASSIFICATION ACCURACY (%) ON THE BREATH ANALYSIS DATASET. BOLD VALUES INDICATE THE BEST RESULTS.
Method (columns: Task 1, 2, 3, 4, 5 | Average)
Original feature: 34.34 63.67 73.71 43.17 42.93 | 51.57
KPCA: 58.05 72.58 84.78 44.95 42.60 | 60.59
TCA [5]: 67.19 68.31 59.93 67.08 68.17 | 66.14
SSTCA [5]: 67.01 68.06 74.14 68.31 67.36 | 68.97
SA [17]: 29.95 72.42 72.74 42.19 44.54 | 52.37
GFK [25]: 41.49 68.50 58.96 75.63 70.16 | 62.95
ITL [16]: 68.59 66.53 74.75 66.67 68.03 | 68.91
SSA [33]: 49.77 72.10 33.49 52.64 55.38 | 52.68
SCL [20]: 32.52 61.16 75.43 35.35 51.86 | 51.26
mSDA [21]: 36.86 69.51 76.69 35.51 50.49 | 53.81
MIDA (discrete): 62.17 71.74 84.21 67.05 67.06 | 70.45
SMIDA (discrete): 80.16 84.18 88.47 68.45 52.41 | 74.73
MIDA (continuous): 68.30 67.54 74.01 73.04 69.63 | 70.50
SMIDA (no aug.): 82.80 72.57 72.61 80.33 70.05 | 75.67
SMIDA (continuous): 85.29 80.18 91.67 74.28 66.55 | 79.59
time-varying drift. With the background-specific bias brought
by feature augmentation, they can compensate for the change
in conditional probability in this dataset. SMIDA is better than
MIDA because the label information of the first 50 samples
in each class was better kept.
D. Corn Dataset
Similar to e-noses, data collected with spectrometers are
one-dimensional signals indicating the concentration of the
analytes. Instrumental variation is also a problem for them
[24]. In this section, we test our methods on the corn dataset.²
It is a spectroscopy dataset collected with three near-infrared
spectrometers designated as m5, mp5, and mp6. The moisture, oil, protein, and starch contents of 80 corn samples
were measured by each device, with ranges of the measured
values as [9.377, 10.993], [3.088, 3.832], [7.654, 9.711], and
[62.826, 66.472], respectively. Each sample is represented by a
spectrum with 700 features. This dataset resembles traditional
domain adaptation datasets because there is no time-varying
drift. Three discrete domains can be defined based on the three
devices. We adopt m5 as the source domain, mp5 and mp6
as the target ones. In each domain, samples 4, 8, . . . , 76, 80
were assigned as the test set, the rest as the training set.
For hyper-parameter tuning, we applied three-fold cross-validation on the training sets of the three domains. After
the best hyper-parameters were determined for each algorithm,
a regression model was trained on the training set from the
source domain and applied on the test set from the target
domains. The regression algorithm was ridge regression with
the L2 regularization parameter λ = 1.
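A sketch of this evaluation protocol (scikit-learn's Ridge is assumed; variable names are placeholders, not the authors' code):

import numpy as np
from sklearn.linear_model import Ridge

def evaluate_rmse(Z_src_train, y_src_train, Z_tgt_test, y_tgt_test):
    """Ridge regression with L2 parameter 1, trained on the projected source
    training set and evaluated by RMSE on a projected target test set."""
    model = Ridge(alpha=1.0).fit(Z_src_train, y_src_train)
    pred = model.predict(Z_tgt_test)
    return float(np.sqrt(np.mean((pred - y_tgt_test) ** 2)))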
Table III displays the root mean square error (RMSE)
of the four prediction tasks and their average on the two
target domains. We also plot the overall average RMSE of
2 http://www.eigenvector.com/data/Corn/
the two domains with respect to the subspace dimension
h in Fig. 8. ITL was not investigated because it is only
applicable in classification problems. In KPCA, TCA, SSTCA,
MIDA, and SMIDA, the RBF kernel was used. For the semi-supervised methods SSTCA and SMIDA, the target values
were normalized to zero mean and unit variance before subspace learning. The domain features were defined according
to the device indices using the one-hot coding scheme. We
can find that when no domain adaptation was done, the
prediction error is large. All domain adaptation algorithms
managed to significantly reduce the error. KPCA also has
good performance, which is probably because the source and
the target domains have similar principal directions, which
also contain the most discriminative information. Therefore,
source regression models can fit the target samples well. In
this dataset, different domains have identical data composition.
As a result, corresponding data can be aligned by subspace alignment, which explains the small error of SA. However,
this condition may not hold in other datasets.
MIDA and SMIDA obtained the lowest average errors
in both target domains. Aiming at exploring the prediction
accuracy when there is no instrument variation, we further
trained regression models on the training set of the two target
domains and tested on the same domain. The results are listed
as “train on target” in Table III. It can be found that SMIDA
outperforms these results. This could be attributed to three
reasons: (1) The inter-domain discrepancy in this dataset is
relatively easy to correct; (2) The use of RBF kernel in SMIDA
improves the accuracy; (3) SMIDA learned the subspace on
the basis of both training and test samples. Although the test
samples were unlabeled, they can provide some information
about the distribution of the samples to make the learned
subspace generalize better, which can be viewed as the merit
of semi-supervised learning. To test this assumption, we
conducted another experiment with multiple target domains.
The training samples from the source domain and the test
ones from both target domains were leveraged together for
subspace learning in MIDA and SMIDA. The average RMSE
for the two target domains are 0.209 and 0.217 for MIDA,
and 0.208 and 0.218 for SMIDA. Compared with the results
in Table III with single target domain, the results have been
further improved, showing that incorporating more unlabeled
samples from target domains can be beneficial.
E. Visual Object Recognition Dataset
In [25], Gong et al. evaluated domain adaptation algorithms
on four visual object recognition datasets, namely Amazon
(A), Caltech-256 (C), DSLR (D), and Webcam (W). Ten
common classes were selected from them, with 8 to 151
samples per class per domain, and 2533 images in total. Each
image was encoded with an 800-bin histogram using SURF
features. The normalized histograms were z-scored to have
zero mean and unit variance in each dimension. Following
the experimental setting provided in the sample code from the
authors of [25], experiments were conducted in 20 random
trials for each pair of domains. For each unsupervised trial,
20 (for A, C, W) or 8 (for D) labeled samples per class
TABLE III
REGRESSION RMSE ON THE CORN DATASET. BOLD VALUES INDICATE THE BEST RESULTS.
Method (columns: Mp5 as target domain — Moisture, Oil, Protein, Starch, Average | Mp6 as target domain — Moisture, Oil, Protein, Starch, Average)
Original feature: 1.327 0.107 1.155 2.651 1.310 | 1.433 0.101 1.413 2.776 1.431
KPCA: 0.477 0.165 0.215 0.315 0.293 | 0.396 0.164 0.238 0.290 0.272
TCA [5]: 0.539 0.322 0.217 0.402 0.370 | 0.398 0.145 0.259 0.572 0.343
SSTCA [5]: 0.343 0.093 0.140 0.366 0.235 | 0.367 0.088 0.186 0.318 0.240
SA [17]: 0.302 0.094 0.186 0.351 0.233 | 0.324 0.079 0.158 0.390 0.238
GFK [25]: 0.267 0.197 0.342 0.621 0.357 | 0.263 0.189 0.264 0.485 0.301
SCL [20]: 0.283 0.115 0.249 0.619 0.316 | 0.311 0.108 0.257 0.683 0.340
mSDA [21]: 0.264 0.107 0.211 0.446 0.257 | 0.285 0.097 0.198 0.471 0.263
MIDA: 0.317 0.078 0.141 0.378 0.228 | 0.317 0.084 0.158 0.352 0.228
SMIDA: 0.287 0.072 0.143 0.339 0.210 | 0.316 0.073 0.152 0.325 0.217
Train on target: 0.176 0.094 0.201 0.388 0.215 | 0.182 0.108 0.206 0.414 0.228
Fig. 8. Performance comparison on the corn dataset with respect to the subspace dimension h (curves: KPCA, TCA, SSTCA, SA, MIDA, SMIDA; y-axis: average regression RMSE).
were randomly chosen from the source domain as the training
set (other samples were used unsupervisedly for domain
adaptation), while all unlabeled samples in the target domain
made up the test set. In semi-supervised trials, three labeled
samples per class in the target domain were also assumed to
be labeled. Averaged accuracies on each pair of domains as
well as standard errors are listed in Tables IV and V.
For GFK, low-rank transfer subspace learning (LTSL),
domain adaptation by shifting covariance (DASC), and a
recent method called integration of global and local metrics
for domain adaptation (IGLDA), we copied the best results
reported in the original papers [18], [19], [3], [22]. For other
methods tested, the hyper-parameters were tuned for the best
accuracy. Logistic regression was adopted as the classifier.
The polynomial kernel with degree 2 was used in KPCA,
TCA, SSTCA, MIDA, and SMIDA. The domain features
were defined according to the domain labels using the one-hot coding scheme. MIDA and SMIDA achieve the best
average accuracies in both unsupervised and semi-supervised
visual object recognition experiments. We observe that TCA
and SSTCA have comparable performance with MIDA and
SMIDA, which may be explained by the fact that the HSIC
criterion used in MIDA and MMD used in TCA are identical
under certain conditions when there is one source and one
target domain [27]. Besides, the feature augmentation strategy
in MIDA is not crucial in this dataset because there is no
change in conditional probability. On the other hand, TCA
and SSTCA can only handle one source and one target
domain. SSTCA uses the manifold regularization strategy to
preserve local geometry information, hence introduces three
more hyper-parameters than SMIDA. Moreover, computing the
data adjacency graph in SSTCA and the matrix inversion operation in TCA and SSTCA make them slower than MIDA and
SMIDA. We compared their speed on the domain adaptation
experiment C → A. They were run on a server with Intel Xeon
2.00 GHz CPU and 128 GB RAM. No parallel computing
was used. The codes of the algorithms were written in Matlab
R2014a. On average, the running times of each trial of MIDA,
SMIDA, TCA, and SSTCA were 2.4 s, 2.5 s, 3.0 s, and 10.2 s,
respectively. Therefore, MIDA and SMIDA are more practical
to use than TCA and SSTCA. Besides, they were initially
designed for drift correction. This dataset is used to show their
universality.
V. CONCLUSION
In this paper, we introduced maximum independence domain adaptation (MIDA) to learn domain-invariant features.
The main idea of MIDA is to reduce the inter-domain discrepancy by maximizing the independence between the learned
features and the domain features of the samples. The domain
features describe the background information of each sample,
such as the domain label in traditional domain adaptation
problems. In the field of sensors and measurement, the device
label and acquisition time of each collected sample can
be expressed by the domain features, so that unsupervised
drift correction can be achieved by using MIDA. The feature
augmentation strategy proposed in this paper adds domain-specific biases to the learned features, which helps MIDA to
align domains.
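As a rough illustration of the independence criterion behind MIDA, the biased empirical HSIC estimator of Gretton et al. [23] between projected features and domain features can be sketched as follows (linear kernels assumed for brevity; this is our sketch, not the authors' implementation):

import numpy as np

def hsic(Z, D):
    """Biased empirical HSIC estimate tr(Kz H Kd H) / (n - 1)^2 with linear
    kernels; MIDA-style methods reduce this quantity between the projected
    features Z and the domain features D."""
    n = Z.shape[0]
    Kz = Z @ Z.T                          # kernel on learned features
    Kd = D @ D.T                          # kernel on domain features
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(Kz @ H @ Kd @ H) / (n - 1) ** 2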
TABLE IV
UNSUPERVISED DOMAIN ADAPTATION ACCURACY (%) ON THE VISUAL OBJECT RECOGNITION DATASET. BOLD VALUES INDICATE THE BEST RESULTS. X → Y MEANS THAT X IS THE SOURCE DOMAIN AND Y IS THE TARGET ONE.
Method (columns: C→A, D→A, W→A, A→C, D→C, W→C, A→D, C→D, W→D, A→W, C→W, D→W | Average)
Ori. ft.: 43.2±2.2 34.9±1.1 36.8±0.6 38.5±1.6 31.7±1.2 32.7±0.9 37.3±3.1 40.5±3.6 80.6±2.0 37.5±2.9 37.1±3.6 76.7±2.0 | 43.97
KPCA: 27.4±2.0 27.0±1.6 27.6±1.2 25.8±1.3 22.0±2.4 23.7±1.6 29.2±2.9 27.6±2.7 55.1±1.8 29.0±2.7 27.1±3.2 50.3±2.6 | 30.97
TCA [5]: 49.8±2.7 38.6±1.4 39.2±1.0 42.8±2.1 35.3±1.5 35.7±0.8 42.8±3.2 45.9±3.8 83.9±1.7 41.7±3.3 42.8±5.4 81.5±2.1 | 48.35
SSTCA [5]: 50.5±2.8 39.3±1.6 40.5±0.7 42.4±1.8 36.1±1.5 35.9±0.8 42.8±3.1 46.6±3.5 80.6±2.3 42.5±2.7 42.8±4.7 81.8±1.9 | 48.48
ITL [16]: 41.2±3.0 35.7±2.0 38.4±1.1 34.8±1.8 28.7±1.5 31.4±1.6 34.5±3.0 32.3±3.8 67.0±2.7 31.8±3.6 36.4±4.2 71.1±3.2 | 40.27
SA [17]: 48.4±2.9 36.3±2.6 37.3±1.7 38.5±2.1 33.4±1.9 35.1±0.9 35.0±3.6 39.7±5.0 63.2±2.8 36.7±4.4 41.3±5.5 70.3±2.5 | 42.93
GFK [18]: 40.4±0.7 36.2±0.4 35.5±0.7 37.9±0.4 32.7±0.4 29.3±0.4 35.1±0.8 41.1±1.3 71.2±0.9 35.7±0.9 35.8±1.0 79.1±0.7 | 42.50
LTSL [19]: 50.4±0.4 40.2±0.6 44.1±0.3 38.6±0.3 35.3±0.3 37.4±0.2 38.3±1.1 53.7±0.9 79.8±0.4 38.8±1.3 47.0±1.0 72.8±0.7 | 48.03
DASC [3]: 39.1±0.3 39.3±0.8 37.7±0.7 49.8±0.4 48.5±0.8 45.4±0.9 36.5±0.3 35.6±0.3 88.3±0.4 36.3±0.4 33.3±0.3 79.8±0.9 | 47.47
SCL [20]: 43.5±2.2 34.7±1.3 36.9±0.7 38.5±1.5 31.7±1.3 33.2±0.9 37.8±3.0 40.6±3.6 81.1±1.8 37.6±3.3 37.1±4.0 76.7±2.0 | 44.10
mSDA [21]: 45.3±1.9 37.4±1.2 38.3±0.7 40.3±1.8 33.7±1.3 35.5±1.1 38.1±2.8 40.4±4.0 82.0±1.8 38.5±3.4 37.8±3.7 79.0±2.1 | 45.51
IGLDA [22]: 51.0±2.7 38.4±1.9 38.6±1.2 41.5±1.8 36.4±2.2 34.2±1.5 38.9±2.5 45.1±2.4 82.6±1.8 40.0±3.4 42.2±3.6 82.4±2.4 | 47.61
MIDA: 50.3±2.5 39.2±1.9 39.8±1.0 42.7±1.8 35.5±1.1 35.7±0.7 42.3±2.8 45.7±3.6 82.2±2.0 42.8±2.8 43.6±5.0 82.4±2.0 | 48.51
SMIDA: 50.5±2.4 39.1±1.8 39.8±1.1 42.7±2.0 35.5±1.2 35.4±0.8 42.4±2.6 45.8±3.3 82.5±2.1 42.9±2.8 43.4±5.1 81.9±2.0 | 48.49
TABLE V
SEMI-SUPERVISED DOMAIN ADAPTATION ACCURACY (%) ON THE VISUAL OBJECT RECOGNITION DATASET. BOLD VALUES INDICATE THE BEST RESULTS.
Method (columns: C→A, D→A, W→A, A→C, D→C, W→C, A→D, C→D, W→D, A→W, C→W, D→W | Average)
Ori. ft.: 48.8±1.8 44.5±1.6 43.5±1.4 41.6±1.9 36.7±2.1 37.3±1.4 48.1±4.2 49.3±3.2 81.7±2.4 51.0±3.4 50.9±4.4 80.5±2.2 | 51.17
KPCA: 53.3±2.4 46.2±1.7 43.2±1.1 44.1±1.4 39.1±1.8 37.8±1.1 47.3±3.3 53.9±3.4 81.5±2.9 49.7±2.7 54.0±4.1 81.1±2.2 | 52.60
TCA [5]: 55.3±2.2 48.6±1.8 45.7±1.4 46.1±2.0 40.3±2.1 39.7±1.4 52.1±3.0 56.3±4.5 83.7±2.9 55.4±3.5 58.3±4.5 84.2±1.9 | 55.46
SSTCA [5]: 55.3±2.2 48.6±1.8 45.6±1.4 46.0±2.0 40.3±2.1 39.7±1.3 52.1±3.1 56.3±4.6 83.7±2.9 55.4±3.5 58.4±4.5 84.2±1.9 | 55.47
ITL [16]: 51.5±3.1 47.7±2.5 44.1±2.1 40.0±2.2 36.8±3.2 36.6±2.2 44.4±4.1 48.2±4.0 59.7±2.6 51.5±4.5 54.9±3.8 68.5±3.3 | 48.65
SA [17]: 51.8±2.3 47.6±2.8 44.7±1.7 42.6±1.7 36.9±2.9 36.8±2.0 45.3±4.2 49.0±3.9 71.0±2.9 47.2±2.9 49.0±3.6 76.1±2.5 | 49.82
GFK [18]: 46.1±0.6 46.2±0.6 46.2±0.7 39.6±0.4 33.9±0.6 32.3±0.6 50.9±0.9 55.0±0.9 74.1±0.9 56.9±1.0 57.0±0.9 80.2±0.4 | 51.53
LTSL [19]: 50.4±0.5 47.4±0.5 47.8±0.4 39.8±0.4 36.7±0.4 38.5±0.3 59.1±0.7 59.6±0.6 82.6±0.5 59.5±1.1 59.5±0.8 78.3±0.4 | 54.93
SCL [20]: 48.8±1.7 45.0±1.4 43.4±1.3 41.3±1.8 36.3±2.2 37.4±1.4 48.9±4.4 49.3±3.5 81.8±2.5 51.9±3.7 52.0±4.4 81.0±2.2 | 51.42
mSDA [21]: 50.4±2.2 48.1±1.6 45.6±1.6 43.6±1.9 38.9±2.2 39.3±1.5 48.9±4.5 49.2±4.9 82.3±2.5 52.9±3.9 52.1±4.4 81.6±2.1 | 52.75
MIDA: 55.2±2.2 48.6±1.8 45.6±1.4 46.1±2.0 40.4±2.2 39.7±1.4 52.1±3.0 56.4±4.6 83.7±2.8 55.3±3.4 58.5±4.6 84.2±1.8 | 55.48
SMIDA: 55.2±2.2 48.6±1.8 45.7±1.4 46.1±2.0 40.3±2.1 39.7±1.4 52.2±3.1 56.3±4.6 83.7±2.9 55.4±3.5 58.5±4.5 84.3±1.9 | 55.49
MIDA and SMIDA are flexible algorithms. With the design
of the domain features and the use of the HSIC criterion,
they can be applied in all kinds of domain adaptation problems, including discrete or continuous distributional change,
supervised/semi-supervised/unsupervised, multiple domains,
classification or regression, etc. They are also easy to implement and fast, requiring the solution of only one eigenvalue
decomposition problem. Future directions may include further
extending the definition of the domain features for other
applications.
R EFERENCES
[1] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans.
Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2010.
[2] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa, “Visual domain
adaptation: A survey of recent advances,” Signal Processing Magazine,
IEEE, vol. 32, no. 3, pp. 53–69, 2015.
[3] Z. Cui, W. Li, D. Xu, S. Shan, X. Chen, and X. Li, “Flowing on
riemannian manifold: Domain adaptation by shifting covariance,” IEEE
Trans. Cybern., vol. 44, no. 12, pp. 2264–2273, 2014.
[4] W. Bian, D. Tao, and Y. Rui, “Cross-domain human action recognition,”
Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions
on, vol. 42, no. 2, pp. 298–307, 2012.
[5] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation
via transfer component analysis,” Neural Networks, IEEE Transactions
on, vol. 22, no. 2, pp. 199–210, 2011.
[6] C.-W. Seah, Y.-S. Ong, and I. W. Tsang, “Combating negative transfer
from predictive distribution differences,” IEEE Trans. Cybern., vol. 43,
no. 4, pp. 1153–1165, 2013.
[7] J. W. Gardner and P. N. Bartlett, “A brief history of electronic noses,”
Sens. Actuators B: Chem., vol. 18, no. 1, pp. 210–211, 1994.
[8] F. Röck, N. Barsan, and U. Weimar, “Electronic nose: current status and
future trends,” Chem. Rev., vol. 108, no. 2, pp. 705–725, 2008.
[9] A. Marjovi and L. Marques, “Optimal swarm formation for odor plume
finding,” IEEE transactions on cybernetics, vol. 44, no. 12, pp. 2302–
2315, 2014.
[10] L. Zhang, F. Tian, C. Kadri, B. Xiao, H. Li, L. Pan, and H. Zhou,
“On-line sensor calibration transfer among electronic nose instruments
for monitoring volatile organic chemicals in indoor air quality,” Sens.
Actuators: B. Chem., vol. 160, no. 1, pp. 899–909, 2011.
[11] K. Yan, D. Zhang, D. Wu, H. Wei, and G. Lu, “Design of a breath analysis system for diabetes screening and blood glucose level prediction,”
IEEE Trans. Biomed. Eng., vol. 61, no. 11, pp. 2787–2795, 2014.
[12] S. Marco and A. Gutiérrez-Gálvez, “Signal and data processing for
machine olfaction and chemical sensing: a review,” IEEE Sens. J.,
vol. 12, no. 11, pp. 3189–3214, 2012.
[13] S. Di Carlo and M. Falasconi, Drift correction methods for gas chemical sensors in artificial olfaction systems: techniques and challenges.
InTech, 2012, ch. 14, pp. 305–326.
[14] K. Yan and D. Zhang, “Improving the transfer ability of prediction
models for electronic noses," Sens. Actuators B: Chem., vol. 220, pp. 115–124, 2015.
[15] ——, "Calibration transfer and drift compensation of e-noses via coupled task learning," Sens. Actuators B: Chem., vol. 225, pp. 288–297, 2016.
[16] Y. Shi and F. Sha, "Information-theoretical learning of discriminative clusters for unsupervised domain adaptation," in Proceedings of the Intl. Conf. on Machine Learning (ICML), 2012.
[17] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, "Unsupervised visual domain adaptation using subspace alignment," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2960–2967.
[18] B. Gong, K. Grauman, and F. Sha, "Learning kernels for unsupervised domain adaptation with applications to visual object recognition," International Journal of Computer Vision, vol. 109, no. 1-2, pp. 3–27, 2014.
[19] M. Shao, D. Kit, and Y. Fu, "Generalized transfer subspace learning through low-rank constraint," International Journal of Computer Vision, vol. 109, no. 1-2, pp. 74–93, 2014.
[20] J. Blitzer, M. Dredze, and F. Pereira, "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification," in ACL, vol. 7, pp. 440–447.
[21] M. Chen, Z. Xu, K. Weinberger, and F. Sha, "Marginalized denoising autoencoders for domain adaptation," in 29th International Conference on Machine Learning, 2012.
[22] M. Jiang, W. Huang, Z. Huang, and G. Yen, "Integration of global and local metrics for domain adaptation learning via dimensionality reduction," IEEE Transactions on Cybernetics, 2016.
[23] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf, "Measuring statistical dependence with Hilbert-Schmidt norms," in Algorithmic Learning Theory. Springer, 2005, pp. 63–77.
[24] R. N. Feudale, N. A. Woody, H. Tan, A. J. Myles, S. D. Brown, and J. Ferré, "Transfer of multivariate calibration models: a review," Chemometr. Intell. Lab., vol. 64, no. 2, pp. 181–192, 2002.
[25] B. Gong, Y. Shi, F. Sha, and K. Grauman, "Geodesic flow kernel for unsupervised domain adaptation," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 2066–2073.
[26] Q. Liu, X. Li, M. Ye, S. S. Ge, and X. Du, "Drift compensation for electronic nose by semi-supervised domain adaption," IEEE Sens. J., vol. 14, no. 3, pp. 657–665, 2014.
[27] L. Song, A. Smola, A. Gretton, J. Bedo, and K. Borgwardt, "Feature selection via dependence maximization," J. Mach. Learn. Res., vol. 13, no. 1, pp. 1393–1434, 2012.
[28] L. Song, A. Gretton, K. M. Borgwardt, and A. J. Smola, "Colored maximum variance unfolding," in Advances in Neural Information Processing Systems, 2007, pp. 1385–1392.
[29] E. Barshan, A. Ghodsi, Z. Azimifar, and M. Z. Jahromi, "Supervised principal component analysis: Visualization, classification and regression on subspaces and submanifolds," Pattern Recogn., vol. 44, no. 7, pp. 1357–1371, 2011.
[30] H. Daumé III, "Frustratingly easy domain adaptation," in Proc. 45th Ann. Meeting of the Assoc. for Computational Linguistics, 2007.
[31] B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5, pp. 1299–1319, 1998.
[32] B. Schölkopf and K.-R. Müller, "Fisher discriminant analysis with kernels," Neural Networks for Signal Processing, vol. IX, pp. 41–48, 1999.
[33] P. von Bünau, F. C. Meinecke, F. C. Király, and K.-R. Müller, "Finding stationary subspaces in multivariate time series," Physical Review Letters, vol. 103, no. 21, p. 214101, 2009.
[34] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, "A survey on concept drift adaptation," ACM Computing Surveys (CSUR), vol. 46, no. 4, p. 44, 2014.
[35] M. Belkin, P. Niyogi, and V. Sindhwani, "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples," J. Mach. Learn. Res., vol. 7, pp. 2399–2434, 2006.
[36] A. Vergara, S. Vembu, T. Ayhan, M. A. Ryan, M. L. Homer, and R. Huerta, "Chemical gas sensor drift compensation using classifier ensembles," Sens. Actuators B: Chem., vol. 166, pp. 320–329, 2012.
Large-Scale Low-Rank Matrix Learning with
Nonconvex Regularizers
Quanming Yao, Member IEEE, James T. Kwok, Fellow IEEE, Taifeng Wang, Member IEEE,
and Tie-Yan Liu, Fellow IEEE
Abstract—Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is
often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical
performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art requires an expensive full
SVD in each iteration. In this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values
obtained from the proximal operator can be automatically thresholded. This allows the proximal operator to be efficiently approximated by
the power method. We then develop a fast proximal algorithm and its accelerated variant with inexact proximal step. A convergence
rate of O(1/T ), where T is the number of iterations, can be guaranteed. Furthermore, we show the proposed algorithm can be
parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. Extensive experiments are
performed on matrix completion and robust principal component analysis. Significant speedup over the state-of-the-art is observed.
Index Terms—Low-rank matrix learning, Nonconvex regularization, Proximal algorithm, Parallel algorithm, Matrix completion, Robust
principal component analysis
1 INTRODUCTION
Low-rank matrix learning is a central issue in many
machine learning and computer vision problems. For
example, matrix completion [1], which is one of the most
successful approaches in collaborative filtering, assumes
that the target rating matrix is low-rank. Besides collaborative filtering, matrix completion has also been used on
tasks such as video and image processing [2], [3], [4], [5], [6].
Another important use of low-rank matrix learning is robust
principal component analysis (RPCA) [7], which assumes
that the target matrix is low-rank and also corrupted by
sparse noise. RPCA has been popularly used in computer
vision applications such as shadow removal, background
modeling [7], [8], [9], and robust photometric stereo [10].
Besides, low-rank matrix learning has also been used in face
recognition [11] and subspace clustering [12].
However, minimization of the matrix rank is NP-hard
[1]. To alleviate this problem, a common approach is to
use a convex surrogate such as the nuclear norm (which
is the sum of singular values of the matrix). It is known
that the nuclear norm is the tightest convex lower bound
of the rank. Though the nuclear norm is non-smooth, the
resultant optimization problem can be solved efficiently
using modern tools such as the proximal algorithm [13],
[14], [15], Frank-Wolfe algorithm [16], and active subspace
selection method [17].
Despite the success of the nuclear norm, recently there
have been numerous attempts to use nonconvex surrogates
that better approximate the rank function. The key idea is
that the larger, and thus more informative, singular values
should be less penalized. Example nonconvex low-rank
regularizers include the capped-`1 penalty [18], log-sum
penalty (LSP) [19], truncated nuclear norm (TNN) [3], [9],
smoothly clipped absolute deviation (SCAD) [20], and minimax concave penalty (MCP) [21]. They have been applied
on various computer vision tasks, such as image denoising
[6] and background modeling [9]. Empirically, these nonconvex regularizers achieve better recovery performance than
the convex nuclear norm regularizer. Recently, theoretical
results have also been established [22].
However, the resultant nonconvex optimization problem is much more challenging. Most existing optimization
algorithms that work with the nuclear norm cannot be
applied. A general approach that can still be used is the
concave-convex procedure [23], which decomposes the nonconvex regularizer into a difference of convex functions [3],
[18]. However, a sequence of relaxed optimization problems
have to be solved, and can be computationally expensive
[24], [25]. A more efficient approach is the recently proposed
iteratively re-weighted nuclear norm (IRNN) algorithm [5].
It is based on the observation that existing nonconvex regularizers are concave with non-increasing super-gradients.
Each IRNN iteration only involves computing the super-gradient of the regularizer and a singular value decomposition (SVD). However, performing SVD on an m × n matrix
takes O(mn^2) time (assuming m ≥ n), and can be expensive
on large matrices.
Recently, the proximal algorithm has been used for nonconvex low-rank matrix learning [3], [5], [9], [26]. However,
it requires the full SVD to solve the proximal operator,
which can be expensive. In this paper, we observe that for
the commonly-used nonconvex low-rank regularizers [3],
[9], [18], [19], [20], [21], the singular values obtained from
the corresponding proximal operator can be automatically
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, SUBMITTED
thresholded. One then only needs to find the leading singular values/vectors in order to generate the next iterate.
Moreover, instead of computing the proximal operator on
a large matrix, one only needs to use the matrix projected
onto its leading subspace. The matrix size is significantly
reduced and the proximal operator can be made much more
efficient. Besides, by using the power method [27], a good
approximation of this subspace can be efficiently obtained.
While the proposed procedure can be readily used with
the standard proximal algorithm, its convergence properties
are not directly applicable as the proximal step here is only
approximately solved. In the sequel, we will show that
inexactness on the proximal step can be controlled, and an
O(1/T) convergence rate can still be guaranteed. Moreover,
the algorithm can be further speeded up using acceleration.
Effectiveness of the proposed algorithms is demonstrated on
two popular low-rank matrix learning applications, namely
matrix completion and robust principal component analysis
(RPCA). For matrix completion, we show that additional
speedup is possible by exploring the problem’s “sparse
plus low-rank” structure; whereas for RPCA, we extend the
proposed algorithm so that it can handle the two parameter
blocks involved in the RPCA formulation.
With the popularity of multicore shared-memory platforms, we parallelize the proposed algorithms so as to
handle much larger data sets. We will show that they can
achieve almost linear speedup w.r.t. the number of threads.
Experiments are performed on both synthetic and realworld data sets. Results show that the proposed nonconvex low-rank matrix learning algorithms can be several
orders faster than the state-of-the-art, and outperform other
approaches including factorization and the use of nuclear
norm regularization.
Preliminary results of this paper have been reported in
[28]. In this full version, we speed up the algorithm with
acceleration, and demonstrate how it can be applied to two
important instances of low-rank matrix learning problems,
namely matrix completion and RPCA. Besides, we show
how the proposed algorithms can be parallelized. More
extensive empirical evaluations are also performed on both
the sequential and parallel versions of the algorithms.
Notation: In the sequel, vectors are denoted by lowercase boldface, matrices by uppercase boldface, and the transpose by the superscript (·)^T. For a square matrix X, tr(X) is its trace. For a rectangular matrix X, ‖X‖_F = √(tr(X^T X)) is its Frobenius norm, and ‖X‖_* = Σ_i σ_i(X), where σ_i(X) is the ith leading singular value of X, is the nuclear norm. Given x = [x_i] ∈ R^m, Diag(x) constructs an m × m diagonal matrix whose ith diagonal element is x_i. I denotes the identity matrix. For a differentiable function f, we use ∇f for its gradient. For a nonsmooth function, we use ∂f for its subdifferential, i.e., ∂f(x) = {s : f(y) ≥ f(x) + s^T(y − x)}.
2 BACKGROUND
2.1 Proximal Algorithm
In this paper, we consider low-rank matrix learning problems of the form
min_X F(X) ≡ f(X) + λ r(X),   (1)
where f is a smooth loss, r is a nonsmooth low-rank regularizer, and λ is a regularization parameter. We make the following assumptions on f.
A1. f is not necessarily convex, but is differentiable with ρ-Lipschitz continuous gradient, i.e., ‖∇f(X_1) − ∇f(X_2)‖_F ≤ ρ‖X_1 − X_2‖_F. Without loss of generality, we assume that ρ ≤ 1.
A2. f is bounded below, i.e., inf f(X) > −∞, and lim_{‖X‖_F→∞} f(X) = ∞.
In recent years, the proximal algorithm [29] has been popularly used for solving (1). At iteration t, it produces
X_{t+1} = prox_{λ/τ r}(X_t − (1/τ)∇f(X_t)),   (2)
where τ > ρ is the stepsize, and
prox_{λ/τ r}(Z) ≡ arg min_X (1/2)‖X − Z‖_F^2 + (λ/τ) r(X)   (3)
is the proximal operator [29]. The proximal step in (2) can also be rewritten as X_{t+1} = arg min_Y tr(∇f(X_t)^T(Y − X_t)) + (τ/2)‖Y − X_t‖_F^2 + λ r(Y).
When f and r are convex, the proximal algorithm converges to the optimal solution at a rate of O(1/T), where T is the number of iterations. This can be further accelerated to the rate of O(1/T^2), by replacing X_t in (2) with a proper linear combination of X_t and X_{t−1} [30]. Recently, the accelerated proximal algorithm has been extended to problems where f or r may be nonconvex [25], [31]. The state-of-the-art is the nonmonotone accelerated proximal gradient (nmAPG) algorithm [25] (Algorithm 1). Each iteration may perform two proximal steps (steps 4 and 8). Acceleration is performed in step 3. The objective is then checked to determine whether X^a_{t+1} is accepted (step 5). As the problem is nonconvex, its convergence rate is still open. However, empirically it is much faster.
Algorithm 1 Nonmonotone APG (nmAPG) [25].
Input: choose τ > ρ, δ > 0 and η ∈ [0, 1);
1: initialize X_0 = X_1 = X^a_1 = 0, α_0 = α_1 = 1, b_1 = F(X_1) and q_1 = 1;
2: for t = 1, 2, ..., T do
3:   Y_t = X_t + (α_{t−1}/α_t)(X^a_t − X_t) + ((α_{t−1} − 1)/α_t)(X_t − X_{t−1});
4:   X^a_{t+1} = prox_{λ/τ r}(Y_t − (1/τ)∇f(Y_t));
5:   if F(X^a_{t+1}) ≤ b_t − (δ/2)‖X^a_{t+1} − Y_t‖_F^2 then
6:     X_{t+1} = X^a_{t+1};
7:   else
8:     X^p_{t+1} = prox_{λ/τ r}(X_t − (1/τ)∇f(X_t));
9:     X_{t+1} = X^a_{t+1} if F(X^a_{t+1}) ≤ F(X^p_{t+1}), and X^p_{t+1} otherwise;
10:  end if
11:  α_{t+1} = (1/2)(√(4α_t^2 + 1) + 1);
12:  b_{t+1} = (1/q_{t+1})(η q_t b_t + F(X_{t+1})), where q_{t+1} = η q_t + 1;
13: end for
14: return X_{T+1}.
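For illustration, the basic proximal iteration in (2) can be sketched as follows (a minimal Python sketch; grad_f and prox_op are problem-specific placeholders, not part of the paper):

def proximal_gradient(X0, grad_f, prox_op, tau, lam, T):
    """Generic proximal iteration as in (2):
    X_{t+1} = prox_{lam/tau * r}(X_t - grad_f(X_t) / tau)."""
    X = X0
    for _ in range(T):
        X = prox_op(X - grad_f(X) / tau, lam / tau)
    return X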
2.2 Nonconvex Low-Rank Regularizers
For the proximal algorithm to be successful, the proximal
operator has to be efficient. The following shows that the
proximal operator of the nuclear norm ‖·‖_* has a closed-form solution.
Proposition 2.1 ([32]). prox_{μ‖·‖_*}(X) = U(Σ − μI)_+ V^T, where UΣV^T is the SVD of X, and (A)_+ = [max(A_ij, 0)].
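A minimal NumPy sketch of Proposition 2.1 (singular value thresholding), for illustration only:

import numpy as np

def prox_nuclear(X, mu):
    """Proximal operator of the nuclear norm: soft-threshold the
    singular values of X by mu (Proposition 2.1)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt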
While the (convex) nuclear norm makes low-rank optimization easier, it may not be a good enough approximation
of the matrix rank [3], [5], [6], [9], [26]. As mentioned in
Section 1, a number of nonconvex surrogates have been
recently proposed. In this paper, we make the following
assumption on the low-rank regularizer r in (1), which is
satisfied by all nonconvex low-rank regularizers in Table 1.
A3. r is possibly non-smooth and nonconvex, and of the form r(X) = Σ_{i=1}^m r̂(σ_i(X)), where r̂(α) is concave and non-decreasing for α ≥ 0 and r̂(0) = 0.
TABLE 1
r̂'s for some popular nonconvex low-rank regularizers. For the TNN regularizer, θ ∈ {1, ..., n} is the number of leading singular values that are not penalized; for SCAD, θ > 2; and for the others, θ > 0.
Regularizer: μ r̂(σ_i(X))
capped-ℓ1 [18]: μ min(σ_i(X), θ)
LSP [19]: μ log(σ_i(X)/θ + 1)
TNN [3], [9]: μ σ_i(X) if i > θ; 0 otherwise
SCAD [20]: μ σ_i(X) if σ_i(X) ≤ μ; (−σ_i^2(X) + 2θμσ_i(X) − μ^2)/(2(θ − 1)) if μ < σ_i(X) ≤ θμ; (θ + 1)μ^2/2 otherwise
MCP [21]: μ σ_i(X) − σ_i^2(X)/(2θ) if σ_i(X) ≤ θμ; θμ^2/2 otherwise
Recently, the iteratively reweighted nuclear norm (IRNN) algorithm [5] has been proposed to handle this nonconvex low-rank matrix optimization problem. In each iteration, it solves a subproblem in which the original nonconvex regularizer is approximated by a weighted version of the nuclear norm ‖X‖_w = Σ_{i=1}^m w_i σ_i(X) and 0 ≤ w_1 ≤ ··· ≤ w_m. The subproblem has a closed-form solution, but SVD is needed, which takes O(mn^2) time. Other solvers that are designed for specific nonconvex low-rank regularizers include [8] (for capped-ℓ1), [3], [9] (for TNN), and [21] (for MCP). All these (including IRNN) perform SVD in each iteration, which takes O(mn^2) time, and are slow.
While the proximal algorithm has mostly been used on convex problems, recently it has also been applied to nonconvex problems [3], [5], [6], [8], [9], [26]. The generalized proximal gradient (GPG) algorithm [26] is the first proximal algorithm which can handle all the above nonconvex regularizers. In particular, its proximal operator can be computed as follows.
Proposition 2.2 (Generalized singular value thresholding (GSVT) [26]). For any r satisfying assumption A3, prox_{μr}(Z) = U Diag(y^*) V^T, where UΣV^T is the SVD of Z, and y^* = [y_i^*] with
y_i^* ∈ arg min_{y_i ≥ 0} (1/2)(y_i − σ_i(Z))^2 + μ r̂(y_i).   (4)
In [26], problem (4) is solved by fixed-point iteration. However, closed-form solutions indeed exist for the regularizers in Table 1 [24]. Nevertheless, Proposition 2.2 still involves SVD, which takes O(mn^2) time.
3 PROPOSED ALGORITHM
In this section, we show how the proximal algorithm can be
made much faster by using approximate GSVT.
3.1 Automatic Thresholding of Singular Values
The following Proposition shows that yi∗ in (4) becomes zero
when σi (Z) is smaller than a regularizer-specific threshold.
Proof can be found in Appendix C.1.
Proposition 3.1. There exists a threshold γ > 0 such that y_i^* = 0 when σ_i(Z) ≤ γ.
Together with Proposition 2.2, solving the proximal operator (3) only needs the leading singular values/vectors
of Z. For the nonconvex regularizers in Table 1, simple
closed-form solutions of γ can be obtained by examining
the optimality conditions of (4). Proof can be found in
Appendix C.2.
Corollary 3.2. The γ values for the following regularizers are:
• capped-ℓ1: γ = min(√(2θμ), μ);
• LSP: γ = min(μ/θ, θ);
• TNN: γ = max(μ, σ_{θ+1}(Z));
• SCAD: γ = μ;
• MCP: γ = √θ μ if 0 < θ < 1, and μ otherwise.
This can also be used with the nuclear norm. It can be shown that γ = λ/τ, and y_i^* = max(σ_i(A) − λ/τ, 0). However, since our focus is on nonconvex regularizers, the case for the nuclear norm will not be further pursued in the sequel.
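A sketch of these thresholds in code, based on the values of Corollary 3.2 as reconstructed above (illustrative only; the singular values sigma of Z are needed only for TNN):

import numpy as np

def gamma_threshold(reg, mu, theta, sigma=None):
    """Singular-value threshold gamma from Corollary 3.2 (a sketch)."""
    if reg == "capped-l1":
        return min(np.sqrt(2.0 * theta * mu), mu)
    if reg == "lsp":
        return min(mu / theta, theta)
    if reg == "tnn":
        return max(mu, sigma[int(theta)])     # sigma_{theta+1}(Z), 0-based index
    if reg == "scad":
        return mu
    if reg == "mcp":
        return np.sqrt(theta) * mu if 0 < theta < 1 else mu
    raise ValueError(reg)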
3.2 Approximate GSVT
Proposition 2.2 computes the proximal operator by exact
SVD. In this section, we show that one can use approximate
SVD, which is more efficient.
3.2.1 Reducing the Size of SVD
Assume that Z has k̂ singular values larger than γ; then
we only need a rank-k SVD on Z with k ≥ k̂. Let the
rank-k̂ SVD of Z be Uk̂ Σk̂ Vk̂> . The following Proposition
shows that proxµr (Z) can be obtained from the proximal
operator on a smaller matrix.1 The proof can be found in
Appendix C.3.
Proposition 3.3. Assume that Q ∈ R^{m×k}, where k ≥ k̂, is orthogonal and span(U_k̂) ⊆ span(Q). Then, prox_{μr}(Z) = Q prox_{μr}(Q^T Z).
3.2.2 Obtaining an Approximate GSVT
To obtain such a Q, we use the power method [27]
(Algorithm 2) which has been recently used to approximate
the SVT in nuclear norm minimization [15], [17]. As in [17],
we set the number of power iterations to 3. Warm-start
can be used via matrix R in Algorithm 2. This is particularly useful because of the iterative nature of the proximal
algorithm. Obtaining an approximate Q using Algorithm 2
takes O(mnk) time. As in [14], [34], [35], the PROPACK
1. We noticed a similar result in [33] after the conference version of
this paper [28] has been accepted. However, [33] only considers the case
where r is the nuclear norm regularizer.
algorithm [36] can also be used to obtain Q in O(mnk) time.
However, it finds the Q exactly and cannot benefit from
warm-start. Hence, though it has the same time complexity
as power method, empirically it is much less efficient [15].
Algorithm 2 PowerMethod(Z, R).
Input: Z ∈ R^{m×n}, R ∈ R^{n×k}, and the number of power iterations J = 3.
1: Y_1 = Z R;
2: for j = 1, 2, ..., J do
3:   Q_j = QR(Y_j); // QR decomposition (returning only the Q matrix)
4:   Y_{j+1} = Z(Z^T Q_j);
5: end for
6: return Q_J.
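A NumPy sketch of Algorithm 2, for illustration only:

import numpy as np

def power_method(Z, R, J=3):
    """Approximate an orthonormal basis of the leading left singular
    subspace of Z, warm-started by R (Algorithm 2 sketch)."""
    Y = Z @ R
    for _ in range(J):
        Q, _ = np.linalg.qr(Y)    # keep only the Q factor
        Y = Z @ (Z.T @ Q)
    return Q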
The approximate GSVT procedure is shown in Algorithm 3. Step 1 uses the power method to efficiently obtain an orthogonal matrix Q that approximates span(U_k̂). Step 2 performs a small SVD. Though this SVD is still exact, Q^T Z is much smaller than Z (k × n vs m × n), and SVD(Q^T Z) takes only O(nk^2) time. In step 3, the singular values Σ_ii's are thresholded using Corollary 3.2. Steps 6-8 obtain an (approximate) prox_{μr}(Z) using Proposition 2.2. The time complexity for GSVT is reduced from O(mn^2) to O(mnk).
Algorithm 3 Approximate GSVT: ApproxGSVT(Z, R, μ).
Input: Z ∈ R^{m×n} and R ∈ R^{n×k} for warm-start;
1: Q = PowerMethod(Z, R);
2: [U, Σ, V] = SVD(Q^T Z);
3: a = number of Σ_ii's that are > γ in Corollary 3.2;
4: U_a = a leading columns of U;
5: V_a = a leading columns of V;
6: for i = 1, 2, ..., a do
7:   obtain y_i^* from (4);
8: end for
9: return low-rank components of X̃ (QU_a, Diag([y_1^*, ..., y_a^*]) and V_a^T), and V.
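A sketch of Algorithm 3 built on the power-method sketch above; prox_scalar, which solves the one-dimensional problem (4) for a single singular value, and the threshold gamma from Corollary 3.2 are assumptions supplied by the caller:

import numpy as np

def approx_gsvt(Z, R, mu, gamma, prox_scalar, J=3):
    """Approximate GSVT of Z (Algorithm 3 sketch)."""
    Q = power_method(Z, R, J)                                # leading left subspace
    U, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)   # small k x n SVD
    keep = s > gamma                                         # automatic thresholding
    y = np.array([prox_scalar(si, mu) for si in s[keep]])
    U_low = Q @ U[:, keep]                                   # left factors of prox(Z)
    return U_low, y, Vt[keep], Vt.T                          # low-rank factors and V for warm-start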
3.3 Inexact Proximal Step
In this section, the proximal step will be inexact, and so it can utilize the approximate GSVT in Algorithm 3. Inexact proximal steps have been considered in [37], [38]. However, r in (1) is assumed to be convex in [38]. Attouch et al. [37] considered nonconvex r, but they require a difficult and expensive condition to control inexactness (an example is provided in Appendix B).
Let A = X − (1/τ)∇f(X). The following shows that the objective F is always decreased (as τ > ρ) after an exact proximal step.
Lemma 3.4 ([24], [37]). F(prox_{λ/τ r}(A)) ≤ F(X) − ((τ − ρ)/2)‖prox_{λ/τ r}(A) − X‖_F^2.
Motivated by this Lemma, we propose to control the proximal step's inexactness by Algorithm 4 (note that X = X_t for the proximal gradient algorithm in (2)). An approximate proximal step solution X̃_p is generated in step 4, and we try to ensure
F(X̃_p) ≤ F(X) − c_1 ‖X̃_p − X‖_F^2,   (5)
where c_1 = (τ − ρ)/4 (note that this is less stringent than the condition in Lemma 3.4). If (5) holds, we accept X̃_p; otherwise, we improve X̃_p by using Ṽ_{p−1} to warm-start the next iterate. The following Proposition shows convergence of Algorithm 4. The proof can be found in Appendix C.4.
Proposition 3.5. If k ≥ k̂_A, where k̂_A is the number of singular values in A larger than γ, then lim_{p→∞} X̃_p = prox_{λ/τ r}(A).

Algorithm 4 Inexact proximal step: InexactPS(X, R).
Input: X ∈ R^{m×n}, and R ∈ R^{n×k} for warm-start;
1: A = X − (1/τ)∇f(X);
2: Ṽ_0 = R;
3: for p = 1, 2, ... do
4:   [X̃_p, Ṽ_p] = ApproxGSVT(A, Ṽ_{p−1}, λ/τ);
5:   if F(X̃_p) ≤ F(X) − c_1 ‖X̃_p − X‖_F^2 then
6:     break;
7:   end if
8: end for
9: return X̃_p.

3.4 The Complete Procedure
The complete procedure for solving (1) is shown in Algorithm 5, and will be called FaNCL (Fast NonConvex Low-rank). Similar to [15], [17], we perform warm-start using the column spaces of the previous iterates (V_t and V_{t−1}). For further speedup, we employ a continuation strategy at step 3, as in [5], [14], [34]. Specifically, λ_t is initialized to a large value and then decreases gradually.
Algorithm 5 FaNCL (Fast NonConvex Low-rank) algorithm.
Input: choose τ > ρ, λ_0 > λ and ν ∈ (0, 1);
1: initialize V_0, V_1 ∈ R^n as random Gaussian matrices and X_1 = 0;
2: for t = 1, 2, ..., T do
3:   λ_t = (λ_{t−1} − λ)ν^t + λ;
4:   R_t = QR([V_t, V_{t−1}]); // warm start
5:   X_{t+1} = InexactPS(X_t, R_t);
6: end for
7: return X_{T+1}.
Assume that evaluations of f and ∇f take O(mn) time, which is valid for many applications such as matrix completion and RPCA. Let r_t be the rank of X_t at the tth iteration, and k_t = r_t + r_{t−1}. In Algorithm 5, step 4 takes O(nk_t^2) time; and step 5 takes O(mnpk_t) time, as R_t has k_t columns. The iteration time complexity is thus O(mnpk_t). In the experiment, we set p = 1, which is enough to guarantee (5) empirically. The iteration time complexity of Algorithm 5 thus reduces to O(mnk_t). In contrast, exact GSVT takes O(mn^2) time, and is much slower as k_t ≪ n. Besides, the space complexity of Algorithm 5 is O(mn).
3.5 Convergence Analysis
The inexact proximal algorithm is first considered in [38], which assumes r to be convex. The nonconvex extension is considered in [37]. However, as discussed in Section 3.3, they use an expensive condition to control inexactness of the proximal step. Thus, their analysis cannot be applied here.
It is known that r̂ in Assumption A3 can be decomposed as a difference of convex functions [24]. The following Proposition shows that r also admits such a decomposition. The proof is in Appendix C.5.
Proposition 3.6. r can be decomposed as r̆ − r̃, where r̆ and r̃ are convex.
Based on this decomposition, we introduce the definition of critical point.
Definition 1 ([39]). If 0 ∈ ∇f(X) + λ(∂r̆(X) − ∂r̃(X)), then X is a critical point of F.
The following Proposition shows that Algorithm 5 generates a bounded sequence. The proof is in Appendix C.6.
Proposition 3.7. The sequence {X_t} generated from Algorithm 5 is bounded, and has at least one limit point.
Let G_{λ/τ r}(X_t) = X_t − prox_{λ/τ r}(X_t − (1/τ)∇f(X_t)), which is known as the proximal mapping of F at X_t [29]. If G_{λ/τ r}(X_t) = 0, X_t is a critical point of (1) [24], [37]. This motivates the use of ‖G_{λ/τ r}(X_t)‖_2^2 to measure convergence in [31]. However, ‖G_{λ/τ r}(X_t)‖_2^2 cannot be used here as r is nonconvex and the proximal step is inexact. As Proposition 3.7 guarantees the existence of limit points, we use ‖X_{t+1} − X_t‖_F^2 instead to measure convergence. If the proximal step is exact, ‖G_{λ/τ r}(X_t)‖_F^2 = ‖X_{t+1} − X_t‖_F^2. The following Corollary shows convergence of Algorithm 5. Its proof can be found in Appendix C.7.
Corollary 3.8. min_{t=1,...,T} ‖X_{t+1} − X_t‖_F^2 ≤ (F(X_1) − inf F)/(c_1 T).
The following Theorem shows that any limit point is also a critical point. The proof is in Appendix C.8.
Theorem 3.9. Assume that Algorithm 4 returns X only when X = prox_{λ/τ r}(X − (1/τ)∇f(X)) (i.e., the input is returned as output only if it is a limit point). Let {X_{t_j}} be a subsequence of {X_t} generated by Algorithm 5 such that lim_{t_j→∞} X_{t_j} = X^*. Then, X^* is a critical point of (1).

3.6 Acceleration
In convex optimization, acceleration has been commonly used to speed up convergence of proximal algorithms [30]. Recently, it has also been extended to nonconvex optimization [25], [31]. A state-of-the-art algorithm is the nmAPG [25] (Algorithm 1).
In this section, we integrate nmAPG with FaNCL. The whole procedure is shown in Algorithm 6. The accelerated iterate is obtained in step 4. If the resultant inexact proximal step solution can achieve a sufficient decrease (step 7) as in (5), this iterate is accepted (step 8); otherwise, we choose the inexact proximal step solution obtained with the non-accelerated iterate X_t (step 10). Note that step 10 is the same as step 5 of Algorithm 5. Thus, the iteration time complexity of Algorithm 6 is at most twice that of Algorithm 5, and still O(mnk_t). Besides, its space complexity is O(mn), which is the same as Algorithm 5.

Algorithm 6 Accelerated FaNCL algorithm (FaNCL-acc).
Input: choose τ > ρ, λ_0 > λ, δ > 0 and ν ∈ (0, 1);
1: initialize V_0, V_1 ∈ R^n as random Gaussian matrices, X_0 = X_1 = 0 and α_0 = α_1 = 1;
2: for t = 1, 2, ..., T do
3:   λ_t = (λ_{t−1} − λ)ν + λ;
4:   Y_t = X_t + ((α_{t−1} − 1)/α_t)(X_t − X_{t−1});
5:   R_t = QR([V_t, V_{t−1}]); // warm start
6:   X^a_{t+1} = InexactPS(Y_t, R_t);
7:   if F(X^a_{t+1}) ≤ F(X_t) − (δ/2)‖X^a_{t+1} − Y_t‖_F^2 then
8:     X_{t+1} = X^a_{t+1};
9:   else
10:    X_{t+1} = InexactPS(X_t, R_t);
11:  end if
12:  α_{t+1} = (1/2)(√(4α_t^2 + 1) + 1);
13: end for
14: return X_{T+1}.

There are several major differences between Algorithm 6 and nmAPG. First, the proximal step of Algorithm 6 is only inexact. To make the algorithm more robust, we do not allow nonmonotonous updates (i.e., F(X_{t+1}) cannot be larger than F(X_t)). Moreover, we use a simpler acceleration scheme (step 4), in which only X_t and X_{t−1} are involved. On matrix completion problems, this allows using the "sparse plus low-rank" structure [14], [15] to greatly reduce the iteration complexity (Section 4.1). Finally, we do not require an extra comparison of the objective at step 10. This further reduces the iteration complexity.
The following Proposition shows that Algorithm 6 generates a bounded sequence. Proof can be found in Appendix C.9.
Proposition 3.10. The sequence {X_t} generated from Algorithm 6 is bounded, and has at least one limit point.
In Corollary 3.8, ‖X_{t+1} − X_t‖_F^2 is used to measure progress before and after the proximal step. In Algorithm 6, the proximal step may use the accelerated iterate Y_t or the non-accelerated iterate X_t. Hence, we use ‖X_{t+1} − C_t‖_F^2, where C_t = Y_t if step 8 is performed, and C_t = X_t otherwise. Similar to Corollary 3.8, the following shows an O(1/T) convergence rate. Proof can be found in Appendix C.10.
Corollary 3.11. For Algorithm 6, min_{t=1,...,T} ‖X_{t+1} − C_t‖_F^2 ≤ (F(X_1) − inf F)/(min(c_1, δ/2) T).
On nonconvex optimization problems, the optimal convergence rate for first-order methods is O(1/T ) [31], [40].
Thus, the convergence rate of Algorithm 6 (Corollary 3.11)
cannot improve that of Algorithm 5 (Corollary 3.8). However, in practice, acceleration can still significantly reduce
the number of iterations on nonconvex problems [25], [31].
On the other hand, as Algorithm 6 may need a second
proximal step (step 10), its iteration time complexity can
be higher than that of Algorithm 5. However, this is much
compensated by the speedup in convergence. As will be
demonstrated in Section 6.1, empirically Algorithm 6 is
much faster.
The following Theorem shows that any limit point of the
iterates from Algorithm 6 is also a critical point. Proof can
be found in Appendix C.11.
Theorem 3.12. Let {Xtj } be a subsequence of {Xt } generated
by Algorithm 6 such that limtj →∞ Xtj = X∗ . With the assumption in Theorem 3.9, X∗ is a critical point of (1).
4 APPLICATIONS
In this section, we consider two important instances of problem (1), namely, matrix completion [1] and robust principal
component analysis (RPCA) [7]. As the accelerated FaNCL
algorithm (Algorithm 6) is usually faster than its nonaccelerated variant, we will only consider the accelerated
variant here. For matrix completion (Section 4.1), we will
show that Algorithm 6 can be made even faster and require
much less memory by using the “sparse plus low-rank”
structure of the problem. In Section 4.2, we show how
Algorithm 6 can be extended to deal with the two parameter
blocks in RPCA.
4.1 Matrix Completion
Matrix completion attempts to recover a low-rank matrix
O ∈ Rm×n by observing only some of its elements [1]. Let
the observed positions be indicated by Ω ∈ {0, 1}m×n , such
that Ωij = 1 if Oij is observed, and 0 otherwise. Matrix
completion can be formulated as an optimization problem
in (1), with
F(X) = (1/2)‖P_Ω(X − O)‖_F^2 + λ r(X),   (6)
where [PΩ (A)]ij = Aij if Ωij = 1 and 0 otherwise. In the
following, we show that the time and space complexities of
Algorithm 6 can be further reduced.
4.1.1 Utilizing the Problem Structure
First, consider step 7, which checks the objectives. Computing F(X_t) relies only on the observed positions in Ω and the singular values of X_t. Hence, instead of explicitly constructing X_t, we maintain the SVD U_t Σ_t V_t^T of X_t and a sparse matrix P_Ω(X_t). Computing F(X_t) then takes O(‖Ω‖_1 r_t) time. Computing F(X^a_{t+1}) takes O(‖Ω‖_1 k_t) time, as R_t has rank k_t. Next, since Y_t is a linear combination of X_t and X_{t−1} in step 4, we can use the above SVD-factorized form and compute ‖X^a_{t+1} − Y_t‖_F^2 in O((m + n)k_t^2) time. Thus, step 7 takes O(‖Ω‖_1 k_t + (m + n)k_t^2) time.
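A sketch of evaluating the data-fitting term from the factored form and the observed entries only (illustrative; variable names are ours, not the paper's):

import numpy as np

def loss_on_observed(U, s, Vt, rows, cols, vals):
    """Evaluate 0.5 * ||P_Omega(X - O)||_F^2 without forming X = U diag(s) Vt:
    only the observed entries X_ij = sum_k U_ik s_k Vt_kj are computed.
    rows, cols, vals hold the observed positions and values of O."""
    X_obs = np.einsum('ik,k,ki->i', U[rows], s, Vt[:, cols])
    return 0.5 * np.sum((X_obs - vals) ** 2)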
Steps 6 and 10 perform the inexact proximal step. For the first proximal step (step 6), Y_t (defined in step 4) can be rewritten as (1 + β_t)X_t − β_t X_{t−1}, where β_t = (α_{t−1} − 1)/α_t. When it calls InexactPS, step 1 of Algorithm 4 has
A = Y_t + (1/τ) P_Ω(O − Y_t) = (1 + β_t)X_t − β_t X_{t−1} + (1/τ) P_Ω(O − Y_t).   (7)
The first two terms involve low-rank matrices, while the last term involves a sparse matrix. This special "sparse plus low-rank" structure [14], [15] can speed up matrix multiplications. Specifically, for any V ∈ R^{n×k}, AV can be obtained as
AV = (1 + β_t) U_t Σ_t (V_t^T V) − β_t U_{t−1} Σ_{t−1} (V_{t−1}^T V) + (1/τ) P_Ω(O − Y_t) V.   (8)
Similarly, for any U ∈ R^{m×k}, U^T A can be obtained as
U^T A = (1 + β_t)(U^T U_t) Σ_t V_t^T − β_t (U^T U_{t−1}) Σ_{t−1} V_{t−1}^T + (1/τ) U^T P_Ω(O − Y_t).   (9)
Both (8) and (9) take O((m + n)k_t k + ‖Ω‖_1 k) time (instead of O(mnk)). As R_t in step 5 of Algorithm 6 has k_t columns, each call to approximate GSVT takes O((m + n)k_t^2 + ‖Ω‖_1 k_t) time [15] (instead of O(mnk_t)). Finally, step 5 in Algorithm 4 also takes O((m + n)k_t^2 + ‖Ω‖_1 k_t) time. As a result, step 6 of Algorithm 6 takes a total of O((m + n)k_t^2 + ‖Ω‖_1 k_t) time.
Step 10 is slightly cheaper (as no X_{t−1} is involved), and its time complexity is O((m + n)r_t k_t + ‖Ω‖_1 r_t). Summarizing, the iteration time complexity of Algorithm 6 is
O((m + n)k_t^2 + ‖Ω‖_1 k_t).   (10)
Usually, k_t ≪ n and ‖Ω‖_1 ≪ mn [1], [14]. Thus, (10) is much cheaper than the O(mnk_t) complexity of standard FaNCL-acc (Section 3.6).
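A sketch of the multiplication in (8) using the factored form (illustrative only; sparse_term stands for (1/τ)P_Ω(O − Y_t), stored as any object supporting matrix multiplication, e.g., a SciPy sparse matrix):

import numpy as np

def matvec_AV(U_t, s_t, V_t, U_p, s_p, V_p, beta, sparse_term, V):
    """Compute A V as in (8), with X_t = U_t diag(s_t) V_t^T and
    X_{t-1} = U_p diag(s_p) V_p^T kept in factored form. Cost is
    O((m+n) k_t k + ||Omega||_1 k) instead of O(mnk)."""
    low_rank = (1 + beta) * (U_t @ (s_t[:, None] * (V_t.T @ V))) \
               - beta * (U_p @ (s_p[:, None] * (V_p.T @ V)))
    return low_rank + sparse_term @ V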
The space complexity is also reduced. We only need to store the low-rank factorizations of X_t and X_{t−1}, and the sparse matrices P_Ω(X_t) and P_Ω(X_{t−1}). These take a total of O((m + n)k_t + ‖Ω‖_1) space (instead of O(mn) in Section 3.6). These techniques can also be used on Algorithm 5. It can be easily shown that its iteration time complexity is O((m + n)r_t k_t + ‖Ω‖_1 r_t), and its space complexity is O((m + n)r_t + ‖Ω‖_1) (as no X_{t−1} is involved).
4.1.2 Comparison with Existing Algorithms
Table 2 compares the convergence rates, iteration time complexities, and space complexities of various matrix completion algorithms that will be empirically compared in
Section 6. Overall, the proposed algorithms (Algorithms 5
and 6) enjoy fast convergence, cheap iteration complexity
and low memory cost. While Algorithms 5 and 6 have
the same convergence rate, we will see in Section 6.1 that
Algorithm 6 (which uses acceleration) is significantly faster.
4.2 Robust Principal Component Analysis (RPCA)
Given a noisy data matrix O ∈ Rm×n , RPCA assumes that
O can be approximated by the sum of a low-rank matrix X
plus some sparse noise S [7]. Its optimization problem is:
min_{X,S} F(X, S) ≡ f(X, S) + λ r(X) + υ g(S),   (11)
where f(X, S) = (1/2)‖X + S − O‖_F^2, r is a low-rank regularizer, and g is a sparsity-inducing regularizer. Here, we allow both r and g to be nonconvex and nonsmooth. Thus, (11) can be seen as a nonconvex extension of RPCA (which uses the nuclear norm regularizer for r and the ℓ1-regularizer for g). Some examples of nonconvex r are shown in Table 1, and examples of nonconvex g include the ℓ1-norm, the capped-ℓ1 norm [18], and the log-sum penalty [19].
TABLE 2
Comparison of the iteration time complexities, convergence rates and space complexities of various matrix completion solvers. Here, kt = rt + r_{t−1}, ν ∈ (0, 1) and integer Ta > 0 are constants. For the active subspace selection method (active) [17], Ts is the number of inner iterations required.

regularizer | method | convergence rate | iteration time complexity | space complexity
(convex) nuclear norm | APG [13], [34] | O(1/T²) | O(mn rt) | O(mn)
(convex) nuclear norm | active [17] | O(ν^(T−Ta)) | O(‖Ω‖1 kt Ts) | O((m+n)kt + ‖Ω‖1)
(convex) nuclear norm | AIS-Impute [15] | O(1/T²) | O(‖Ω‖1 kt + (m+n)kt²) | O((m+n)kt + ‖Ω‖1)
fixed-rank factorization | LMaFit [41] | — | O(‖Ω‖1 rt + (m+n)rt²) | O((m+n)rt + ‖Ω‖1)
fixed-rank factorization | ER1MP [42] | O(ν^T) | O(‖Ω‖1) | O((m+n)rt + ‖Ω‖1)
nonconvex | IRNN [5] | — | O(mn²) | O(mn)
nonconvex | GPG [26] | — | O(mn²) | O(mn)
nonconvex | FaNCL | O(1/T) | O(‖Ω‖1 rt + (m+n)rt kt) | O((m+n)rt + ‖Ω‖1)
nonconvex | FaNCL-acc | O(1/T) | O(‖Ω‖1 kt + (m+n)kt²) | O((m+n)kt + ‖Ω‖1)
While (11) involves two blocks of parameters (X and S), they are not coupled together. Thus, we can use the separability of the proximal operator [29]:

prox_{λr+υg}([X, S]) = [prox_{λr}(X), prox_{υg}(S)].

For many popular sparsity-inducing regularizers, computing prox_{υg}(S) takes only O(mn) time [24]. For example, when g(S) = Σ_{i,j} |Sij|, [prox_{υg}(S)]_{ij} = sign(Sij) max(|Sij| − υ, 0), where sign(x) is the sign of x. However, directly computing prox_{λr}(X) requires O(mn²) time and is expensive. To alleviate this problem, Algorithm 5 can be easily extended to Algorithm 7. The iteration time complexity, which is dominated by the inexact proximal steps in steps 6 and 13, is reduced to O(mnkt).
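For illustration, here is the element-wise soft-thresholding operator quoted above (the proximal operator of the ℓ1-regularizer), written as a minimal MATLAB sketch; it is a generic textbook operator rather than the paper's code.

% [prox_{upsilon*||.||_1}(S)]_ij = sign(S_ij) * max(|S_ij| - upsilon, 0); runs in O(mn) time.
function T = prox_l1(S, upsilon)
    T = sign(S) .* max(abs(S) - upsilon, 0);
end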
Algorithm 7 FaNCL-acc algorithm for RPCA.
Input: choose τ > ρ, λ0 > λ, δ > 0, ν ∈ (0, 1) and c1 = (τ − ρ)/4;
1: initialize V0, V1 ∈ R^n as random Gaussian matrices, X0 = X1 = 0, S0 = S1 = 0 and α0 = α1 = 1;
2: for t = 1, 2, . . . , T do
3:    λt = (λ_{t−1} − λ)ν^t + λ;
4:    [Y_t^X, Y_t^S] = [Xt, St] + ((α_{t−1} − 1)/αt)([Xt, St] − [X_{t−1}, S_{t−1}]);
5:    Rt = QR([Vt, V_{t−1}]);   // warm start
6:    X^a_{t+1} = InexactPS(Y_t^X, Rt);
7:    S^a_{t+1} = prox_{υ/τ g}(Y_t^S − (1/τ) ∇_S f(Y_t^X, Y_t^S));
8:    Δt = ‖X^a_{t+1} − Y_t^X‖²_F + ‖S^a_{t+1} − Y_t^S‖²_F;
9:    if F(X^a_{t+1}, S^a_{t+1}) ≤ F(Xt, St) − (δ/2) Δt then
10:      X_{t+1} = X^a_{t+1};
11:      S_{t+1} = S^a_{t+1};
12:   else
13:      X_{t+1} = InexactPS(Xt, Rt);
14:      S_{t+1} = prox_{υ/τ g}(St − (1/τ) ∇_S f(Xt, St));
15:   end if
16:   α_{t+1} = (1/2)(√(4αt² + 1) + 1);
17: end for
18: return X_{T+1} and S_{T+1}.
Convergence results in Section 3.6 can be easily extended to this RPCA problem. Proofs of the following can be found in Appendices C.12, C.13 and C.14.

Proposition 4.1. The sequence {[Xt, St]} generated from Algorithm 7 is bounded, and has at least one limit point.

Corollary 4.2. Let Ct = [Y_t^X, Y_t^S] if steps 10 and 11 are performed, and Ct = [Xt, St] otherwise. Then, min_{t=1,...,T} ‖[X_{t+1}, S_{t+1}] − Ct‖²_F ≤ (F(X1, S1) − inf F) / (min(c1, δ/2) T).

Theorem 4.3. Let [X_{tj}, S_{tj}] be a subsequence of {[Xt, St]} generated by Algorithm 7 such that lim_{tj→∞} X_{tj} = X∗ and lim_{tj→∞} S_{tj} = S∗. With the assumption in Theorem 3.9, [X∗, S∗] is a critical point of (11).
5 PARALLEL FANCL FOR MATRIX COMPLETION
In this section, we show how the proposed algorithms can be parallelized. We will only consider the matrix completion problem in (6). Extension to other problems, such as RPCA in Section 4.2, can be performed similarly. Moreover, for simplicity of discussion, we focus on the simpler FaNCL algorithm (Algorithm 5). Its accelerated variant (Algorithm 6) can be similarly parallelized and is shown in Appendix A.
Parallel algorithms for matrix completion have been
proposed in [43], [44], [45]. However, they are based on
stochastic gradient descent and matrix factorization, and
cannot be directly used here.
5.1 Proposed Algorithm
Operations on a matrix X are often of the form: (i) multiplications U⊤X and XV for some U, V (e.g., in (8) and (9)); and (ii) element-wise operations (e.g., evaluation of F(X) in (5)). A popular scheme in parallel linear algebra is block distribution [46]. Assume that there are q threads for parallelization. Block distribution partitions the rows and columns of X into q parts, leading to a total of q² blocks. Figure 1 shows how the computations of XV, U⊤X and element-wise operations can be easily parallelized. In Algorithm 5, the most important variables are the low-rank factorized form Ut Σt Vt⊤ of Xt, and the sparse matrices PΩ(Xt) and PΩ(O). Using block distribution, they are thus partitioned as in Figure 2.
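The block-distribution idea can be conveyed with a small MATLAB sketch: the rows of X are split into q groups and each worker multiplies its own block with V. This is only an illustration of the data layout (the paper's implementation uses C++ threads); all names and sizes below are ours, and parfor runs as an ordinary for loop if no parallel pool is available.

% Sketch of block distribution for computing X*V with q workers.
q = 3;                                   % number of threads (as in Figure 1)
m = 600; n = 400; k = 5;
X = randn(m, n); V = randn(n, k);
edges = round(linspace(0, m, q + 1));    % row boundaries of the q blocks
blocks = cell(q, 1);
parfor b = 1:q
    rows = edges(b) + 1 : edges(b + 1);
    blocks{b} = X(rows, :) * V;          % each worker handles one row block
end
XV = vertcat(blocks{:});                 % stack the per-block results
% norm(XV - X*V, 'fro') is ~0, so the partitioned computation matches the direct product.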
The resultant parallelized version of FaNCL is shown in Algorithm 8. Steps that can be parallelized are marked with "▷". Two new subroutines are introduced, namely, IndeSpan-PL (step 6), which replaces the QR factorization, and ApproxGSVT-PL (step 9), which is the parallelized version of Algorithm 3. They will be discussed in more detail in the following sections. Note that Algorithm 8 is equivalent to Algorithm 5 except that it is parallelized. Thus, the convergence results in Section 3.5 still hold.
5.1.1 Identifying the Span (Step 5)
In step 4 of Algorithm 5, QR factorization is used to find the span of the matrix [Vt, V_{t−1}]. This can be parallelized with the Householder transformation and Gaussian elimination [46], which, however, is typically complex. The following proposition gives a simpler method to identify the span of a matrix. Its proof can be found in Appendix C.15.
Fig. 1. Parallelization of different matrix operations: (a) U⊤X; (b) XV; (c) element-wise operation. Here, the number of threads q is equal to 3. Each dotted path denotes the operation of a thread.
Fig. 2. Partitioning of the variables UΣV⊤ ((a)) and O ((b)) when three threads are used (q = 3).

Algorithm 9 Parallel algorithm to identify the span of A: IndeSpan-PL(A).
Input: matrix A ∈ R^{n×k};
1: ▷ B = A⊤A;
2: [U, Σ, V] = SVD(B);
3: construct w as in Proposition 5.1;
4: V = V (Diag(w))^{−1/2};
5: ▷ Q = AV;
6: return Q. // Q = [Q1⊤, . . . , Qp⊤]⊤
Algorithm 8 FaNCL in parallel: FaNCL-PL.
Input: choose τ > ρ, λ0 > λ and ν ∈ (0, 1);
1: initialize V0, V1 ∈ R^n as random Gaussian matrices and X1 = 0;
2: partition X1, PΩ(X1) and PΩ(O);
3: start q threads for parallelization;
4: for t = 1, 2, . . . , T do
5:    λt = (λ_{t−1} − λ)ν^t + λ;
6:    ▷ Rt = IndeSpan-PL([Vt, V_{t−1}]);
7:    ▷ At = Xt − (1/τ) PΩ(Xt − O);
8:    for p = 1, 2, . . . do
9:       ▷ [X̃p, Rt] = ApproxGSVT-PL(At, Rt, λ/τ);
10:      ▷ ap = F(X̃p);
11:      ▷ at = F(Xt);
12:      ▷ aF = ‖X̃p − Xt‖²_F;
13:      if ap ≤ at − c1 aF then
14:         break;
15:      end if
16:   end for
17:   ▷ X_{t+1} = X̃p;
18: end for
19: return X_{T+1}.
Proposition 5.1. Given a matrix A, let the SVD of A⊤A be VΣV⊤, and let w = [wi], where wi = Σii if Σii > 0 and wi = 1 otherwise. Then, AV(Diag(w))^{−1/2} is orthogonal and contains span(A).

The resultant parallel algorithm is shown in Algorithm 9. Its time complexity is O((n/q + q)k² + k³). Algorithm 8 calls Algorithm 9 with input [Vt, V_{t−1}], and thus takes O((n/q + q)kt² + kt³) time, where kt = rt + r_{t−1}. We do not parallelize steps 2-4, as only k × k matrices are involved and k is small. Moreover, though an SVD is used, it is performed on a k × k matrix and only takes O(k³) time.
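A minimal single-threaded MATLAB sketch of the span-identification step (Proposition 5.1 / Algorithm 9) is given below; the parallel version would simply distribute the A⊤A and AV products as in Figure 1. The function name inde_span and the handling of exactly-zero singular values are our own choices for illustration.

% Sketch of IndeSpan: orthogonal basis of span(A) via a k x k SVD of the Gram matrix.
function Q = inde_span(A)
    B = A' * A;                        % k x k Gram matrix
    [~, Sigma, V] = svd(B);            % A'*A = V*Sigma*V'
    w = diag(Sigma);
    w(w <= 0) = 1;                     % w_i = Sigma_ii if positive, 1 otherwise (Prop. 5.1)
    Q = A * (V * diag(1 ./ sqrt(w)));  % A*V*Diag(w)^(-1/2): orthogonal, contains span(A)
end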
5.1.2 Approximate GSVT (Step 8)
The key steps in the approximate GSVT (Algorithm 3) are the power method and the SVD. The power method can be parallelized straightforwardly as in Algorithm 10, in which we also replace the QR subroutine with Algorithm 9.
Algorithm 10 Parallel power method: PowerMethod-PL(Z, R).
Input: matrix Z ∈ R^{m×n}, R ∈ R^{n×k}.
1: ▷ Y1 = ZR;
2: for j = 1, 2, . . . , J do
3:    ▷ Qj = IndeSpan-PL(Yj);
4:    ▷ Y_{j+1} = Z(Z⊤Qj);
5: end for
6: return QJ.
As for the SVD, multiple QR factorizations are usually needed for parallelization [46], which are complex as discussed in Section 5.1.1. The following proposition performs it in a simpler manner. The proof can be found in Appendix C.16.

Proposition 5.2. Given a matrix B ∈ R^{n×k}, let P ∈ R^{n×k} be orthogonal with span(P) = span(B), and let the SVD of P⊤B be UΣV⊤. Then, the SVD of B is (PU)ΣV⊤.
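Proposition 5.2 can be checked numerically with a short MATLAB sketch. It assumes the hypothetical inde_span routine from the sketch above as a way of obtaining an orthogonal P spanning span(B); only the small k × k SVD is then needed.

% Sketch: SVD of a tall-thin B in R^{n x k} via a k x k SVD (Proposition 5.2).
n = 1000; k = 8;
B = randn(n, k) * randn(k, k);            % a tall matrix with (generically) full column rank
P = inde_span(B);                         % orthogonal, span(P) = span(B)  (sketch above)
[U, Sigma, V] = svd(P' * B);              % small k x k SVD
U_B = P * U;                              % left singular vectors of B
err = norm(U_B * Sigma * V' - B, 'fro');  % ~1e-12: (PU)*Sigma*V' recovers B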
The resultant parallelized procedure for the approximate GSVT is shown in Algorithm 11. At step 5, a small SVD is performed (by a single thread) on the k × k matrix P⊤B. At step 8 of Algorithm 8, X̃p is returned from Algorithm 11, and we keep X̃p in its low-rank factorized form. Besides, when Algorithm 11 is called, Z = At and has the "sparse plus low-rank" structure mentioned earlier. Hence, (8) and (9) can be used to speed up matrix multiplications.² As Rt has kt columns in Algorithm 8, PowerMethod-PL in step 1 takes O((kt/q)‖Ω‖1 + ((m+n)/q)kt²) time, steps 2-6 take O((n/q + q)kt² + kt³) time, and the rest takes O(kt) time. The total time complexity of Algorithm 11 is O((kt/q)‖Ω‖1 + ((m+n)/q)kt² + (q + kt)kt²).

2. As no acceleration is used, βt is equal to 0 in these two equations.
Algorithm 11 Approximate GSVT in parallel: ApproxGSVT-PL(Z, R, µ).
Input: partitioned matrix Z ∈ R^{m×n} and R ∈ R^{n×k};
1: ▷ Q = PowerMethod-PL(Z, R);
2: ▷ B = Z⊤Q;   // B ∈ R^{n×k}
3: ▷ P = IndeSpan-PL(B);
4: ▷ A = P⊤B;
5: [U, Σ, V] = SVD(A);   // U, Σ, V, A ∈ R^{k×k}
6: ▷ U = PU;
7: a = number of Σii's that are > γ in Corollary 3.2;
8: ▷ Ua = a leading columns of U;
9: ▷ Va = a leading columns of V;
10: for i = 1, 2, . . . , a do
11:    obtain yi∗ from (4);
12: end for
13: return the low-rank components of X̃ (QUa, Diag([y1∗, . . . , ya∗]) and Va⊤), and V.
5.1.3 Checking of Objectives (steps 9-11)
As shown in Figure 1(c), the computation of ‖PΩ(Xt − O)‖²_F in F(·) can be directly parallelized and takes O((1/q)‖Ω‖1) time. As r only relies on Σt, only one thread is needed to evaluate r(Xt). Thus, computing F(Xt) takes O((rt/q)‖Ω‖1) time. Similarly, computing F(X̃p) takes O((kt/q)‖Ω‖1) time. As ‖X̃p − Xt‖²_F = tr(X̃p⊤X̃p − 2X̃p⊤Xt + Xt⊤Xt), the low-rank factorized forms of X̃p and Xt can be utilized. Based on Figures 1(a) and 1(b), this can be performed in O(((m+n)/q)kt²) time. Thus, the time complexity for steps 9-11 in Algorithm 8 is O((kt/q)‖Ω‖1 + ((m+n)/q)kt²).

The iteration time complexity of Algorithm 8 is thus O((1/q)((m + n)kt² + ‖Ω‖1 kt) + (q + kt)kt²). Compared with (10), the speedup w.r.t. the number of threads q is almost linear.
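The trace identity used above can be made concrete with a short MATLAB sketch: ‖A − B‖²_F is evaluated from the low-rank factors A = Ua Sa Va⊤ and B = Ub Sb Vb⊤ using only k × k Gram matrices, in O((m + n)k²) time. This is a generic identity, not the paper's code; all names are ours.

% ||A - B||_F^2 = tr(A'*A) - 2*tr(A'*B) + tr(B'*B), without forming A or B.
function d = frob_dist_sq(Ua, Sa, Va, Ub, Sb, Vb)
    d = trace((Ua' * Ua) * (Sa * (Va' * Va) * Sa')) ...        % tr(A'*A)
      - 2 * trace((Sa' * (Ua' * Ub) * Sb) * (Vb' * Va)) ...    % tr(A'*B)
      + trace((Ub' * Ub) * (Sb * (Vb' * Vb) * Sb'));           % tr(B'*B)
end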
6 EXPERIMENTS
In this section, we perform experiments on matrix completion, RPCA and the parallelized variant of Algorithm 6 (Appendix A). Experiments are performed on a Windows server 2013 system with an Intel Xeon E5-2695-v2 CPU (12 cores, 2.4GHz) and 256GB memory. All the algorithms in Sections 6.1 and 6.2 are implemented in Matlab. For Section 6.3, we use C++, the Intel-MKL package³ for matrix operations, and the standard thread library⁴ for multi-thread programming.

3. https://software.intel.com/en-us/intel-mkl
4. http://www.cplusplus.com/reference/thread/thread/

6.1 Matrix Completion
We compare a number of low-rank matrix completion solvers, including models based on (i) the commonly used (convex) nuclear norm regularizer; (ii) fixed-rank factorization models [41], [42], which decompose the observed matrix O into a product of rank-k matrices U and V, with the optimization problem min_{U,V} (1/2)‖PΩ(UV − O)‖²_F + (λ/2)(‖U‖²_F + ‖V‖²_F); and (iii) nonconvex regularizers, including the capped-ℓ1 (with θ in Table 1 set to 2λ), LSP (with θ = √λ), and TNN (with θ = 3).
The nuclear norm minimization algorithms to be compared include:
1) the accelerated proximal gradient (APG) algorithm [13], [34], with the partial SVD by PROPACK [36];
2) AIS-Impute [14], an inexact and accelerated proximal algorithm, in which the "sparse plus low-rank" structure of the matrix iterate is utilized to speed up computation (Section 4.1); and
3) active subspace selection (denoted "active") [17], which adds/removes rank-one subspaces from the active set in each iteration; the nuclear norm optimization problem is then reduced to a smaller problem defined only on this active set.
We do not compare with the Frank-Wolfe algorithm [16] and
stochastic gradient descent [47], as they have been shown to
be less efficient [15], [17].
For the fixed-rank factorization models (where the rank is tuned by the validation set), we compare with two state-of-the-art algorithms:
1) the low-rank matrix fitting (LMaFit) algorithm [41]; and
2) economical rank-one matrix pursuit (ER1MP) [42], which pursues a rank-one basis in each iteration.
We do not compare with the concave-convex procedure [3],
[18], since it has been shown to be inferior to IRNN [24].
For models with nonconvex low-rank regularizers, we compare with the following solvers:
1) the iterative reweighted nuclear norm (IRNN) algorithm [5];
2) the generalized proximal gradient (GPG) algorithm [26], with the underlying problem (4) solved using the closed-form solutions in [24]; and
3) the proposed FaNCL algorithm (Algorithm 5) and its accelerated variant FaNCL-acc (Algorithm 6). We set J = 3 and p = 1.
All the algorithms are stopped when the difference in objective values between consecutive iterations becomes smaller than 10⁻⁵.
6.1.1 Synthetic Data
The observed m × m matrix is generated as O = UV + G, where the elements of U ∈ R^{m×k} and V ∈ R^{k×m} (with k = 5) are sampled i.i.d. from the standard normal distribution N(0, 1), and the elements of G are sampled from N(0, 0.1). A total of ‖Ω‖1 = 2mk log(m) random elements in O are observed. Half of them are used for training, and the rest as a validation set for parameter tuning. Testing is performed on the unobserved elements.

For performance evaluation, we use (i) the normalized mean squared error NMSE = ‖PΩ⊥(X − UV)‖_F / ‖PΩ⊥(UV)‖_F, where X is the recovered matrix and Ω⊥ denotes the unobserved positions; (ii) the rank of X; and (iii) the training CPU time. We vary m in the range {500, 1000, 2000}. Each experiment is repeated five times.
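The synthetic setup above can be sketched in a few lines of MATLAB. This is only an illustration of the data-generation protocol; in particular, interpreting the noise distribution N(0, 0.1) as standard deviation 0.1, and the variable names, are our assumptions.

% Synthetic matrix completion data, following the setup in Section 6.1.1 (sketch).
m = 500; k = 5;
U = randn(m, k); V = randn(k, m);            % ground-truth factors with N(0,1) entries
O = U * V + 0.1 * randn(m, m);               % observed = UV + Gaussian noise (scale 0.1 assumed)
nObs = round(2 * m * k * log(m));            % ||Omega||_1 = 2*m*k*log(m) observed entries
idx = randperm(m * m, nObs);                 % random observed positions
mask = false(m, m); mask(idx) = true;        % Omega (half would be used for validation)
% NMSE on the unobserved positions, for a recovered matrix X:
% nmse = norm((X - U*V) .* ~mask, 'fro') / norm((U*V) .* ~mask, 'fro');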
TABLE 3
Matrix completion performance on the synthetic data. Here, NMSE is scaled by 10⁻², CPU time is in seconds and the number in brackets is the data sparsity. The best results (according to the pairwise t-test with 95% confidence) are highlighted.

regularizer | method | m = 500 (12.43%): NMSE / rank / time | m = 1000 (6.91%): NMSE / rank / time | m = 2000 (3.80%): NMSE / rank / time
nuclear norm | APG | 4.26±0.01 / 50 / 12.6±0.7 | 4.27±0.01 / 61 / 99.6±9.1 | 4.13±0.01 / 77 / 1177.5±134.2
nuclear norm | AIS-Impute | 4.11±0.01 / 55 / 5.8±2.9 | 4.01±0.03 / 57 / 37.9±2.9 | 3.50±0.01 / 65 / 338.1±54.1
nuclear norm | active | 5.37±0.03 / 53 / 12.5±1.0 | 6.63±0.03 / 69 / 66.4±3.3 | 6.44±0.10 / 85 / 547.3±91.6
fixed rank | LMaFit | 3.08±0.02 / 5 / 0.5±0.1 | 3.02±0.02 / 5 / 1.3±0.1 | 2.84±0.03 / 5 / 4.9±0.3
fixed rank | ER1MP | 21.75±0.05 / 40 / 0.3±0.1 | 21.94±0.09 / 54 / 0.8±0.1 | 20.38±0.06 / 70 / 2.5±0.3
capped-ℓ1 | IRNN | 1.98±0.01 / 5 / 14.5±0.7 | 1.99±0.01 / 5 / 146.0±2.6 | 1.79±0.01 / 5 / 2759.9±252.8
capped-ℓ1 | GPG | 1.98±0.01 / 5 / 14.8±0.9 | 1.99±0.01 / 5 / 144.6±3.1 | 1.79±0.01 / 5 / 2644.9±358.0
capped-ℓ1 | FaNCL | 1.97±0.01 / 5 / 0.3±0.1 | 1.98±0.01 / 5 / 1.0±0.1 | 1.79±0.01 / 5 / 5.0±0.4
capped-ℓ1 | FaNCL-acc | 1.97±0.01 / 5 / 0.1±0.1 | 1.95±0.01 / 5 / 0.5±0.1 | 1.78±0.01 /  / 2.3±0.2
LSP | IRNN | 1.96±0.01 / 5 / 16.8±0.6 | 1.89±0.01 / 5 / 196.1±3.9 | 1.79±0.01 / 5 / 2951.7±361.3
LSP | GPG | 1.96±0.01 / 5 / 16.5±0.4 | 1.89±0.01 / 5 / 193.4±2.1 | 1.79±0.01 / 5 / 2908.9±358.0
LSP | FaNCL | 1.96±0.01 / 5 / 0.4±0.1 | 1.89±0.01 / 5 / 1.3±0.1 | 1.79±0.01 / 5 / 5.5±0.4
LSP | FaNCL-acc | 1.96±0.01 / 5 / 0.2±0.1 | 1.89±0.01 / 5 / 0.7±0.1 | 1.77±0.01 /  / 2.4±0.2
TNN | IRNN | 1.96±0.01 / 5 / 18.8±0.6 | 1.88±0.01 / 5 / 223.1±4.9 | 1.77±0.01 / 5 / 3220.3±379.7
TNN | GPG | 1.96±0.01 / 5 / 18.0±0.6 | 1.88±0.01 / 5 / 220.9±4.5 | 1.77±0.01 / 5 / 3197.8±368.9
TNN | FaNCL | 1.95±0.01 / 5 / 0.4±0.1 | 1.88±0.01 / 5 / 1.4±0.1 | 1.77±0.01 / 5 / 6.1±0.5
TNN | FaNCL-acc | 1.96±0.01 / 5 / 0.2±0.1 | 1.88±0.01 / 5 / 0.8±0.1 | 1.77±0.01 /  / 2.9±0.2
Results are shown in Table 3. As can be seen, nonconvex regularization (capped-ℓ1, LSP and TNN) leads to much lower NMSE's than convex nuclear norm regularization and fixed-rank factorization. Moreover, the nuclear norm and ER1MP output much higher ranks. In terms of speed among the nonconvex low-rank solvers, FaNCL is fast, and FaNCL-acc is the fastest. The larger the matrix, the higher the speedup of FaNCL and FaNCL-acc over GPG and IRNN.
6.1.2 MovieLens
Experiments are performed on the popular MovieLens data sets (Table 4), which contain ratings of different users on movies. We follow the setup in [42], and use 50% of the observed ratings for training, 25% for validation and the rest for testing. For performance evaluation, we use the root mean squared error on the test set Ω̄: RMSE = √(‖PΩ̄(X − O)‖²_F / ‖Ω̄‖1), where X is the recovered matrix. The experiment is repeated five times.
TABLE 4
Recommendation data sets used in the experiments.

data set | #users | #movies | #ratings
MovieLens 100K | 943 | 1,682 | 100,000
MovieLens 1M | 6,040 | 3,449 | 999,714
MovieLens 10M | 69,878 | 10,677 | 10,000,054
netflix | 480,189 | 17,770 | 100,480,507
yahoo | 249,012 | 296,111 | 62,551,438
Fig. 3. Objective vs CPU time for the capped-ℓ1 ((a)) and LSP ((b)) on MovieLens-100K. The plot for TNN is similar and thus not shown.
Results are shown in Table 5. Again, the nonconvex regularizers lead to the lowest RMSE's. Moreover, FaNCL-acc is also the fastest among the nonconvex low-rank solvers, even faster than the state-of-the-art GPG. In particular, FaNCL and its accelerated variant FaNCL-acc are the only solvers (for nonconvex regularization) that can be run on the MovieLens-1M and 10M data sets. Figure 3 compares the objectives vs CPU time for the nonconvex regularization solvers on MovieLens-100K. As can be seen, FaNCL and FaNCL-acc decrease the objective and RMSE much faster than the others. Figure 4 shows the testing RMSEs on MovieLens-10M. Again, FaNCL-acc is the fastest.
6.1.3 Netflix and Yahoo
Next, we perform experiments on two very large recommendation data sets, Netflix and Yahoo (Table 4). We randomly use 50% of the observed ratings for training, 25% for validation and the rest for testing. Each experiment is repeated five times.

Results are shown in Table 6. APG, GPG and IRNN cannot be run as the data sets are large. AIS-Impute has similar running time as LMaFit but inferior performance, and thus is not compared. Again, the nonconvex regularizers converge faster, and yield lower RMSE's and solutions of much lower rank. Figure 5 shows the objectives and RMSE vs time, and FaNCL-acc is the fastest.⁵

Fig. 5. RMSE vs CPU time on the netflix ((a)) and yahoo ((b)) data sets.

5. On these two data sets, ER1MP easily overfits as the rank increases. Hence, the validation set selects a smaller rank (relative to that obtained by the nuclear norm) and ER1MP stops earlier. However, as can be seen, its RMSE is much worse.
TABLE 5
Matrix completion results on the MovieLens data sets (time is in seconds). The best results (according to the pairwise t-test with 95% confidence) are highlighted.

regularizer | method | MovieLens-100K: RMSE / rank / time | MovieLens-1M: RMSE / rank / time | MovieLens-10M: RMSE / rank / time
nuclear norm | APG | 0.877±0.001 / 36 / 47.1±7.6 | 0.818±0.001 / 67 / 2174.4±117.3 | — / — / > 10⁵
nuclear norm | AIS-Impute | 0.878±0.002 / 36 / 1.2±0.1 | 0.819±0.001 / 67 / 12.4±0.9 | 0.813±0.001 / 100 / 540.6±107.9
nuclear norm | active | 0.878±0.001 / 36 / 2.7±0.3 | 0.820±0.001 / 67 / 82.6±5.2 | 0.814±0.001 / 100 / 2338.3±304.7
fixed rank | LMaFit | 0.865±0.002 / 2 / 1.2±0.2 | 0.806±0.003 / 6 / 24.8±1.3 | 0.792±0.001 / 9 / 501.7±95.2
fixed rank | ER1MP | 0.917±0.003 / 5 / 0.1±0.1 | 0.853±0.001 / 13 / 1.3±0.1 | 0.852±0.002 / 22 / 62.7±17.8
capped-ℓ1 | IRNN | 0.854±0.003 / 3 / 289.9±60.6 | — / — / > 10⁴ | — / — / > 10⁵
capped-ℓ1 | GPG | 0.855±0.002 / 3 / 223.6±49.7 | — / — / > 10⁴ | — / — / > 10⁵
capped-ℓ1 | FaNCL | 0.855±0.003 / 3 / 0.5±0.2 | 0.788±0.002 / 5 / 16.8±2.9 | 0.783±0.001 / 8 / 341.6±58.5
capped-ℓ1 | FaNCL-acc | 0.860±0.009 / 3 / 0.3±0.1 | 0.791±0.001 / 5 / 4.7±0.5 | 0.778±0.001 / 8 / 98.4±14.7
LSP | IRNN | 0.856±0.001 / 2 / 286.6±48.1 | — / — / > 10⁴ | — / — / > 10⁵
LSP | GPG | 0.856±0.001 / 2 / 277.5±56.9 | — / — / > 10⁴ | — / — / > 10⁵
LSP | FaNCL | 0.856±0.001 / 2 / 0.5±0.1 | 0.786±0.001 / 5 / 24.6±1.5 | 0.779±0.001 / 9 / 641.4±82.6
LSP | FaNCL-acc | 0.853±0.001 / 2 / 0.2±0.1 | 0.787±0.001 / 5 / 5.4±0.5 | 0.779±0.001 / 9 / 209.3±42.5
TNN | IRNN | 0.854±0.004 / 3 / 133.8±30.7 | — / — / > 10⁴ | — / — / > 10⁵
TNN | GPG | 0.853±0.005 / 3 / 591.3±127.2 | — / — / > 10⁴ | — / — / > 10⁵
TNN | FaNCL | 0.865±0.016 / 3 / 1.2±0.2 | 0.786±0.001 / 5 / 23.8±0.9 | 0.780±0.001 / 8 / 712.0±142.6
TNN | FaNCL-acc | 0.861±0.009 / 3 / 0.4±0.1 | 0.786±0.001 / 5 / 5.4±0.3 | 0.778±0.001 / 9 / 207.6±56.9
TABLE 6
Results on the netflix and yahoo data sets (CPU time is in minutes). The best results (according to the pairwise t-test with 95% confidence) are highlighted.

regularizer | method | netflix: RMSE / rank / time | yahoo: RMSE / rank / time
fixed rank | LMaFit | 0.811±0.001 / 15 / 116.4±12.2 | 0.666±0.001 / 10 / 229.3±62.8
fixed rank | ER1MP | 0.862±0.006 / 25 / 7.1±1.1 | 0.810±0.003 / 77 / 27.9±7.5
capped-ℓ1 | FaNCL | 0.798±0.001 / 13 / 220.0±24.0 | 0.656±0.001 / 8 / 333.0±113.9
capped-ℓ1 | FaNCL-acc | 0.795±0.001 / 13 / 92.8±13.8 | 0.651±0.001 / 8 / 90.2±16.2
LSP | FaNCL | 0.794±0.001 / 15 / 145.5±10.2 | 0.652±0.001 / 9 / 339.8±93.9
LSP | FaNCL-acc | 0.792±0.001 / 15 / 69.6±21.1 | 0.650±0.001 / 8 / 76.1±33.2
TNN | FaNCL | 0.797±0.001 / 13 / 275.1±16.7 | 0.657±0.001 / 7 / 321.2±69.1
TNN | FaNCL-acc | 0.795±0.001 / 13 / 104.3±17.1 | 0.650±0.001 / 7 / 72.9±16.1
Fig. 4. RMSE vs CPU time on the MovieLens-10M data set.
6.2 Robust Principal Component Analysis (RPCA)
6.2.1 Synthetic Data
In this section, we first perform experiments on a synthetic data set. The observed m × m matrix is generated as O = UV + S̃ + G, where the elements of U ∈ R^{m×k} and V ∈ R^{k×m} (with k = 0.01m) are sampled i.i.d. from N(0, 1), and the elements of G are sampled from N(0, 0.1). The matrix S̃ is sparse, with 1% of its elements randomly set to 5‖UV‖∞ or −5‖UV‖∞ with equal probability. The whole data set is then randomly split into training and test sets of equal size. The standard ℓ1-regularizer is used as the sparsity regularizer g in (11), and different convex/nonconvex low-rank regularizers are used as r. The hyperparameters λ and υ in (11) are tuned using the training set.
For performance evaluation, we use (i) NMSE = ‖(X + S) − (UV + S̃)‖_F / ‖UV + S̃‖_F, where X and S are the recovered low-rank and sparse components, respectively; (ii) the accuracy in locating the sparse support of S̃ (i.e., the percentage of entries for which S̃ij and Sij are both nonzero or both zero); (iii) the recovered rank; and (iv) the CPU time. We vary m in {500, 1000, 2000}. Each experiment is repeated five times.
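A MATLAB sketch of the data generation above follows. It is only illustrative; treating the noise distribution N(0, 0.1) as standard deviation 0.1, and all variable names, are our assumptions.

% Synthetic RPCA data, following Section 6.2.1 (sketch).
m = 500; k = round(0.01 * m);                  % k = 0.01*m
U = randn(m, k); V = randn(k, m);
L = U * V;                                     % low-rank part
S = zeros(m, m);
nSpikes = round(0.01 * m * m);                 % 1% of the entries are corrupted
pos = randperm(m * m, nSpikes);
S(pos) = 5 * max(abs(L(:))) * sign(randn(1, nSpikes));  % +/- 5*||UV||_inf with equal probability
O = L + S + 0.1 * randn(m, m);                 % add Gaussian noise (scale 0.1 assumed)
% NMSE = ||(X + S_hat) - (L + S)||_F / ||L + S||_F for the recovered X and S_hat.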
TABLE 7
RPCA performance on synthetic data. NMSE is scaled by 10⁻³, and CPU time is in seconds. The best results (according to the pairwise t-test with 95% confidence) are highlighted.

regularizer | method | m = 500: NMSE / rank / time | m = 1000: NMSE / rank / time | m = 2000: NMSE / rank / time
nuclear norm | APG | 4.88±0.17 / 5 / 4.3±0.2 | 3.31±0.06 / 10 / 24.5±1.0 | 2.40±0.05 / 20 / 281.2±26.7
capped-ℓ1 | GPG | 4.51±0.16 / 5 / 8.5±2.6 | 2.93±0.07 / 10 / 42.9±6.6 | 2.16±0.05 / 20 / 614.1±64.7
capped-ℓ1 | FaNCL-acc | 4.51±0.16 / 5 / 0.8±0.2 | 2.93±0.07 / 10 / 2.8±0.1 | 2.16±0.05 / 20 / 24.9±2.0
LSP | GPG | 4.51±0.16 / 5 / 8.3±2.3 | 2.93±0.07 / 10 / 42.6±5.9 | 2.16±0.05 / 20 / 638.8±72.6
LSP | FaNCL-acc | 4.51±0.16 / 5 / 0.8±0.1 | 2.93±0.07 / 10 / 2.9±0.1 | 2.16±0.05 / 20 / 26.6±4.1
TNN | GPG | 4.51±0.16 / 5 / 8.5±2.4 | 2.93±0.07 / 10 / 43.2±5.8 | 2.16±0.05 / 20 / 640.7±59.1
TNN | FaNCL-acc | 4.51±0.16 / 5 / 0.8±0.1 | 2.93±0.07 / 10 / 2.9±0.1 | 2.16±0.05 / 20 / 26.9±2.7
TABLE 8
PSNR (in dB) and CPU time (in seconds) on the video background removal experiment. The PSNRs for all the input videos are 16.47dB.

regularizer | method | bootstrap: PSNR / time | campus: PSNR / time | escalator: PSNR / time | hall: PSNR / time
nuclear norm | APG | 23.07±0.02 / 524±84 | 22.47±0.02 / 101±6 | 24.01±0.01 / 594±86 | 24.25±0.03 / 553±85
capped-ℓ1 | GPG | 23.81±0.01 / 3122±284 | 23.21±0.02 / 691±43 | 24.62±0.02 / 5369±238 | 25.22±0.03 / 4841±255
capped-ℓ1 | FaNCL-acc | 24.05±0.01 / 193±18 | 23.24±0.02 / 53±5 | 24.68±0.02 / 242±22 | 25.22±0.03 / 150±10
LSP | GPG | 23.93±0.03 / 1922±111 | 23.61±0.02 / 324±27 | 24.57±0.01 / 5053±369 | 25.37±0.03 / 2889±222
LSP | FaNCL-acc | 24.30±0.02 / 189±15 | 23.99±0.02 / 69±8 | 24.56±0.01 / 168±15 | 25.37±0.03 / 144±9
TNN | GPG | 23.85±0.03 / 1296±203 | 23.12±0.02 / 671±21 | 24.60±0.01 / 4091±195 | 25.26±0.04 / 4709±367
TNN | FaNCL-acc | 24.12±0.02 / 203±11 | 23.14±0.02 / 49±5 | 24.66±0.01 / 254±30 | 25.25±0.06 / 148±11
Note that IRNN and active subspace selection cannot be used here: their objectives are of the form "smooth function plus low-rank regularizer", but RPCA also has a nonsmooth ℓ1-regularizer. Similarly, AIS-Impute is only for matrix completion. Moreover, FaNCL, which has been shown to be slower than FaNCL-acc, is not compared.

Results are shown in Table 7. The accuracies in locating the sparse support are always 100% for all methods, and thus are not shown. Moreover, while both the convex and nonconvex regularizers can perfectly recover the matrix rank and sparse locations, the nonconvex regularizers have lower NMSE's. As in matrix completion, FaNCL-acc is much faster; the larger the matrix, the higher the speedup.
6.2.2 Background Removal on Videos
In this section, we use RPCA for background removal in videos. Four benchmark videos in [7], [8] are used (Table 9), and example frames are shown in Figure 6. As in [7], the image background is considered low-rank, while the foreground moving objects contribute to the sparse component.

TABLE 9
Videos used in the experiment.

video | #pixels / frame | total #frames
bootstrap | 19,200 | 9,165
campus | 20,480 | 4,317
escalator | 20,800 | 10,251
hall | 25,344 | 10,752

Fig. 6. Example image frames in the videos: (a) bootstrap; (b) campus; (c) escalator; (d) hall.

Given a video with n image frames, each m1 × m2 frame is first reshaped as an m-dimensional column vector (where m = m1 m2), and then all the frames are stacked together to form an m × n matrix. The pixel values are normalized to [0, 1], and Gaussian noise from N(0, 0.15) is added. The experiment is repeated five times. For performance evaluation, we use the commonly used peak signal-to-noise ratio [6]: PSNR = −10 log10((1/mn)‖X − O‖²_F), where X ∈ R^{m×n} is the recovered video and O ∈ R^{m×n} is the ground truth.

Results are shown in Table 8. As can be seen, the nonconvex regularizers lead to better PSNR's than the convex nuclear norm. Moreover, FaNCL-acc is much faster than GPG. Figure 7 shows PSNR vs CPU time on the bootstrap and campus data sets. Again, FaNCL-acc converges to a higher PSNR much faster. Results on hall and escalator are similar.

Fig. 7. PSNR vs CPU time on the bootstrap ((a)) and campus ((b)) videos.
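The preprocessing and evaluation described in this subsection can be sketched in MATLAB as follows. The variable frames is a placeholder for the loaded video, and interpreting N(0, 0.15) as standard deviation 0.15 is our assumption.

% Video -> matrix preprocessing and PSNR evaluation (Section 6.2.2 setup, sketched).
% frames: m1 x m2 x n array of grayscale frames (placeholder for the loaded video).
[m1, m2, n] = size(frames);
Oclean = reshape(double(frames), m1 * m2, n);        % each frame becomes one column
Oclean = Oclean / max(Oclean(:));                    % normalize pixel values to [0, 1]
O = Oclean + 0.15 * randn(size(Oclean));             % add Gaussian noise (scale 0.15 assumed)
% After recovering the low-rank background X (e.g., with Algorithm 7):
% psnr_db = -10 * log10(mean((X(:) - Oclean(:)).^2));  % PSNR = -10*log10(||X - O||_F^2 / (m*n))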
6.3 Parallel Matrix Completion
In this section, we experiment with the proposed parallel algorithm in Section 5 on the Netflix and Yahoo data sets (Table 4). We do not compare with the factorization-based algorithms [44], [45], as they have inferior performance (Section 6.1). The machine has 12 cores, and we use one thread for each core. As suggested in [44], we randomly shuffle all the matrix columns and rows before partitioning. We use the LSP penalty (with θ = √λ) and fix the total number of iterations to 250. The hyperparameters are the same as in Section 6.1.3. Experiments are repeated five times.
Convergence of the objective for a typical run is shown
in Figure 8. As we have multiple threads running on a single
CPU, we report the clock time instead of CPU time. As can
be seen, the accelerated algorithms are much faster than the
non-accelerated ones, and parallelization provides further
speedup.
Fig. 8. Objective value vs clock time for the sequential/parallel versions of FaNCL on the netflix ((a)) and yahoo ((b)) data sets.
Figure 9 shows the speedup with different numbers of threads. As can be seen, the parallelized variants scale well with the number of threads. In particular, scaling is better on yahoo: the observed entries in its partitioned data submatrices are distributed more evenly, which improves the performance of parallel algorithms [48]. Another observation is that the speedup can be super-linear. As discussed in [49], in performing multiplications with a large sparse matrix, a significant amount of time is spent on indexing its nonzero elements. When the matrix is partitioned, each submatrix becomes smaller and easier to index, and the memory cache also becomes more effective.

Fig. 9. Speedup vs the number of threads for parallel FaNCL. The red dashed line indicates linear speedup.
7 CONCLUSION
In this paper, we considered the challenging problem of
nonconvex low-rank matrix optimization. The key observations are that for the popular low-rank regularizers, the
singular values obtained from the proximal operator can
be automatically thresholded, and the proximal operator
can be computed on a smaller matrix. This allows the
proximal operator to be efficiently approximated by the
power method. We extended the proximal algorithm in
this nonconvex optimization setting with acceleration and
inexact proximal step. We further parallelized the proposed
algorithm, which scales well w.r.t. the number of threads.
Extensive experiments on matrix completion and RPCA
show that the proposed algorithm is much faster than the
state-of-the-art. It also demonstrates that nonconvex lowrank regularizers outperform the standard (convex) nuclear
norm regularizer.
In the parallel setting, typically the observed entries are
non-uniformly distributed in the partitioned matrices, and
so workloads in the different threads are not well balanced.
One future direction is to allow asynchronous updates of
the parallel algorithm. This can help to reduce the waiting
time for threads with light workloads, and makes more
efficient use of the CPU. Moreover, while parallel algorithms
on multicore machines are easier to implement and do
not have communication issues, they are less scalable than
distributed algorithms [50]. To allow further scale-up to
massive data sets, we will consider extending the proposed
algorithms to a distributed computing environment.
REFERENCES
[1] E. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
[2] H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the 23rd Conference on Computer Vision and Pattern Recognition, 2010, pp. 1791–1798.
[3] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2117–2130, 2013.
[4] J. Liu, P. Musialski, P. Wonka, and J. Ye, "Tensor completion for estimating missing values in visual data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208–220, 2013.
[5] C. Lu, J. Tang, S. Yan, and Z. Lin, "Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 829–839, 2016.
[6] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, "Weighted nuclear norm minimization and its applications to low level vision," International Journal of Computer Vision, vol. 121, no. 2, pp. 183–208, 2017.
[7] E. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?" Journal of the ACM, vol. 58, no. 3, p. 11, 2011.
[8] Q. Sun, S. Xiang, and J. Ye, "Robust principal component analysis via capped norms," in Proceedings of the 19th International Conference on Knowledge Discovery and Data Mining, 2013, pp. 311–319.
[9] T.-H. Oh, Y. Tai, J. Bazin, H. Kim, and I. Kweon, "Partial sum minimization of singular values in robust PCA: Algorithm and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 744–758, 2016.
[10] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma, "Robust photometric stereo via low-rank matrix completion and recovery," in Proceedings of the 10th Asian Conference on Computer Vision, 2010, pp. 703–717.
[11] J. Yang, L. Luo, J. Qian, Y. Tai, F. Zhang, and Y. Xu, "Nuclear norm based matrix regression with applications to face recognition with occlusion and illumination changes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 1, pp. 156–171, 2017.
[12] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, "Robust recovery of subspace structures by low-rank representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
[13] S. Ji and J. Ye, "An accelerated gradient method for trace norm minimization," in Proceedings of the 26th International Conference on Machine Learning, 2009, pp. 457–464.
[14] R. Mazumder, T. Hastie, and R. Tibshirani, "Spectral regularization algorithms for learning large incomplete matrices," Journal of Machine Learning Research, vol. 11, pp. 2287–2322, 2010.
[15] Q. Yao and J. Kwok, "Accelerated inexact Soft-Impute for fast large-scale matrix completion," in Proceedings of the 24th International Joint Conference on Artificial Intelligence, 2015, pp. 4002–4008.
[16] X. Zhang, D. Schuurmans, and Y.-L. Yu, "Accelerated training for matrix-norm regularization: A boosting approach," in Advances in Neural Information Processing Systems, 2012, pp. 2906–2914.
[17] C.-J. Hsieh and P. Olsen, "Nuclear norm minimization via active subspace selection," in Proceedings of the 31st International Conference on Machine Learning, 2014, pp. 575–583.
[18] T. Zhang, "Analysis of multi-stage convex relaxation for sparse regularization," Journal of Machine Learning Research, vol. 11, pp. 1081–1107, 2010.
[19] E. Candès, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
[20] J. Fan and R. Li, "Variable selection via nonconcave penalized likelihood and its oracle properties," Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
[21] C. Zhang, "Nearly unbiased variable selection under minimax concave penalty," Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
[22] H. Gui, J. Han, and Q. Gu, "Towards faster rates and oracle property for low-rank matrix estimation," in Proceedings of the 33rd International Conference on Machine Learning, 2016, pp. 2300–2309.
[23] A. Yuille and A. Rangarajan, "The concave-convex procedure," Neural Computation, vol. 15, no. 4, pp. 915–936, 2003.
[24] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye, "A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems," in Proceedings of the 30th International Conference on Machine Learning, 2013, pp. 37–45.
[25] H. Li and Z. Lin, "Accelerated proximal gradient methods for nonconvex programming," in Advances in Neural Information Processing Systems, 2015, pp. 379–387.
[26] C. Lu, C. Zhu, C. Xu, S. Yan, and Z. Lin, "Generalized singular value thresholding," in Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015, pp. 1805–1811.
[27] N. Halko, P.-G. Martinsson, and J. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, vol. 53, no. 2, pp. 217–288, 2011.
[28] Q. Yao, J. Kwok, and W. Zhong, "Fast low-rank matrix learning with nonconvex regularization," in Proceedings of the 15th International Conference on Data Mining, 2015, pp. 539–548.
[29] N. Parikh and S. Boyd, "Proximal algorithms," Foundations and Trends in Optimization, vol. 1, no. 3, pp. 127–239, 2014.
[30] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[31] S. Ghadimi and G. Lan, "Accelerated gradient methods for nonconvex nonlinear and stochastic programming," Mathematical Programming, vol. 156, no. 1-2, pp. 59–99, 2016.
[32] J.-F. Cai, E. Candès, and Z. Shen, “A singular value thresholding
algorithm for matrix completion,” SIAM Journal on Optimization,
vol. 20, no. 4, pp. 1956–1982, 2010.
[33] T. Oh, Y. Matsushita, Y. Tai, and I. Kweon, “Fast randomized
singular value thresholding for nuclear norm minimization,” in
Proceedings of the 28th Conference on Computer Vision and Pattern
Recognition, 2015, pp. 4484–4493.
[34] K.-C. Toh and S. Yun, “An accelerated proximal gradient algorithm
for nuclear norm regularized linear least squares problems,” Pacific
Journal of Optimization, vol. 6, no. 615-640, p. 15, 2010.
[35] Z. Lin, M. Chen, and Y. Ma, “The augmented Lagrange multiplier
method for exact recovery of corrupted low-rank matrices,” School
of EECS, Peking University, Tech. Rep. arXiv:1009.5055, 2010.
[36] R. Larsen, “Lanczos bidiagonalization with partial reorthogonalization,” Department of Computer Science, Aarhus University,
DAIMI PB-357, 1998.
[37] H. Attouch, J. Bolte, and B. Svaiter, “Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms,
forward-backward splitting, and regularized Gauss-Seidel methods,” Mathematical Programming, vol. 137, no. 1-2, pp. 91–129, 2013.
[38] M. Schmidt, N. Roux, and F. Bach, “Convergence rates of inexact
proximal-gradient methods for convex optimization,” in Advances
in Neural Information Processing Systems, 2011, pp. 1458–1466.
[39] J. Hiriart-Urruty, “Generalized differentiability, duality and optimization for problems dealing with differences of convex functions,” Convexity and Duality in Optimization, pp. 37–70, 1985.
[40] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic
Course. Springer, 2004.
[41] Z. Wen, W. Yin, and Y. Zhang, “Solving a low-rank factorization model for matrix completion by a nonlinear successive
over-relaxation algorithm,” Mathematical Programming Computation, vol. 4, no. 4, pp. 333–361, 2012.
[42] Z. Wang, M. Lai, Z. Lu, W. Fan, H. Davulcu, and J. Ye, “Orthogonal
rank-one matrix pursuit for low rank matrix completion,” SIAM
Journal on Scientific Computing, vol. 37, no. 1, pp. A488–A514, 2015.
[43] R. Gemulla, E. Nijkamp, P. Haas, and Y. Sismanis, “Large-scale
matrix factorization with distributed stochastic gradient descent,”
in Proceedings of the 17th International Conference on Knowledge
Discovery and Data Mining, 2011, pp. 69–77.
[44] H.-F. Yu, C.-J. Hsieh, S. Si, and I. Dhillon, “Scalable coordinate descent approaches to parallel matrix factorization for recommender
systems,” in Proceedings of the 12nd International Conference on Data
Mining, 2012, pp. 765–774.
[45] B. Recht and C. Ré, “Parallel stochastic gradient algorithms for
large-scale matrix completion,” Mathematical Programming Computation, vol. 5, no. 2, pp. 201–226, 2013.
[46] J. Demmel, M. Heath, and H. Van Der Vorst, “Parallel numerical
linear algebra,” Acta Numerica, vol. 2, pp. 111–197, 1993.
[47] H. Avron, S. Kale, V. Sindhwani, and S. Kasiviswanathan, “Efficient and practical stochastic subgradient descent for nuclear norm
regularization,” in Proceedings of the 29th International Conference on
Machine Learning, 2012, pp. 1231–1238.
[48] Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin, “A fast parallel
SGD for matrix factorization in shared memory systems,” in
Proceedings of the 7th ACM Conference on Recommender Systems,
2013, pp. 249–256.
[49] G. Goumas, K. Kourtis, N. Anastopoulos, V. Karakasis, and
N. Koziris, “Understanding the performance of sparse matrixvector multiplication,” in Proceedings of the 16th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2008, pp.
283–292.
[50] D. Bertsekas and J. Tsitsiklis, Parallel and Distributed Computation:
Numerical Methods. Athena Scientific, 1997.
[51] D. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
[52] C. Rao, “Separation theorems for singular values of matrices and
their applications in multivariate analysis,” Journal of Multivariate
Analysis, vol. 9, no. 3, pp. 362–377, 1979.
[53] P. Arbenz, “Solving large scale eigenvalue problems,” Department
of Mathematics, ETH Zürich, Lecture Notes, 2010. [Online].
Available: http://people.inf.ethz.ch/arbenz/ewp/
[54] A. Lewis and H. Sendov, “Nonsmooth analysis of singular values,” Set-Valued Analysis, vol. 13, no. 3, pp. 243–264, 2005.
APPENDIX A
PARALLEL FANCL-ACC
Algorithm 12 shows the parallel version of FaNCL-acc. Acceleration is performed at step 6. The first inexact proximal
step is performed at steps 8-18. Step 19 checks whether the
accelerated iterate is accepted. If the condition fails, a second
inexact proximal step is performed at steps 22-32. Note that
the algorithm is equivalent to Algorithm 6, and thus the
convergence analysis in Section 3.6 still holds.
Algorithm 12 FaNCL-acc in parallel: FaNCL-acc-PL.
Input: choose τ > ρ, λ0 > λ, δ > 0 and ν ∈ (0, 1);
1: initialize V0, V1 ∈ R^n as random Gaussian matrices, X0 = X1 = 0 and α0 = α1 = 1;
2: partition X0, X1, PΩ(X0), PΩ(X1) and PΩ(O);
3: start q threads for parallelization;
4: for t = 1, 2, . . . , T do
5:    λt = (λ_{t−1} − λ)ν^t + λ;
6:    ▷ Yt = Xt + ((α_{t−1} − 1)/αt)(Xt − X_{t−1});
7:    ▷ Rt = IndeSpan-PL([Vt, V_{t−1}]);
8:    ▷ A^a_t = Yt − (1/τ) PΩ(Yt − O);
9:    for p = 1, 2, . . . do
10:      ▷ [X̃p, Rt] = ApproxGSVT-PL(A^a_t, Rt, λ/τ);
11:      ▷ ap = F(X̃p);
12:      ▷ at = F(Xt);
13:      ▷ aF = ‖X̃p − Xt‖²_F;
14:      if ap ≤ at − c1 aF then
15:         break;
16:      end if
17:   end for
18:   ▷ X^a_{t+1} = X̃p;
19:   if F(X^a_{t+1}) ≤ F(Xt) − (δ/2)‖X^a_{t+1} − Yt‖²_F then
20:      ▷ X_{t+1} = X^a_{t+1};
21:   else
22:      ▷ At = Xt − (1/τ) PΩ(Xt − O);
23:      for p = 1, 2, . . . do
24:         ▷ [X̃p, Rt] = ApproxGSVT-PL(At, Rt, λ/τ);
25:         ▷ bp = F(X̃p);
26:         ▷ bt = F(Xt);
27:         ▷ bF = ‖X̃p − Xt‖²_F;
28:         if bp ≤ bt − c1 bF then
29:            break;
30:         end if
31:      end for
32:      ▷ X_{t+1} = X̃p;
33:   end if
34:   α_{t+1} = (1/2)(√(4αt² + 1) + 1);
35: end for
36: return X_{T+1}.
APPENDIX B
THE CHECKING CONDITION IN [37]
The condition in [37] to accept an approximate X̃p is: ∃A ∈ ∂r̆(X̃p) − ∂r̃(X̃p), where r̆ and r̃ are two convex functions, such that ‖A + ∇f(X̃p)‖²_F ≤ b‖X̃p − X‖²_F for some constant b > 0. Using the LSP regularizer as an example and Proposition 3.6, we can decompose r(X̃p) as r̆(X̃p) − r̃(X̃p), where

r̆(X̃p) = (1/θ) ‖X̃p‖∗,
r̃(X̃p) = Σ_{i=1}^{n} [σi(X̃p)/θ − log(1 + σi(X̃p)/θ)].

Let the SVD of X̃p be UΣV⊤, and assume that X̃p has k singular values larger than 0. Let Uk (resp. Vk) be the matrix containing the first k columns of U (resp. V). Then,

∂r̆(X̃p) = (1/θ)(Uk Vk⊤ + B),

where B ∈ {C : Uk⊤C = 0, CVk = 0, and σ1(C) ≤ 1}. Let c = [ci] with ci = 1/θ − 1/(σi(X̃p) + θ). Then,

∂r̃(X̃p) = U Diag(c) V⊤.

Thus, a full SVD of X̃p is needed, which is expensive and impractical for large matrices.
A PPENDIX C
P ROOFS
C.1
Proposition 3.1
For simplicity of notations, we write σi (Z) as σi . First,
we introduce the definition of super-gradient for a concave
function and two lemmas.
Definition 2 ( [51]). For a concave function f , its super-gradient
ˆ ≡ ∂ (−f ).
is given by g ∈ ∂f
Lemma C.1 ( [51]). (i) inf g∈∂ˆr̂(y) g ≥ 0; (ii) Assume that yj ≥
yi ≥ 0. Then, supgj ∈∂ˆr̂(yj ) gj ≤ inf gi ∈∂ˆr̂(yi ) gi .
Lemma C.2. (i) yi∗ − max (σi − µgi , 0) = 0, where gi ∈
∂ˆr̂(yi∗ ); (ii) if yi∗ > 0, then yi∗ increases with σi .
Proof. (Part (i)): Let gi ∈ ∂ˆr̂(yi∗ ). From the first-order optimality condition of (4), consider the two possibilities:
(a)
(b)
σi + µgi ≤ 0: In other words, the optimal solution is
achieved at the boundary, and yi∗ = 0.
σi + µgi > 0: We have 0 = yi∗ − σi + µgi , and yi∗ > 0.
Combining these two cases, the relationship between yi∗
and σi can be expressed as
yi∗ = max (σi − µgi , 0) .
(12)
(Part (ii)): Assume that yi∗ > 0. Then, (12) becomes
yi∗ = σi − µgi .
(13)
Let σi becomes larger as σj . according to (13), we have two
possibilities for its corresponding yj∗ , i.e.,
•
•
yj∗ > yi∗ : Then, supgj ∈∂ˆr̂(y∗ ) gj ≤ inf gi ∈∂ˆr̂(y∗ ) gi from
j
i
Lemma C.1. Together with the fact that σj > σi , there
exists a yj∗ which is not smaller than yi∗ to make (12)
hold.
yj∗ ≤ yi∗ : Then, inf gj ∈∂ˆr̂(y∗ ) gj ≥ supgi ∈∂ˆr̂(y∗ ) gi from
j
i
Lemma C.1. However, such a solution may not exist
(e.g., when r̂(α) = α).
Thus, while there can be multiple solutions to ensure (13),
the first case must exist. We take the largest solution of all
possible candidates. Thus, if σi gets larger, yi∗ also becomes
larger.
Proof of Proposition 3.1. From Lemma C.2, we have
and
if 0 ≤ σi ≤ µ
0
p∗i = σi − µ if µ < σi ≤ µ + θ .
θ
otherwise
qi∗
0 = yi∗ − max (σi − µgi , 0) .
We can see that y ∗ = 0 once σi − µgi ≤ 0. However,
if σi becomes smaller, σi − µgi will reach 0 before σi
reaches zero. This comes from two facts. First, yi∗ becomes
smaller as σi gets smaller (Lemma C.2), but inf gi ∈∂ˆr̂(y∗ ) gi
i
will not become smaller (Lemma C.1). Second, we have
limy→0+ inf g∈∂ˆr̂(y) g > 0. An illustration of the relationships
among σi , yi∗ and gi is shown in the following figure. Thus,
there exists γ > 0 such that once σi ≤ γ , σi − µgi ≤ 0, and
yi∗ becomes 0.
(14)
Let qi∗ = arg minqi >θ h2 (qi ). As h2 is also quadratic and
cannot be θ, we have
(
no solution if 0 ≤ σi ≤ θ
∗
qi =
.
(15)
σi
otherwise
Note that when σi ∈ [0, θ], there is no solution to qi∗ , as qi∗
can arbitrarily close to θ. Since h1 (θ) = h2 (θ), the possibility
for θ = arg minyi ≥0 h(yi ) is covered by arg min0≤pi ≤θ h1 .
Thus, (14) and (15) have covered all possibilities of yi∗ . Using
them, we have
1)
If θ ≤ µ, then
h1 (0)
min (h (0), h (σ ))
1
2
i
min h =
min (h1 (σi − µ), h2 (σi ))
yi ≥0
min (h1 (θ), h2 (σi ))
0 ≤ σi ≤ θ
θ < σi ≤ µ
.
µ < σi ≤ µ+θ
σi > µ+θ
In order to get yi∗ = 0, we need
min (h1 (0), h2 (σi )) = h1 (0),
which leads to
p
σi ≤ 2µθ.
√
Thus, if 0 ≤ σi ≤ min
2µθ, µ , then yi∗ = 0.
C.2
2)
Corollary 3.2
In this section, we show how to derive the threshold γ for
the capped-`1 penalty. Derivations for the other penalties
can be obtained similarly.
C.2.1
Capped-`1 Penalty
Proof. Note that problem (4) considers each singular value
separately. For simplicity of notations, let σi denote σi (Z).
For the ith singular value, let
h(yi ) ≡
1
2
(yi − σi ) + µ min (yi , θ) .
2
Thus,
Finally, combining the above
two cases, we can con√
clude that√ once σi ≤ min( 2θµ, µ), then yi∗ = 0. Thus,
γ = min( 2θµ, µ).
Proposition 3.3
Proof. First, we introduce the following theorem.
,
where
1
(pi − σi )2 +µpi ,
2
1
h2 (qi ) = (qi − σi )2 + µθ.
2
Note that h1 is quadratic. There are only three possibilities
for p∗i = arg min0≤pi ≤θ h1 (pi ), i.e.,
1 2
if p∗i = 0
2 σi
1 2
min h1 (pi ) = µσi − 2 µ
if p∗i = σi − µ ,
0≤pi ≤θ
1
2
∗
2 (θ − σi ) + µσi if pi = θ
h1 (pi ) =
0 ≤ σi ≤ µ
µ < σi ≤ θ
.
θ < σi ≤ µ+θ
σi > µ+θ
Thus, if 0 ≤ σi ≤ µ, then we have yi∗ = 0.
C.3
(
arg min0≤yi ≤θ h1 (yi )
arg min h(yi ) =
yi ≥0
arg minyi >θ h2 (yi )
If θ > µ, then
h1 (0)
h (σ − µ)
1 i
min h =
yi ≥0
min
(h1 (σi − µ), h2 (σi ))
min (h1 (θ), h2 (σi ))
Theorem C.3 (Separation theorem [52]). Let A ∈ Rm×n and
B ∈ Rm×r with B> B = I. Then
σi B> A ≤ σi (A), for i = 1, . . . , min(r, n).
Let the SVD of Z be UΣV> . Z can then be rewritten as
Σk̂
Z = [Uk̂ ; U⊥ ]
[Vk̂ ; V⊥ ]> ,
(16)
Σ⊥
where Uk̂ contains the k̂ leading columns of U, and U⊥ the
remaining columns. Similarly, Σk̂ (resp. Vk̂ ) contains the k̂
leading eigenvalues (resp. columns) of Σ (resp. V). Hence,
σi Q> Z
= max ũ>
Q> Z ṽi .
(17)
i
ũi ,ṽi
Let
ũi = Q> ui
and
ṽi = vi ,
(18)
u>
i Zvi
σi (Z),
>
p
>
>
Then, kBp B>
p − Uk Uk kF ≤ η kB0 B0 − Uk Uk kF , where
η ∈ (0, 1) is a constant.
Proof. At the pth iteration of Algorithm 4, inside
Algorithm 2 (step 3), since Z = A and R = Ṽp−1 , we
have
where ui (resp. vi ) is the ith column of U (resp. V).
Q> Z ṽi = u>
QQ> Zvi
ũ>
i
i
=
=
Q1 = QR(AṼp−1 ) = Bp−1 .
(19)
(20)
where (19) is due to span(Uk̂ ) ⊆ span(Q). From Theorem C.3, by substituting Q = B and A = Z, we have
σi (Q> Z) ≤ σi (Z). Combining with (20), we obtain that (18)
is the optimal solution of (17). Thus, the rank-k̂ SVD of Q> Z
is (Q> Uk̂ )Σk̂ Vk̂> , with the corresponding left and right
singular vectors contained in Q> Uk̂ and Vk̂ , respectively.
Again, by Theorem C.3, we have
σk̂+1 Q> Z ≤ σk̂+1 (Z) ≤ γ.
Besides, using (16),
>
ṽi .
σi Q> Z =max ũ>
Q> Uk̂ Σk̂ Vk̂> +Q> U⊥ Σ⊥ V⊥
i
Then, for Ṽp , inside Algorithm 3 (step 2), we have
span(Ṽp ) = span(A> Q).
Thus,
span(AṼp ) = span(A(A> Q))
= span(A(A> QJ ))
= span(YJ+1 ),
= QR(AṼp ) = Bp .
>
>
>
kBp B>
p − Uk Uk kF = kQJ+1 QJ+1 − Uk Uk kF
>
≤ αJ kQ1 Q>
1 − Uk Uk kF
>
proxµr (Q Z)
=
>
proxµr (Q Uk̂ Σk̂ Vk̂> + Q> U⊥ Σ⊥ V⊥
)
>
>
>
>
)
proxµr (Q Uk̂ Σk̂ Vk̂ ) + proxµr (Q U⊥ Σ⊥ V⊥
>
>
proxµr (Q Uk̂ Σk̂ Vk̂ ).
>
= ηkBp−1 B>
p−1 − Uk Uk kF ,
>
(22)
where η = αJ ∈ (0, 1). Thus,
>
p
>
>
kBp B>
p − Uk Uk kF ≤ η kB0 B0 − Uk Uk kF .
(23)
where (22) follows from that Q> Uk̂ (resp. Vk̂ ) is orthogonal to QU⊥ (resp. V⊥ ). (21) shows that there are
only k̂ singular values in Q> Z larger than γ . Thus,
>
proxµr (Q> U⊥ Σ⊥ V⊥
) = 0 and we get (23). Finally,
Qproxµr (Q> Z) = Q Q> Uk̂ proxµr (Σk̂ )Vk̂>
= Uk̂ proxµr (Σk̂ )Vk̂>
(24)
= proxµr (Z),
(25)
where (24) comes from span(Uk̂ ) ⊆ span(Q); (25) comes
from that rank-k̂ SVD of Z is Uk̂ Σk̂ Vk̂> and Z only has k̂
singular values larger than γ .
C.4
(30)
Note that C = Q1 in Lemma C.4. Together with (27) and
(30), we have
Then,
=
(29)
QJ+1 = QR(YJ+1 )
ũ,ṽ
=
(28)
where (28) comes from the fact that Q is returned after J
iterations of Algorithm 2; and (29) from the definition of
Yj+1 at step 4 in Algorithm 2. Thus,
ũi ,ṽi
The first k̂ singular values are from the term Q> Uk̂ Σk̂ Vk̂ .
Hence,
>
ṽ ≤ γ. (21)
σk̂+1 Q> Z = max ũ> Q> U⊥ Σ⊥ V⊥
(27)
Proposition 3.5
Proof. First, we introduce the following Lemmas.
Lemma C.4 ( [27], [53]). In Algorithm 2, let the SVD of Z be
ŪΣ̄V̄> , and Ūk contain the first k columns of Ū. We have
>
j−1
kQj Q>
kCC> − Ūk Ū>
j − Ūk Ūk kF ≤ α
k kF ,
where α = σk+1 (Z)/σk (Z) ∈ (0, 1) and C = QR(ZR).
Lemma C.5. In Algorithm 4, let the rank-k SVD of A be
Uk Σk Vk> , and
Bp = QR(AṼp ).
(26)
(Proof of Proposition 3.5) For Bp in (26), we have
limp→∞ Bp = Uk from Proposition C.5 where Uk comes
from rank-k SVD of A. As k ≥ k̂A , span(Uk̂A ) ⊆ span(Uk ).
Then, from Proposition 3.3, we have
Uk prox λ r (U>
k A) = prox λ r (A).
τ
τ
Thus, limp→∞ X̃p = prox λ r (A).
τ
C.5
Proposition 3.6
Proof. First, we introduce Lemma C.6.
Pm
Lemma C.6 ( [54]). Let φ(X) =
i=1 f (σi (X)). If f is
convex, φ is also convex on X.
For r̂ in Assumption A3, it can be rewritten as r̂(α) =
r̂1 (α) − r̂2 (α), where r̂1 (α) = κα (for some constant κ) and
r̂2 (α) = κα − r̂(α). Obviously, both r̂1 and r̂2 are convex.
Define
m
m
X
X
r̆(X) =
r̂1 (σi (X)), and r̃(X) =
r̂2 (σi (X)).
i=1
i=1
From Lemma C.6, both r̆ and r̃ are convex. Thus, r can
also be written as a difference of convex functions: r(X) =
r̆(X) − r̃(X).
C.6
C.9
Proposition 3.7
Proof. From step 5 of Algorithm 5 (which ensures (5)), we
have
F (Xt+1 ) ≤ F (Xt ) − c1 kXt+1 − Xt k2F .
Proposition 3.10
Proof. Consider the two cases:
1)
Step 8 in Algorithm 6 is performed: Then,
2)
δ
F (Xt+1 ) ≤ F (Xt ) − kXt+1 − Yt k2F .
2
Step 10 is performed: Then,
Summing this from t = 1 to T , we have
c1
T
X
kXt+1 − Xt k2F ≤ F (X1 ) − F (XT +1 )
F (Xt+1 ) ≤ F (Xt ) − c1 kXt+1 − Xt k2F .
t=1
≤ F (X1 ) − inf F.
(31)
As F is bounded from below (Assumption A2),
a1 ≡ F (X1 ) − inf F
is a positive constant. Let T → ∞, we have
∞
X
kXt+1 − Xt k2F ≤
t=1
a1
.
c1
(32)
lim
f (X) → ∞,
F (X1 ) − F (XT +1 )
X δ
X
≥
kXt+1 − Yt k2F +
c1 kXt+1 − Xt k2F .
2
1
2
t∈ΩT
Corollary 3.8
Proof. Combining (31) and (32), we have
min kXt+1 − Xt k2F
t=1,...,T
≤
T
1X
kXt+1 − Xt k2F
T t=1
≤
∞
1X
kXt+1 − Xt k2F
T t=1
≤
F (X1 ) − inf F
.
c1 T
lim
kXkF →∞
As {Xtj } is a subsequence of {Xt } with limit point X∗ ,
lim Xtj +1 = InexactPS lim Xtj , lim Rtj , (33)
tj →∞
lim kXt+1 − Xt k2F = 0,
X
3)
Both |Ω1∞ | and |Ω2∞ | are infinite: As in the above
two cases, {Xt } is bounded when either of |Ω1∞ |
and |Ω2∞ | is infinite.
Combining the above, {Xt } generated from Algorithm 6 is
bounded and has at least one limit point.
t→∞
which implies
tj →∞
|Ω1∞ | is infinite but |Ω2∞ | is finite: For tj ∈ Ω1∞ , note
that, F (Xtj +1 ) ≤ F (Xtj ) due to (35) and (36). From
Assumption A2,
then the sequence {F (Xtj )} is bounded. Again,
from Assumption A2, we have (40), then {Xtj } is
also bounded which has at least one limit point [51].
From (32), we have
lim Xtj +1 = lim Xtj = X∗ ,
(40)
inf F (X) > −∞,
Lemma C.7 ( [24], [37]). If X = prox λ r (X − τ1 ∇f (X)), then
τ
X is a critical point of (1).
tj →∞
f (X) = ∞,
which indicates that maxtj =1,...,∞ kXtj kF < ∞.
Together with (39), the sequence {Xt } is bounded,
which has at least one limit point [51].
Proof. First, we introduce Lemma C.7.
tj →∞
|Ω1∞ | is finite but |Ω2∞ | is infinite: For tj ∈ Ω2∞ , we
have from (38)
X
a1
kXtj +1 − Xtj k2F ≤ .
(39)
c1
2
From Assumption A2, we also have
Theorem 3.9
tj →∞
(38)
t∈Ω∞
tj ∈Ω∞
2)
C.8
(37)
where a1 = F (X1 ) − inf F > 0 is a constant. Consider the
three cases:
1)
C.7
(36)
t∈ΩT
t∈Ω∞
which implies that maxt=1,...,∞ kXt kt < ∞. Together with
(32), {Xt } is a bounded sequence with at least one limit
point [51].
(35)
Partition the iterations {1, . . . , T } into two sets Ω1T and Ω2T ,
such that t ∈ Ω1T if step 8 is performed, and t ∈ Ω2T if step 10
is performed. Sum (35) and (36) from t = 1 to T ,
As F is bounded from below (Assumption A2),
X
Xδ
kXt+1 − Yt k2F +
c1 kXt+1 − Xt k2F ≤ a1 .
2
2
1
From Assumption A2, we also have
kXkF →∞
(34)
for {Xtj }. Combining (33) and (34), we have
lim X∗ = InexactPS lim Xtj , lim Rtj ,
tj →∞
tj →∞
tj →∞
= InexactPS X∗ , lim Rtj .
C.10
Corollary 3.11
Proof. Let c2 = min(δ/2, c1 ). From (37), we have
F (X1 ) − F (XT +1 )
X
X
≥ c2 (
kXt+1 − Yt k2F +
kXt+1 − Xt k2F )
t∈Ω1T
tj →∞
Thus, X∗ = prox λ r (X∗ − τ1 ∇f (X∗ )) holds by the assumpτ
tion. From Lemma C.7, X∗ is a critical point of (1).
= c2
T
X
t=1
kXt+1 − Ct k2F .
t∈Ω2T
(41)
Thus,
min kXt+1 − Ct k2F ≤
T
1X
kXt+1 − Ct k2F
T t=1
≤
∞
1X
kXt+1 − Ct k2F
T t=1
t=1,...,T
Combining (45) and (46), we have
lim Xtj +1 = InexactPS lim Ytj , lim Rtj
tj →∞
tj →∞
tj →∞
= InexactPS X∗ , lim Rtj
tj →∞
= X∗ .
F (X1 ) − inf F
≤
,
c2 T
Thus, by the assumption, we also have
1
∇f (X∗ )).
τ
From Lemma C.7, X∗ is also a critical point of (1).
X∗ = prox λ r (X∗ −
where the last inequity comes from (41).
C.11
τ
Theorem 3.12
Proof. Partition the iterations {1, . . . , ∞} into two sets Ω1∞
and Ω2∞ , such that t ∈ Ω1∞ if step 8 is performed, and t ∈
Ω2∞ if step 10 is performed. Consider the three cases:
1)
|Ω1∞ | is finite but |Ω2∞ | is infinite: Let {Xtj } be
a subsequence of {Xt } where t ∈ Ω2∞ , and
limtj →∞ Xtj = X∗ . From (39), we have
lim Xtj = lim Xtj +1 = X∗ .
tj →∞
tj →∞
(42)
Besides,
lim Xtj +1 = InexactPS
tj →∞
3)
C.12
1)
Steps 10 and 11 are performed: Then,
2)
F (Xt+1 , St+1 ) ≤ F (Xt , St )
(47)
δ
− (kXt+1 − YtX k2F + kSt+1 − YtS k2F ).
2
Steps 13 and 14 are performed: Then,
(43)
tj →∞
= X∗ .
Proposition 4.1
Proof. Consider the two cases:
tj →∞
Combining (42) and (43), we have
lim Xtj +1 = InexactPS lim Xtj , lim Rtj
tj →∞
tj →∞
tj →∞
= InexactPS X∗ , lim Rtj
Both |Ω1∞ | and |Ω2∞ | are infinite: From the above
two cases, we can see that the limit point X∗ is also
a critical point of (1) when either |Ω1∞ | or |Ω2∞ | is
infinite.
Thus, limit points of {Xt } are also critical points of (1).
lim Xtj , lim Rtj .
tj →∞
F (Xt+1 , St+1 ) ≤ F (Xt , St )
−(c1 kXt+1 −
ΘT =
1
X∗ = prox λ r (X∗ − ∇f (X∗ )).
τ
τ
Xδ
t∈Ω1T
+
From Lemma C.7, X∗ is also a critical point of (1).
|Ω1∞ | is infinite but |Ω2∞ | is finite: Let {Xtj } be
a subsequence of {Xt } where t ∈ Ω1∞ , and
limtj →∞ Xtj = X∗ . From (37), we have
X δ
kXt+1 − Yt k2F < ∞
2
1
2
X
τ −ρ
(c1 kXt+1 − Xt k2F +
kSt+1 − St k2F ).
2
2
t∈ΩT
Summing (47) and (48) from t = 1 to T , we have
F (X1 , S1 ) − F (XT +1 , ST +1 ) ≥ ΘT .
(49)
As F is bounded from below (Assumption A2), we have
Θ∞ ≤ a2 ,
which indicates
(50)
where a2 = F (X1 , S1 ) − inf F . We consider the three cases:
lim Xtj +1 − Ytj = 0.
tj →∞
(44)
From (44), we have
lim Ytj = lim Xtj +1 = X∗ .
tj →∞
(45)
Besides,
lim Xtj +1 = InexactPS
1)
|Ω1∞ | is finite but |Ω2∞ | is infinite; For tj ∈ Ω2∞ ,
X
a2
kXtj +1 − Xtj k2F ≤ ,
c1
tj ∈Ω2∞
X
2a2
kStj +1 − Stj k2F ≤
.
τ
−ρ
2
tj ∈Ω∞
tj →∞
(48)
−ρ
2
kSt+1 − St kF ).
2
(kXt+1 − YtX k2F + kSt+1 − YtS k2F )
t∈Ω∞
tj →∞
τ
Xt k2F +
Partition {1, . . . , T } into two sets as Ω1T and Ω2T , where
t ∈ Ω1T if steps 10-11 are performed; otherwise, t ∈ Ω2T (and
steps 13-14 are performed). Let
Thus, by the assumption, we also have
2)
lim Ytj , lim Rtj .
tj →∞
tj →∞
(46)
Again from Assumption A2, we have
lim
kXkF →∞ or kSkF →∞
f (X, S) → ∞.
(51)
Thus, maxtj =1,...,∞ k[Xtj , Stj ]kF < ∞, and the
sequence {[Xtj , Stj ]} is bounded with at least one
limit point [51].
2)
3)
|Ω1∞ | is infinite but |Ω2∞ | is finite: For tj ∈ Ω1∞ ,
from (47) and (48), we have F (Xtj +1 , Stj +1 ) ≤
F (Xtj , Stj ). As, F is bounded from below (Assumption A2), the sequence {F ([Xtj , Stj ])} must be
bounded. Besides, again from Assumption A2, we
have (51), which indicates the sequence [Xtj , Stj ])
must be bounded with at least one limit point [51].
|Ω1∞ |
Partition {1, . . . , ∞} into two sets as Ω1∞ and Ω2∞ , where
t ∈ Ω1∞ if steps 10-11 are performed; otherwise, t ∈ Ω2∞ (and
steps 13-14 are performed), we consider three cases here.
1)
|Ω1∞ | is finite but |Ω2∞ | is infinite: Let {[Xtj , Stj ]} be
a subsequence of {[Xt , St ]} where t ∈ Ω2∞ , and
lim Xtj , Stj = [X∗ , S∗ ] .
tj →∞
From (49), we have
∞
X
|Ω2∞ |
Both
and
are infinite: As in the above
two cases, {[Xt , St ]} is bounded with at least one
limit point once |Ω1∞ | or |Ω2∞ | is infinite.
kXt+1 − Xt k2F < ∞,
t=1
∞
X
kSt+1 − St k2F < ∞.
t=1
Thus, the sequence {[Xt , St ]} generated from Algorithm 6
is bounded and has at least one limit point.
These indicate
lim Xtj +1 − Xtj = 0,
(53)
lim Stj +1 − Stj = 0.
(54)
tj →∞
C.13
Corollary 4.2
tj →∞
Proof. Let c2 = min(δ/2, c1 ). First, we have
Xδ
t∈Ω1T
+
2
(kXt+1 −
YtX k2F
+ kSt+1 −
From (53), we have
YtS k2F )
lim Xtj +1 = lim Xtj = X∗ .
tj →∞
X
τ −ρ
(c1 kXt+1 − Xt k2F +
kSt+1 − St k2F )
2
2
T
X
k[Xt+1 , St+1 ] − Ct k2F .
(52)
t=1
tj →∞
Together with (49) and (50), we have
= X∗ .
min k[Xt+1 , St+1 ] − Ct k2F
Thus,
t=1,...,T
1
∇X f (X∗ , S∗ ))
(56)
τ
holds by the assumption. Then, the proximal operator is always exact for S. Using (54), we have
T
1X
≤
k[Xt+1 , St+1 ] − Ct k2F
T t=1
≤
∞
X
X∗ = prox λ r (X∗ −
τ
k[Xt+1 , St+1 ] − Ct k2F
lim Stj +1 = lim prox µ g (Stj −
t=1
tj →∞
F (X1 , S1 ) − inf F
,
≤
c2 T
τ
Definition 3 ( [39]). If X and S satisfy
0 ∈ ∇X f (X, S)+λ (∂ r̆(X)−∂ r̃(X)) ,
0 ∈ ∇S f (X, S) + λ (∂ğ(S) − ∂g̃(S)) ,
1
∇S f (Xtj , Stj ))
τ
1
∇S f (X∗ , S∗ ))
τ
(57)
Combining with (56) and (57), [X∗ , S∗ ] is a critical
point of (11) by using Lemma C.8.
2)
|Ω1∞ | is infinite but |Ω2∞ | is finite: Let {[Xtj , Stj ]} be
a subsequence of {[Xt , St ]} where t ∈ Ω1∞ , and
lim Xtj , Stj = [X∗ , S∗ ] .
tj →∞
From (49), we have
X
kXtj +1 − YtXj k2F ≤ ∞,
tj ∈Ω2∞
then [X, S] is a critical point of F .
X
Lemma C.8 ( [37]). If X and S satisfy
1
X = prox λ r (X− ∇X f (X, S)),
τ
τ
1
S = prox ν g (S − ∇S f (X, S)),
τ
τ
then [X, S] is a critical point of F .
τ
= S∗
Theorem 4.3
Proof. Let g = ğ + g̃ be the difference of convex decomposition of g . As two blocks of variables are involved, its critical
points are defined as follows.
tj →∞
= prox µ g (S∗ −
where the last inequality comes from (52).
C.14
(55)
Combing (53) and (55), we have
lim Xtj +1 = InexactPS lim Xtj , lim Rtj
tj →∞
tj →∞
tj →∞
= InexactPS X∗ , lim Rtj
t∈ΩT
≥ c2
tj →∞
kStj +1 − YtSj k2F ≤ ∞,
tj ∈Ω2∞
and then
lim Xtj +1 − YtXj = 0,
(58)
lim Stj +1 − YtSj = 0.
(59)
tj →∞
tj →∞
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, SUBMITTED
C.16
Thus,
lim Xtj +1 = lim YtXj = X∗ .
tj →∞
(60)
tj →∞
Combing (58) and (60), we have
lim Xtj +1 = InexactPS lim YtXj , lim Rtj
tj →∞
tj →∞
tj →∞
= InexactPS X∗ , lim Rtj
tj →∞
1
∇X f (X∗ , S∗ ))
(61)
τ
τ
holds by the assumption. Then, the proximal operator is always exact for S. Using (59),
X∗ = prox λ r (X∗ −
lim Stj +1 = lim prox µ g (YtSj
τ
t →∞
t →∞
j
j
= prox µ g (S∗ −
τ
1
− ∇S f (YtXj , YtSj ))
τ
1
∇S f (X∗ , S∗ ))
τ
= S∗
(62)
Combining with (61) and (62), [X∗ , S∗ ] is a critical
point of (11) by using Lemma C.8.
Both |Ω1∞ | and |Ω2∞ | are infinite: As above, either
|Ω1∞ | or |Ω2∞ | is infinite, a limit point [X∗ , S∗ ] is a
critical point of (11).
Thus, the limit points of the sequence {[Xt , St ]} are also
critical points of (11).
Proposition 5.1
Proof. As the SVD of A> A is VΣV> , the SVD of A can
1
be written as UΣ 2 V> where U is an orthogonal matrix
containing the span of A. From the construction of w, we
have
AV (Diag(w))
− 21
1
= UΣ 2 (V> V) (Diag(w))
1
= UΣ 2 (Diag(w))
− 12
− 12
.
Consider the two cases.
1)
2)
Proof. Let the SVD of B be ŪΣ̄V̄> . Then,
P> B = (P> Ū)Σ̄V̄> .
Note that
(P> Ū)> P> Ū = Ū> (PP> )Ū
= Ū> (ŪŪ> )Ū = I,
span(P) = span(Ū).
Thus,
C.15
Proposition 5.2
where the second equality comes from
= X∗ .
3)
21
A is of full column rank: Then,
1
1
1
−1
UΣ 2 (Diag(w)) 2 = U Σ 2 Σ− 2 = U,
which contains the span of A.
Assume that A has k columns and its rank is k̄ < k :
Then,
1
−1
UΣ 2 (Diag(w)) 2
1
1
2
= UDiag Σ11
, . . . , Σk̄2k̄ , 0, . . . , 0
−1
−1
Diag Σ112 ,. . ., Σk̄k̄2 , 1,. . ., 1
= [Uk̄ , 0] ,
where Uk̄ contains the first k̄ columns of U. As A is
1
−1
only of rank k̄ , UΣ 2 (Diag(w)) 2 again covers the
span of A.
The Proposition then follows.
>
>
(63)
Thus, the SVD of P B is (P Ū)Σ̄V̄. As a result, we have
V = V̄, Σ = Σ̄. Finally, from U = P> Ū, we have
PU = PP> Ū = Ū(Ū> Ū) = Ū,
where the second equality again comes from (63).
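To make the construction in Proposition 5.1 above concrete, the following sketch checks numerically that AV Diag(w)^{−1/2} spans the column space of A even when A is rank deficient. This is an illustration only: the function name, the tolerance, and the definition of w as the nonzero eigenvalues of A⊤A padded with ones are assumptions inferred from the proof.

```python
import numpy as np

def span_basis_from_gram(A, tol=1e-10):
    """Sketch of the Proposition 5.1 construction (assumed details, see text):
    build a basis of span(A) from the eigen-decomposition of A^T A."""
    # Eigen-decomposition of the Gram matrix A^T A = V Sigma V^T (descending order).
    sigma, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(sigma)[::-1]
    sigma, V = sigma[order], V[:, order]
    # w keeps nonzero eigenvalues and replaces (numerically) zero ones by 1,
    # so that Diag(w)^{-1/2} is well defined.
    w = np.where(sigma > tol, sigma, 1.0)
    U = A @ V @ np.diag(1.0 / np.sqrt(w))
    return U  # nonzero columns form an orthonormal basis of span(A)

A = np.random.randn(8, 2) @ np.random.randn(2, 3)  # rank 2 < 3 columns
U = span_basis_from_gram(A)
print(np.allclose(U @ (U.T @ A), A, atol=1e-6))    # U U^T projects onto span(A)
```

Columns corresponding to (numerically) zero eigenvalues come out as zero vectors, matching the [U_{k̄}, 0] form in the rank-deficient case above.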
| 2 |
When 3D-Aided 2D Face Recognition Meets Deep Learning:
An extended UR2D for Pose-Invariant Face Recognition
arXiv:1709.06532v1 [cs.CV] 19 Sep 2017
Xiang Xu, Pengfei Dou, Ha A. Le, Ioannis A. Kakadiaris∗
Computational Biomedicine Lab
University of Houston
4800 Calhoun Rd. Houston, TX, USA
Abstract
Most face recognition works focus on specific modules or demonstrate a research idea. This paper presents a pose-invariant 3D-aided 2D face recognition system
(UR2D) that is robust to pose variations as large as 90◦ by leveraging deep learning
technology. The architecture and the interface of UR2D are described, and each module is introduced in detail. Extensive experiments are conducted on the UHDB31 and
IJB-A, demonstrating that UR2D outperforms existing 2D face recognition systems
such as VGG-Face, FaceNet, and a commercial off-the-shelf software (COTS) by at
least 9% on the UHDB31 dataset and 3% on the IJB-A dataset on average in face identification tasks. UR2D also achieves state-of-the-art performance of 85% on the IJB-A
dataset by comparing the Rank-1 accuracy score from template matching. It fills a gap
by providing a 3D-aided 2D face recognition system that has comparable results with
2D face recognition systems using deep learning techniques.
Keywords: Face Recognition, 3D-Aided 2D Face Recognition, Deep Learning,
Pipeline
2010 MSC: 00-01, 99-00
Figure 1: Depiction of the existing pose problem from selected samples. The distribution of yaw angles ranges from −90◦ to +90◦ in (T) the constrained dataset UHDB31 [1] and (B) the in-the-wild dataset IJB-A [2].
1. Introduction
Face recognition is an application in which the computer either classifies human
identity according to the face (face identification) or verifies whether two images belong to the same subject (face verification). A common face recognition system has
two steps: enrollment and matching. Specifically, in the enrollment stage, features
are extracted from a facial image or a set of images to build a signature or a template for each subject. The enrollment usually has three steps: (i) face detection,
(ii) face alignment, and (iii) signature generation. In the matching stage, these signatures are compared to obtain a distance for the identification or verification problem.
Recently, face recognition technology has significantly advanced by the deployment
of deep learning technology, especially using Convolutional Neural Networks (CNN).
Pure 2D face recognition (2D-FR) systems have achieved human performance or even
better. DeepFace, proposed by Taigman et al. [3], first reported performance on the
Labeled Faces in the Wild (LFW) standard benchmark [4] that was better than human
efforts. FaceNet, proposed by Schroff et al. [5], used triplet loss to train a deep neural
network using 200 million labeled faces, and obtained a performance of 99.63% verification accuracy on the same dataset. The success of deep learning techniques in face
recognition indeed relies on the following four aspects: (i) a large amount of data either
from public datasets such as WebFace [6] and Ms-Celeb-1M [7], or private datasets,
(ii) advanced network architecture such as VGG [8] and ResNet [9], (iii) discriminative learning approaches such as Triplet Loss [5], Center Loss [10], Range Loss [11],
SphereFace [12], and (iv) regularization methods such as Noisy Softmax [13].
However, face recognition is still not a solved problem in real-world conditions.
Some datasets, such as LFW, use the Viola-Jones face detector, which is not designed to work over the whole pose distribution from −90◦ to +90◦ . In an unconstrained scenario, especially with surveillance cameras, there is a plethora of images with large variations
in head pose, expression, illumination, and occlusions. To overcome these challenges,
a 3D face model can be applied to assist a 2D face recognition. A 3D facial model is
intrinsically invariant to pose and illumination. To use a 3D face model, the model must first be fitted to the facial image and a 3D-2D projection matrix estimated. With the help of the projection matrix and the fitted 3D model, it is easy to rotate the face out-of-plane and align input images from arbitrarily large pose positions to the frontal position for feature extraction and signature matching.
In the last few years, researchers have focused on 2D face recognition from a pure 2D image view and have developed numerous loss-function approaches to learn the
discriminative features from the different poses. A limited number of 3D-aided 2D
face recognition systems (3D2D-FR) have been developed using the 3D model to help
align 2D images. Kakadiaris et al. [14] proposed a pose and illumination invariant
system which frontalizes the face image using an annotated face model (AFM). Hu et al.
[15] proposed a unified 3D morphable model (U-3DMM) which has additional PCA
subspace for perturbation.
To address the problem mentioned above, this paper presents a 3D-aided 2D face
recognition system called UR2D which significantly improves face recognition performance using the AFM and deep learning technology, especially in large pose scenarios.
There is enormous demand [16] for pose-invariant face recognition systems because
frontal face recognition can be considered as a solved problem.
UR2D consists of several independent modules: face detection, landmark detection, 3D model reconstruction, pose estimation, lifting texture, signature generation,
and signature/template matching. Except for the face detection methods, all other modules were developed in the Computational Biomedicine Lab. It provides sufficient tools and
interfaces to use different sub-modules designed in the system. The core code is written in efficient C++, which provides bindings to Python. The system leverages several
open-sourced libraries such as OpenCV [17], glog [18], gflags [19], pugixml [20],
JSON for modern C++ [21], and Caffe [22].
In UR2D, after detecting the face and 2D landmarks from an image, a 3D model is
constructed from a 2D image or several 2D images. By estimating the 3D-2D projection matrix, the correspondence between the 3D model and 2D image can be computed.
Then, a 3D model is used to help frontalize the face. The pose-robust features and occlusion encodings are extracted to represent the face. For matching, we use cosine
similarity to compute the similarity between two signature vectors.
In summary, this paper extends Xu et al. [23] and makes the following contributions:
• A brief survey of recent face recognition pipelines and their modules is provided;
• A pose-invariant 3D-aided 2D face recognition system using deep learning is
developed. The intrinsic value of a 3D model is explored to frontalize the face,
and the pose-invariant features are extracted for representation. We demonstrate
that a 3D-aided 2D face recognition system exhibits performance that is comparable to a 2D-only FR system. Our face recognition results outperform
the VGG-Face, FaceNet, and COTS by at least 9% on the UHDB31 dataset and
3% on the IJB-A dataset on average. In addition, we demonstrate that UR2D can
generate template signatures from multiple images and achieve state-of-the-art
performance of 85% on the IJB-A dataset.
The rest of the paper is organized as follows: modern face recognition systems
are reviewed in Sec. 2. In Sec. 3, the architecture of UR2D and its functionalities
are discussed. In Sec. 4, each module is introduced separately in detail. Detailed
evaluations on the indoor and in-the-wild datasets are reported in Sec. 5.
2. Related work
We divide the existing face-related work into two categories: In Sec. 2.1, we discuss in detail recent related work for each module in the common face recognition pipeline from an academic view. System-level papers about implementations are discussed in Sec. 2.2.
2.1. Modules
Face Detection: Face detection is the first step, as well as the most studied topic, in
the face recognition domain. Zafeiriou et al. [24] presented a comprehensive survey on
this topic. They divided the approaches into two categories: rigid template-based methods, and deformable-parts-models-based methods. In addition to the methods summarized in [24], the approaches of object detection under the regions with a convolutional
neural network (R-CNN) framework [25] have been well developed. Some techniques
can be directly integrated to face detection [26]. Li et al. [27] used a 3D mean face
model and divided the face into ten parts. They joined face proposals into a single
R-CNN model. The approach proposed by Hu and Ramanan [28] explored context
and resolution of images to fine-tune the residual networks (ResNet) [29], which was
demonstrated to detect a face as small as three pixels. In addition to the two-stage face detectors above, which use a proposal-and-classification technique, single-stage detectors have
also been developed. SSD [30] and YOLO [31] classify a fixed grid of boxes and learn
regression functions to map to the objects simultaneously. Lin et al. [32] address the
issue that the performance of single-stage detectors is not as strong as that of two-stage detectors because of unbalanced positive and negative samples. With focal loss, they also trained a state-of-the-art single-stage object detector. Very recently, SSH was proposed by Najibi et al. [33], using a multi-task loss for both classification and regression
in the network.
Face Alignment: Face alignment refers to aligning the face image to a specific position. Usually, researchers include landmark detection in this topic. Jin and Tan [34]
summarized the categories of popular approaches for this task. Cascaded regression
was a major trend in this topic and classification frameworks tend to be popular recently. Zhu et al. [35] searched for similar shapes from exemplars and regressed the
shapes by using SIFT features and updating the probability of shapes. An ensemble
of random ferns [36] is used to learn local binary discriminative features. Xu
and Kakadiaris [37] proposed to jointly learn head pose estimation and face alignment tasks in a single framework (JFA) using global and local CNN features. Some
researchers treat the face alignment task as a classification problem. KEPLER [38]
joined CNN features from different layers and captured the response map to localize
the landmarks. Wu et al. [39] proposed the GoDP algorithm to localize landmarks
under a fully convolutional network (FCN) framework by exploring two-pathway information. Some recent works use generative adversarial networks (GAN) to frontalize
the face [40, 41, 42]. Huang et al. [40] used a two-pathway GAN (TP-GAN) for photorealistic frontal view synthesis while preserving identity and texture details. Yin et al. [41] incorporated a 3D model with a GAN to frontalize faces for large poses in the wild. DR-GAN was proposed by Tran et al. [42] to generate the frontalized face from face images
under different poses. They also demonstrated the usage of GAN in face recognition.
Signature Generation: An emerging topic in face recognition research is generating a discriminative representation for a subject. With millions of face images available for training with deep learning technology, many feature descriptors have been proposed
recently. Parkhi et al. [8] proposed the VGG-Face descriptor within VGG-Very-Deep
architectures. Triplet loss was proposed by Schroff et al. [5] to train a deep neural
network using 200 million labeled faces from Google. Masi et al. [43] developed face
recognition for unconstrained environments by fine-tuning the ResNet and VGG-Face
on 500K 3D rendering images. In addition to frontalizing the face, they also rendered
face images to half-profile (40◦ ) and full-profile (75◦ ) views. Masi et al. [44] addressed the
question of whether we need to collect millions of faces for training a face recognition system. They argued that we can use synthesized images instead of real images
to train the model and still obtain comparable results. Besides triplet loss, many other
loss functions have been proposed recently. Center loss was added by Wen et al. [10]
alongside cross entropy loss to obtain more discriminative features for deep face recognition. Range loss [11] was designed by Zhang et al. to train deep neural networks with
a long tail distribution. A-Softmax Loss [12] was used in SphereFace and demonstrated
efficiency in learning the discriminative features. Marginal Loss [45] was proposed to
Name
Category
Core
Detection
Alignment
Representation
Matching
OpenBR [46]
2D
C++
X
X
X
X
FaceID1-3 [47]
2D
-
X
X
X
DeepFace [3]
2D
-
X
X
X
FaceNet [5]
2D
-
X
X
X
VGG-Face [8]
2D
-
X
X
X
OpenFace [48]
2D
Torch
X
X
X
U-3DMM [15]
3D2D
-
X
X
X
UR2D
3D2D
C++
X
X
X
X
X
X
X
Modern
Active
X
Table 1: Comparison of recent existing 2D face recognition pipelines. We employ the same definition of
modern and active made by Klontz et al. [46] (“-” means that this information is not provided in the paper).
enhance the discriminative ability by maximizing the inter-class distances from large
scale training data.
2.2. System
OpenCV and OpenBR are well-known open-source computer vision and pattern recognition libraries. However, the eigenface algorithm in OpenCV is out-of-date. OpenBR has not been updated since 9/29/2015. Both libraries only support
nearly frontal face recognition, since the face detector can only detect the frontal face.
OpenFace is an open-source implementation of FaceNet [5] by Amos et al. [48] using
Python and Torch, which provides four demos for usage. OpenFace applied Dlib face
detector and landmark detector to do the pre-processing, which is better than OpenBR.
There is another official Tensorflow implementation of FaceNet in which the authors
use MTCNN [49] to detect and align faces, which boosts speed and detection accuracy.
To the best of our knowledge, there are only a limited number of well-designed system
papers. Most face-related papers focus on different sub-modules or the research of
face representations. The comparison of recent existing 2D face recognition systems is
presented in Tab. 1 including the research on face representation.
3. System Design
UR2D is a 3D-aided 2D face recognition system designed for pose-invariant face
recognition. Moreover, this system is suitable for face-related research, and can quickly
pre-process images, provide baselines, plot the results, and support further development.
3.1. System requirements
UR2D is written in clean and efficient C++, which is developed on a Linux platform (Ubuntu system). It requires GCC 4.9 or above for compilation. It leverages a
list of open-sourced libraries and tools such as CMake, Boost, OpenCV, gflags, glog,
puxixml, JSON, and Caffe. Most of the dependencies are available in the Ubuntu
repository except Caffe. Therefore, to install dependencies, it only requires installing
Caffe manually.
3.2. Architecture Overview
Figure 2 illustrates the architecture of UR2D, which explicitly illustrates modules
and functionality. The blue blocks are external shared libraries. The other three components belong to our system. As a base of the software, green blocks provide the basic
functions. The algorithm modules are constructed as high-level APIs. The applications
and GUIs sit at the top of the software and are built by combining these APIs. The users
can directly call these applications and obtain the results. The advantages of this architecture are that it is simple and well-structured. With full development of libraries, the
system can use CPUs/GPU and other features easily.
3.3. Data Structures
In UR2D, the basic element is File on the disk. All operations or algorithms are
based on the files. The basic data structure is Data, which is a hash table with pairs
of keys and values. Both keys and values are in string type. Unlike OpenBR [46], to
avoid saving giant data in the memory, we only keep the file path in the memory.
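As an illustration of the Data structure described above (the class and method names here are hypothetical, not the actual UR2D C++ types), a minimal Python sketch could look as follows:

```python
# Hypothetical sketch of the Data record described above: a hash table with
# string keys and string values, holding file paths instead of loaded images.
from typing import Dict

class Data:
    def __init__(self) -> None:
        self.fields: Dict[str, str] = {}

    def set(self, key: str, value: str) -> None:
        self.fields[key] = value          # e.g. "signature_path" -> "/out/subj01.sig"

    def get(self, key: str, default: str = "") -> str:
        return self.fields.get(key, default)

# Only the path is kept in memory; the file itself is read on demand.
record = Data()
record.set("image_path", "gallery/subject_01.png")
```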
[Figure 2 (block diagram): the Apps layer (GUI, Applications) sits on top of the Modules layer (Detection, Matching, ...), which is built on the Core layer (File System, IO, Logging, Caffe Utility, Dataset Utility); External libraries (OpenCV, Caffe, QT, GLog, GFlag) support the internal components.]
Figure 2: Depiction of UR2D’s architecture. In addition to external libraries, it includes some other base libraries to process files, use CUDA, manage the data files, etc. Based on these basic libraries, high-level APIs were implemented by calling functions from each module. Based on UR2D’s SDK, it is easier to write various applications for different purposes. Also, we created the GUIs to demonstrate our UR2D.
3.4. Configuration
We have two approaches to run UR2D. The first one is defining the configuration
file (JSON format), which specifies the datasets, input files, output directories, the involved modules and their model locations, and the evaluation. Attribute dataset contains
the information of input dataset including the name and path. Attribute input contains the list of galleries and probes. Attribute output defines the output directories.
Attribute pipelines defines the modules used in the pipeline. The pip command
line application only accepts the argument of the configuration file, which will parse
the configuration file, load the models, and run defined modules. The advantages of
this approach are simplicity and flexibility. Unlike the OpenBR framework, it does
not require a detailed understanding of the options or inputting long arguments in the command line. The users only need to change some values in the attributes dataset and
input (e.g., set dataset directory and file to enroll), and program pip will generate
the output they defined in this configuration file.
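For illustration, a minimal configuration of this form might look as follows. The attribute names dataset, input, output, and pipelines come from the description above, while the concrete values and nested fields are hypothetical:

```python
import json

# Hypothetical example of the JSON configuration described above; the nested
# keys and values are illustrative, not the exact UR2D schema.
config = {
    "dataset": {"name": "UHDB31", "path": "/data/UHDB31"},
    "input": {"gallery": ["gallery.csv"], "probes": ["probe_pose03.csv"]},
    "output": {"signatures": "/out/signatures", "results": "/out/results"},
    "pipelines": ["detection", "alignment", "reconstruction", "representation"],
}

with open("ur2d_config.json", "w") as f:
    json.dump(config, f, indent=2)
# The pip application would then be invoked with this file as its only argument.
```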
3.5. Command Line Interface
To make full use of the UR2D SDK, we created corresponding applications to
run each module. All applications accept the file list (text or csv file by default, which
includes a tag at the top line), a folder, or a single image. The IO system will load the
data in the memory and process the data according to the data list.
The arguments specify the location of the input file/directory and where the output
should be saved. UR2D’s enrollment is executed and generates signatures to the output
directory. The path of the signature is recorded in the Data. By calling the API from
IO system, the list of Data will be written to the file (default is in .csv format).
4. Face Recognition
Figure 3 depicts the overview of enrollment in the UR2D, which contains face
detection, face alignment, 3D face reconstruction, pose estimation, texture lifting, and
signature generation.
4.1. Face Detection
A serious problem in OpenBR [46], OpenFace [48], and even the commercial off-the-shelf face recognition software (COTS) is the face detection rate. OpenBR only
supports OpenCV frontal face detector. OpenFace also supports Dlib [50] face detector.
However, in recent years, many face detection algorithms have been developed [51, 27,
28] with deep learning technology to support multi-view images.
To detect the face in multi-view poses, some modern detectors such as Headhunter
[52] and DDFD [51], and Dlib-DNN face detector are supported in our system. Mathias
et al. [52] trained Headhunter by using multi-scale templates. The DDFD face detector was
proposed by Farfade et al. [51] by fine-tuning AlexNet [53] and using non-maximum
suppression (NMS-max, NMS-avg).
[Figure 3 (pipeline overview): Face Detection (generate bounding box) → Landmark Detection (localize landmarks based on the response map) → 3D Reconstruction (reconstruct a 3D face model from a single image) → Pose Estimation (compute the 3D-2D projection matrix) → Texture Lifting (lift textures from the original image according to the 3D-2D correspondence, with refinement) → Representation (residual network to learn pose-invariant features; feature and occlusion encoding); gallery and probe signatures are then matched with cosine similarity (or Euclidean distance).]
Figure 3: Depiction of the whole pipeline (follow the arrow in the middle) of UR2D. The rounded rectangles represent different modules. Dashed arrows represent the workflow. The enrollment encompasses the modules listed. A face is first detected and then transferred to localize landmarks. A 3D model is constructed directly from a 2D image with a bounding box. With 2D landmarks and a 3D model, a 3D-2D projection matrix can be estimated. The frontalized image and occlusion map are generated according to the 3D model and projection matrix. The pose-robust features are extracted from these images along with occlusion encoding. The matching step computes features from visible parts and outputs a similarity score.
To support different face detectors for downstream modules, we perform bounding box regression on the detected bounding box to reduce the variations of the bounding box. The first advantage of this approach is that we do not need to re-train or fine-tune
the models for downstream modules after switching the face detector. The second advantage is that this approach provides a more robust bounding box for the landmark
localization module.
4.2. Landmark Localization
To detect face landmarks, we use GoDP proposed by Wu et al. [39], which is
demonstrated to be robust to pose variations. GoDP landmark detector relies on confidence maps generated by a fully convolutional network. A confidence map is generated
for each landmark to indicate the possibility of a landmark appearing at a specific location in the original image. The prediction is made by simply selecting the location that
has the maximum response in the confidence map. This winner-take-all strategy helps
to suppress false alarms generated by background regions and improves the robustness
of the algorithm under large head pose variations. Compared to other confidence-map-based landmark detectors, the novel architecture of GoDP merges the information of
the deep and shallow layers based on a new loss function, increases the resolution and
discrimination of the confidence maps, and achieves state-of-the-art results on multiple
challenging face alignment databases.
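A minimal sketch of the winner-take-all prediction rule described above (the array shapes are assumptions; this is only the final selection step, not the GoDP network itself):

```python
import numpy as np

def predict_landmarks(confidence_maps):
    """Pick each landmark as the argmax of its confidence map.

    confidence_maps : (num_landmarks, H, W) array produced by a
                      fully convolutional network.
    Returns a (num_landmarks, 2) array of (row, col) locations.
    """
    n, h, w = confidence_maps.shape
    flat_idx = confidence_maps.reshape(n, -1).argmax(axis=1)
    rows, cols = np.unravel_index(flat_idx, (h, w))
    return np.stack([rows, cols], axis=1)
```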
4.3. 3D Reconstruction of Facial Shape
To reconstruct the 3D facial shape of the input 2D image, we integrate into our
pipeline the E2FAR algorithm proposed by Dou et al. [54]. It uses a subspace model to
represent a 3D AFM as a parameter vector and employs CNN to estimate the optimal
parameter values from a single 2D image. To train the deep neural network, a large
set of synthetic 2D and 3D data has been created using the 3D rendering of randomly
generated AFMs. To improve the robustness to illumination variation, the deep neural
network is pre-trained on real facial images and fine-tuned on the synthetic data. Compared with existing work, it is more efficient due to its end-to-end architecture, which
requires a single feed-forward operation to predict the model parameters. Moreover,
it only relies on face detection to localize the facial region of interest on the image.
As a result, compared with landmark-based approaches, it is more robust to the pose
variation that can degrade landmark detection accuracy.
4.4. Pose estimation
Given 2D landmarks X2D obtained from landmark detection and 3D landmarks
X3D obtained from a 3D model, the transformation matrix P can be estimated by
solving a least-squares problem as follows:
    min_P ||X_{2D} − P X_{3D}||_2^2 + λ||P||_2^2 .    (1)
In our implementation, we use the Levenberg-Marquardt algorithm, also known as
DLS, to solve this equation.
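As a minimal sketch of the estimation in Eq. (1): the system itself uses an iterative Levenberg-Marquardt solver, so the closed-form ridge solution below, as well as the matrix shapes and homogeneous-coordinate convention, are assumptions for illustration only.

```python
import numpy as np

def estimate_projection(x2d, x3d_h, lam=1e-3):
    """Estimate a 3D-2D projection matrix P minimizing
    ||X2D - P X3D||^2 + lam * ||P||^2  (cf. Eq. (1)).

    x2d   : (2, n) array of detected 2D landmarks.
    x3d_h : (4, n) array of corresponding 3D landmarks in
            homogeneous coordinates (x, y, z, 1).
    Returns P of shape (2, 4).
    """
    # Ridge-regularized normal equations: P = X2D X3D^T (X3D X3D^T + lam I)^-1
    A = x3d_h @ x3d_h.T + lam * np.eye(x3d_h.shape[0])
    return (x2d @ x3d_h.T) @ np.linalg.inv(A)
```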
4.5. Texture Lifting
Facial texture lifting is a technique first proposed by Kakadiaris et al. [14], which
lifts the pixel values from the original 2D images to a UV map. Given the 3D-2D
projection matrix P , 3D AFM model M , and original image I, it first generates the
geometry image G, each pixel of which captures the information of an existing or interpolated vertex on the 3D AFM surface. With G, a set of 2D coordinates referring to
the pixels on an original 2D facial image is computed. In this way, the facial appearance is lifted and represented into a new texture image T . A 3D model M and Z-Buffer
technique are used to estimate the occlusion status for each pixel. This process generates an occlusion mask Z.
This module has the following two advantages: First, it generates frontal normalized face images, which is convenient for feature extraction and comparison. Second, it generates occlusion masks, which identify the parts of the face image that are occluded, providing the evidence to exclude those face regions from matching.
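A simplified sketch of the lifting step described above (nearest-neighbor sampling only; the Z-buffer occlusion test that produces Z is omitted, and the shapes and conventions are assumptions rather than the exact UR2D implementation):

```python
import numpy as np

def lift_texture(image, geometry, P):
    """Sample the original image at the projected locations of the geometry image.

    image    : (H, W, 3) original 2D image I.
    geometry : (h, w, 3) geometry image G; each pixel holds a 3D AFM vertex.
    P        : (2, 4) 3D-2D projection matrix.
    Returns the lifted UV texture T of shape (h, w, 3).
    """
    h, w, _ = geometry.shape
    verts = geometry.reshape(-1, 3)
    verts_h = np.hstack([verts, np.ones((verts.shape[0], 1))])   # homogeneous coords
    uv = (P @ verts_h.T).T                                       # (h*w, 2) image coords
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[rows, cols].reshape(h, w, 3)                    # nearest-neighbor lookup
```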
4.6. Signatures
To improve the performance of face recognition in matching non-frontal facial images, we integrate into our pipeline the algorithm proposed by Dou et al. [55] for
extracting Pose-Robust Face Signature (PRFS), a part-based face representation with
discriminative local facial features and explicit pose and self-occlusion encoding. The
facial texture T and the self-occlusion mask Z are first divided into multiple local
patches. Then, on each local patch, discriminative features are extracted and self-occlusion encoding is computed. The ensemble of local features, each enhanced by
the self-occlusion encoding, forms the pose-robust face signature. We use two types of
local features, namely the DFD feature proposed by Lei et al.[56] and a deep feature
we trained by following Wen et al.[10] using center loss. To train the DFD feature, we
use a small subset of the FRGC2 database that consists of 907 frontal facial images of
109 subjects. We divide the facial texture into 64 non-overlapping patches and train
a DFD feature extractor for each local patch separately. To train the deep feature, the
CASIA WebFace dataset [6] is used as training data. We divide the facial texture into
8 partially-overlapping patches and train a deep neural network for each local patch
separately. In this paper, we call the face signature with the DFD feature PRFS, and
the face signature with the deep feature DPRFS.
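As an illustration of how the occlusion encoding can be used at matching time (a sketch under assumed array shapes, not the exact UR2D scoring rule):

```python
import numpy as np

def match_signatures(feat_g, occ_g, feat_p, occ_p, eps=1e-8):
    """Occlusion-aware cosine matching of two part-based signatures.

    feat_g, feat_p : (num_patches, dim) per-patch features of gallery / probe.
    occ_g, occ_p   : (num_patches,) boolean visibility flags (True = visible).
    Returns the average cosine similarity over patches visible in both faces.
    """
    visible = occ_g & occ_p
    if not visible.any():
        return 0.0
    g, p = feat_g[visible], feat_p[visible]
    cos = np.sum(g * p, axis=1) / (np.linalg.norm(g, axis=1) * np.linalg.norm(p, axis=1) + eps)
    return float(cos.mean())
```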
5. Experiments
In this section, we provide a systematic numerical analysis on two challenging datasets in both constrained and in-the-wild scenarios. First, the datasets used to verify UR2D are introduced. Then, a fair comparison of UR2D with the VGG face descriptor (VGG-Face) and a commercial face recognition software (COTS) on these two challenging datasets is conducted for image matching. In the end, template matching experiments on the IJB-A dataset were performed.
Dataset | Images | Subjects | Environment | Poses   | Illuminations | Usage
UHDB31  | 24,255 | 77       | Constrained | 21      | 3             | 2D-2D, 3D-2D, 3D-3D face recognition
IJB-A   | 25,808 | 500      | In-the-wild | Various | Various       | 2D unconstrained face recognition
Table 2: Comparison of datasets: UHDB31 [1] and IJB-A [2]. Both are challenging due to pose variations, illumination, and resolution.
5.1. Datasets
UHDB31 [1] was created in a controlled lab environment, which allows face-related research on pose and illumination issues. In addition to 2D images, it also
provides the corresponding 3D model of subjects. An interesting fact of this dataset is
that pose follows the uniform distribution on three dimensions: pitch, yaw, and roll. For
each subject, a total of 21 high-resolution 2D images from different views and 3D data
are collected at the same time. Then, a 3D model is registered from the 3D data from
different poses to generate a specific 3D face model. In addition to three illuminations,
the resolutions are downsampled to 128, 256, and 512 from the original size.
IJB-A [2] is another challenging dataset which consists of images in the wild. This
dataset was proposed by IARPA and is managed by NIST. This dataset merges images
and frames together and provides evaluations on the template level. A template contains one or several images/frames of a subject. According to the IJB-A protocol, it
splits galleries and probes into 10 folders. In our experiment, we modify this protocol
to use it for closed-set face identification. The details will be introduced in Sec. 5.4. A
summary of these two datasets is presented in Tab. 2. Our system provides dataset
utility to parse and load the data from these two datasets.
Yaw:        −90◦            −60◦             −30◦               0◦                 +30◦                +60◦            +90◦
Pitch +30◦: 14/11/58/47/82  69/32/95/90/99   94/90/100/100/100  99/100/100/100/100 95/93/99/100/99     79/38/92/95/99  19/7/60/47/75
Pitch 0◦:   22/9/84/81/96   88/52/99/100/100 100/99/100/100/100 −                  100/100/100/100/100 94/73/99/100/100 27/10/91/84/96
Pitch −30◦: 8/0/44/44/74    2/19/80/90/97    91/90/99/99/100    96/99/99/100/100   96/98/97/99/100     52/15/90/95/96  9/3/35/58/78
Table 3: Comparison of Rank-1 percentage of different systems on UHDB31.R128.I03. The methods in each cell are ordered as VGG-Face/COTS v1.9/FaceNet/UR2D-PRFS/UR2D-DPRFS. The indices of poses are ordered from the left to right and from the top to bottom (e.g., pose 3 is pitch −30◦ and yaw −90◦ , pose 11 is pitch 0◦ and yaw 0◦ ). The frontal face is the gallery while the other poses are probes. In all cases, our system achieves the best performance compared with the state-of-the-art.
5.2. Baselines
To perform a fair comparison with current state-of-the-art face recognition systems,
we choose VGG-Face and COTS v1.9 as baselines.
The VGG-Face descriptor was developed by Parkhi et al. [8]. The original release contains a Caffe model and a MATLAB example. We re-used their model, implemented their embedding method on multi-scaled images, and fused the features in
C++. In our implementation, we tried different combinations of descriptor and matching methods. We found that embedding features with cosine similarity metrics works
the best for the VGG-Face. In our experiment, we use VGG-Face to represent the embedding features with matching using a cosine similarity metric. As in the baseline
module, UR2D provides API to obtain the features.
The FaceNet algorithm was proposed by Schroff et al. [5]. We use a third-party implementation of FaceNet from GitHub (https://github.com/davidsandberg/facenet) trained using WebFace [6] and MS-Celeb-1M [7]. It first uses MTCNN [49] to align the face and extracts 128-dimensional features. The provided pre-trained models achieve 99.20% ± 0.30% accuracy on the LFW dataset. The accuracy is a little bit lower than the original paper, but can still be considered state-of-the-art.
COTS is commercial software developed for scalable face recognition. It provides an SDK and applications which can be used directly. In our experiments, we used version
1.9 to compare with our system. This version is considered to be a significant boost
compared with previous versions.
In our experiment, we report the performance using both PRFS and DPRFS features. The summary of software configuration is reported in Tab. 4. We compute the
Rank-1 identity accuracy from successfully enrolled signatures.
System     | Features  | Dims      | Metric
VGG-Face   | Embedding | 4096      | Cosine
COTS v1.9  | -         | -         | -
FaceNet    | -         | 128       | Cosine
UR2D       | PRFS      | 64 × 1024 | Cosine
UR2D       | DPRFS     | 8 × 1024  | Cosine
Table 4: Comparison of systems configuration used in our experiments.
5.3. UHDB31: Pose-Invariant Face Recognition
In this experiment, we chose a configuration from the UHDB31 dataset named
UHDB31.R0128.I03. This is a subset in which all images are down-sampled to the
size 153 × 128 in the neutral illumination. This subset was chosen to demonstrate that
our system, UR2D, is robust to different poses. Therefore, we use this configuration
to exclude other variations such as illumination, expression, etc., and only keep the
pose variations.
We treated the frontal face images (pose 11) as gallery and images from the other
20 poses (poses 1 − 10, 12 − 21) as probes, independently. Both the gallery and the
probe contain 77 images, each of which belongs to a subject. The face identification
experiment was performed using 20 pairs of sigsets.
Table 3 depicts the comparison of Rank-1 accuracy among 20 poses (except pose
11, which is used for gallery), which indicates that UR2D is robust to the different poses
compared with other systems. We observed that the VGG-Face and COTS v1.9 algorithms cannot generalize over the whole pose distribution. FaceNet works better than VGG-Face and COTS v1.9 on the extreme poses. One possible explanation is that this model is trained on the largest available datasets, Ms-Celeb-1M and WebFace, which provide more extreme pose cases. However, in cases such as pose 3 (−30◦ , −90◦ ) and pose 21 (−30◦ , +90◦ ) in Tab. 3, the performance of 2D-only face recognition pipelines still has
significant room for improvement. On the other hand, with the help of the 3D model,
our system keeps the consistent and symmetric performance among the different poses.
Even in the cases with yaw −90◦ or +90◦ , our system can tolerate the pose variations,
and achieves around 80% Rank-1 identity accuracy with DPRFS features and around
50% Rank-1 identity accuracy with PRFS features on average.
5.4. IJB-A: In-the-Wild Face Recognition
However, in a real-world case, a face recognition system does not suffer only from
pose variations. In this experiment, we want to explore whether our system can also be used in an in-the-wild environment. We designed a different protocol for face identification experiments based on the original 10 splits. Unlike the original template-level comparison, we conducted an image-pair comparison. First, we removed some samples in the IJB-A splits to make 10 closed-set comparison pairs. Then, we cropped
the face according to the annotations. Image thumbnails with resolution below 50 pixels were
up-sampled, while those with resolution larger than 1000 were down-sampled. Herein,
we do not compare with FaceNet since there are overlapping samples between the
training set and IJB-A dataset.
Method      | Split-1 | Split-2 | Split-3 | Split-4 | Split-5 | Split-6 | Split-7 | Split-8 | Split-9 | Split-10 | Avg.
VGG-Face    | 76.18   | 74.37   | 24.33   | 47.67   | 52.07   | 47.11   | 58.31   | 54.31   | 47.98   | 49.06    | 53.16
COTS v1.9   | 75.68   | 76.57   | 73.66   | 76.73   | 76.31   | 77.21   | 76.27   | 74.50   | 72.52   | 77.88    | 75.73
UR2D-PRFS   | 47.61   | 49.27   | 47.71   | 47.71   | 48.97   | 44.83   | 52.98   | 44.14   | 43.40   | 49.02    | 47.56
UR2D-DPRFS  | 78.20   | 76.97   | 77.31   | 79.00   | 78.01   | 79.00   | 81.15   | 78.40   | 74.97   | 78.57    | 78.16
Table 5: Comparison of Rank-1 percentage of different systems on 10 splits of IJB-A. UR2D achieves the best performance with DPRFS features on each split.
Table 5 depicts the rank-1 identification rate with different methods on IJB-A
dataset. Our system UR2D with DPRFS reports better performance compared with
VGG-Face and COTS v1.9. Also, our system results are consistent on 10 splits, which
indicates that our system is robust. Why do PRFS features in our system not perform
well on the IJB-A dataset? One possible answer is that PRFS features are trained on
the FRGC dataset, which has notably fewer variations of pose, illumination, and resolution problems. The current PRFS features cannot generalize on these images with
large variances. A possible solution is to retrain the PRFS feature model on an in-the-wild dataset. Also, COTS performs well on this challenging dataset, since it is designed for real scenarios. Comparing with the experiment in Sec. 5.3, we are left with the question of why our system performs only slightly better than the baselines. We argue that in in-the-wild scenarios there are complicated combinations of
pose variations, illumination, expression, and occlusions. A robust face recognition
system should take all cases into consideration. In addition, COTS dropped hard samples and enrolled fewer signatures than ours, which would boost the performance to
some extent.
Method           | Rank-1 (%)
OpenBR [46]      | 24.60 ± 1.10
Wang et al. [57] | 82.20 ± 2.30
DCNN [58]        | 85.20 ± 1.80
PAM [43]         | 77.10 ± 1.60
DR-GAN [42]      | 85.50 ± 1.50
UR2D             | 85.65 ± 1.74
Table 6: Comparison of average Rank-1 percentage of different methods for template matching on IJB-A. UR2D achieves the best performance.
We extended UR2D to enroll several images of a subject to generate a template.
The template is an average of signatures computed by generating a unified 3D model
from several 2D images. Here we use the results from [42] to do the comparison. Table 6 lists the average Rank-1 identification accuracy for each method. UR2D achieved
Method       | Split-1 | Split-2 | Split-3 | Split-4 | Split-5 | Split-6 | Split-7 | Split-8 | Split-9 | Split-10 | Avg.
VGG-Face     | 74.44   | 74.26   | 70.68   | 73.96   | 69.60   | 72.64   | 72.91   | 70.03   | 72.25   | 71.78    | 72.25
UR2D (DPRFS) | 87.22   | 86.82   | 83.68   | 86.05   | 83.52   | 88.22   | 85.14   | 83.59   | 84.86   | 87.38    | 85.65
Table 7: Detailed Rank-1 percentage of different systems on 10 splits of IJB-A.
the best performance. The detailed comparison of Rank-1 identification accuracy with
VGG-Face is summarized in Tab. 7 for 10 splits in the IJB-A dataset.
5.5. Memory Usage and Running Time
We conducted an analysis of UR2D in terms of both memory and time. The Caffe-related implementation runs on a GPU (GTX TITAN X). COTS v1.9 makes full use of eight CPUs. Table 8 summarizes the run-times for different systems. Some modules in our implementation or external libraries run on the CPU, such as face detection, pose estimation, texture lifting, and PRFS feature extraction. Therefore, enrollment with PRFS features takes 1.5 s more than with DPRFS features. Due to loading several large models, DPRFS requires more memory. The user can choose suitable feature extractors according to their needs. Since we optimized memory usage for DPRFS, it shares the memory block on the GPU. The memory cost is reduced to the same level as PRFS.
We use DPRFS by default.
System       | GPU     | Memory (GB) | Time (s)
VGG-Face     | Full    | 1.2         | 0.9
COTS v1.9    | No      | 0.1         | 0.5
UR2D (PRFS)  | Partial | 2.4         | 2.5
UR2D (DPRFS) | Partial | 2.4         | 1.0
Table 8: Comparison of system run-times. “Partial” in the GPU column denotes that part of the code does not support GPU acceleration. Time means the average enrollment time for a single image.
6. Conclusion
In this paper, a well-designed 3D-aided 2D face recognition system (UR2D) that
is robust to pose variations as large as 90◦ using deep learning technology has been
presented. An overview of the architecture, the interface, and each module of UR2D is given in detail. Extensive experiments are conducted on UHDB31 and IJB-A to demonstrate that UR2D is robust to pose variations and that it outperforms existing 2D-only face recognition systems such as the VGG face descriptor, FaceNet, and a commercial face recognition software by at least 9% on the UHDB31 dataset and 3% on the IJB-A dataset on average. The system also achieves state-of-the-art performance of 85% in template matching on the IJB-A dataset.
7. Acknowledgment
This material is based upon work supported by the U.S. Department of Homeland
Security under Grant Award Number 2015-ST-061-BSH001. This grant is awarded
to the Borders, Trade, and Immigration (BTI) Institute: A DHS Center of Excellence
led by the University of Houston, and includes support for the project “Image and
Video Person Identification in an Operational Environment” awarded to the University
of Houston. The views and conclusions contained in this document are those of the
authors and should not be interpreted as necessarily representing the official policies,
either expressed or implied, of the U.S. Department of Homeland Security.
8. References
References
[1] H. Le, I. A. Kakadiaris, UHDB31: A dataset for better understanding face recognition
across pose and illumination variation, in: Proc. IEEE International Conference on Computer Vision Workshops, Venice, Italy, 2017.
[2] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah,
M. Burge, A. K. Jain, Pushing the frontiers of unconstrained face detection and recognition:
IARPA janus benchmark A, in: Proc. IEEE Conference on Computer Vision and Pattern
Recognition, Boston, Massachusetts, 2015, pp. 1931–1939.
[3] Y. Taigman, M. Yang, M. Ranzato, L. Wolf, DeepFace: Closing the gap to human-level performance in face verification, in: Proc. IEEE Conference on Computer Vision and Pattern
Recognition, Columbus, Ohio, 2014, pp. 1701 – 1708.
[4] G. B. Huang, M. Mattar, T. Berg, E. Learned-Miller, Labeled faces in the Wild: A database
for studying face recognition in unconstrained environments, in: Proc. Workshop on Faces
in ’Real-Life’ Images: Detection, Alignment, and Recognition, Marseille, France, 2008.
[5] F. Schroff, D. Kalenichenko, J. Philbin, FaceNet: A unified embedding for face recognition
and clustering, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition,
Boston, Massachusetts, 2015, pp. 815–823.
[6] D. Yi, Z. Lei, S. Liao, S. Z. Li, Learning face representation from scratch, ArXiv preprint
arXiv:1411.7923 (2014) 1–9.
[7] Y. Guo, L. Zhang, Y. Hu, X. He, J. Gao, MS-Celeb-1M: A dataset and benchmark for
large-scale face recognition, in: Proc. 14th European Conference on Computer Vision,
Amsterdam, Netherlands, 2016, pp. 87–102.
[8] O. M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition, in: Proc. British Machine
Vision Conference, Swansea, UK, 2015, pp. 1–12.
[9] K. He, X. Zhang, S. Ren, J. Sun, Identity mappings in deep residual networks, in: Proc.
European Conference on Computer Vision, Amsterdam, the Netherlands, 2016, pp. 1–15.
[10] Y. Wen, K. Zhang, Z. Li, Y. Qiao, A discriminative feature learning approach for deep
face recognition, in: Proc. 14th European Conference on Computer Vision, Amsterdam,
Netherlands, 2016, pp. 499–515.
[11] X. Zhang, Z. Fang, Y. Wen, Z. Li, Y. Qiao, Range loss for deep face recognition with
long-tail, ArXiv preprint arXiv:1611.08976 (2016) 1–9.
[12] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, L. Song, SphereFace: deep hypersphere embedding
for face recognition, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 212 – 220.
[13] B. Chen, W. Deng, J. Du, Noisy softmax: improving the generalization ability of dcnn via
postponing the early softmax saturation, in: Proc. IEEE Conference on Computer Vision
and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 5372–5381.
[14] I. A. Kakadiaris, G. Toderici, G. Evangelopoulos, G. Passalis, D. Chu, X. Zhao, S. K. Shah,
T. Theoharis, 3D-2D face recognition with pose-illumination normalization, Computer Vision and Image Understanding 154 (2017) 137–151.
[15] G. Hu, F. Yan, C. Chan, W. Deng, W. Christmas, J. Kittler, N. M. Robertson, Face recognition using a unified 3D morphable model, in: Proc. 14th European Conference on Computer Vision, Amsterdam, Netherlands, 2016.
[16] C. Ding, D. Tao, A comprehensive survey on pose-invariant face recognition, ACM Transactions on intelligent systems and technology 7 (3) (2016) 1–40.
[17] OpenCV, http://opencv.org.
[18] Glog, https://github.com/google/glog.
[19] Gflags, https://github.com/gflags/gflags.
[20] Pugixml, https://github.com/zeux/pugixml.
[21] JSON for modern C++, https://github.com/nlohmann/json.
[22] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama,
T. Darrell, Caffe: Convolutional architecture for fast feature embedding, in: Proc. International Conference on Multimedia, Orlando, Florida, USA, 2014, pp. 675–678.
[23] X. Xu, H. Le, P. Dou, Y. Wu, I. A. Kakadiaris, Evaluation of 3D-aided pose invariant 2D
face recognition system, in: Proc. International Joint Conference on Biometrics, Denver,
Colorado, 2017.
[24] S. Zafeiriou, C. Zhang, Z. Zhang, A survey on face detection in the wild: past, present and
future, Computer Vision and Image Understanding 138 (2015) 1–24.
[25] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object
detection and semantic segmentation, in: Proc. IEEE Conference on Computer Vision and
Pattern Recognition, Columbus, OH, 2014, pp. 580–587.
[26] H. Jiang, E. Learned-Miller, Face detection with the faster R-CNN, in: Proc. 12th IEEE
International Conference on Automatic Face & Gesture Recognition, Washington, DC,
2017, pp. 650–657.
[27] Y. Li, B. Sun, T. Wu, Y. Wang, Face detection with end-to-end integration of a ConvNet
and a 3D model, in: Proc. 14th European Conference on Computer Vision, Amsterdam,
Netherlands, 2016, pp. 420–436.
[28] P. Hu, D. Ramanan, Finding tiny faces, in: Proc. IEEE Conference on Computer Vision
and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 951–959.
[29] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proc.
Computer Vision and Pattern Recognition, Las Vegas, NV, 2016, pp. 770–778.
[30] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, A. C. Berg, SSD: single
shot multibox detector, in: Proc. European Conference on Computer Vision, Amsterdam,
Netherlands, 2016, pp. 21–37.
[31] J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger, in: Proc. IEEE Conference on
Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 7263–7271.
[32] T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollar, Focal loss for dense object detection,
ArXiv preprint arXiv:1708.02002 (2017) 1–10.
[33] M. Najibi, P. Samangouei, R. Chellappa, L. S. Davis, SSH: single stage headless face
detector, ArXiv preprint arXiv:1708.03979 (2017) 1–10.
[34] X. Jin, X. Tan, Face alignment in-the-wild: A survey, Computer Vision and Image Understanding (2017) 1–22.
[35] S. Zhu, C. Li, C. C. Loy, X. Tang, Face alignment by coarse-to-fine shape searching, in:
Proc. IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, 2015,
pp. 4998–5006.
[36] X. Xu, S. Shah, I. A. Kakadiaris, Face alignment via an ensemble of random ferns, in:
Proc. IEEE International Conference on Identity, Security and Behavior Analysis, Sendai,
Japan, 2016.
[37] X. Xu, I. A. Kakadiaris, Joint head pose estimation and face alignment framework using
global and local CNN features, in: Proc. 12th IEEE Conference on Automatic Face &
Gesture Recognition, Washington, DC, 2017, pp. 642–649.
[38] A. Kumar, A. Alavi, R. Chellappa, KEPLER: keypoint and pose estimation of unconstrained faces by learning efficient h-cnn regressors, in: Proc. 12th IEEE Conference on
Automatic Face & Gesture Recognition, Washington, DC, 2017, pp. 258–265.
[39] Y. Wu, S. K. Shah, I. A. Kakadiaris, GoDP: Globally optimized dual pathway system for
facial landmark localization in-the-wild, Image and Vision Computing (2017) 1–16(Under
review).
[40] R. Huang, S. Zhang, T. Li, R. He, Beyond face rotation: global and local perception gan for photorealistic and identity preserving frontal view synthesis, ArXiv preprint
arXiv:1704.04086 (2017) 1–11.
[41] X. Yin, X. Yu, K. Sohn, X. Liu, M. Chandraker, Towards large-pose face frontalization in
the wild, ArXiv preprint arXiv:1704.06244 (2017) 1–12.
[42] L. Tran, X. Yin, X. Liu, Disentangled representation learning GAN for pose-invariant face
recognition, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 1415 – 1424.
[43] I. Masi, S. Rawls, G. Medioni, P. Natarajan, Pose-aware face recognition in the wild, in:
Proc. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, 2016,
pp. 4838 – 4846.
[44] I. Masi, A. Trn, T. Hassner, J. Leksut, G. Medioni, Do we really need to collect millions of
faces for effective face recognition?, in: Proc. European Conference on Computer Vision,
Amsterdam, The Netherlands, 2016, pp. 579–596.
[45] J. Deng, Y. Zhou, S. Zafeiriou, Marginal loss for deep face recognition, in: Proc. IEEE
Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017, pp.
60–68.
[46] J. Klontz, B. Klare, S. Klum, A. Jain, M. Burge, Open source biometric recognition, in:
Proc. IEEE Conference on Biometrics: Theory, Applications and Systems, Washington
DC, 2013.
[47] Y. Sun, D. Liang, X. Wang, X. Tang, DeepID3: face recognition with very deep neural
networks, arXiv preprint arXiv:1502.00873 (2015) 1–5.
[48] B. Amos, B. Ludwiczuk, S. Mahadev, OpenFace: A general-purpose face recognition library with mobile applications, Tech. Rep. CMU-CS-16-118, CMU School of Computer
Science, Pittsburgh, PA (2016).
[49] K. Zhang, Z. Zhang, Z. Li, Y. Qiao, Joint face detection and alignment using multitask
cascaded convolutional networks, IEEE Signal Processing Letters 23 (10) (2016) 1499–
1503.
[50] D. E. King, Dlib-ml: A machine learning toolkit, Journal of Machine Learning Research
10 (2009) 1755–1758.
[51] S. S. Farfade, M. Saberian, L. Li, Multi-view face detection using deep convolutional neural networks, in: Proc. 5th ACM on International Conference on Multimedia Retrieval,
Shanghai, China, 2015, pp. 643–650.
[52] M. Mathias, R. Benenson, M. Pedersoli, L. V. Gool, Face detection without bells and whistles, in: Proc. 13th European Conference on Computer Vision, Zurich, Switzerland, 2014,
pp. 720–735.
[53] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional
neural networks, in: Proc. Neural Information Processing Systems, Lake Tahoe, NV, 2012,
pp. 1097–1105.
[54] P. Dou, S. K. Shah, I. A. Kakadiaris, End-to-end 3D face reconstruction with deep neural networks, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition,
Honolulu, Hawaii, 2017, pp. 1–10.
[55] P. Dou, L. Zhang, Y. Wu, S. K. Shah, I. A. Kakadiaris, Pose-robust face signature for
multi-view face recognition, in: Proc. International Conference on Biometrics: Theory,
Applications and Systems, Arlington, VA, 2015, pp. 1–8.
[56] Z. Lei, M. Pietikainen, S. Li, Learning discriminant face descriptor, IEEE Transactions on
Pattern Analysis and Machine Intelligence 36 (2) (2014) 289–302.
[57] D. Wang, C. Otto, A. K. Jain, Face search at scale, IEEE Transactions on Pattern Analysis
and Machine Intelligence 39 (2017) 1122 – 1136.
[58] J.-C. Chen, J. Zheng, V. M. Patel, R. Chellappa, Unconstrained face verification using deep
cnn features, in: Proc. Winter Conference on Applications of Computer Vision (WACV),
Lake Placid, NY, USA, 2016.
| 1 |
arXiv:1611.05067v1 [math.AC] 15 Nov 2016
On S-coherence
Driss Bennis1,a and Mohammed El Hajoui1,b
1. Department of Mathematics, Laboratory of Analysis, Algebra and Decision
Support, Faculty of Sciences, B.P. 1014,
Mohammed V University in Rabat, Rabat, Morocco
a. [email protected]; driss [email protected]
b. [email protected]
Abstract. Recently, Anderson and Dumitrescu’s S-finiteness has attracted the interest of several authors. In this paper, we introduce the notions of S-finitely presented modules and then of S-coherent rings, which are S-versions of
finitely presented modules and coherent rings, respectively. Among other results, we give an S-version of the classical Chase’s characterization of coherent
rings. We end the paper with a brief discussion on other S-versions of finitely
presented modules and coherent rings. We prove that these last S-versions can
be characterized in terms of localization.
Key Words. S-finite, S-finitely presented, S-coherent modules, S-coherence rings.
2010 Mathematics Subject Classification. 13E99.
1 Introduction
Throughout this paper all rings are commutative with identity; in particular, R
denotes such a ring, and all modules are unitary. S will be a multiplicative subset
of R. We use (I : a), for an ideal I and an element a ∈ R, to denote the quotient
ideal {x ∈ R; xa ∈ I}.
According to [1], an R module M is called S-finite if there exists a finitely generated
submodule N of M such that sM ⊆ N for some s ∈ S. Also, from [1], an R-module
1
2
D. Bennis and M. El Hajoui
M is called S-Noetherian if each submodule of M is S-finite. In particular, R is
said to be an S-Noetherian ring, if it is S-Noetherian as an R-module; that is, every
ideal of R is S-finite. It is clear that every Noetherian ring is S-Noetherian.
The notions of S-finite modules and of S-Noetherian rings were introduced by
Anderson and Dumitrescu, motivated by the works done in [8] and [2]. They succeeded in generalizing several well-known results on Noetherian rings, including the classical Cohen result and the Hilbert basis theorem under an additional condition. Since then, S-finiteness has attracted the interest of several authors (see for instance [6, 7, 10, 11, 12, 14]). Recently, motivated by the work of Anderson and Dumitrescu, S-versions of some classical notions have been introduced (see for instance [6, 10]). In this paper we are interested in S-versions of finitely presented
modules and coherent rings. Actually, there are two possibilities which could be
considered as S-versions of finitely presented modules which lead to two S-versions
of coherent rings. We prove that the S-version of coherent rings defined by one of
them has a characterization similar to the classical one given by Chase for coherent
rings [3, Theorem 2.2]. This is why we adopt this notion as the suitable S-version
of finitely presented modules. However, it seems not evident to characterize this
notion in terms of localization. We prove that indeed it is the other S-version,
which is briefly studied at the end of the paper, has a characterization in terms of
localization.
The organization of the paper is as follows: In Section 2, we introduce and study
an S-version of finitely presented modules. We call it an S-finitely presented module
(see Definition 2.1). Then, we study the behavior of S-finiteness in short exact
sequences (see Theorem 2.5). We end Section 2 with some change of rings results
(see Proposition 2.7 and Corollary 2.8). Section 3 is devoted to the S-version of
coherent rings which are called S-coherent rings (see Definition 3.3). Our main
result represents the S-counterpart of Chase’s result [3, Theorem 2.2] (see Theorem
3.8). Also an S-version of coherent modules is introduced (see Definition 3.1 and
Proposition 3.2). We end the paper with a short section which presents the other Sversion of S-finiteness (see Definitions 4.1 and 4.4). We prove that these notions can
be characterized in terms of localization (see Proposition 4.3 and Theorem 4.7). We
end the paper with results which relate S-finiteness with the notion of S-saturation
(see Propositions 4.9 and 4.8 and Corollary 4.10).
2 S-finitely presented modules
In this section, we introduce and investigate an S-version of the classical finitely
presented modules. Another version is discussed in Section 4.
Definition 2.1 An R-module M is called S-finitely presented, if there exists an
exact sequence of R-modules 0 −→ K −→ F −→ M −→ 0, where K is S-finite and
F is a finitely generated free R-module.
Clearly, every finitely presented module is S-finitely presented. However, the converse does not hold in general. For that, it suffices to note that when R is a non-Noetherian S-Noetherian ring, then there is an S-finite ideal I which is not finitely
generated. Then, the R-module R/I is S-finitely presented but it is not finitely
presented.
Also, it is evident that every S-finitely presented module is finitely generated. To
give an example of a finitely generated module which is not S-finitely presented,
it suffices to consider an ideal I which is not S-finite and then use Proposition 2.4
given hereinafter.
One could remark that in Definition 2.1 we assume that the free module F is
finitely generated rather than S-finite. In fact, because of the following result, both notions coincide for free modules.
Proposition 2.2 Every S-finite free R-module is finitely generated.
Proof. Let M = ⊕_{i∈I} Rei be an S-finite free R-module, where (ei)_{i∈I} is a basis of M and I is an index set. Then, there exist a finitely generated R-module N and an s ∈ S such that sM ⊆ N ⊆ M. Then, N = Rm1 + · · · + Rmn for some m1, ..., mn ∈ M (n > 0 is an integer). For every k ∈ {1, ..., n}, there exists a finite subset Jk of I such that mk = ∑_{j∈Jk} λkj ej. Let J = ∪_{k=1}^n Jk. Then, the finitely generated R-module M′ = ⊕_{j∈J} Rej contains N. We show that M′ = M. Deny. There exists an i0 ∈ I\J such that ei0 ∉ M′. But sei0 ∈ N ⊆ M′ and so sei0 = ∑_{j∈J} λ′j ej for some λ′j ∈ R. This is impossible since (ei)_{i∈I} is a basis.
Remark 2.3 Similarly to the proof of Proposition 2.2 above, one can prove that any S-finite torsion-free module cannot be decomposed into an infinite direct sum of non-zero modules. This shows that any S-finite projective module is countably generated by Kaplansky [9, Theorem 1]. Then, naturally one would ask about the existence of an S-finite projective module which is not finitely generated. For this, consider the Boolean ring R = ∏_{i=1}^∞ ki, where ki is the field of two elements for every i ∈ N. Consider the projective ideal M = ⊕_{i=1}^∞ ki and the element e = (1, 0, 0, ...) (see [4, Example 2.7]). Then, S = {1, e} is a multiplicative subset of R. Since eM = k1 is a finitely generated R-module, M is the desired example of an S-finite projective module which is not finitely generated.
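To spell out the computation behind this example (a short added verification, not part of the original argument): since e is idempotent, e · e = e, so S = {1, e} is multiplicatively closed, and for any x = (x1, x2, x3, ...) ∈ M,

e · x = (x1, 0, 0, ...),

so eM = k1 ⊕ 0 ⊕ 0 ⊕ · · · = Re is cyclic, hence finitely generated, while M itself cannot be finitely generated because every element of M has only finitely many non-zero coordinates.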
However, determining rings over which every S-finite projective module is finitely
generated could be of interest. It is worth noting that rings over which every projective module is a direct sum of finitely generated modules satisfy this condition.
These rings were investigated in [13].
The next result shows that, as in the classical case [5, Lemma 2.1.1], being S-finitely presented does not depend on the specific short exact sequence of the form given in Definition 2.1.
Proposition 2.4 An R-module M is S-finitely presented if and only if M is finitely generated and, for every surjective homomorphism of R-modules f : F −→ M, where F is a finitely generated free R-module, ker f is S-finite.
Proof. (⇐) Obvious.
(⇒) Since M is S-finitely presented, there exists an exact sequence of R-modules
0 −→ K −→ F′ −→ M −→ 0, where K is S-finite and F′ is finitely generated and free. Then, by Schanuel's lemma, K ⊕ F ≅ ker f ⊕ F′, and hence ker f is S-finite.
The following result represents the behavior of S-finiteness in short exact sequences.
It is a generalization of [5, Theorem 2.1.2] for modules with λ-dimension at most
1. Note that one can give an S-version of the classical λ-dimension (see [5, page]).
However, here we prefer to focus on the notion of S-finitely presented modules, and
a discussion on the suitable S-version of the λ-dimension could be the subject of a
further work.
Theorem 2.5 Let 0 −→ M′ −→ M −→ M′′ −→ 0 be an exact sequence of R-modules, with maps f : M′ −→ M and g : M −→ M′′. The following assertions hold:
1. If M ′ and M ′′ are S-finite, then M is S-finite.
In particular, every finite direct sum of S-finite modules is S-finite.
2. If M ′ and M ′′ are S-finitely presented, then M is S-finitely presented.
In particular, every finite direct sum of S-finitely presented modules is S-finitely presented.
3. If M is S-finite, then M ′′ is S-finite.
In particular, a direct summand of an S-finite module is S-finite.
4. If M ′ is S-finite and M is S-finitely presented, then M ′′ is S-finitely presented.
5. If M ′′ is S-finitely presented and M is S-finite, then M ′ is S-finite.
Proof. 1. Since M′′ is S-finite, there exist a finitely generated submodule N′′ of M′′ and an s ∈ S such that sM′′ ⊆ N′′. Let N′′ = ∑_{i=1}^n Rei for some ei ∈ M′′ and n ∈ N. Since g is surjective, there exists an mi ∈ M such that g(mi) = ei for every i ∈ {1, ..., n}. Let x ∈ M, so sx ∈ N = g^{−1}(N′′). Then g(sx) ∈ g(N) = N′′, and so g(sx) = ∑_{i=1}^n αi ei = ∑_{i=1}^n αi g(mi) = g(∑_{i=1}^n αi mi). Then, g(sx − ∑_{i=1}^n αi mi) = 0. Thus, sx − ∑_{i=1}^n αi mi ∈ ker g = Im f, which is S-finite. So there exist a finitely generated submodule N′ of Im f and an s′ ∈ S such that s′ Im f ⊆ N′. Then, s′sx ∈ N′ + ∑_{i=1}^n Rmi, and so s′sM is contained in N′ + ∑_{i=1}^n Rmi, which is a finitely generated submodule of M. Therefore, M is S-finite.
2. Since M′ and M′′ are S-finitely presented, there exist two short exact sequences 0 −→ K′ −→ F′ −→ M′ −→ 0 and 0 −→ K′′ −→ F′′ −→ M′′ −→ 0, where K′ and K′′ are S-finite R-modules and F′ and F′′ are finitely generated free R-modules. Then, by the Horseshoe Lemma, we get a commutative diagram with exact rows 0 −→ K′ −→ K −→ K′′ −→ 0, 0 −→ F′ −→ F′ ⊕ F′′ −→ F′′ −→ 0 and 0 −→ M′ −→ M −→ M′′ −→ 0, and exact columns 0 −→ K′ −→ F′ −→ M′ −→ 0, 0 −→ K −→ F′ ⊕ F′′ −→ M −→ 0 and 0 −→ K′′ −→ F′′ −→ M′′ −→ 0.
By the first assertion, K is S-finite. Therefore, M is S-finitely presented.
3. Obvious.
4. Since M is S-finitely presented, there exists a short exact sequence of R-modules 0 −→ K −→ F −→ M −→ 0, where K is S-finite and F is a finitely generated free R-module. Consider the pullback diagram with exact rows 0 −→ D −→ F −→ M′′ −→ 0 and 0 −→ M′ −→ M −→ M′′ −→ 0, and exact columns 0 −→ K −→ D −→ M′ −→ 0 and 0 −→ K −→ F −→ M −→ 0.
By (1), D is S-finite. Therefore, M ′′ is S-finitely presented.
5. Since M ′′ is S-finitely presented, there exists a short exact sequence 0 −→ K −→
F −→ M ′′ −→ 0 where K is S-finite and F is a finitely generated free R-module.
Consider the pullback diagram with exact rows 0 −→ M′ −→ M −→ M′′ −→ 0 and 0 −→ M′ −→ D −→ F −→ 0, and exact columns 0 −→ K −→ D −→ M −→ 0 and 0 −→ K −→ F −→ M′′ −→ 0. Since F is free, the row 0 −→ M′ −→ D −→ F −→ 0 splits, so D ≅ M′ ⊕ F. By assertion (1) applied to the column 0 −→ K −→ D −→ M −→ 0, D is S-finite (since K and M are S-finite). Therefore, M′, being a direct summand of D, is S-finite by assertion (3).
As a simple consequence, we get the following result which extends [5, Corollary
2.1.3].
Corollary 2.6 Let N1 and N2 be two S-finitely presented submodules of an R-module. Then, N1 + N2 is S-finitely presented if and only if N1 ∩ N2 is S-finite.
Proof. Use the short exact sequence of R-modules 0 −→ N1 ∩ N2 −→ N1 ⊕ N2 −→
N1 + N2 −→ 0.
We end this section with the following change of rings results.
The following result extends [5, Theorem 2.1.7].
Proposition 2.7 Let A and B be rings, let φ : A −→ B be a ring homomorphism
making B a finitely generated A-module and let V be a multiplicative subset of A
such that 0 ∉ φ(V). Every B-module which is V-finitely presented as an A-module is φ(V)-finitely presented as a B-module.
Proof. Let M be a B-module which is V -finitely presented as an A-module. Then
M is a finitely generated A-module. Then, M is a finitely generated B-module.
Thus there is an exact sequence of B-modules 0 −→ K −→ B n −→ M −→ 0, where
n > 0 is an integer. This sequence is also an exact sequence of A-modules. Since
M is a V-finitely presented A-module and B^n is a finitely generated A-module (since B is a finitely generated A-module), K is a V-finite A-module, and so K is a
φ(V )-finite B-module. Therefore, M is a φ(V )-finitely presented B-module.
The following result extends [5, Theorem 2.1.8 (2)].
Proposition 2.8 Let I be an ideal of R and let M be an R/I-module. Assume that
I ∩ S = ∅ so that T := {s + I ∈ R/I; s ∈ S} is a multiplicative subset of R/I. Then,
1. M is an S-finite R-module if and only if M is a T -finite R/I-module.
2. If M is an S-finitely presented R-module, then M is a T -finitely presented
R/I-module. The converse holds when I is an S-finite ideal of R.
Proof. 1. Easy.
2. Use the canonical ring surjection R −→ R/I and Proposition 2.7.
Conversely, if M is a T-finitely presented R/I-module, then there is an exact
sequence of R/I-modules, and then of R-modules
0 −→ K −→ (R/I)n −→ M −→ 0,
where n > 0 is an integer and K is a T -finite R/I-module. By the first assertion,
K is also an S-finite R-module. And since I is an S-finite ideal of R, (R/I)n is an
S-finitely presented R-module. Therefore, by Theorem 2.5 (4), M is an S-finitely
presented R-module.
3 S-coherent rings
Before giving the definition of S-coherent rings, we give, following the classical case,
the definition of S-coherent modules.
Definition 3.1 An R-module M is said to be S-coherent, if it is finitely generated
and every finitely generated submodule of M is S-finitely presented.
Clearly, every coherent module is S-coherent. However, using Proposition 3.2(1)
below, one can show that, for an S-finite ideal I of R which is not finitely generated,
the R-module R/I is S-coherent but it is not coherent.
The reason why we consider finitely generated submodules rather than S-finite
submodules is explained in assertion (4) of Remark 3.4.
The following result studies the behavior of S-coherence of modules in short exact
sequences. It generalizes [5, Theorem 2.2.1].
Proposition 3.2 Let 0 −→ P −→ N −→ M −→ 0 be an exact sequence of R-modules, with maps f : P −→ N and g : N −→ M. The following assertions hold:
1. If P is S-finite and N is S-coherent, then M is S-coherent.
2. If M and P are S-coherent, then so is N . In particular, every finite direct
sum of S-coherent modules is S-coherent.
3. If N is S-coherent and P is finitely generated, then P is S-coherent.
Proof. 1. It is clear that M is finitely generated. Let M ′ be a finitely generated
submodule of M. Then, f(P) ⊆ g^{−1}(M′), so there exist two short exact sequences
of R-modules
0 −→ K −→ Rn −→ P −→ 0 and 0 −→ K ′ −→ Rm −→ M ′ −→ 0, where n and m
are two positive integers. Then, by the Horseshoe Lemma, we get a commutative diagram with exact rows 0 −→ K −→ K′′ −→ K′ −→ 0, 0 −→ R^n −→ R^{n+m} −→ R^m −→ 0 and 0 −→ P −→ g^{−1}(M′) −→ M′ −→ 0, and exact columns 0 −→ K −→ R^n −→ P −→ 0, 0 −→ K′′ −→ R^{n+m} −→ g^{−1}(M′) −→ 0 and 0 −→ K′ −→ R^m −→ M′ −→ 0.
Since g −1 (M ′ ) is a finitely generated submodule of the S-coherent module N , g −1 (M ′ )
is S-finitely presented. Then, using Theorem 2.5 (5), K ′′ is S-finite, and so K ′ is
S-finite. Therefore, M ′ is S-finitely presented.
2. Clearly N is finitely generated. Let N′ be a finitely generated submodule of N. Consider the exact sequence 0 −→ Ker(g/N′) −→ N′ −→ g(N′) −→ 0. Then,
g(N ′ ) is a finitely generated submodule of the S-coherent module M . Then, g(N ′ )
is S-finitely presented. Then, Ker(g/N ′ ) is finitely generated by Theorem 2.5 (5),
and since P is S-coherent, Ker(g/N ′ ) is S-finitely presented. Therefore, by (2) of
Theorem 2.5, N ′ is S-finitely presented.
3. Evident since a submodule of P can be seen as a submodule of N .
Now we set the definition of S-coherent rings.
Definition 3.3 A ring R is called S-coherent, if it is S-coherent as an R-module;
that is, if every finitely generated ideal of R is S-finitely presented.
Remark 3.4
1. Note that every S-Noetherian ring is S-coherent. Indeed, this follows from
the fact that when R is S-Noetherian, every finitely generated free R-module
is S-Noetherian (see the discussion before [1, Lemma 3]). Next, in Example
3.6, we give an example of an S-coherent ring which is not S-Noetherian.
2. Clearly, every coherent ring is S-coherent. The converse is not true in general. As an example of an S-coherent ring which is not coherent, we consider the trivial extension A = Z ⋉ (Z/2Z)(N) and the multiplicative set
V = {(2, 0)^n ; n ∈ N}. Since (0 : (2, 0)) = 0 ⋉ (Z/2Z)^(N) is not finitely generated, A is not coherent. Now, for every ideal I of A, (2, 0)I is finitely generated; in fact, (2, 0)I = 2J ⋉ 0, where J = {a ∈ Z; ∃b ∈ (Z/2Z)^(N), (a, b) ∈ I}. Since J is an ideal of Z, J = aZ for some element a ∈ Z. Then, (2, 0)I = 2J ⋉ 0 = (2a, 0)A. This shows that A is V-Noetherian and so V-coherent.
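The computation behind this example can be made explicit (a short added verification, not part of the original text): in the trivial extension A = Z ⋉ (Z/2Z)^(N) the multiplication is (a, b)(a′, b′) = (aa′, ab′ + a′b), so

(2, 0)(a, b) = (2a, 2b) = (2a, 0),

since 2b = 0 in (Z/2Z)^(N); hence (2, 0)I = {(2a, 0) : (a, b) ∈ I} = 2J ⋉ 0, and (0 : (2, 0)) = {(a, b) : (2a, 2b) = (0, 0)} = 0 ⋉ (Z/2Z)^(N).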
3. It is easy to show that, if M is an S-finitely presented R-module, then MS is a
finitely presented RS -module. Thus, if R is a S-coherent ring, RS is a coherent
ring. However, it seems not evident to give a condition so that the converse
holds, as done for S-Noetherian rings (see [1, Proposition 2 (f)]). In Section
4, we give another S-version of coherent rings which can be characterized in
terms of localization.
4. One would propose for an S-version of coherent rings, the following condition
“S-C: every S-finite ideal of R is S-finitely presented”. However, if R satisfies
the condition S-C, then in particular, every S-finite ideal of R is finitely
generated. So, every S-finite ideal of R is finitely presented; in particular,
R is coherent. This means that the notion of rings with the condition S-C
cannot be considered as an S-version of the classical coherence. Nevertheless,
these rings could be of particular interest as a new class of rings between the
class of coherent rings and the class of Noetherian rings.
To give an example of a coherent ring which does not satisfy the condition S-C, one could consider the Boolean ring B = ∏_{i=1}^∞ ki, where ki is the field of two elements for every i ∈ N, and the multiplicative subset V = {1, e} of B, where e = (1, 0, 0, ...) ∈ B. Indeed, the ideal ⊕_{i=1}^∞ ki is V-finite but not finitely generated.
Also, note that the following condition “S-c: every S-finite ideal of R is finitely
generated” could be of interest. Indeed, clearly one can show the following
equivalences:
(a) A ring R satisfies the condition S-C if and only if R is coherent and
satisfies the condition S-c.
(b) A ring R is coherent if and only if R is S-coherent and satisfies the
condition S-c.
(c) A ring R is Noetherian if and only if R is S-Noetherian and satisfies the
condition S-c.
To give an example of an S-coherent ring which is not S-Noetherian, we use the
following result.
Proposition 3.5 Let R = ∏_{i=1}^n Ri be a direct product of rings Ri (n ∈ N) and S = ∏_{i=1}^n Si be a cartesian product of multiplicative sets Si of Ri. Then, R is S-coherent if and only if Ri is Si-coherent for every i ∈ {1, ..., n}.
Proof. The result is proved using standard arguments.
Example 3.6 Consider the ring A given in Remark 3.4 (2). Let B be a coherent ring which has a multiplicative set W such that BW is not Noetherian. Then, A × B is V × W-coherent (by Proposition 3.5), but it is not V × W-Noetherian (by [1, Proposition 2 (f)]).
Now, we give our main result. It is the S-counterpart of the classical Chase’s result
[3, Theorem 2.2]. We mimic the proof of [5, Theorem 2.3.2]. So we use the following
lemma.
Lemma 3.7 ([5], Lemma 2.3.1) Let R be a ring, let I = (u1, u2, ..., un) be a finitely generated ideal of R (n ∈ N) and let a ∈ R. Set J = I + Ra. Let F be a free module on generators x1, x2, ..., xn+1 and let 0 −→ K −→ F −→ J −→ 0 be an exact sequence with f : F −→ J satisfying f(xi) = ui (1 ≤ i ≤ n) and f(xn+1) = a. Then there exists an exact sequence 0 −→ K ∩ F′ −→ K −→ (I : a) −→ 0, with surjection g : K −→ (I : a), where F′ = ⊕_{i=1}^n Rxi.
Theorem 3.8 The following assertions are equivalent:
1. R is S-coherent.
2. Every S-finitely presented R-module is S-coherent.
3. Every finitely generated R-submodule of a free R-module is S-finitely presented.
4. (I : a) is an S-finite ideal of R, for every finitely generated ideal I of R and
a ∈ R.
5. (0 : a) is an S-finite ideal of R for every a ∈ R and the intersection of two
finitely generated ideals of R is an S-finite ideal of R.
Proof. The proof is similar to that of [3, Theorem 2.2] (see also [5, Theorem 2.3.2]).
However, for the sake of completeness we give its proof here.
(1⇒2) Follows from Proposition 3.2 (1).
(2⇒1) Obvious.
(1⇒3) Let N be a finitely generated submodule of a free R-module F . Hence, there
exists a finitely generated free submodule F ′ of F containing N . Then, by (1), F ′
is S-coherent. Therefore, N is S-finitely presented.
(3⇒1) Trivial.
(1⇒4) Let I be a finitely generated ideal of R. Then, I is S-finitely presented.
Consider J = I + Ra, where a ∈ R. Then, J is finitely generated, and so it is
S-finitely presented. Thus, there exists an exact sequence 0 −→ K −→ Rn+1 −→
J −→ 0, where K is S-finite. By Lemma 3.7, there exists a surjective homomorphism
g : K −→ (I : a) which shows that (I : a) is S-finite.
(4 ⇒ 1) This is proved by induction on n, the number of generators of a finitely
generated ideal I of R. For n = 1, use assertion (4) and the exact sequence 0 −→
(0 : I) −→ R −→ I −→ 0. For n > 1, use assertion (4) and Lemma 3.7.
(1 ⇒ 5) Since R is S-coherent, Proposition 2.4 applied on the exact sequence 0 −→
(0 : a) −→ R −→ aR −→ 0 shows that the ideal (0 : a) is S-finite. Now, let I and J be two finitely generated ideals of R. Then, I + J is finitely generated and so S-finitely presented. Then, applying Theorem 2.5 (5) to the short exact sequence
0 −→ I ∩ J −→ I ⊕ J −→ I + J −→ 0, we get that I ∩ J is S-finite.
(5 ⇒ 1) This is proved by induction on the number of generators of a finitely
generated ideal I of R, using the two short exact sequences used in 1 ⇒ 5.
It is worth noting that, in Chase’s paper [3], coherent rings were characterized
using the notion of flat modules. Then, naturally one can ask for an S-version of
flatness that characterizes S-coherent rings similarly to the classical case. We leave
it as an interesting open question.
We end this section with some change of rings results.
The following result extends [5, Theorem 2.4.1].
Proposition 3.9 Let I be an S-finite ideal of R. Assume that I ∩ S = ∅ so that
T := {s + I ∈ R/I; s ∈ S} is a multiplicative subset of R/I. Then, an R/I-module
M is T-coherent if and only if it is S-coherent as an R-module. In particular, the following
assertions hold:
1. If R is an S-coherent ring, then R/I is a T -coherent ring.
2. If R/I is a T -coherent ring and I is an S-coherent R-module, then R is an
S-coherent ring.
Proof. Use Proposition 2.8.
Next result generalizes [5, Theorem 2.4.2]. It studies the transfer of S-coherence
under localizations.
Lemma 3.10 Let f : A → B be a ring homomorphism such that B is a flat A-module, and let V be a multiplicative set of A. If an A-module M is V-finite (resp., V-finitely presented), then M ⊗A B is an f(V)-finite (resp., f(V)-finitely presented) B-module.
Proof. Follows using the fact that flatness preserves injectivity.
Proposition 3.11 If R is S-coherent, then RT is an ST -coherent ring for every
multiplicative set T of R.
Proof. Let J be a finitely generated ideal of RT . Then, there is a finitely generated
ideal I of R such that J = IT . Since R is S-coherent, I is S-finitely presented.
Then, using Lemma 3.10, the ideal J = I ⊗R RT of RT is ST -finitely presented, as
desired.
4 Other S-version of finiteness
In this short section, we present another S-version of S-finiteness and we prove that
this notion can be characterized in terms of localization.
The following definition gives another S-version of finitely presented modules.
Definition 4.1 An R-module M is called c-S-finitely presented, if there exists a
finitely presented submodule N of M such that sM ⊆ N ⊆ M for some s ∈ S.
Remark 4.2
1. Clearly, every finitely presented module is c-S-finitely presented.
However, the converse does not hold in general. For that it suffices to consider
a coherent ring which has an S-finite module which is not finitely generated.
An example of such a ring is given in Remark 3.4 (4).
2. The inclusions in Definition 4.1 complicate the study of the behavior of c-S-finitely presented modules in short exact sequences, as done in Theorem 2.5.
This is why we think that c-S-finitely presented modules will be mostly used
by commutative rings theorists rather than researchers interested in notions
of homological algebra. This is the reason behind the use of the letter “c” in
“c-S-finitely presented”.
3. It seems that there is not any relation between the two notions of c-S-finitely
presented and S-finitely presented modules. Nevertheless, we can deduce that
in a c-S-coherent ring (defined below), every S-finitely presented ideal is c-Sfinitely presented.
It is well-known that if, for an R-module M , MS is a finitely presented RS -module,
then there is a finitely presented R-module N such that MS = NS . Nevertheless,
what does not make things work with respect to localization for S-finitely presented
modules is the fact that the module N which satisfies MS = NS is not necessarily a
submodule of M . For c-S-finitely presented modules we give the following result.
Proposition 4.3
1. If an R-module M is c-S-finitely presented, then MS is a
finitely presented RS -module.
2. A finitely generated R-module M is c-S-finitely presented if and only if there
is a finitely presented submodule N of M such that MS = NS .
Proof. 1. Obvious.
2. (⇒) Clear.
(⇐) Since M is finitely generated and MS = NS, there is an s ∈ S such that sM ⊆ N, as
desired.
Now we define the other S-version of the classical coherence of rings.
Definition 4.4 A ring R is called c-S-coherent, if every S-finite ideal of R is c-S-finitely presented.
Clearly, every coherent ring is c-S-coherent. The converse is not true in general.
The ring given in Remark 3.4 (2) can be used as an example of a c-S-coherent ring
which is not coherent.
Also, it is evident that every S-Noetherian ring is c-S-coherent. As done in Example 3.6, we use the following result to give an example of a c-S-coherent ring which
is not S-Noetherian.
Proposition 4.5 Let R = ∏_{i=1}^n Ri be a direct product of rings Ri (n ∈ N) and S = ∏_{i=1}^n Si be a cartesian product of multiplicative sets Si of Ri. Then, R is c-S-coherent if and only if Ri is c-Si-coherent for every i ∈ {1, ..., n}.
Proof. The result is proved using standard arguments.
Example 4.6 Consider a c-S-coherent ring A which is not coherent. Let B be a coherent ring which has a multiplicative set W such that BW is not Noetherian. Then, A × B is c-V × W-coherent (by Proposition 4.5), but it is not V × W-Noetherian (by [1, Proposition 2 (f)]).
The following result characterizes c-S-coherent rings in terms of localization.
Theorem 4.7 The following assertions are equivalent:
1. R is c-S-coherent.
2. Every finitely generated ideal of R is c-S-finitely presented.
3. For every finitely generated ideal I of R, there is a finitely presented ideal
J ⊆ I such that IS = JS . In particular, RS is a coherent ring.
Proof. (1⇒ 2 ⇒ 3 ) Straightforward.
(3⇒1) Let I be an S-finite ideal of R. Then, there exist an s ∈ S and a finitely
generated ideal J of R such that sI ⊆ J ⊆ I. By assertion (3), there is a finitely
presented ideal K ⊆ J such that KS = JS . Then, there is a t ∈ S such that tJ ⊆ K.
Therefore, tsI ⊆ K ⊆ I, as desired.
We end the paper with a result which relates c-S-coherent rings with the notion
of S-saturation.
In [1], the notion of S-saturation is used to characterize S-Noetherian rings. Assume that R is an integral domain. Let SatS(I) denote the S-saturation of an ideal
I of R; that is, SatS (I) := IRS ∩ R. In [1, Proposition 2 (b)], it is proved that if
SatS (I) is S-finite, then I is S-finite and SatS (I) = (I : s) for some s ∈ S. This
fact was used to prove that a ring R is S-Noetherian if and only if RS is Noetherian and, for every finitely generated ideal of R, SatS (I) = (I : s) for some s ∈ S
(see [1, Proposition 2 (f)]). The following result shows that the implication of [1,
Proposition 2 (b)] is in fact an equivalence in more general context.
Consider N ⊆ M an inclusion of R-modules. Let f : M → MS be the canonical
R-module homomorphism. Denote by f (N )RS the RS -submodule of MS generated
by f (N ). We set SatS,M (N ) := f −1 (f (N )RS ) and (N :M s) := {m ∈ M ; sm ∈ N }.
Proposition 4.8 Let N be an R-submodule of an R-module M . SatS,M (N ) is Sfinite if and only if N is S-finite and SatS,M (N ) = (N :M s) for some s ∈ S.
Proof. (⇒) Set K = SatS,M(N). Since K is S-finite, there exist an s ∈ S and a finitely generated R-module J such that sK ⊆ J ⊆ K. Thus, sN ⊆ sK ⊆ J. We can write J = Rx1 + Rx2 + · · · + Rxn for some x1, x2, ..., xn ∈ J. For each xi, there exists a ti ∈ S such that tixi ∈ N. We set t = ∏_{i=1}^n ti. Then, tsN ⊆ tsK ⊆ tJ ⊆ N. Then, N is S-finite. On the other hand, since tsK ⊆ tJ ⊆ N ⊆ K, we get K ⊆ (N :M ts). Conversely, let x ∈ (N :M ts). Then, tsx ∈ N, so x ∈ K, as desired.
(⇐) Since N is S-finite, there exist a t ∈ S and a finitely generated R-module J such that tN ⊆ J ⊆ N. On the other hand, since K = (N :M s) for some s ∈ S, sK ⊆ N. Consequently, tsK ⊆ tN ⊆ J ⊆ N ⊆ K. Therefore, K is S-finite.
The following result is proved similarly to the proof of Proposition 4.8. However,
to guarantee the preservation of finitely presented modules when multiplying by
elements of S, we assume that S does not contain any zero-divisor of R.
Proposition 4.9 Assume that every element of S is regular. Let N be an Rsubmodule of an R-module M . SatS,M (N ) is c-S-finitely presented if and only if
N is c-S-finitely presented and SatS,M (N ) = (N :M s) for some s ∈ S.
Corollary 4.10 Assume that every element of S is regular. The following assertions are equivalent:
1. For every finitely generated ideal I of R, SatS (I) is c-S-finitely presented.
2. R is c-S-coherent and, for every finitely generated ideal I of R, SatS (I) =
(I : s) for some s ∈ S.
Acknowledgement. A part of this work was presented by the second author at the
Scientific day of Algebra “JA GRAAF 2016” held in Faculty of Sciences of Rabat
(May 17, 2016).
The authors would like to thank Professor Zine El Abidine Abdelali for his helpful
comments during the preparation of this paper.
References
[1] D. D. Anderson and T. Dumitrescu, S-Noetherian rings, Comm. Algebra, 30
(2002), 4407–4416.
[2] D. D. Anderson, D. J. Kwak and M. Zafrullah, Agreeable domains, Comm.
Algebra, 23 (1995), 4861–4883.
[3] S. U. Chase, Direct products of modules, Trans. Amer. Math. Soc. 97 (1960),
457–473.
[4] D. Costa, Parameterizing families of non-Noetherian rings, Comm. Algebra 22
(1994), 3997–4011.
[5] S. Glaz, Commutative coherent rings, Lecture Notes in Math., Springer-Verlag,
Berlin, 1989.
[6] A. Hamed and S. Hizem, Modules Satisfying the S-Noetherian Property and
S-ACCR, Comm. Algebra 44 (2016), 1941–1951.
[7] A. Hamed and S. Hizem, S-Noetherian rings of the forms A[X] and A[[X]],
Comm. Algebra 43 (2015), 3848–3856.
[8] E. Hamann, E. Houston and J. L. Johnson, Properties of uppers to zero in
R[X], Com. Alg. 23 (1995), 4861–4883.
[9] I. Kaplansky, Projective modules, Ann. Math. 68 (1958), 372–377.
[10] H. Kim, M. O. Kim and J. W. Lim, On S-strong Mori domains, J. Algebra,
416 (2014), 314–332.
[11] J. W. Lim and D. Y. Oh, S-Noetherian properties on amalgamated algebras
along an ideal, J. Pure Appl. Algebra 218 (2014), 1075–1080.
[12] J. W. Lim and D. Y. Oh, S-Noetherian properties of composite ring extensions,
Comm. Algebra 43 (2015), 2820–2829.
[13] W. W. McGovern, G. Puninski and P. Rothmaler, When every projective module
is a direct sum of finitely generated modules, J. Algebra 31 (2007), 454–481.
[14] L. Zhongkui, On S-Noetherian rings, Arch. Math. (Brno) 43 (2007), 55–60.
Fast construction of efficient composite likelihood equations
arXiv:1709.03234v1 [math.ST] 11 Sep 2017
Zhendong Huang and Davide Ferrari
∗
School of Mathematics and Statistics, The University of Melbourne
Abstract
Growth in both size and complexity of modern data challenges the applicability
of traditional likelihood-based inference. Composite likelihood (CL) methods address
the difficulties related to model selection and computational intractability of the full
likelihood by combining a number of low-dimensional likelihood objects into a single
objective function used for inference. This paper introduces a procedure to combine
partial likelihood objects from a large set of feasible candidates and simultaneously
carry out parameter estimation. The new method constructs estimating equations balancing statistical efficiency and computing cost by minimizing an approximate distance
from the full likelihood score subject to a `1 -norm penalty representing the available
computing resources. This results in truncated CL equations containing only the most
informative partial likelihood score terms. An asymptotic theory within a framework
where both sample size and data dimension grow is developed and finite-sample properties are illustrated through numerical examples.
Keywords: Composite likelihood estimation, likelihood truncation, `1 -penalty.
∗
Corresponding author: Davide Ferrari, School of Mathematics and Statistics, The University of Mel-
bourne, Parkville, VIC 3010, Australia. E-mail: [email protected].
1 Introduction
Since the idea of likelihood was fully developed by Fisher (1922), likelihood-based inference
has played a role of paramount importance in statistics. The complexity of modern data,
however, poses nontrivial challenges to traditional likelihood methods. One issue is related
to model selection, since the full likelihood function can be difficult or impossible to specify
in complex multivariate problems. Another difficulty concerns computing and the necessity
to obtain inferences quickly. These challenges have motivated the development of composite
likelihood (CL) methods, which avoid intractable full likelihoods by compounding a set of
low-dimensional likelihood objects. Besag (1974) pioneered CL inference in the context of
spatial data; Lindsay (1988) developed CL inference in its generality. Due to its flexible
framework and established theory, the CL framework has become a popular tool in many
areas of applied statistics; see Varin et al. (2011) for an overview of CL inference and common
applications.
Consider n independent observations on the d × 1 random vector X = (X1 , . . . , Xd )T
with pdf in the parametric family {f(x; θ), x ∈ X, θ ∈ Θ ⊆ R^p}, where θ∗ ∈ Θ denotes the true parameter. In this paper, we are mainly concerned with large data sets where both the data dimension d and the sample size n are large. Given i.i.d. observations X^(1), . . . , X^(n) on X, we write EFn(g) = n^{−1} ∑_{i≤n} g(X^(i)) for the empirical mean of the function g, where Fn(x) = n^{−1} ∑_{i≤n} I(X^(i) ≤ x) is the empirical cdf, and use E(g)
to denote its expected value. The operator “∇” denotes differentiation with respect to θ.
In the CL setting, the maximum likelihood score uM L (·, θ) = ∇ log f (·, θ) and the associated estimating equations EFn uM L (θ) = 0 are intractable due to difficulties in computing
or specifying the full d-dimensional density f (·; θ). Suppose, however, that one can obtain
m tractable pdfs f1 (s1 ; θ), . . . , fm (sm ; θ) for sub-vectors S1 , . . . , Sm of X, where each Sj has
dimension much smaller than d. For example, S1 could represent a single element of X like
X1 , a variable pair like (X1 , X2 ), or a conditional sub-vector like (X1 , X2 )|X1 . Typically, the
total number of sub-models m grows quickly with d; for instance, taking all variable pairs
in X results in m = d(d − 1)/2 candidate sub-likelihoods. The specific choice for the set
of pdfs {fj , j = 1, . . . , m} is sometimes referred to as CL design (Lindsay et al., 2011) and
is typically specified by the practitioner. For simplicity, here the CL design is treated as given, and we assume f1 = · · · = fm, as is often the case in applications.
We focus on the maximum composite likelihood estimator (MCLE), θ̂(w), defined as the solution to the CL estimating equations

0 = EFn[u(θ, w)] = EFn[w1 u1(θ) + · · · + wm um(θ)],   (1)
where uj (·, θ) = ∇ log{fj (·; θ)} is the jth partial score (sub-likelihood score) associated with
the jth subset Sj of X. Here w ∈ Rm is a given vector of coefficients to be determined,
which we refer to as composition rule. In addition to well-known computational advantages
compared to MLE and flexible modeling, the MCLE enjoys first-order properties analogous
to those of the maximum likelihood estimator (MLE). Since the partial scores commonly
define unbiased estimating equations (i.e. Euj(θ) = 0 at θ = θ∗, for all 1 ≤ j ≤ m), the CL score u(θ, w) in (1) is also unbiased, a property leading to consistency of θ̂(w). Unfortunately, the MCLE does not have the same second-order properties as the MLE, since the asymptotic variance of θ̂(w) is generally different from the inverse of the Fisher information −E[∇uML(θ∗)], with the two coinciding only in special families of models.
The choice of the composition rule w is crucial in determining both the efficiency and the computing cost associated with θ̂(w). Established theory of unbiased estimating equations prescribes to find w so as to minimize the asymptotic variance of θ̂(w) (Heyde, 2008, Chapter 2), given by the inverse of the p × p Godambe information matrix

G(θ, w) = E{∇u(θ, w)} var{u(θ, w)}^{−1} E{∇u(θ, w)}.   (2)

Although theoretically appealing, this is a notoriously difficult task due to the well-known instability of common estimators of the term var{u(θ, w)} in G(θ, w) (Lindsay et al., 2011).
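For illustration, a plug-in estimate of (2) from sample partial scores can be formed as in the Python sketch below; this is not part of the paper, and the array layouts and function name are assumptions made here.

```python
import numpy as np

def godambe_information(scores, hessians, w):
    """Plug-in estimate of the Godambe information G(theta, w) in (2).

    scores:   array (n, m, p); scores[i, j] = u_j(X_i, theta) at a fixed theta
    hessians: array (n, m, p, p); hessians[i, j] = grad_theta u_j(X_i, theta)
    w:        composition rule, array (m,)
    """
    n = scores.shape[0]
    u = np.einsum("imj,m->ij", scores, w)             # composite score per observation, (n, p)
    H = np.einsum("impq,m->pq", hessians, w) / n      # empirical E{grad u(theta, w)}, (p, p)
    J = u.T @ u / n                                   # empirical var{u(theta, w)}, (p, p)
    return H.T @ np.linalg.solve(J, H)                # E{grad u}' var{u}^{-1} E{grad u}
```

The instability mentioned in the text shows up in J, whose sample estimate degrades quickly as the number of retained scores grows relative to n.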
On the other hand, the common practice of retaining all terms in (1) by choosing fixed wj ≠ 0 for all j ≥ 1 (e.g. wj = 1, j ≥ 1) is undesirable from both computational and statistical efficiency perspectives, especially when the partial scores uj exhibit pronounced correlation. Cox and Reid (2004) discuss the detrimental effect caused by the presence of many correlated scores on the variance of θ̂(w) when n is small compared to m in pairwise likelihood estimation. In the most serious case where the correlation between scores is overwhelming, keeping all the terms in (1) may lead to lack of consistency for the implied MCLE θ̂(w).
Motivated by the above considerations, we introduce a new method called sparse composite likelihood estimation and selection (SCLE) consisting of two main steps: a truncation
Step (T-Step) and an estimation Step (E-Step). In the T-Step, the composition rule w is
obtained by minimizing an approximate distance between the unknown full likelihood score
uM L (θ) and the CL score u(θ, w), subject to a `1 -norm constraint on w. This step may be
viewed as maximizing statistical accuracy for given afforded computing. Alternatively it may
be interpreted as minimizing the computing cost for given level of statistical efficiency. Due
to the geometry of the ℓ1-norm, the resulting composition rule, say ŵ, contains a number of non-zero elements (see Lemma 3.1). While the most useful terms for improving the MCLE's statistical accuracy are retained, the noisy sub-likelihoods contributing little or no improvement are dropped. In the E-step, we solve the estimating equations (1) with w = ŵ and find the final estimator θ̂(ŵ). Compared to traditional CL estimation, the main advantage of our
approach is to reduce the computational burden, while retaining relatively high efficiency
in large data sets. The reduced number of terms in the estimating equations (1) translates
into fast computing and enhanced stability for the final estimator at a relatively small cost
in terms of statistical efficiency.
The remainder of this paper is organized as follows. In Section 2, we describe the main
methodology for simultaneous likelihood truncation and parameter estimation. In Section
3, we study the properties of the truncated composition rule and of the implied estimator
within a framework where both the sample size n and the data dimension d are allowed to
diverge. Section 4 illustrates the properties of our methodology in the context of estimation
of location and scale for multivariate normal models. In Section 5, we study the trade-off
between computational and statistical efficiency in finite samples through numerical simulations. Section 6 concludes with final remarks. Technical lemmas used in our main results
are deferred to the appendix.
2 Main methodology
Throughout the paper, we consider unbiased partial scores {uj (θ), 1 ≤ j ≤ m} satisfying
Euj (θ) = 0, for all 1 ≤ j ≤ m,
(3)
when θ = θ∗ and assume that θ∗ is the unique solution for all the equations in (3). The
approach described in this section is applicable to problems with arbitrary sample size n and
data dimension d, but we are mainly concerned with the situation where the data dimension
d (and number of available sub-likelihood objects m) is large compared to the sample size
n. Although we focus on log-likelihood partial scores for concreteness, our methodology and
the properties in Section 3 remain essentially unchanged if uj (θ) is any arbitrary unbiased
M-estimating equation. For instance, when θ is a location parameter, a more appropriate
choice in the presence of outliers may be the Huber-type partial score uj(θ) = ψ(sj − θ), where ψ(z) = −k if z ≤ −k, ψ(z) = z if |z| ≤ k and ψ(z) = k if z ≥ k, with k > 0. Another suitable choice in the same setting is the Lq-likelihood estimating equation of Ferrari and Yang (2010), defined by uj(θ) = ∇ logq{fj(sj; θ)}, where logq(z) = log(z) if q = 1, and logq(z) = (z^{1−q} − 1)/(1 − q) if q ≠ 1.
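For illustration only, a minimal Python sketch of these two robust partial scores is given below, assuming a scalar location parameter and, for the Lq case, a normal sub-model with known scale; the function names are not from the paper.

```python
import numpy as np

def huber_partial_score(s_j, theta, k=1.345):
    """Huber-type partial score u_j(theta) = psi(s_j - theta), with psi clipping at +/- k."""
    return np.clip(s_j - theta, -k, k)

def lq_partial_score(s_j, theta, sigma=1.0, q=0.9):
    """Lq-likelihood partial score grad_theta log_q f_j(s_j; theta)
    = f_j(s_j; theta)^(1-q) * grad_theta log f_j(s_j; theta),
    written out here for an assumed N(theta, sigma^2) sub-model."""
    f = np.exp(-0.5 * ((s_j - theta) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    grad_log_f = (s_j - theta) / sigma ** 2
    return f ** (1.0 - q) * grad_log_f
```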
In the rest of the paper we use U (θ) to denote the p × m matrix with column vectors u1 (θ), . . . , um (θ) and define the m × m matrix S(θ) = U (θ)T U (θ) with (jk)th entry
{S(θ)}j,k = uj(θ)^T uk(θ). We write UA(θ) for the sub-matrix of U(θ) with columns corre-
sponding to A ⊆ {1, . . . , m}, while U\A (θ) denotes the sub-matrix containing the remaining
columns. Accordingly, we define the |A| × |A| matrix SA (θ) = U (θ)TA UA (θ) and use wA
to denote the sub-vector of w with elements {wj , j ∈ A}, while w\A represents the vector
containing all the elements in w not in wA .
2.1 Sparse and efficient estimating equations
Our main objective is to solve the CL estimating equations 0 = EFn u(θ, w) defined in (1)
with respect to θ using coefficients w = wλ(θ) obtained by minimizing the ideal criterion

Qλ(θ, w) = (1/2) E‖uML(θ) − ∑_{j=1}^m wj uj(θ)‖²₂ + λ ∑_{j=1}^m αj |wj|,   (4)

where ‖ · ‖2 denotes the Euclidean norm, λ ≥ 0 is a given constant, and the αj's are pre-set
constants not depending on the data. For clarity of exposition, we set αj = 1 for all j ≥ 1 in
the remainder of the paper. The optimal solution wλ (θ) is interpreted as one that maximizes
the statistical accuracy of the implied CL estimator, subject to a given level of computing.
Alternatively, wλ(θ) may be viewed as minimizing the complexity of the CL equations, subject to a given efficiency compared to the MLE. The tuning constant λ balances the trade-off between statistical efficiency and computational burden.
The first term in Qλ (θ, w) aims to obtain efficient estimating equations by finding a CL
score close to the ML score. When λ = 0 and θ = θ∗ , the composition rule w0∗ = w0 (θ∗ )
is optimal in the sense that the score function u(θ, w0∗ ) is closest to the MLE score uM L (θ).
Although this choice gives estimators with good statistical efficiency, it offers no control
for the CL score complexity since all the partial likelihood scores are included in the final estimating equation. The second term λ ∑_{j=1}^m αj |wj| in (4) is a penalty discouraging overly
complex estimating equations. In Section 3.1, we show that typically this form of penalty
implies a number of elements in wλ (θ) exactly zero for any λ > 0. For relatively large λ,
many elements in wλ (θ) are exactly zero, thus simplifying considerably the CL estimating
equations 0 = EFn u(θ, wλ (θ)). When a very large fraction of such elements is zero, we say
that wλ (θ) and the CL equations 0 = EFn u(θ, wλ (θ)) are sparse. Sparsity is a key advantage
of our approach to reduce the computational burden when achievable without loosing much
statistical efficiency. On the other hand, if λ is too large, one risks to miss the information
in some useful data subsets which may otherwise improve statistical accuracy.
2.2 Empirical criterion and one-step estimation
Obvious difficulties related to direct minimization of the ideal criterion Qλ (θ, w) are the presence of the intractable likelihood score uM L and the expectation depending on the unknown
parameter θ∗ . To address these issues, first note that, up to a negligible term not depending
on w, Criterion (4) can be written as

(1/2) E‖∑_{j=1}^m wj uj(θ)‖²₂ − ∑_{j=1}^m wj E[uML(θ)^T uj(θ)] + λ ∑_{j=1}^m αj |wj|.   (5)
If θ = θ∗ , we have E[uM L (θ)uj (θ)T ] = E[uj (θ)uj (θ)T ]. To see this, recall that partial
scores are unbiased and differentiate both sides of 0 = Euj (θ) under appropriate regularity
conditions. This result is used to eliminate the explicit dependency on the score uM L . Finally,
replacing expectations in (5) by empirical averages leads to the following empirical objective:

Q̂λ(θ, w) = (1/2) EFn‖∑_{j=1}^m wj uj(θ)‖²₂ − ∑_{j=1}^m wj EFn[uj(θ)^T uj(θ)] + λ ∑_{j=1}^m αj |wj|.   (6)
Under appropriate regularity conditions, the empirical criterion (6) estimates consistently
the population criterion (4) up to an irrelevant constant not depending on w, with the caveat
that θ must be close to θ∗. These considerations motivate the following estimation strategy:

1) T-Step. Given a preliminary root-n consistent estimator θ̂, compute the truncated composition rule ŵλ by solving

ŵλ = argmin_{w∈R^m} Q̂λ(θ̂, w).   (7)

2) E-Step. Update the parameter estimator by the one-step Newton-Raphson iteration

θ̂λ = θ̂ − [EFn ∇u(θ̂, ŵλ)]^{−1} EFn u(θ̂, ŵλ).   (8)
Theorem 3.2 shows that the convex minimization problem in the T-Step has a unique solution. Particularly, let Ê ⊆ {1, . . . , m} be the subset of partial scores such that

|EFn{uj(θ̂)^T rj(θ̂, ŵλ)}| ≥ λ,   (9)

where rj is the pseudo-residual defined by rj(θ, w) = uj(θ) − u(θ, w), and write \Ê for the set {1, . . . , m} \ Ê. Then the solution of the T-Step is

ŵλ,Ê = {EFn SÊ(θ̂)}^{−1} [diag{EFn SÊ(θ̂)} − λ sign(ŵλ,Ê)],   ŵλ,\Ê = 0,   (10)

where: SÊ = UÊ^T UÊ and UÊ is a matrix with column vectors {uj, j ∈ Ê}; sign(w) is the vector sign function with jth element taking values −1, 0 and 1 if wj < 0, wj = 0 and wj > 0, respectively; and diag(A) denotes the diagonal of the square matrix A.
More insight on the meaning of (9) may be useful. Differentiating (5) in wj ≠ 0 and then expanding around θ∗ under Conditions C.1 and C.2 in Section 3.1 gives

EFn{uj(θ̂)^T rj(θ̂, w)} = E{uj(θ∗)^T [uML(θ∗) − u(θ∗, w)]} + op(1).   (11)

Combining (9) and (11) highlights that the jth partial likelihood score uj(θ) is selected when it is sufficiently correlated with the residual difference uML(θ) − u(θ, w). Hence, our criterion retains only those uj's which are maximally useful to explain the gap between the
full likelihood score uM L (θ) and the CL score u(θ, w), while it drops the remaining scores.
When λ = 0, we have Ê = {1, . . . , m}, meaning that the corresponding composition rule ŵ0 does not contain zero elements. From (10) for λ = 0 it is required that the empirical covariance matrix for all partial scores, EFn S(θ̂), is non-singular, which is violated when n < m. Even for n > m, however, EFn S(θ̂) may be nearly singular due to the presence of largely correlated partial scores. On the other hand, setting λ > 0 always gives a non-singular matrix EFn SÊ(θ̂) and guarantees existence of ŵλ,Ê.
The proposed approach requires an initial root-n consistent estimator, which is often
easy to obtain when the partial scores are unbiased. One simple option entails solving
EFn u(w, θ) = 0 with w = (1, . . . , 1)T . If m is large, one may choose w by the stochastic CL
strategy of Dillon and Lebanon (2010), where the elements of w may be set as either 0 or
1 randomly according to some user-specified scheme. Although the initial estimator θ̂ could be quite inefficient, the one-step update (8) improves upon this situation. Moreover, the estimator θ̂λ and the coefficients ŵλ can be refined by iterating the T-Step (with θ̂ = θ̂λ) and the E-Step a few times.
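To make the two steps concrete, the following Python sketch solves the T-Step objective (6) by cyclic coordinate descent with soft-thresholding (a simple substitute for the LARS implementation described in Section 2.3) and then applies the one-step update (8). The array layouts, function names and the use of coordinate descent are assumptions made here for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def t_step(scores, lam, n_iter=500, tol=1e-8):
    """T-Step: minimize 0.5 * w' A w - b' w + lam * ||w||_1, the empirical criterion (6),
    with A = E_Fn S(theta_hat) and b = diag(A), by coordinate descent.

    scores: array (n, m, p); scores[i, j] = u_j(X_i, theta_hat) evaluated at a
            preliminary root-n consistent estimate theta_hat.
    """
    n, m, p = scores.shape
    A = np.einsum("imp,ikp->mk", scores, scores) / n   # E_Fn S(theta_hat), (m, m)
    b = np.diag(A).copy()
    w = np.zeros(m)
    for _ in range(n_iter):
        w_old = w.copy()
        for j in range(m):
            if A[j, j] == 0.0:
                continue                                # degenerate score, leave w_j at 0
            partial = b[j] - A[j] @ w + A[j, j] * w[j]  # excludes the current w_j term
            w[j] = soft_threshold(partial, lam) / A[j, j]
        if np.max(np.abs(w - w_old)) < tol:
            break
    return w

def e_step(theta_hat, w, score_fn, hessian_fn, X):
    """E-Step: one-step Newton-Raphson update (8) with the truncated rule w.

    score_fn(X, theta)   -> array (n, m, p) of partial scores
    hessian_fn(X, theta) -> array (n, m, p, p) of partial score gradients
    """
    n = X.shape[0]
    U = score_fn(X, theta_hat)
    u_bar = np.einsum("imp,m->p", U, w) / n                        # E_Fn u(theta_hat, w)
    H = np.einsum("impq,m->pq", hessian_fn(X, theta_hat), w) / n   # E_Fn grad u(theta_hat, w)
    return theta_hat - np.linalg.solve(H, u_bar)
```

Because the penalized quadratic in (6) is convex, the coordinate-descent solution coincides with the closed form (10) on the selected set, and the zero coordinates correspond to the dropped sub-likelihoods.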
2.3 Computational aspects: LARS implementation and selection of λ
The empirical composition rule ŵλ in (7) cannot be computed using derivative-based approaches due to the non-differentiability of Q̂λ(θ̂, w). To address this issue, we propose an implementation based on the least-angle regression (LARS) algorithm of Efron et al. (2004), originally developed for sparse parameter estimation in the context of linear regression models. For given θ = θ̂, our implementation of LARS minimizes Q̂λ(θ̂, w) by including one score uj(θ̂) at a time in the composite likelihood score u(θ̂, w). In each step, the score with the largest correlation with the currently available residual difference uj(θ̂) − u(θ̂, w) is included, followed by an adjustment step on w. The numerical examples in Section 5 suggest that our implementation of the LARS algorithm for CL selection is very fast. In at most m × p steps, it returns a path of estimated composition rules ŵλ1, . . . , ŵλm, where λj here is the value of the tuning constant λ in (6) at which the jth partial score enters the CL estimating
equation.
Selection of λ is of practical importance since it balances the trade-off between statistical
and computational efficiency. For a given budget on afforded computing, say λ∗, we include one partial score at a time, for example using the LARS approach above, and stop when we reach λ̂ = max{λ : φ(λ) > τ}, for some user-specified 0 < τ ≤ 1, where

φ(λ) = [tr{EFn Sλ(θ̂)} / tr{EFn S(θ̂)}] I(λ > λ∗).   (12)

Here EFn Sλ = EFn Uλ^T Uλ denotes the empirical covariance matrix for the selected partial scores indexed by the set Êλ = {j : ŵλ,j ≠ 0}. The criterion φ(λ) can be viewed as the proportion of score variability explained by the currently selected partial scores. In practice, we choose τ close to 1, such as τ = 0.9, 0.95 or 0.99. If the computing budget is reached, we set λ̂ = λ∗. In analogy with principal component analysis, the selected combination of scores
accounts for the largest variability in the collection of empirical scores.
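A minimal sketch of this stopping rule is given below, assuming a precomputed LARS-type path of pairs (λj, ŵλj) and the same (n, m, p) score layout used in the earlier sketches; the function names are illustrative and not from the paper.

```python
import numpy as np

def phi(scores, w_lam):
    """Criterion (12): tr{E_Fn S_lambda} / tr{E_Fn S}, i.e. the share of total score
    variability carried by the partial scores selected by w_lam (its non-zero entries)."""
    n = scores.shape[0]
    col_var = np.einsum("imp,imp->m", scores, scores) / n   # E_Fn ||u_j||^2 for each j
    selected = w_lam != 0
    return col_var[selected].sum() / col_var.sum()

def select_lambda(scores, lambda_path, w_path, lam_star, tau=0.95):
    """Return the largest lambda on the path with phi(lambda) > tau and lambda > lam_star;
    fall back to lam_star if the computing budget binds."""
    for lam, w in zip(lambda_path, w_path):   # path assumed ordered from largest lambda down
        if lam > lam_star and phi(scores, w) > tau:
            return lam, w
    return lam_star, None
```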
3 Properties
This section investigates the asymptotic behavior of the sparse composition rule ŵλ and the corresponding SCLE θ̂λ defined in (8) within a setting where m – the number of candidate partial likelihoods – is allowed to grow with the sample size n. We use m∗ = E‖uML(θ∗)‖²₂
to denote the trace of Fisher information based on the full likelihood. Here m∗ may be
interpreted as the maximum knowledge about θ if the full likelihood score uM L were available.
Although m∗ can grow with m, reflecting the rather natural notion that one can learn more about the true model as the overall data size increases, it is not allowed to grow as fast as n; e.g., m∗ = o(log n). This is a rather common situation in CL estimation occurring, for
instance, when the sub-likelihood scores are substantially correlated or they are independent
but with heterogeneous and increasing variances (see examples in Section 4.1).
3.1 Sparsity and optimality of the composition rule
In this section, we give conditions ensuring uniqueness of the empirical composition rule ŵλ and weak convergence to its population counterpart wλ∗. To this end, we work with θ within the root-n neighborhood of θ∗, Θn = {θ : ‖θ − θ∗‖ < c0 n^{−1/2}}, for some c0 > 0, and assume the
following regularity conditions on S(θ):
C.1 There exist positive constants c1 , c2 > 0 such that E{supθ∈Θn S(θ)j,k } < c1 , and
V ar{supθ∈Θn S(θ)j,k } < c2 , for all j, k ≥ 1.
C.2 Each element ES(θ)j,k is continuous with uniformly bounded first and second order
derivatives on Θn .
Our analysis begins by deriving the Karush-Kuhn-Tucker Condition (KKTC) (Kuhn, 2014)
for the population objective Qλ (θ∗ , w) defined in (4). The KKTC characterizes the amount
of sparsity – and the computational complexity – associated with the selected estimating equations depending on the value of the tuning constant λ. Let c(θ, w) = diag(S(θ)) − S(θ)w, where S(θ) = U(θ)^T U(θ) is as defined in Section 2.
Lemma 3.1 (KKTC). Under Condition C.1, the minimizer wλ∗ of Qλ(θ∗, w) defined in (4) satisfies

E{c(θ∗, wλ∗)j} = λ · γj,   j = 1, . . . , m,

where γj ∈ {1} if w∗λ,j > 0, γj ∈ {−1} if w∗λ,j < 0, and γj ∈ [−1, 1] if w∗λ,j = 0; c(·, ·)j is the jth element of the vector c(·, ·).
Proof. Let dj = −Ec(θ∗, wλ∗)j + λ · sign(w∗λ,j) and note that the Taylor expansion of Qλ(θ∗, wλ∗ + ε) in εj around w∗λ,j ≠ 0 is

Qλ(θ∗, wλ∗ + ε) = Qλ(θ∗, wλ∗) + dj εj + (ε²j/2) tr{Ij(θ∗)},   (13)

where ε = (0, . . . , εj, . . . , 0)^T, and Ij(θ∗) = E[uj(θ∗)uj(θ∗)^T] is the p × p Fisher information matrix for the jth likelihood component, and tr{Ij(θ∗)} < c1 by Condition C.1.
If w∗λ,j ≠ 0, we have dj = 0. Otherwise, if dj ≠ 0, choosing εj such that sign(εj) = −sign(dj) and |εj| < 2|dj|/c1 implies Qλ(θ∗, wλ∗ + ε) < Qλ(θ∗, wλ∗), but this is a contradiction since wλ∗ minimizes Qλ(θ∗, ·). If w∗λ,j = 0, we need to show |Ec(θ∗, wλ∗)j| ≤ λ. Assume |Ec(θ∗, wλ∗)j| > λ and take εj such that sign(εj) = sign(Ec(θ∗, wλ∗)j) and |εj| < 2(|Ec(θ∗, wλ∗)j| − λ)/c1. Then dj εj + ε²j tr{Ij(θ∗)}/2 < −|εj|(|Ec(θ∗, wλ∗)j| − λ) + ε²j c1/2 < 0, which implies Qλ(θ∗, wλ∗ + ε) < Qλ(θ∗, wλ∗). But this is contradicted by wλ∗ being the minimizer of Qλ. Hence, Ec(θ∗, wλ∗)j = λ · γj, for all j = 1, . . . , m.
An argument analogous to that used in the proof of Lemma 3.1 leads to the KKTC for ŵλ, the minimizer of the empirical loss Q̂λ(θ̂, w). Specifically, for ŵλ we have EFn c(θ̂, ŵλ)j = λ · γ̂j, j = 1, . . . , m, where γ̂j ∈ {1} if ŵλ,j > 0, γ̂j ∈ {−1} if ŵλ,j < 0, and γ̂j ∈ [−1, 1] if ŵλ,j = 0.
Lemma 3.1 has important implications in our current setting, since it relates λ to the
size of the covariance between the jth sub-likelihood score uj (θ) and the residual difference
uML(θ) − u(θ, w) at θ = θ∗. Particularly, if such a covariance is sufficiently small, i.e.

λ > |E{c(θ∗, wλ∗)j}| = |E{uj(θ∗)^T [u(θ∗, wλ∗) − uj(θ∗)]}| = |E{uj(θ∗)^T [u(θ∗, wλ∗) − uML(θ∗)]}|,

then the corresponding coefficient is w∗λ,j = 0. Thus, the tuning parameter λ controls the level
of sparsity of the composite score u(θ∗ , wλ∗ ) by forcing the weights of those non-important
score components with small pseudo-covariance c(θ∗ , wλ∗ )j to be exactly zero.
For uniqueness of wλ∗ and ŵλ, a simple condition is that the partial scores cannot replace
each other, i.e. we require that the scores are in general position. Specifically, we say that
the score components u1 , . . . , um are in general position if any affine subspace L ⊂ Rm of
dimension l < m contains at most l + 1 elements of {±u1 , ..., ±um } excluding antipodal pairs
of points.
12
C.3 The partial scores uj (x, θ), j ≥ 1, are continuous and in general position with probability 1 for all θ ∈ Θn .
Theorem 3.2. Under Conditions C.1-C.3 the solution of the T-Step, w
bλ , defined in (7) is
unique and is given by (10) for any λ > 0. Moreover, w
bλ contains at most np ∧ m non-zero
elements.
Proof. Let Eb = {j ∈ {1, . . . , m} : |b
γj | = 1} to be the index set of non-zero elements of w
bλ
where where γj is as defined after Lemma 3.1. First note that the composite likelihood score
bw
b Tw
b ·) defined in (6), due
bλ (θ,
u(θ,
bλ ) = U (θ)
bλ is unique for all solutions w
bλ which minimize Q
b w). Uniqueness of u(θ,
bw
bλ (θ,
to strict convexity of Q
bλ ) implies that γ
b and the corresponding
index set Eb are unique by Lemma 3.1.
b we first note that the square matrix
Next, to show uniqueness of w
bλ,j for all j ∈ E,
b has full rank. Otherwise, some row the matrix can be written as a linear
b T U b(θ)]
EFn [UEb(θ)
E
b
b = P aj EFn [uj (θ)
b T U b(θ)].
b T U b(θ)]
b i.e. EFn [uk (θ)
combination of other rows in the set E,
k6=j
E
E
P
b 2 −P aj EFn uj (θ)
b 2 = λb
Then Lemma 3.1 implies also the event EFn uk (θ)
γk − j6=k aj λb
γj for
j6=k
the same set of coefficients aj s, which has probability equals to 0 since each uj is continuous
b b(θ)
b T ] has full rank, meaning that the size of Eb satisfies |E|
b ≤
and random. Thus, E[UEb(θ)U
E
b T ] implies strict convexity of
b b(θ)
b full rank of E[U b(θ)U
np ∧ m. For fixed wj = 0, j ∈ \E,
E
E
b w b) where w b is the sub-vector of w containing elements indexed by E.
bλ (θ,
b Hence, w
Q
bλ,Eb
λ,E
λ,E
is unique.
The arguments in Theorem 3.2 go through essentially unchanged for the population
composition rule wλ∗ by showing the full rank of E[UE (θ∗ )T UE (θ∗ )] using Condition C.3 and
Lemma 3.1, where E is the index set of non-zero elements in wλ∗ . This implies also uniqueness
of wλ∗ . Next, we turn to convergence of the empirical composition rule w
bλ to wλ∗ , thus showing
bλ (θ, w) (6) is a suitable replacement for the intractable criterion Qλ (θ, w)
that the objective Q
(4). Since criterion
b w) = wT EFn {S(θ)}w/2
b
b T w + λkwk1
bλ (θ,
Q
− EFn {diag(S(θ))}
13
is used as an approximation of the population criterion Qλ (θ∗ , w) defined in (4), clearly the
b and ES(θ∗ ) affects the accuracy of such an approximation. Let
distance between EFn S(θ)
b and
r1 = supθ∈Θn kEFn S(θ) − ES(θ∗ )k2 be the supreme variation between matrices EFn S(θ)
ES(θ∗ ), where kAk2 is the matrix induced 2-norm for matrix A. As n → ∞, the rate at
which r1 goes to 0 depends mainly on the number of partial scores m and the behavior of
the random elements in S, which can vary considerably in different models. For example,
when the elements of S(θ) are sub-Gaussian, one needs only log(m)/n = o(1) (Cai et al.,
2010). In more general cases, m4 /n = o(1) suffices to ensure r1 = op (1) (Vershynin, 2012).
Next we investigate how m and m∗ should increase compared to r1 when λ → 0 as n → ∞
to ensure a suitable behavior for w
bλ . To obtain weak convergence of w
bλ to wλ∗ , we introduce
the additional requirement that the covariance matrix of the partial scores ES(θ∗ ) does not
shrink to zero too fast.
2
C.4 There exists a sequence cn , such that cn r1λm∗2 → ∞ and xT ES(θ∗ )x ≥ cn kxk1 for any
x ∈ Rm , as n → ∞.
Condition C.4 is analogous to the compatibility condition in `1 -penalized least-squares estimation for regression (Bühlmann and Van De Geer, 2011), where it ensures a good behavior
of the observed design matrix of regressors. Differently from the sparse regression setting,
where Condition C.4 is applied to the set of true nonzero regression coefficients, here no
sparsity assumption on the composition rule w is imposed.
P
Theorem 3.3. Under Conditions C.1-C.4, if r1 m∗ 2 λ−2 = op (1) then kw
bλ − wλ∗ k1 → 0, as
n → ∞.
b w
Proof. From Lemma 7.3, EFn kU (θ)(
bλ − wλ∗ )k22 = op (1). Note that
b w
b w
EFn kU (θ)(
bλ − wλ∗ )k22 = (w
bλ − wλ∗ )T EFn S(θ)(
bλ − wλ∗ )
n
o
∗
∗ T
∗
∗
∗ T
b
=(w
bλ − wλ ) ES(θ )(w
bλ − wλ ) + (w
bλ − wλ ) EFn S(θ) − ES(θ ) (w
bλ − wλ∗ ),
14
and the second term of the last equality is Op (r1 m∗ 2 λ−2 ) by Lemma 7.2. Thus, (w
bλ −
P
wλ∗ )T ES(θ∗ )(w
bλ −wλ∗ ) = Op (r1 m∗ 2 λ−2 ), which implies kw
bλ −wλ∗ k1 → 0 by Condition C.4.
Corollary 3.4. Let λ be a sequence such that λ → 0 as n → ∞. Under Conditions C.1-C.4,
if r1 m∗ 2 λ−2 = op (1), we have
P
sup EFn ku(θ, w
bλ ) − u(θ, wλ∗ )k2 → 0,
as n → ∞.
(14)
θ∈Θn
Proof. From Lemma 7.3, $E_{F_n}\|u(\hat\theta, \hat w_\lambda) - u(\hat\theta, w_\lambda^*)\|_2^2 = o_p(1)$. The result follows by noting that for any $\theta \in \Theta_n$, the difference $E_{F_n}\|u(\theta, \hat w_\lambda) - u(\theta, w_\lambda^*)\|_2^2 - E_{F_n}\|u(\hat\theta, \hat w_\lambda) - u(\hat\theta, w_\lambda^*)\|_2^2 = (\hat w_\lambda - w_\lambda^*)^T E_{F_n}[S(\theta) - S(\hat\theta)](\hat w_\lambda - w_\lambda^*)$ is $o_p(1)$ according to Conditions C.1, C.2 and Theorem 3.3.
Corollary 3.4 states that the composite likelihood score $u(\theta, \hat w_\lambda)$ is a reasonable approximation to $u(\theta, w_\lambda^*)$. Particularly, even for λ close to zero, the composite score $u(\theta, \hat w_\lambda)$ still uses only a fraction $|\hat E|/m$ of sub-likelihood components. At the same time $u(\theta, \hat w_\lambda)$ is near the optimal score $u(\theta, w_0^*)$, where $w_0^*$ is the composition rule yielding the closest CL score $u(\theta)$ to the maximum likelihood score $u_{ML}(\theta)$. Moreover, the implied Godambe information
$$G(\theta, \hat w_\lambda) = E\{\nabla u(\theta, \hat w_\lambda)\}\,\mathrm{var}\{\nabla u(\theta, \hat w_\lambda)\}^{-1} E\{\nabla u(\theta, \hat w_\lambda)\}$$
is expected to be close to $G(\theta, w)$ with $w = w_0^*$. However, while the MCLE based on $w_0^*$ (or other choices of $w_j \ne 0$, $j \ge 1$) may be unavailable or computationally intractable due to common difficulties in estimating $\mathrm{var}\{\nabla u(\theta, w_0^*)\}$ (Lindsay et al., 2011; Varin et al., 2011), our truncated composition rule $\hat w_\lambda$ implies a more stable estimation of $G(\theta, \hat w_\lambda)$ by requiring only a fraction of the scores.
3.2 Asymptotic behavior of the one-step SCLE
In this section, we show consistency and give the asymptotic distribution for the SCLE $\hat\theta_\lambda = \hat\theta(\hat w_\lambda)$ defined in the E-Step (8). One advantage of one-step estimation is that consistency and asymptotic normality are treated separately. The one-step estimator $\hat\theta_\lambda$ inherits the properties leading to consistency from the preliminary estimator $\hat\theta$, under standard requirements on $S(\theta)$. For normality, additional conditions on the sub-likelihood scores are needed. Let $H(\theta)$ be the $p \times mp$ matrix obtained by stacking all the $p \times p$ sub-matrices $\nabla u_j(\theta)$. Let $r_2 = \sup_{\theta\in\Theta_n}\max_{j,k}|E_{F_n} H(\theta)_{j,k} - EH(\theta^*)_{j,k}|$ be the maximum variation between the empirical and the optimal Hessian matrices. Let $r_3 = \sup_{\theta\in\Theta_n}\max_j \|E_{F_n} u_j(\hat\theta) - Eu_j(\theta^*)\|_1$ be the supremum variation between the empirical scores and their expected value around $\Theta_n$. In the
rest of this section, we use Jλ∗ = Cov [u(θ∗ , wλ∗ )] and Kλ∗ = −E∇u(θ∗ , wλ∗ ) to denote the
population variability and sensitivity p × p matrices, respectively, both depending implicitly
on n. We further assume:
C.5 There exist positive constants $c_5$ and $c_6$ such that $E[\sup_{\theta\in\Theta_n} H(\theta)_{j,k}] < c_5$ and $\mathrm{Var}[\sup_{\theta\in\Theta_n} H(\theta)_{j,k}] < c_6$, for all $j, k \ge 1$.
C.6 Each element EH(θ)j,k , j, k ≥ 1 of the matrix EH(θ) is continuous with uniformly
bounded first and second derivatives on θ ∈ Θn .
Theorem 3.5. Suppose there exists $N > 0$ such that $K_\lambda^*$ is non-singular with all eigenvalues bounded away from 0 for all $n > N$. Under Conditions C.1 - C.6, if $r_1 m^{*2}\lambda^{-2} = o_p(1)$, $r_2\sqrt{m^*}\lambda^{-1} = o_p(1)$ and $r_3 m^*\lambda^{-1} = o_p(1)$, then as $n \to \infty$ we have

(i) $\|\hat\theta_\lambda - \theta^*\|_1 \xrightarrow{P} 0$, and

(ii) $\sqrt{n}\, J_\lambda^{*-1/2} K_\lambda^*(\hat\theta_\lambda - \theta^*) \xrightarrow{D} N_p(0, I)$,

where $J_\lambda^* = \mathrm{Cov}[u(\theta^*, w_\lambda^*)]$ and $K_\lambda^* = -E\nabla u(\theta^*, w_\lambda^*)$ denote the $p \times p$ population variability and sensitivity matrices.
Proof. Without loss of generality, we only prove the case p = 1. Since p is fixed, the proof can be easily generalized to the case p > 1 without additional conditions. Let $\hat K_\lambda = -E_{F_n}\nabla u(\hat\theta, \hat w_\lambda)$ be the empirical sensitivity matrix. Then $\hat\theta_\lambda$ can be written as $\hat\theta_\lambda = \hat\theta + \hat K_\lambda^{-1} E_{F_n} u(\hat\theta, \hat w_\lambda)$, with $\hat\theta$ being a consistent preliminary estimator. Note that $Eu(\theta^*, w_\lambda^*) = 0$ and
$$\|E_{F_n} u(\hat\theta, \hat w_\lambda) - Eu(\theta^*, w_\lambda^*)\|_1 \le \|E_{F_n} u(\hat\theta, \hat w_\lambda) - E_{F_n} u(\hat\theta, w_\lambda^*)\|_1 + \|E_{F_n} u(\hat\theta, w_\lambda^*) - E_{F_n} u(\theta^*, w_\lambda^*)\|_1 + \|E_{F_n} u(\theta^*, w_\lambda^*) - Eu(\theta^*, w_\lambda^*)\|_1. \tag{15}$$
The first term on the right-hand side of (15) is $o_p(1)$ by Lemma 7.3. The second term is $o_p(1)$ since $\|E_{F_n} u(\hat\theta, w_\lambda^*) - E_{F_n} u(\theta^*, w_\lambda^*)\|_1 \le \max_j |E_{F_n} u_j(\hat\theta) - E_{F_n} u_j(\theta^*)|\,\|w_\lambda^*\|_1$, which converges to 0 by the Theorem's assumptions and Lemma 7.2. The last term of (15) is also $o_p(1)$ by the Law of Large Numbers. This shows that $E_{F_n} u(\hat\theta, \hat w_\lambda) \xrightarrow{P} 0$. Moreover, from Lemma 7.4, $\|\hat K_\lambda - K_\lambda^*\|_1 = o_p(1)$. Since $K_\lambda^*$ has all eigenvalues bounded away from 0 for large n, we have $\hat K_\lambda^{-1} E_{F_n} u(\hat\theta, \hat w_\lambda) \xrightarrow{P} 0$. Since $\hat\theta \xrightarrow{P} \theta^*$, we have $\hat\theta_\lambda = \hat\theta + \hat K_\lambda^{-1} E_{F_n} u(\hat\theta, \hat w_\lambda) \xrightarrow{P} \theta^*$, which shows part (i) of the theorem.

To show normality in (ii), re-arrange $\hat\theta_\lambda = \hat\theta + \hat K_\lambda^{-1} E_{F_n} u(\hat\theta, \hat w_\lambda)$ and obtain
$$\hat K_\lambda(\hat\theta_\lambda - \theta^*) = \hat K_\lambda(\hat\theta - \theta^*) + E_{F_n} u(\hat\theta, \hat w_\lambda) = E_{F_n} u(\theta^*, \hat w_\lambda) + [\hat K_\lambda - \tilde K_\lambda](\hat\theta - \theta^*) = E_{F_n} u(\theta^*, w_\lambda^*) + [E_{F_n} u(\theta^*, \hat w_\lambda) - E_{F_n} u(\theta^*, w_\lambda^*)] + [\hat K_\lambda - \tilde K_\lambda](\hat\theta - \theta^*), \tag{16}$$
where $\tilde K_\lambda = -E_{F_n}\nabla u(\tilde\theta, \hat w_\lambda)$ and $\tilde\theta$ is some value between $\hat\theta$ and $\theta^*$. The second equality follows from the first-order expansion of $E_{F_n} u(\theta^*, \hat w_\lambda)$ at $\hat\theta$. For the first term in (16), we have $\sqrt{n}\, J_\lambda^{*-1/2} E_{F_n} u(\theta^*, w_\lambda^*) \xrightarrow{D} N_p(0, I)$, since the Lindeberg-Feller Central Limit Theorem applies to $u(\theta^*, w_\lambda^*)$ by Lemma 7.6. By Lemma 7.5, $J_\lambda^* = O(m^*)$, so the first term in (16) is $O_p(\sqrt{m^*/n})$. The second term in (16), $E_{F_n} u(\theta^*, \hat w_\lambda) - E_{F_n} u(\theta^*, w_\lambda^*) = E_{F_n} U(\theta^*)^T(\hat w_\lambda - w_\lambda^*)$, is of smaller order than the first term $E_{F_n} u(\theta^*, w_\lambda^*) = E_{F_n} U(\theta^*)^T w_\lambda^*$ since $\|\hat w_\lambda - w_\lambda^*\| \xrightarrow{P} 0$ by Theorem 3.3. For the last term in (16), we have
$$|(\hat K_\lambda - \tilde K_\lambda)(\hat\theta - \theta^*)|_1 \le \Big\{\max_j |E_{F_n}\nabla u_j(\hat\theta) - E\nabla u_j(\theta^*)| + \max_j |E_{F_n}\nabla u_j(\tilde\theta) - E\nabla u_j(\theta^*)|\Big\}\,\|\hat w_\lambda\|_1\,|\hat\theta - \theta^*| \le 2 r_2\,\|\hat w_\lambda\|_1\,|\hat\theta - \theta^*|, \tag{17}$$
and the last expression in (17) is $o_p(r_2 m^* n^{-1/2}\lambda^{-1})$ by Lemma 7.2. The Theorem's assumption that $r_1 m^{*2}\lambda^{-2} = o_p(1)$ implies that the last term in (16) is of smaller order than the first term. Finally, since $\|\hat K_\lambda - K_\lambda^*\|_1 = o_p(1)$ according to Lemma 7.4, Slutsky's Theorem implies the desired result.
Consistency and asymptotic normality for the one-step estimator $\hat\theta_\lambda$ follow mainly from $\hat w_\lambda$ converging in probability to the target composition rule $w_\lambda^*$. Since each sub-likelihood score is unbiased and asymptotically normal, their linear combination is also normally distributed. The overall convergence rate is given by $\|\sqrt{n}\, J_\lambda^{*-1/2} K_\lambda^*\|_1$, which is of order between $\sqrt{n}$ and $\sqrt{nm}$. The actual order depends on the underlying correlation between the partial scores $u_1, \ldots, u_m$. While the optimal rate $\sqrt{nm}$ is achieved when the scores are perfectly independent, combining highly correlated scores into the final estimating equation will give rates closer to $\sqrt{n}$.
4 Examples for special families of models

In this section, we illustrate the SCLE through estimation of location and scale parameters for special multivariate normal models.

4.1 Estimation of common location for heterogeneous variates
Let $X \sim N_m(\theta 1_m, \Sigma)$, where the $m \times m$ covariance matrix Σ has off-diagonal elements $\sigma_{jk}$ ($j \ne k$) and diagonal elements $\sigma_k^2$ ($j = k$). Computing the MLE of θ requires $\Sigma^{-1}$ and in practice Σ is replaced by the MLE $\hat\Sigma = E_{F_n} X^T X$. When $n < m$, $\hat\Sigma$ is singular and the MLE of θ is not available in practice, whilst CL estimation is still feasible. The jth partial score is $u_j(\theta) = (X_j - \theta)/\sigma_j^2$ and the CL estimating equation (1) based on the sample $X^{(1)}, \ldots, X^{(n)}$ is
$$0 = E_{F_n} u(\theta, w) = \sum_{j=1}^m \frac{w_j}{n\sigma_j^2}\sum_{i=1}^n \big(X_j^{(i)} - \theta\big),$$
leading to the profiled MCLE
$$\hat\theta(w) = \left(\sum_{j=1}^m w_j \sigma_j^{-2} \bar X_j\right) \Big/ \left(\sum_{j=1}^m w_j \sigma_j^{-2}\right), \tag{18}$$
which is a weighted average of the marginal sample means $\bar X_j = n^{-1}\sum_{i=1}^n X_j^{(i)}$, $j \ge 1$. In
this example, one can work out directly the optimal composition rule wλ∗ and no estimation
is required. Particularly, it is useful to inspect the special case where X has independent
components (σjk = 0 for all j 6= k). This corresponds to the fixed-effect meta-analysis model
where estimators from m independent studies are combined to improve accuracy. Under
independence, we have the explicit solution
$$w_{\lambda,j}^* = (1 - \sigma_j^2\lambda)\, I\big(\sigma_j^2 < \lambda^{-1}\big), \quad 1 \le j \le m,$$
which highlights that overly noisy data subsets with variance $\sigma_j^2 \ge \lambda^{-1}$ are dropped and thus do not influence the final estimator (18). The number of non-zero elements in $w_\lambda^*$ is $\sum_{j=1}^m I(\sigma_j^2 < \lambda^{-1})$.
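The closed-form rule above is easy to evaluate directly. The following minimal sketch (our own illustration, with arbitrary variances, λ, and sample size) computes the truncated composition rule under independence and plugs it into the weighted average (18):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, theta_true, lam = 20, 50, 1.0, 0.05
sigma2 = np.arange(1, m + 1, dtype=float)           # heterogeneous variances

# closed-form truncated composition rule under independence
w_star = (1.0 - sigma2 * lam) * (sigma2 < 1.0 / lam)

# simulate data and compute the weighted-average MCLE in (18)
X = theta_true + rng.normal(size=(n, m)) * np.sqrt(sigma2)
xbar = X.mean(axis=0)
theta_hat = np.sum(w_star * xbar / sigma2) / np.sum(w_star / sigma2)
print(int((w_star > 0).sum()), "scores retained; estimate:", theta_hat)
```

Increasing `lam` drops the noisiest marginal means first, which is exactly the behavior described above.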
Note that when λ = 0, we have uniform weights $w_0^* = (1, \ldots, 1)^T$ and the corresponding MCLE is the usual optimal meta-analysis solution. Although the implied estimator $\hat\theta(w_0^*)$ has minimum variance, it offers no control over the overall computational cost since all m sub-scores are selected. On the other hand, choosing λ > 0 judiciously may lead to a low computational burden with negligible loss for the resulting estimator. For instance, assuming $\sigma_j^2 = j^2$, for $\theta \in \Theta_n$, a straightforward calculation shows
$$E[u(\theta, w_\lambda^*) - u(\theta, w_0^*)]^2 \le \lambda^2 \sum_{j\in E} j^2 + \sum_{j\notin E} j^{-2} + o(1). \tag{19}$$
Since the number of non-zero scores is $\sum_{j=1}^m I(j^2 < \lambda^{-1}) = \lfloor\lambda^{-1/2}\rfloor$, the first term of the mean squared difference between $u(\theta, w_\lambda^*)$ and the optimal score $u(\theta, w_0^*)$ is bounded by $\lambda^2\,\lambda^{-1}\,\lambda^{-1/2} = \lambda^{1/2}$, up to a vanishing term. Thus, if $\lambda = o(1)$, the composite score $u(\theta, w_\lambda^*)$
converges to the optimal composite score u(θ, w0∗ ). Particularly, if λ decreases at a sufficiently
slow rate, the truncated score u(θ, wλ∗ ) can still contain a relatively small number of terms,
while the corresponding estimator $\hat\theta(w_\lambda^*)$ is approximately equal to the optimal estimator $\hat\theta(w_0^*)$ in terms of statistical accuracy.
If the elements of X are correlated (σjk 6= 0 for j 6= k), the partial scores contain
overlapping information on θ. In this case, tossing away some highly correlated partial
scores improves computing while maintaining satisfactory statistical efficiency for the final
estimator. Figure 1 shows the solution path of wλ∗ and the asymptotic relative efficiency of
the corresponding SCLE $\hat\theta(w_\lambda^*)$ compared to the MLE for different values of λ. When m is large (e.g. m = 1000), the asymptotic relative efficiency drops gradually until a few scores are left. This example illustrates that a relatively high efficiency can be achieved by our truncated CL equations, when a few partial scores already contain the majority of the information about
θ. In such cases, the final SCLE with a sparse composition rule is expected to achieve a good
trade-off between computational cost and statistical efficiency.
4.2 Location estimation in exchangeable normal variates

In our second example, we consider exchangeable variables $X \sim N_m(\theta 1_m, \Sigma)$ with $\Sigma = (1-\rho)I_m + \rho 1_m 1_m^T$, $0 < \rho < 1$. The marginal scores $u_j(\theta) = X_j - \theta$ are identically distributed and exchangeable with equal correlation. Differently from Example 4.1, the
[Figure 1 near here: panels for (ρ = 0, m = 20), (ρ = 0.5, m = 20) and (ρ = 0.5, m = 1000); top row plots w*λ,j against log(λ), bottom row plots ARE against log(λ).]
Figure 1: Top Row: Solution paths for the minimizer $w_\lambda^*$ of Criterion $Q(\theta^*, w)$ in (4) for different values of λ with the corresponding number of sub-likelihoods. Bottom Row: Asymptotic relative efficiency (ARE) of the SCLE $\hat\theta(w_\lambda^*)$ compared to the MLE. The vertical dashed lines on the bottom represent $\hat\lambda$ selected by Criterion (12) with τ = 0.9. Results correspond to the common location model $X \sim N_m(\theta^* 1_m, \Sigma)$ with jth diagonal element of Σ equal to j, and (jk)th off-diagonal element of Σ equal to $\rho\sqrt{jk}$.
solution $w_\lambda^*$ to Criterion (4) has equal elements
$$w_{\lambda,j}^* = \frac{1-\lambda}{\rho(m-1)+1}\, I(\lambda < 1), \quad 1 \le j \le m,$$
so the optimal parameter estimator is $\hat\theta(w_\lambda^*) = \sum_{j=1}^m \bar X_j/m$ regardless of the value of λ. The first eigenvalue of $ES(\theta)$ is $\rho(m-1)+1$, whilst the remaining $m-1$ eigenvalues are all equal to $1-\rho$, suggesting that the first score contains relatively large information on θ compared to the other scores. When m is much larger than n, we have $\mathrm{var}\{\hat\theta(w_0^*)\} = [\rho^2(m-1)+1]/(mn) \approx \rho^2/n$. The trade-off between statistical and computational efficiency may be measured by the ratio of the estimator's variance with $m = \infty$ compared to that with $m < \infty$. This ratio is $t(m) = \rho^2 m/\{\rho^2(m-1)+1\}$, which increases quickly for smaller m and much more slowly for larger m (e.g., t(5) = 0.83, t(9) = 0.90 and t(50) = 0.98, if ρ = 0.75).
Thus, although all the elements in $w_\lambda^*$ are nonzero, a few partial scores already contain the majority of the information on θ. This suggests that, in practice, taking a sufficiently large value for λ, so that the sparse empirical solution $\hat w_\lambda$ contains only a few non-zero elements, already ensures a relatively high statistical efficiency for the corresponding MCLE $\hat\theta(\hat w_\lambda)$.
4.3 Exponentially decaying covariances

Let $X \sim N_d(0, \Sigma(\theta))$, where the jkth element of $\Sigma(\theta)$ is $\sigma_{jk}(\theta) = \exp\{-\theta d(j,k)\}$. The quantity $d(j,k)$ may be regarded as the distance between spatial locations j and k. Evaluating the ML score in this example is computationally expensive when d is large, since it requires computing the inverse of $\Sigma(\theta)$, a task involving $O(d^3)$ operations. On the other hand, the CL score is obtained by inverting 2 × 2 covariance matrices, thus requiring at most $O(d^2)$ operations. Given i.i.d. observations $X^{(1)}, \ldots, X^{(n)}$ on X, the MCLE $\hat\theta(w)$ solves the
equation
$$0 = \sum_{j<k} w_{jk}\sum_{i=1}^n u_{jk}\big(\theta, X_j^{(i)}, X_k^{(i)}\big) = \sum_{j<k} w_{jk}\sum_{i=1}^n \frac{\sigma_{jk}(\theta)\big\{X_j^{(i)\,2} + X_k^{(i)\,2} - 2X_j^{(i)}X_k^{(i)}\sigma_{jk}(\theta)\big\}}{\{1-\sigma_{jk}(\theta)^2\}^2}\,\sigma_{jk}(\theta)\,d(j,k) - \sum_{j<k} w_{jk}\sum_{i=1}^n \left[\frac{\sigma_{jk}(\theta) + X_j^{(i)}X_k^{(i)}}{1-\sigma_{jk}(\theta)^2}\right]\sigma_{jk}(\theta)\,d(j,k),$$
where $u_{jk}$ corresponds to the score of a bivariate normal distribution for the pair $(X_j, X_k)$.
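For reference, the sketch below evaluates this pairwise score for a single pair at a given θ. The helper name is ours, the margins are assumed standardized, and the distance d(j, k) = |j − k| is an arbitrary illustrative choice rather than anything fixed by the paper.

```python
import numpy as np

def pairwise_score(theta, xj, xk, djk):
    """Score u_jk of a standard bivariate normal pair with correlation
    sigma_jk(theta) = exp(-theta * d(j, k)), summed over observations."""
    s = np.exp(-theta * djk)                 # sigma_jk(theta)
    one_minus = 1.0 - s**2
    term1 = s * (xj**2 + xk**2 - 2.0 * xj * xk * s) / one_minus**2
    term2 = (s + xj * xk) / one_minus
    return np.sum((term1 - term2) * s * djk)

# toy usage with simulated (independent) pair data
rng = np.random.default_rng(1)
xj, xk = rng.normal(size=100), rng.normal(size=100)
print(pairwise_score(0.6, xj, xk, djk=abs(3 - 1)))
```

Summing such terms with weights $w_{jk}$ over all pairs reproduces the CL estimating equation displayed above.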
Figure 2 shows the analytical solution path of the minimizer wλ∗ of Criterion (4) for
different values of λ, and the asymptotic relative efficiency of the SCLE $\hat\theta(w_\lambda^*)$ compared to the
MLE. We consider a number of pairs ranging from m = 45 to m = 1225 for various choices of
θ. When λ = 0, the SCLE has relatively high asymptotic efficiency. Interestingly, efficiency
remains steady around 90% until only a few sub-likelihoods are left. This suggests again
that a very small proportion of partial-likelihood components contains already the majority
of the information about θ. In such cases, the SCLE reduces dramatically the computing
burden while retaining satisfactory efficiency for the final estimator.
5 Numerical examples

In this section, we study the finite-sample performance of the SCLE by assessing its mean squared error and computing cost when the data dimension d increases. As a preliminary estimator, we use the MCLE $\hat\theta(w)$ with $w = (1, \ldots, 1)^T$, which is perhaps the most common choice for w in CL applications (Varin et al., 2011).
5.1 Example 1

We generate samples of size 50 from $X \sim N_m(\theta 1_m, \Sigma_l)$, $l = 1, \ldots, 4$. We specify the following covariance structures: $\Sigma_1 = I_m$; $\Sigma_2$ is diagonal with kth diagonal element $\sigma_k^2 = k$; $\Sigma_3$ has
[Figure 2 near here: panels for (θ = 0.4, d = 10, m = 45), (θ = 0.6, d = 10, m = 45) and (θ = 0.6, d = 50, m = 1225); top row plots w*λ,j against log(λ), bottom row plots ARE against log(λ).]
Figure 2: Top Row: Solution paths for the minimizer $w_\lambda^*$ of Criterion $Q(\theta^*, w)$ defined in (4) for different values of λ with the corresponding number of sub-likelihoods reported. Bottom Row: Asymptotic relative efficiency (ARE) of the SCLE $\hat\theta(w_\lambda^*)$ compared to the MLE. The vertical dashed lines on the bottom row correspond to $\hat\lambda$ selected by Criterion (12) with τ = 0.9. Results correspond to the model $X \sim N_d(0, \Sigma(\theta))$ with (j, k)th element of Σ equal to $\exp\{-\theta\sqrt{2|j-k|}\}$.
unit diagonal elements with the first 10 elements of X uncorrelated with any other element
while the other elements in X have pairwise correlations $0.8^{|j-k|}$ (10 < j < k < d); Σ4
has unit diagonal elements and a block diagonal structure with independent blocks of six
elements each and within-block correlation of 0.6.
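A sketch of how these four covariance structures can be generated for the simulation is given below; it is a NumPy-based illustration following the verbal description above, and the helper name is ours.

```python
import numpy as np

def make_sigma(kind, m):
    """Covariance matrices Sigma_1..Sigma_4 described in Section 5.1."""
    if kind == 1:                                   # identity
        return np.eye(m)
    if kind == 2:                                   # diagonal, sigma_k^2 = k
        return np.diag(np.arange(1.0, m + 1))
    if kind == 3:                                   # first 10 independent, rest with corr 0.8^|j-k|
        S = np.eye(m)
        idx = np.arange(10, m)
        S[np.ix_(idx, idx)] = 0.8 ** np.abs(idx[:, None] - idx[None, :])
        return S
    if kind == 4:                                   # blocks of six, within-block corr 0.6
        block = 0.6 * np.ones((6, 6)) + 0.4 * np.eye(6)
        reps = int(np.ceil(m / 6))
        return np.kron(np.eye(reps), block)[:m, :m]
    raise ValueError(kind)

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.full(30, 1.0), make_sigma(4, 30), size=50)
```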
Figure 3 (left) shows the relative mean squared error of the SCLE $\hat\theta_\lambda$ compared to that of the MLE for a moderate data dimension (d = m = 30). The points in the trajectories correspond to the inclusion of a new sub-likelihood component according to the least-angle algorithm described in Section 2.3. The SCLE $\hat\theta_\lambda$ achieves more than 90% efficiency compared
to MLE for all the covariance structures considered, always before all the candidate partial
likelihoods are included. The advantage of SCLE becomes evident when the sub-likelihood
scores exhibit relatively strong correlation. For example, for Σ = Σ4 where sub-likelihoods
are independent between blocks, the maximum efficiency is achieved when only a few representative partial scores are selected from each block.
Figure 3 (right) shows the ratio between the mean squared error of the SCLE and that of the MLE for a relatively large data dimension (d = m = 1000) compared to the
sample size (n = 50). Although here the MLE is used as a theoretical benchmark, in practice
such an estimator is not available as m is larger than the sample size n. Interestingly, when
the sample size n is fixed, including all the sub-likelihoods eventually leads to substantial loss
of efficiency. In this example, selecting too many sub-likelihoods not only wastes computing resources but also implies estimators with larger errors. On the other hand, a proper choice
of the tuning constant λ (corresponding to about 20 selected sub-likelihoods) can balance
computational and statistical efficiency.
5.2 Example 2

In our second numerical example, we consider covariance estimation for the model $X \sim N_d(0, \Sigma(\theta))$ with $\Sigma(\theta)_{j,k} = \exp\{-2\theta(j-k)^2\}$. Here the covariance between components $X_j$ and $X_k$ in the random vector X decreases rapidly as the distance $(j-k)^2$ between the components of X increases.
[Figure 3 near here: left panel compares Σ1–Σ4 with m = 30; right panel uses Σ2 with m = 50, 100, 200, 500, 1000; y-axis MSE_MLE/MSE_SCLE against number of partial scores included.]
Figure 3: Monte-Carlo estimate of the mean square error of the MLE (MSEMLE ) divided
by that of the SCLE (MSESCLE ), for the model X ∼ Nm (θ1m , Σ). Each trajectory is based
on 1000 Monte-Carlo samples of size n = 50. Each point in the trajectories corresponds to
inclusion of a new sub-likelihood component based on the least-angle algorithm described in
Section 2.3. Left: Different specifications for Σ detailed in Section 5 with m = 30. Right:
Covariance Σ = Σ2 with m ranging from 50 to 1000.
Figure 4 shows Monte-Carlo estimates for the mean square error of the SCLE $\hat\theta(\hat w_\lambda)$ compared to that of the MCLE with the uniform composition rule (i.e., $\hat\theta(w)$ with $w = (1, \ldots, 1)^T$), for θ = 0.2 and 0.4. Each point in the trajectories corresponds
to inclusion of a new sub-likelihood component using the least-angle algorithm described in
Section 2.3. The SCLE is already more efficient than the uniform MCLE when a handful of
partial scores are selected. For example if θ = 0.2 and m = 1035, selecting ten sub-likelihoods
already ensures 1.5 times the accuracy of the uniform MCLE. Since the uniform MCLE uses
all the m = 1035 pairs of sub-likelihoods, the SCLE obtains more accurate results at a much
lower computing cost.
[Figure 4 near here: panels for θ = 0.2 (left) and θ = 0.4 (right), with m = 45, 435, 1035; y-axis MSE_Unif/MSE_SCLE against number of partial scores included.]
Figure 4: Monte-Carlo estimate of the mean square error of the MCLE with w = (1, . . . , 1)T
(MSEUnif ) divided by that for the SCLE (MSESCLE ). Each point in the trajectories corresponds to the inclusion of a new sub-likelihood component based on the least-angle algorithm
described in Section 2.3. Results are based on 1000 Monte Carlo samples of size n = 50 from
the model X ∼ Nd (0, Σ(θ)) with Σ(θ)j,k = exp{−2θ(j − k)2 }. Trajectories correspond to
θ = 0.2 (left) and θ = 0.4 (right) for different numbers of sub-likelihoods, m, ranging from
45 to 1035.
6 Conclusion and final remarks
In recent years, inference for complex and large data sets has become one of the most active
research areas in statistics. In this context, CL inference has played an important role in
applications as a remedy to the drawbacks of traditional likelihood approaches. Despite the
popularity of CL methods, how to address the trade-off between computational parsimony
and statistical efficiency in CL inference from a methodological perspective remains a largely
unanswered question. Motivated by this gap in the literature, we introduced a new likelihood
selection methodology which is able to quickly truncate overly complex CL equations potentially
encompassing many terms, while attaining relatively low mean squared error for the implied
estimator. This is achieved by selecting CL estimating equations satisfying an $\ell_1$-constraint on the CL complexity while minimizing an approximate $\ell_2$-distance from the full-likelihood score. Inference based on statistical objective functions with $\ell_1$-penalties on the parameter
θ is not new in the statistical literature (e.g., see Giraud (2014) for a book-length exposition
on this topic). Note, however, that differently from existing approaches the main goal here is
to reduce the computational complexity of the overall CL estimating equations regardless of
the model parameter θ, which is viewed as fixed in size. Accordingly, our `1 -penalty involves
only the composition rule w, but not the model parameter θ. In the future, developing
approaches for simultaneous penalization on θ and w may be useful to deal with situations
where both the data dimension and the size of the parameter space increase.
Two main perks of the proposed approach make it an effective alternative to traditional CL estimation from a practitioner's perspective. The first advantage is that the SCLE methodology constructs CL equations and returns inferences very quickly. Theorem 3.2 shows that for any λ > 0 the empirical composition rule $\hat w_\lambda$ retains at most $np \wedge m$ non-zero elements. This is an important feature of our method, which reduces – sometimes dramatically – the amount of computing needed to obtain the implied MCLE $\hat\theta(\hat w_\lambda)$ and its standard error. Lemma 3.1 highlights that the non-zero elements in $\hat w_\lambda$ correspond to partial scores maximally correlated with the residual difference $r(\theta, w) = u_{ML}(\theta) - u(\theta, w)$. This means that our approach constructs estimators with relatively high efficiency by dropping only those $u_j$'s contributing the least in the CL equations for approximating $u_{ML}(\theta)$. The second desirable feature of our method concerns model selection and the ability to reduce the complexity of large data sets. In essence, the truncation step (T-Step) described in (7) is a dimension-reduction step: starting from observations on a possibly large d-dimensional vector X, our method generates a collection of lower-dimensional subsets $S_\lambda = \{S_j,\ j \in \hat E_\lambda,\ \lambda > 0\}$, where $\hat E_\lambda = \{j : \hat w_{\lambda,j} \ne 0\}$. While individually the selected data subsets in $S_\lambda$ are of size
much smaller than d, collectively they contain most of the information on θ for a given level
of computing represented by λ.
From a theoretical perspective, little work has been devoted to study the properties of
CL estimators when the number of sub-likelihoods m diverges. Cox and Reid (2004) discuss
estimators based on CL equations with $m = d^2 + d$ terms by taking all pairwise and marginal scores for the d-dimensional vector X. They take non-sparse and more rigid composition rules compared to ours, with $w_{jk} = 1$ for all pairs ($j \ne k$) and $w_{jj} = -a \times d$ for all marginals
(j = k), where a is a tuning constant used to increase efficiency. To our knowledge, the
current paper is the first studying the behavior of more flexible sparsity-inducing composition
rules and implied CL estimating equations in the setting where both m and n grow.
Theorem 3.3 and Corollary 3.4 provide us with guidance on when the selected score $u(\theta, \hat w_\lambda)$ is a meaningful approximation to the unknown ML score in the sense of the objective (4). A first requirement is that the total information on θ available if the full likelihood were actually known, $m^* = \|u_{ML}(\theta^*)\|_2^2$, is not overwhelming compared to the sample size n. If $X \sim N_m(\theta 1_m, \Sigma)$, we require $m^*/n = \mathrm{tr}\{\Sigma^{-1}\}/n \to 0$. This condition is very mild when relatively few elements of X contain a strong signal on θ, whilst the remaining elements are noisy and have heterogeneous variances. In Section 4.1, we illustrate this by taking Σ diagonal with increasing diagonal elements. A second requirement is that the tuning constant λ asymptotically dominates $\sqrt{m^*}\sqrt{r_1}$, where $r_1$ represents the convergence rate of the empirical covariance of the scores $E_{F_n}\{S(\theta)\}$. For instance, if the elements of S(θ) are sub-Gaussian, we have $r_1 = o_p(\log(m)/n)$, meaning that λ should be asymptotically larger than $\sqrt{m^*\log(m)/n}$.
Finally, we show that statistical optimality and computational parsimony can co-exist within the same selection procedure when λ is judiciously selected. If λ → 0 at the rate described in Theorem 3.3, the truncated composition rule $\hat w_\lambda$ with $|\hat E_\lambda|$ scores approximates the optimal composition rule $w_0^*$ consisting of m nonzero terms. Accordingly, Corollary 3.4 suggests that the implied truncated CL score function $u(\theta, \hat w_\lambda)$ approximates the optimal score $u(\theta, w_0^*)$, uniformly on a neighborhood of $\theta^*$. Extending this type of result and developing further theoretical insight on the interplay between the type of penalty and the MCLE
accuracy beyond the current i.i.d. setting would represent another exciting future research
direction. For example, findings would be particularly valuable in spatial statistics where
often the number of sub-likelihood components is overwhelming and poses serious challenges
to traditional CL methods.
Appendix
In this section, we show technical lemmas required by the main results in Section 3.
Lemma 7.1. $\|\hat w_\lambda\|_1$ and $\|w_\lambda^*\|_1$ are decreasing in λ.

Proof. Denote the first term of Criterion $Q_\lambda(\theta, w)$ defined in (4) (without the penalty term) by $Q_1(\theta, w)$. Suppose $\lambda_1 > \lambda_2$, and let $w_1, w_2$ be the minimizers of $Q_{\lambda_1}(\theta^*, w)$ and $Q_{\lambda_2}(\theta^*, w)$, respectively. Then, $Q_1(w_1) + \lambda_1\|w_1\|_1 \le Q_1(w_2) + \lambda_1\|w_2\|_1$ and $Q_1(w_1) + \lambda_2\|w_1\|_1 \ge Q_1(w_2) + \lambda_2\|w_2\|_1$. Subtracting the last two inequalities gives $(\lambda_1 - \lambda_2)\|w_1\|_1 \le (\lambda_1 - \lambda_2)\|w_2\|_1$. Since $\lambda_1 > \lambda_2$, we have $\|w_1\|_1 \le \|w_2\|_1$. An analogous argument shows that $\|\hat w_\lambda\|_1$ is decreasing.
Lemma 7.2. Under Conditions C.1 - C.3, if $r_1/\lambda = o_p(1)$, then $\|w_\lambda^*\|_1 = O(m^*/\lambda)$ and $\|\hat w_\lambda\|_1 = O_p(m^*/\lambda)$.

Proof. For $w_\lambda^*$, note that $Q_\lambda(\theta^*, w_\lambda^*) = E\|u_{ML}(\theta^*) - u(\theta^*, w_\lambda^*)\|_2^2/2 + \lambda\|w_\lambda^*\|_1 \le Q_\lambda(\theta^*, 0) = m^*/2$, so $\lambda\|w_\lambda^*\|_1 \le m^*/2$. Hence, $\|w_\lambda^*\|_1 = O(m^*/\lambda)$.

For $\hat w_\lambda$, we have $\hat Q(\hat\theta, \hat w_\lambda) = \hat w_\lambda^T E_{F_n} S(\hat\theta)\hat w_\lambda/2 - \mathrm{diag}(E_{F_n} S(\hat\theta))^T \hat w_\lambda + \lambda\|\hat w_\lambda\|_1 \le \hat Q(\hat\theta, 0) = 0$. Since $Eu_{ML}(\theta^*)^T u_j(\theta^*) = Eu_j(\theta^*)^T u_j(\theta^*)$, we have
$$\lambda\|\hat w_\lambda\|_1 \le \mathrm{diag}(E_{F_n} S(\hat\theta))^T \hat w_\lambda = \mathrm{diag}\big(E_{F_n} S(\hat\theta) - ES(\theta^*)\big)^T \hat w_\lambda + Eu_{ML}(\theta^*)^T u(\theta^*, \hat w_\lambda),$$
with $\mathrm{diag}(E_{F_n} S(\hat\theta) - ES(\theta^*))^T \hat w_\lambda \le \max_{j,k}|E_{F_n} S(\hat\theta) - ES(\theta^*)|_{j,k}\,\|\hat w_\lambda\|_1 \le r_1\|\hat w_\lambda\|_1$ and $Eu_{ML}(\theta^*)^T u(\theta^*, \hat w_\lambda) = O_p(m^*)$. Hence, $\|\hat w_\lambda\|_1 = O_p(m^*/\lambda)$.
Lemma 7.3. Let λ → 0 as $n \to \infty$. Under Conditions C.1-C.3, if $r_1 m^{*2}\lambda^{-2} = o_p(1)$, we have $E_{F_n}\|u(\hat\theta, \hat w_\lambda) - u(\hat\theta, w_\lambda^*)\|_2^2 \xrightarrow{P} 0$, as $n \to \infty$, where $\hat\theta$ is the preliminary root-n consistent estimator used to compute $\hat w_\lambda$ in the T-Step (7).
Proof. Note that $r_1 m^{*2}/\lambda^2 = o_p(1)$ implies $r_1/\lambda = o_p(1)$. Therefore, we have $\|\hat w_\lambda - w_\lambda^*\|_1 = O_p(m^*/\lambda)$ by Lemma 7.2. Moreover, re-arranging $\hat Q_\lambda(\hat\theta, \hat w_\lambda) \le \hat Q_\lambda(\hat\theta, w_\lambda^*)$ gives
$$\frac{1}{2} E_{F_n}\big\{\hat w_\lambda^T S(\hat\theta)\hat w_\lambda - w_\lambda^{*T} S(\hat\theta) w_\lambda^*\big\} \le E_{F_n}\big\{U^2(\hat\theta)^T(\hat w_\lambda - w_\lambda^*)\big\} - \lambda\|\hat w_\lambda\|_1 + \lambda\|w_\lambda^*\|_1.$$
Subtracting $E_{F_n}\{U(\hat\theta)U(\hat\theta)^T w_\lambda^*\}^T(\hat w_\lambda - w_\lambda^*)$ from both sides gives
$$\begin{aligned}
\frac{1}{2} E_{F_n}\|U(\hat\theta)^T(\hat w_\lambda - w_\lambda^*)\|_2^2
&\le E_{F_n}\{c(\hat\theta, w_\lambda^*)\}^T(\hat w_\lambda - w_\lambda^*) - \lambda\|\hat w_\lambda\|_1 + \lambda\|w_\lambda^*\|_1\\
&= \big[E_{F_n} c(\hat\theta, w_\lambda^*) - Ec(\theta^*, w_\lambda^*)\big]^T \hat w_\lambda - \big[E_{F_n} c(\hat\theta, w_\lambda^*) - Ec(\theta^*, w_\lambda^*)\big]^T w_\lambda^*\\
&\quad + Ec(\theta^*, w_\lambda^*)^T \hat w_\lambda - \lambda\|\hat w_\lambda\|_1 - \big\{Ec(\theta^*, w_\lambda^*)^T w_\lambda^* - \lambda\|w_\lambda^*\|_1\big\}\\
&\le \big[E_{F_n} c(\hat\theta, w_\lambda^*) - Ec(\theta^*, w_\lambda^*)\big]^T(\hat w_\lambda - w_\lambda^*)\\
&= \Big[\mathrm{diag}\big\{E_{F_n} S(\hat\theta) - ES(\theta^*)\big\} - \big\{E_{F_n} S(\hat\theta) - ES(\theta^*)\big\} w_\lambda^*\Big]^T(\hat w_\lambda - w_\lambda^*),
\end{aligned}$$
where the inequality is implied by Lemma 3.1. The last expression is $o_p(1)$, since $r_1 m^{*2}/\lambda^2 = o_p(1)$ and $\|\hat w_\lambda - w_\lambda^*\|_1 = O_p(m^*/\lambda)$ by Lemma 7.2, and the matrix maximum norm is bounded by the matrix 2-norm.
Lemma 7.4. If $r_1 m^{*2}\lambda^{-2} = o_p(1)$ and $r_2\sqrt{m^*}\lambda^{-1} = o_p(1)$, then under Conditions C.1-C.6, $\|\hat K_\lambda - K_\lambda^*\|_1 = o_p(1)$.

Proof. This is a direct result since $\|\hat K_\lambda - K_\lambda^*\|_1 \le r_2\|\hat w - w^*\|_1$, $r_2 \xrightarrow{P} 0$ according to the lemma assumption, and $\|\hat w - w^*\|_1 \xrightarrow{P} 0$ by Theorem 3.3.
Lemma 7.5. Under Conditions C.1-C.6, $E\|u(\theta^*, w_\lambda^*)\|_2^2 = O(m^*)$.

Proof. Note that $E\|u_{ML}(\theta^*) - u(\theta^*, w_\lambda^*)\|_2^2 \le E\|u_{ML}(\theta^*) - u(\theta^*, w_\lambda^*)\|_2^2 + \lambda\|w_\lambda^*\|_1 \le E\|u_{ML}(\theta^*)\|_2^2 = m^*$. Expanding $E\|u_{ML}(\theta^*) - u(\theta^*, w_\lambda^*)\|_2^2$ gives
$$E\|u(\theta^*, w_\lambda^*)\|_2^2 \le 2\,Eu_{ML}(\theta^*)^T u(\theta^*, w_\lambda^*) \le 2\sqrt{E\|u_{ML}(\theta^*)\|_2^2 \cdot E\|u(\theta^*, w_\lambda^*)\|_2^2} = 2\sqrt{m^*\, E\|u(\theta^*, w_\lambda^*)\|_2^2}.$$
Re-arranging gives $E\|u(\theta^*, w_\lambda^*)\|_2^2 \le 4m^*$.
Lemma 7.6. Assume Conditions C.1-C.6. For every $\epsilon > 0$, we have
$$\frac{1}{nJ_\lambda^*}\sum_{i=1}^n E\big\{u_i(\theta^*, w_\lambda^*)^2\, I\big(|u_i(\theta^*, w_\lambda^*)| \ge \epsilon\sqrt{nJ_\lambda^*}\big)\big\} \to 0, \quad \text{as } n \to \infty,$$
where $u_i(\theta, w) = \sum_{j=1}^m w_j \nabla\log f_j(X_j^{(i)}; \theta)$ is the composite likelihood score corresponding to the ith observation.
Proof. Without loss of generality, assume p = 1. Recall that $J_\lambda^* = Eu(\theta^*, w_\lambda^*)^2$. For every $\epsilon > 0$, and constants $a, b > 1$ such that $1/a + 1/b = 1$,
$$\frac{1}{nJ_\lambda^*}\sum_{i=1}^n E\big\{u_i(\theta^*, w_\lambda^*)^2\, I\big(|u_i(\theta^*, w_\lambda^*)| \ge \epsilon\sqrt{nJ_\lambda^*}\big)\big\} = \frac{1}{J_\lambda^*} E\big\{u_i(\theta^*, w_\lambda^*)^2\, I\big(|u_i(\theta^*, w_\lambda^*)| \ge \epsilon\sqrt{nJ_\lambda^*}\big)\big\} \le \frac{1}{J_\lambda^*}\big[E|u_i(\theta^*, w_\lambda^*)|^{2a}\big]^{1/a} \cdot \frac{1}{(\epsilon^2 n)^{1/b}} = \frac{(J_\lambda^*)^{a-1}}{(\epsilon^2 n)^{1/b}}, \tag{20}$$
where the inequality follows by applying Hölder's and Chebyshev's inequalities. By the assumption at the beginning of Section 3 that $m^* = o(\log(n))$, Lemma 7.5 implies $J_\lambda^* = o(\log(n))$. Hence, (20) converges to 0 as $n \to \infty$, which proves the desired result.
References
J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the
Royal Statistical Society. Series B (Methodological), pages 192–236, 1974.
P. Bühlmann and S. Van De Geer. Statistics for high-dimensional data: methods, theory and
applications. Springer Science & Business Media, 2011.
T. T. Cai, C.-H. Zhang, H. H. Zhou, et al. Optimal rates of convergence for covariance
matrix estimation. The Annals of Statistics, 38(4):2118–2144, 2010.
D. R. Cox and N. Reid. A note on pseudolikelihood constructed from marginal densities.
Biometrika, 91(3):729–737, 2004.
J. V. Dillon and G. Lebanon. Stochastic composite likelihood. Journal of Machine Learning
Research, 11(Oct):2597–2633, 2010.
B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, et al. Least angle regression. The Annals
of statistics, 32(2):407–499, 2004.
D. Ferrari and Y. Yang. Maximum lq-likelihood estimation. The Annals of Statistics, 38(2):
753–783, 2010.
R. A. Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical
or Physical Character, pages 309–368, 1922.
C. Giraud. Introduction to high-dimensional statistics, volume 138. CRC Press, 2014.
C. C. Heyde. Quasi-likelihood and its application: a general approach to optimal parameter
estimation. Springer Science & Business Media, 2008.
H. W. Kuhn. Nonlinear programming: a historical view. In Traces and Emergence of
Nonlinear Programming, pages 393–414. Springer, 2014.
B. G. Lindsay. Composite likelihood methods. Contemporary Mathematics, 80(1):221–39,
1988.
B. G. Lindsay, G. Y. Yi, and J. Sun. Issues and strategies in the selection of composite
likelihoods. Statistica Sinica, 21(1):71, 2011.
C. Varin, N. M. Reid, and D. Firth. An overview of composite likelihood methods. Statistica
Sinica, 21(1):5–42, 2011.
R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix?
Journal of Theoretical Probability, 25(3):655–686, 2012.
| 10 |
Noname manuscript No.
(will be inserted by the editor)
Identifying Hazardousness of Sewer-Pipeline
Gas-Mixture using Classification Methods
A Comparative Study
arXiv:1707.00561v1 [cs.NE] 16 May 2017
Varun Kumar Ojha · Parmartha Dutta ·
Atal Chaudhuri
Received: date / Accepted: date
Abstract In this work, we formulated a real-world problem related to sewer-pipeline gas detection using classification-based approaches. The primary
goal of this work was to identify the hazardousness of sewer-pipeline to offer
safe and non-hazardous access to sewer-pipeline workers so that the human
fatalities, which occurs due to the toxic exposure of sewer gas components,
can be avoided. The dataset acquired through laboratory tests, experiments,
and various literature-sources were organized to design a predictive model
that was able to identify/classify hazardous and non-hazardous situation of
sewer-pipeline. To design such prediction model, several classification algorithms were used and their performances were evaluated and compared, both
empirically and statistically, over the collected dataset. In addition, the performances of several ensemble methods were analyzed to understand the extent
of improvement offered by these methods. The result of this comprehensive
study showed that the instance-based-learning algorithm performed better
than many other algorithms such as multi-layer perceptron, radial basis function network, support vector machine, reduced pruning tree, etc. Similarly, it
was observed that multi-scheme ensemble approach enhanced the performance
of base predictors.
V. K. Ojha
IT4Innovations, VŠB Technical University of Ostrava, Ostrava, Czech Republic and Dept.
of Computer Science & Engineering, Jadavpur University, Kolkata, India
E-mail: [email protected]
P. Dutta
Dept. of Computer & System Sciences, Visva-Bharati University, India
E-mail: [email protected]
A Chaudhuri
Dept. of Computer Science & Engineering, Jadavpur University, Kolkata, India E-mail:
[email protected]
Neural Computing and Applications
DOI: 10.1007/s00521-016-2443-0
Keywords Sewer gas detection · Neural network · Classification · KS test
1 Introduction
This work is in the view of providing a solution to a real-world problem using technology, where human fatalities need to be avoided. Hence, the technology should be as simple as possible. In this work, we addressed a complex real-world problem related to sewer-pipeline gas detection, where sewer-pipeline safety detection (in terms of a non-toxic environment) was required to allow
maintenance and cleaning of the pipeline. The sewer gas detection is a highly
complex problem because of the presence of several toxic gases in a mixture
form, and a single gas detector may not offer reliable solution. Therefore, we
studied the complexity of this problem in terms of gas mixture. The primary
goal was to offer a simple solution with a high accuracy so that it was easy to
categorize the hazardous situation in straightforward way such as “hazardous”
or “non-hazardous.” To meet this simplicity, we formulated sewer-pipeline gas
detection problem as a classification problem.
Sewer-pipeline contains a mixture of several toxic gases such as hydrogen sulphide (H2 S), ammonia (NH3 ), methane (CH4 ), carbon dioxide (CO2 ),
nitrogen oxides (NOx ), etc., [1, 2, 3]. Usually, this mixture is generated due
to the biodegradation of the waste and the sewage into the sewer-pipeline.
Such toxic gas-mixture is fatal for those who come to the proximity/exposure
of these gases. Following this, an alarming number of human fatalities are
reported each year by the newspapers and the other agencies [4, 5,6]. The
authorities that are responsible for maintaining and cleaning the sewer pipeline provide various electronic portable gas detectors, available in the market, to the employed persons so that they can determine the safeness of the sewer environment before physically getting involved in the maintenance work. However, the available electronic portable gas detectors do not provide satisfactory results, as is evident from the recent comments from the judiciary to
these authorities. In a judgment to a civil appeal number 5322 of 2011, the
Supreme Court of India stated, “the State and its agencies/instrumentalities
cannot absolve themselves of the responsibility to put in place effective mechanism for ensuring safety of the workers employed for maintaining and cleaning
the sewage system [7].” Similarly, in another judgment, the Supreme Court of
India stated, “...entering sewer lines without safety gears should be made a
crime even in emergency situations... [8, 9].” This motivated us to carry out
our research in this domain and to come out with a simple solution so that
without having the minimum knowledge of the technicalities of gas composition and safety limits, a person is able to understand the environment of a
sewer system before entering.
To ensure the simplicity in model, we collected and preprocessed data to
realize sewer gas-detection as a binary class classification problem. However, in
this work, apart from the objective of constructing a prediction model, we set
a secondary objective, which was to analyze the performances of the classifiers,
both empirically and statistically. To meet these objectives, we used 12 base
predictors from four different categories such as neural network based classifiers, tree based classifiers, instance based classifiers, and rule based classifiers.
The algorithms were applied over the collected dataset and the performance
of the algorithms were collected in terms of the accuracy. The collected results
were then used for analyzing the performance superiority of the one algorithm
over another or the one category of algorithms over another.
We observed that the performance of the algorithms was independent of the category they belong to. For example, the instance-based k-nearest neighbor, the logistic model tree, and the support vector machine come from three different categories, but they had very competitive performance. However, we must consider the "no-free-lunch" theorem, which suggests that some algorithms perform better on some problems and others on other problems [10]. Therefore, to find out which predictor performs best in this case, we used 12 base
predictors and nine ensemble methods.
Rest of the article is organized as follows. A background study is provided
in Section 2.1, which leads to setting ground for describing our contribution to
the sewer-pipeline gas detection. In Section 2.2, we provide a detailed description of the data collection and preprocessing mechanisms, which constitute
the core and significant part for formulating gas detection problem as a binary classification problem. Section 2.3 deals with the brief descriptions of
the classifiers/algorithms and methods used for constructing the prediction
model. The design of comprehensive experiment set for the evaluation of the
classifiers is reported in Section 3. Whereas, Section 3 describes empirical and
statistical evaluation of the classifiers, discussions and conclusion are reported
in Sections 4 and 5, respectively.
2 Methodology
In this Section, we put together the background study, the data collection
mechanisms, and the classification methods definitions. The background study
describes the significance of the sewer-pipeline gas detection problem and the
data collection mechanism describes the formulation of gas-detection as a classification problem.
2.1 Background Study
The literature review was conducted from the perspective of electronic-nose (E-NOSE) and gas-detection systems to cover a broad area of research in the field of gas detection and modeling using intelligent computing techniques/algorithms.
Although not much work specifically on sewer gas-mixture-detection was reported in the past, few notable contributions were observed. Li et al. [11]
reported a noticeable research work on the development and design of an
electronic nose (E-NOSE) and gas detection system, where a neural network
(NN)-based mixed gas (NOx, and CO) measurement system was developed.
On the other hand, Srivastava et al. [12, 13] proposed a design of an intelligent E-NOSE system using backpropagation (BP) and a neuro-genetic approach. Llobet et al. [14] presented a pattern recognition approach based on the wavelet transformation for gas-mixture analysis using a single tin-oxide sensor. Liu et al. [13] addressed a genetic-NN algorithm to recognize patterns of mixed gases (a
mixture of three component gases) using infrared gas sensor. Lee et al. [15]
illustrated uses of micro gas sensor array (GSA) combined with NN for recognizing combustible leakage gases. Ambard et al. [16] have demonstrated use
of NN for gas discrimination using a tin-oxide GSA for the gases H2 , CO and
CH4 . In [17], authors have illustrated a NN-based technique for developing a
gas sensory system for sensing gases in a dynamic environment. Pan et al. [18]
have shown several applications of E-NOSE. Wongchoosuka et al. [19] have
proposed an E-NOSE detection system based on carbon nanotube-SnO2 gas
sensors for detecting methanol. Zhang et al. [20] developed a knowledge-based
genetic algorithm for detecting mixed gas in mines. Won et al. [21] proposed
a system for estimation of hazardous gas release rate using optical sensor and
NN-based technique. The following salient points came out of the above mentioned articles:
– Mainly, BP and NN-based approaches were studied so far for detecting
gas-mixtures.
– Mostly, the E-Nose systems reported in the past were developed for the
gas-mixtures of only two or three gases and the sensors of the gases used
were less cross-sensitive to the other gases in mixtures.
– Cross-sensitivity during sensing is an important factor in gas detection system, which was least reported in literature as yet. However, Ojha et al. [22,
23, 24, 25,26, 27] offered a few methods such as neuro-genetic, neuro-swarm,
ant-colony-based, neuro-simulated annealing, etc., where cross-sensitivity
factor has been addressed to some extent. However, these works were primarily related to regression modeling.
– The impact of humidity and temperature on sensors remained ignored so
far.
– The gas detection system or E-Nose was viewed only in the framework of
regression problems and not classification problem.
The classification-based approach allowed us to determine the hazardous and non-hazardous situations of a sewer-pipeline. In addition, the collection, organization, and preprocessing of the collected data enabled us to address the cross-sensitivity issue firmly. The cross-sensitivity issue occurs because of the sensitivity of one gas-sensor towards multiple gases. This was the case in our work, where a semiconductor-based GSA was designed using five gas-sensors. Each
gas-sensor was typically meant for detecting its respective target gas. Hence,
when the GSA was used for collecting data for a mixture of gases, the crosssensitivity in the sensed values (collected data) became inevitable. Therefore,
rather than considering pure results of the respective gases, we registered the
cross-sensitive results as a part of our-collected data. Since a computation-
ally intelligent model learned from the data and also maintained the crosssensitivity patterns registered in terms of data values itself, a learned model
accurately predicts an unknown gas mixture.
2.2 Equipment and Data Collection Mechanism
(A/trained/Classifier/embedded/into/
electronic/(EEPROM)./
(Buzzer/light)/
Intelligent/unit./
Output/Unit
(Figure/2)
Gas/Sensor/
Array/
Before explaining the details of data collection and equipment, we need to
explain the basic design and the purpose of our work, which is to offer an
intelligent gas detection system (an electronic portable gas detector) that will
be a result of embedding learned-predictor (trained-classifier) into an electronic system. The data flow into our developed intelligent system is shown in
Fig. 1, which describes the entire process of the intelligent system design, which
is divided into three phases: 1) The data acquisition unit, which consists of
gas suction-motor chamber, GSA, and data acquisition-cum data-preprocessor
block; 2) An intelligent unit (classifier unit), which receives data from dataacquisition unit and classifying the acquired data patterns; 3) The output unit,
which prompts the result in terms of colored light and buzzer. Hence, our objective here was limited to only train a classifier using the collected data. We
describe the data collection process as follows.
Fig. 1 Block diagram of intelligent system design (real time data flow process)
At first, we collected the data samples from the data-sheets, literature, and
laboratories test of the collected gas mixture samples from sewer-pipelines.
Second, we designed our own metal oxide semiconductor (MOS) gas sensors
array (GSA) that was used for verifying the literature and laboratory data and
for generating the data samples for the purpose experiments. Our designed
GSA consists of five gas-sensors for sensing five different gases. They include
hydrogen sulphide (H2 S), ammonia (NH3 ), methane (CH4 ), carbon dioxide
(CO2 ), and nitrogen oxides (NOx ). Typically, MOS sensors are resistance-type
electrical sensors, where responses are change in circuit resistance proportional
to gas concentration, A resistance type sensor responds to change in resistance
due to change in the concentration of gases. The change in resistance is given
as δRs /R0 , where δRs is change in MOS sensor resistance and R0 is base
resistance or the sensing resistance at a specifics gas concentration in clean
air [19]. The R0 of the sensors MiCS - 4514, MQ - 7, MQ - 136, MQ - 135, and
MQ - 4 is 0.25 ppm, 100 ppm, 10 ppm, 100 ppm and 1000 ppm, respectively.
Here, ppm is the unit for measuring concentration of gas into air which is
defined as follows: 1 ppm is equal to 1 volume of a gas into 106 volume of air.
A typical arrangement of a gas sensor array is shown in Fig. 2. The circuitry
shown in Fig. 2 (left) was developed in our laboratory. Here, the fabricated
and installed sensors were MiCS - 4514, MQ - 7, MQ - 136, MQ - 135, and
MQ - 4 for gases NO2 , CO, H2 S, NH3 , and CH4 , respectively [28, 29].
The gas sensors used were sensitive not only to their target gases but also to other gases in the gas-mixture [30, 31]. Hence, the cross-sensitivity effect over MOS sensors was confirmed [32]. It was moreover confirmed that the sensor responses were noisy, and accordingly the patterns of such noise were considered and recorded as instances in our dataset. Hence, a non-intelligent use of raw sensor-response values for hazardousness prediction may be misleading in the operating (real-world) environment. Therefore, a trained electronic portable gas detector may be used to predict sewer hazardousness accurately. Hence, the effort in this work was to provide such a classifier.
Data collection had vital role in training of a classifier. Data samples were
collected as per the following steps. At first, several manhole samples collected
from the Kolkata, India municipal area were tested in laboratory to identify
the presence of several toxic gases such as nitrogen dioxide (NO2 ), carbon
monoxide (CO), hydrogen sulphide (H2 S), ammonia (NH3 ), methane (CH4 ),
and carbon dioxide (CO2 ). Secondly, gas sensors were identified for each of the
respective gases. As a result we came out with the procurement of gas sensor
MiCS - 4514, MQ - 7, MQ - 136, MQ - 135, and MQ - 4 for NO2 , CO, H2 S,
NH3 , and CH4 , respectively. We collected data sheets form the companies
for the respective sensors. In the third step, a laboratory was setup for the
verification and collection of the sensor response of the respective gas sensors
in certain range of their concentration. Specifically, the concentration range in
ppm laid down in sensor manuals of sensors MiCS - 4514, MQ - 7, MQ - 136,
MQ - 135, and MQ - 4 are [0.25 - 5], [20 - 1000], [1 - 100], [10 - 300] and [300
- 10000] of the gases NO2 , CO, H2 S, NH3 , and CH4 , respectively. In addition,
the lab was setup (see Fig. 2 [right]), where gas cylinders were connected to
a gas concentration measuring unit called mass flow controller (MFC), which
was further connected to a gas chamber, where each gas was allowed to pass
in a specific concentration over an array of gas sensor. More specifically, the
behavior of each of the gas sensors was recorded.
Fig. 2 Laboratory-scale gas sensor array (GSA) [28, 29]
The following steps were used for preparing data sample for the classifiers’
training. First, hazardous (safety) limits of the component gases of manhole
gas mixture were collected. Secondly, three different levels, (i) above safetylimit, (ii) at safety-limit, and (iii) below safety-limit for each manhole gas were
recognized. Thirdly, gases were mixed in different combination to prepare several mixture sample that were used to pass over GSA. Table 1 indicates few
examples of such mixture of gases in different combinations. For example, when
we mix five gases each of which has three different recognized concentration
levels, we get 243 different combinations (35 ). In addition, we considered the
role of humidity and temperature to influence the sensor’s behavior. Accordingly, the data values were recorded. Hence, our collected dataset contained
seven input features and an output class. Each sample was labeled with “0” for
safe sample (if the responses of all five sensors were under the maximum safety
limit) or “1” for unsafe sample (if the responses of any among the five sensors were above the maximum safety limit). The safety limits of the manhole
gases are as follows: safety limit of NH3 is between 25 ppm and 40 ppm [33],
CO is in between 35 ppm and 100 ppm [34], H2 S is in between 50 ppm and
100 ppm [35], CO2 is in between 5000 ppm and 8000 ppm [36] and CH4 is in
between 5000 ppm and 10000 ppm [37]. Table 2 illustrates a fraction of the
collected data samples.
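The labeling rule described above can be summarized by the short sketch below, which assigns class 1 (unsafe) whenever any component exceeds the upper end of the safety range quoted in the text. The function name and dictionary are our own illustrative choices, not part of the original implementation.

```python
# Upper safety limits (ppm) quoted above for the manhole gases.
UPPER_LIMIT_PPM = {"NH3": 40, "CO": 100, "H2S": 100, "CO2": 8000, "CH4": 10000}

def label_sample(concentrations_ppm):
    """Return 1 (unsafe) if any gas exceeds its upper safety limit, else 0 (safe)."""
    return int(any(concentrations_ppm[g] > UPPER_LIMIT_PPM[g]
                   for g in UPPER_LIMIT_PPM))

print(label_sample({"NH3": 20, "CO": 10, "H2S": 10,  "CO2": 2000, "CH4": 2000}))   # 0 (safe)
print(label_sample({"NH3": 20, "CO": 10, "H2S": 120, "CO2": 2000, "CH4": 2000}))   # 1 (unsafe)
```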
2.3 Classification Based Approach
We categorized the classifiers into four different groups. Each category contains three classifiers.
Table 1 Samples of gas-mixture in different concentrations (gas concentrations in ppm).

    #      Humidity  Temperature  NO2  CO  H2S  NH3  CH4    Class  Status
    1      65        20           0    10  10   20   2000   0      safe
    2      65        20           0    10  10   20   5000   0      safe
    3      65        20           0    10  10   20   10000  1      unsafe
    :
    7535   65        30           0    10  10   20   2000   0      safe
    7536   65        30           0    10  10   20   5000   1      unsafe
    7537   65        30           0    10  10   20   10000  0      safe
    :
    16036  75        50           20   50  50   50   10000  1      unsafe
    16037  75        50           20   50  50   100  2000   1      unsafe
    16038  75        50           20   50  50   100  5000   1      unsafe
2.3.1 Network Based Classifiers
Multi-layer perceptron (MLP) is a computational model that imitates the human brain and learns from the environment, i.e., from data. In our work, we used a three-layered MLP, whose layers are the input layer, a hidden layer, and the output layer [38].

Radial Basis Function Network (RBF) is a special class of MLP, where the inputs are mapped onto a hidden layer consisting of radial basis functions, which perform a non-linear mapping of the input space [39].

Support vector machine (SVM) is a supervised learning computational model that maps the input to a high-dimensional feature space using the kernel trick. Hence, patterns that are not linearly separable in the input space can be linearly classified in a high-dimensional feature space [40].
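The experiments in this paper were carried out in WEKA (Section 3); purely as an illustration of the kind of network-based classifiers described above, the sketch below fits a three-layered MLP and an SVM with scikit-learn on a synthetic stand-in for the 7-feature sewer-gas dataset. The library calls and parameter values are our own choices, not the settings of Table 3.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# toy stand-in for the 7-feature, binary-class sewer-gas dataset
X, y = make_classification(n_samples=500, n_features=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("MLP test accuracy:", mlp.score(X_te, y_te))
print("SVM test accuracy:", svm.score(X_te, y_te))
```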
2.3.2 Tree Based Classifiers
Reduced pruning tree (REP) is a tree-based classification method, where a tree-like structure is designed for predicting the target class based on the input variables [41, 42]. More specifically, the leaves of the tree offer a decision on the class based on the conjunction of the input features represented by the branches of the tree. A REP tree is a decision tree whose size is reduced by pruning inefficient branches [43].
Naive Bayes tree (NBT) is a special class of decision tree, where the leaf nodes of the decision tree that offer a decision on the class are replaced by a Naive Bayes classifier, which decides the class label based on the features and a learned threshold [44].
Table 2 Samples of calibrated sensor responses (δRs/R0) based on the knowledge gathered from literature, data-sheets, lab tests, and the scaling process.

    #      Humidity  Temperature  InNO2  InCO   InH2S  InNH3  InCH4  Class
    1      65        20           0.813  6.929  5.938  3.433  3.985  0
    2      65        20           1.301  7.521  5.525  3.521  2.178  0
    3      65        20           1.035  6.658  5.841  3.633  1.620  1
    :
    7535   65        30           1.038  7.565  5.658  3.228  2.275  0
    7536   65        30           1.054  6.694  5.745  3.692  1.268  1
    7537   65        30           0.642  7.210  5.819  1.326  3.530  0
    :
    16036  75        50           4.645  2.764  0.608  2.709  0.499  1
    16037  75        50           4.712  2.985  0.641  1.228  0.450  1
    16038  75        50           4.911  2.433  0.381  0.937  0.481  1
Logistic Model Trees (LMT) are similar to NBT in that they transform the leaves of a decision tree into logistic regression nodes. A logistic regression maps independent variables to categorical dependent variables using a logistic function [45, 46]. Hence, LMT is a simple idea, where the nodes of a decision/classification tree are replaced by logistic regression models [47].
2.3.3 Rule Based Classifiers
Decision Table (DT) is a simple tabular representation of the data, where a decision is made by matching the features of an instance against the entries of a decision table. On a successful match, the majority class label of the matching entries is returned; otherwise the majority class label of the entire dataset is returned as the decision for an unlabeled instance [48].
PART is a rule-based classification method based on a partial decision tree that generates a list of rules, used subsequently for making predictions on unknown data instances. The rules are generated from the partial decision tree, which splits the dataset into subsets until the entire dataset is exhausted, forming the internal and leaf nodes of the tree [49].
Majority Predictor (Zero R) is the simplest possible form of classification method. It is based on the majority class label in the dataset; in simple words, it always predicts the majority class.
2.3.4 Instance Based Classifiers
Instance-Based Learning (IBK) produces a concept description, which is the primary output of an IBK algorithm. It is a function that maps an instance to a category (class label). The concept description function is updated by a training procedure that involves two functions: similarity and classification. The similarity function computes the similarity between a training instance and the pre-stored instances and returns a numeric value. Then, the classification function assigns a class label to the instance based on the results of the similarity function. Accordingly, the concept description is updated [50].
K∗ (K Star) is an instance-based learner that uses an entropy-based similarity matching function for searching/matching test instances to the learned
instances [51].
Locally Weighted Learning (LWL). In locally weighted learning, prediction models are built at local points in the dataset, or at a specific point of interest, rather than building a single model for the entire dataset. Hence, a linear regression, a naive Bayes classifier, or any other classifier may be used to create the local models. In this case, we use a Decision Stump, which is a single-level decision tree model, for prediction [52, 53].
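Since the instance-based learner (IBK, essentially a k-nearest-neighbour classifier) turned out to be among the strongest base predictors in this study, the following sketch shows an analogous k-NN classifier in scikit-learn. It is an illustrative stand-in for the WEKA implementation; the choice of k and the synthetic data are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=7, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3)          # k is a hypothetical choice
scores = cross_val_score(knn, X, y, cv=10)         # 10-fold cross-validation
print("mean 10-fold CV accuracy:", scores.mean())
```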
2.3.5 Ensemble Methods
In this work, we tried to exploit different methods of building ensembles. For an ensemble to perform well, we need to take into account two things: the accuracy of the predictors and the diversity among the predictors [54]. For example, Bagging maintains diversity by bootstrapping the dataset, AdaBoost combines several weak predictors, Random Subspace maintains diversity by splitting the feature space, Random Committee maintains diversity by creating predictors using different random seeds, and Rotation Forest maintains diversity by splitting and extracting feature subspaces using principal component analysis. Similarly, in the multi-scheme and voting schemes, we combine several predictors to maintain diversity. We describe the ensemble methods as follows.
Bagging. In Bagging, several copies of same predictor is created. Each copy
of the predictor learns a different replicate of learning set created from the
complete training set using bootstrapping. Finally, the predictor’s decision is
combined using plurality voting method [55].
Adaptive Boosting (AdaBoost) is an ensemble technique that combines several
weak predictors and inaccurate rules to create an accurate predictor [56].
Random Subspace (Random SUB). In the random subspace ensemble method, the feature space is divided into several feature subsets and a predictor is constructed for each subset. Finally, the decisions of the constructed predictors are combined using a voting method [57].

Random Committee (Random COM). In a random committee ensemble, several predictors are constructed over the same dataset, but they use different random seeds to maintain diversity in the ensemble.
Rotation Forest (Rotation FRST). In this approach, the training sets for the predictors are created by splitting the feature set into K subsets, and Principal Component Analysis is applied to extract all the principal components [58]. Hence, diversity among the predictors is maintained by K-axis rotation to form new feature sets for training [58].
Ensemble Selection (Ensemble SEL). In the ensemble selection approach, the ensemble starts with an empty bag, and predictors (chosen from a library of trained predictors) that maximize the performance of the ensemble are added to the bag one by one; the decision of the ensemble is then computed using a voting method [59].
Voting Scheme (Vote). The voting scheme combines the probability distributions of several chosen predictors/classifiers (or the predictors available in a bag formed for the ensemble) using a majority voting combination method [60].
Multi-Scheme (Multi). The multi-scheme ensemble approach uses a bag of predictors and chooses the output class by selecting a predictor from the bag based on the cross-validation performance of the predictors [60].
Weighted Predictor Ensemble (WPE). In this ensemble scheme, the weights of the predictors were determined. Subsequently, the ensemble output of k predictors was computed as
\[
y \;=\; \arg\max_{j=1,\ldots,c} \; \sum_{i=1}^{k} w_i \, I\!\left(P_i = \omega_j\right),
\]
where c is the number of classes (here it is two), w_i is the weight of the i-th predictor, and I(P_i = ω_j) is an indicator function that returns one when predictor P_i predicts class ω_j and zero otherwise.
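A minimal sketch of this rule follows (our Python illustration, not the implementation used in the experiments; the weights and votes below are arbitrary):

def wpe_predict(votes, weights, classes=(0, 1)):
    # votes: predicted class label of each of the k predictors;
    # weights: the corresponding predictor weights w_i.
    scores = {c: 0.0 for c in classes}
    for label, w in zip(votes, weights):
        scores[label] += w                  # accumulates w_i * I(P_i = omega_j)
    return max(scores, key=scores.get)      # arg max over the classes

# Three predictors: the single heavier predictor outweighs the two lighter ones.
print(wpe_predict([0, 1, 0], [0.2, 0.5, 0.2]))   # -> 1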
3 Experimental Framework and Results
Our aim in the experiment design was to obtain a highly accurate model for predicting the hazardousness of the environment in a sewer pipeline. The sewer-pipeline environment was represented by the collected dataset. The second objective of the experiment design was to obtain results for analyzing the classifiers (predictors). Accordingly, the results of the classifiers were collected.
Table 3 presents the parameter settings of the chosen classifiers. For the evaluation of the classifiers, we repeated our experiments 10 times. Finally, the results were compared based on empirical and statistical (Kolmogorov–Smirnov test) evaluation. We used the WEKA [61] and MATLAB [62] tools for our experiments.
We organized the experimental results into three parts, as reflected in Table 4. The first part describes the category-wise performance of the classifiers; hence, the performance of each category of classifiers was evaluated. We represented the performance of the classifiers by their training and test accuracy, where an accuracy close to 1.0 indicates 100% classification accuracy. Accordingly, the standard deviations (std) of the training and test accuracies were reported to assess the consistency of the classifiers' performance. In Table 4, the classifiers are arranged by their average accuracy over the 10-fold CV test set, from the better performing to the less performing classifiers. The dataset was partitioned into 10 equal sets; each time, 9 sets were used for training and one set for testing. This process was repeated 10 times, and each time a unique test set was used.
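The protocol can be summarised by the following sketch (a Python illustration; the actual runs were performed in WEKA, and train_and_score stands for any of the classifiers above):

def ten_fold_cv(dataset, train_and_score, k=10):
    # Split the dataset into k roughly equal folds.
    folds = [dataset[i::k] for i in range(k)]
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        accuracies.append(train_and_score(train, test))   # test accuracy on fold i
    return sum(accuracies) / k                            # averaged over the k folds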
In the second part, we organized the results according to the rank of the classifiers' performance over the 10-fold test set. It may be noted that for each classifier we collected 10 instances of 10-fold CV training and test results; hence, the results in Table 5 reflect the averaged training and test accuracy of the classifiers. However, ranking the classifiers based only on average results does not say much about the quality of a classifier. Hence, in the
Table 3 Parameter setting of different classifiers

Category | Classifier | Parameters
Network-Based Classifiers (F1) | MLP | Learning rate: 0.3, momentum factor: 0.2, iterations: 500, nodes in hidden layer: 100
Network-Based Classifiers (F1) | RBF | Kernel: Gaussian basis function
Network-Based Classifiers (F1) | SVM | Kernel: radial basis function
Tree-Based Classifiers (F2) | REP | Minimum no. of instances per leaf: 2, split proportion: 0.001
Tree-Based Classifiers (F2) | NBT | Leaf node: naive Bayes classifier
Tree-Based Classifiers (F2) | LMT | Node: logistic function, number of instances per node for splitting: 15
Instance-Based Classifiers (F3) | IBK | Similarity function: linear nearest neighbor search, neighbor size: 1
Instance-Based Classifiers (F3) | K Star | Similarity function: entropy distance measure
Instance-Based Classifiers (F3) | LWL | Similarity function: linear nearest neighbor search, weight function: linear, classifier: Decision Stump
Rule-Based Classifiers (F4) | DT | Evaluation metric: accuracy, search method: best first
Rule-Based Classifiers (F4) | PART | Confidence threshold for pruning: 0.25
Rule-Based Classifiers (F4) | Zero R | —
Ensemble Classifiers (E1) | Bagging | Ensemble size: 10, classifier: REP Tree
Ensemble Classifiers (E1) | AdaBoost | Ensemble size: 10, classifier: Decision Stump
Ensemble Classifiers (E1) | Random SUB | Ensemble size: 10, classifier: REP Tree
Ensemble Classifiers (E1) | Random COM | Ensemble size: 10, classifier: Random Tree
Ensemble Classifiers (E1) | Rotation FRST | Ensemble size: 10, classifier: Random Tree
Ensemble Classifiers (E1) | Ensemble SEL | Ensemble size: 10, classifier: REP Tree
Ensemble Classifiers (E2) | Vote | Ensemble size: 12, classifiers: F1, F2, F3 and F4
Ensemble Classifiers (E2) | Multi Scheme | Ensemble size: 12, classifiers: F1, F2, F3 and F4
Ensemble Classifiers (E2) | WPE | Ensemble size: 12, classifiers: F1, F2, F3 and F4
third part of the results, we used a pairwise comparison of the classifiers based on the Kolmogorov–Smirnov (KS) test, which ascertains whether the supremacy of one classifier over another is statistically significant. A comprehensive matrix of the pairwise KS test results is presented in Table 6. The KS test is a non-parametric statistical test that measures the difference between the cumulative frequency distributions (cfd) of two samples. In other words, it indicates whether the empirical cfd of one sample is equal to ("="), larger than ("≻"), or smaller than ("≺") that of the other. It tells whether two datasets A and B are statistically similar ("A = B"), dissimilar ("A ≺ B"), where A is statistically dominated by B, or dissimilar ("A ≻ B"), where A is statistically dominant over B. In our experiments, the KS test was evaluated at the 5% significance level, i.e., with 95% confidence.
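One pairwise comparison of this kind can be reproduced with SciPy's two-sample KS test, as sketched below (our illustration; the paper used MATLAB, and the accuracy samples here are invented). Assigning the direction of dominance by comparing the sample means is one simple convention, stated here as an assumption rather than the authors' exact procedure.

from scipy.stats import ks_2samp

# Ten test-accuracy values per classifier (hypothetical numbers).
acc_a = [0.934, 0.933, 0.935, 0.932, 0.936, 0.934, 0.933, 0.935, 0.934, 0.934]
acc_b = [0.805, 0.810, 0.798, 0.807, 0.802, 0.806, 0.801, 0.809, 0.803, 0.804]

statistic, p_value = ks_2samp(acc_a, acc_b)      # two-sample Kolmogorov-Smirnov test
if p_value >= 0.05:                              # 5% significance level
    verdict = "A = B"                            # statistically similar
elif sum(acc_a) > sum(acc_b):
    verdict = "A > B"                            # A statistically dominant (assumed convention)
else:
    verdict = "A < B"                            # A statistically dominated
print(statistic, p_value, verdict)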
4 Discussions
Since the developed electronic portable gas detector is meant to be used by non-expert workers engaged in maintaining sewer pipelines, we are looking for a binary answer. Hence, our objective is to search for the classification accuracy and
Table 4 Experimental results of the classifiers over 10-fold cross-validation

Category | Classifier | Training avg. accuracy | Training std | Test avg. accuracy | Test std
NN-Based Classifiers | SVM | 0.9407 | 0.0008 | 0.9340 | 0.0041
NN-Based Classifiers | MLP | 0.8681 | 0.0029 | 0.8664 | 0.0081
NN-Based Classifiers | RBF | 0.8064 | 0.0090 | 0.8051 | 0.0187
Tree-Based Classifiers | LMT | 0.9697 | 0.0039 | 0.9360 | 0.0023
Tree-Based Classifiers | REP Tree | 0.9528 | 0.0025 | 0.9265 | 0.0067
Tree-Based Classifiers | NB Tree | 0.9064 | 0.0469 | 0.8898 | 0.0418
Instance-Based Classifiers | IBK | 1.0000 | 0.0000 | 0.9671 | 0.0023
Instance-Based Classifiers | K Star | 0.9997 | 0.0001 | 0.9638 | 0.0036
Instance-Based Classifiers | LWL | 0.7613 | 0.0014 | 0.7613 | 0.0128
Rule-Based Classifiers | PART | 0.9275 | 0.0097 | 0.9062 | 0.0103
Rule-Based Classifiers | Decision Table | 0.8672 | 0.0049 | 0.8553 | 0.0154
Rule-Based Classifiers | Zero R | 0.7613 | 0.0010 | 0.7613 | 0.0091
Ensemble Classifiers | Multi | 1.0000 | 0.0000 | 0.9672 | 0.0035
Ensemble Classifiers | Rotation FRST | 1.0000 | 0.0000 | 0.9622 | 0.0036
Ensemble Classifiers | Random COM | 1.0000 | 0.0000 | 0.9549 | 0.0073
Ensemble Classifiers | Bagging | 0.9728 | 0.0008 | 0.9395 | 0.0077
Ensemble Classifiers | WPE | 0.9635 | 0.0006 | 0.9356 | 0.0056
Ensemble Classifiers | Ensemble SEL | 0.9577 | 0.0009 | 0.9330 | 0.0077
Ensemble Classifiers | Vote | 0.9423 | 0.0128 | 0.9214 | 0.0143
Ensemble Classifiers | Random SUB | 0.9160 | 0.0103 | 0.8720 | 0.0165
Ensemble Classifiers | AdaBoostM1 | 0.7613 | 0.0010 | 0.7613 | 0.0091
Table 5 Ranking algorithms according to their performance on test set (10-fold CV)

Rank | Category | Classifier | Training | Test
1 | E2 | Multi | 100.0000 | 96.8060
2 | F3 | IBK | 100.0000 | 96.7945
3 | F3 | KStar | 99.9653 | 96.4677
4 | E1 | Rotation FRST | 100.0000 | 96.2725
5 | E1 | Random COM | 100.0000 | 95.7737
6 | E1 | Bagging | 97.2874 | 94.1025
7 | E2 | WPE | 96.3564 | 93.9865
8 | F2 | LMT | 96.7454 | 93.4674
9 | E1 | Ensemble SEL | 95.7173 | 93.3778
10 | F1 | SVM | 94.0837 | 93.3647
11 | F2 | REPTree | 95.2469 | 92.4733
12 | E2 | Vote | 93.9593 | 92.1314
13 | F4 | PART | 92.7331 | 90.8675
14 | F2 | NBTree | 89.4770 | 88.0795
15 | F1 | MLP | 86.8566 | 86.6336
16 | E1 | Random SUB | 90.9650 | 86.4397
17 | F4 | DT | 86.6874 | 85.4790
18 | F1 | RBF | 80.7795 | 80.7571
19 | F3 | LWL | 76.1285 | 76.1451
20 | F4 | ZeroR | 76.1285 | 76.1451
21 | E1 | AdaBoost | 76.1302 | 76.1302
Table 6 Pairwise Kolmogorov–Smirnov test results over the 10-fold CV test accuracies of the 21 classifiers (DT, IBK, KStar, LMT, LWL, MLP, NBTree, PART, RBF, REPTree, SVM, ZeroR, Multi-Scheme, Vote, AdaBoost, Bagging, Ensemble SEL, Random COM, Random SUB, Rotation FRST, WPE). Each cell of the matrix records whether the row classifier is statistically equivalent to ("="), dominated by ("≺"), or dominant over ("≻") the column classifier at the 5% significance level.
the model (weights) with the highest accuracy, so that such a combination may be implemented in the form of an electronic portable gas detector.
Moreover, it is difficult to be certain about the accuracy required of an implemented electronic portable gas detector, because the toxicity of exposure to a gas depends on the exposure time and not only on the safety limit. However, with real-time monitoring and the requisite maintenance in place, the accuracy requirement of the detector may be relaxed; hence, we chose 90% accuracy as the target for our developed detector. So, the classifiers' performance was compared against a threshold of 90% accuracy.
First, let us discuss the obtained results. For the classifiers belonging to the network-based category F1, SVM performs better than its counterparts MLP and RBF, both in terms of high accuracy (test accuracy 0.93403) and high consistency (std on test accuracy 0.0041). The performance of MLP was next to that of SVM, also with high consistency, while the performance of RBF was found to be inconsistent and poorer in comparison to its counterparts.
In the tree-based category F2, the performance of LMT and REPTree was comparable, whereas NBTree showed poorer performance than its counterparts.
In the instance-based category F3, the performance of IBK and K Star was comparable, with high accuracy and high consistency, whereas LWL performed poorly, with a very low accuracy.
When it came to the rule-based category F4, PART outperformed the others in its category, but its consistency was not as high as that of the other well performing classifiers such as IBK, SVM, and MLP. The classifier ZeroR consistently performed poorly in comparison to all other classifiers.
In the ensemble categories E1 and E2, Multi-Scheme, Random COM, Rotation FRST, Bagging, WPE, and Ensemble SEL performed with high accuracies (over 90%) and high consistency. However, the performance of the ensembles Random SUB, Vote, and AdaBoost was not as satisfactory as that of the other ensembles. One of the reasons behind the poor performance of Random SUB was its use of subsets of the features: feature selection may not help for this dataset, because each feature is highly correlated with the output feature. Similarly, Vote used probability measures to combine the predictors and AdaBoost combined weak predictors, whereas the better performing ensembles exploited the best predictors; hence, the latter performed better in this scenario.
Considering the assumption that 90% accuracy makes a predictor good enough for implementation as a gas detector, we can see from Table 5 that the classifiers belonging to category F3 (with the exception of LWL) performed better than the classifiers of the other categories. However, the instance-based classifier IBK is not suitable for implementation as an electronic gas detector, since it requires a large amount of memory to store all the instances of the training set. Moreover, an IBK prediction is computed from all training samples, so it takes a long time to compute the output, which is unacceptable in real time.
The next category whose performance was found to be close to IBK was F2 (tree-based classifiers). Two classifiers, LMT and REP Tree, passed the 90% accuracy threshold. By contrast, two classifiers from each of F1 and F4 performed below 90% accuracy. However, SVM performed significantly well, with a very high accuracy of 93.36%; similarly, the classifier PART from category F4 had an accuracy of 90.86%. Since SVM produces fewer parameters than the tree-based predictors and robustly accommodates noisy attributes, it was recommended from these experiments as a proper choice for the implementation of the proposed gas detector.
5 Conclusion
In this work, we explored a real-world problem in the context of classification, where we simplified the approach by offering a binary decision to the problem. We explored the problem of detecting the hazardousness of a sewer-pipeline environment. This is a very crucial problem, since it concerns the safety of the persons who have to work in the toxic environment of
the sewer-pipeline. Usually, a sewer-pipeline environment contains a mixture of toxic gases. Hence, we collected samples from sewer pipelines at different locations and examined those samples to identify data samples for our experiments. We prepared a large dataset by collecting gas sensor responses from laboratory tests and from the literature, and scaled the collected responses to form a dataset in which non-hazardous samples were labeled 0 and hazardous samples were labeled 1. Finally, we applied 21 different classifiers to this dataset and evaluated their empirical and statistical performance. We discovered that, for this problem, the instance-based classifiers performed best, followed by the tree-based classifiers. However, we found that the performance of the classifiers depended on the ability and mechanism of the classifiers themselves and not on the category to which they belong.
Acknowledgements This work was supported by the IPROCOM Marie Curie Initial Training Network, funded through the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme.
References
1. J. Whorton, ““the insidious foe”–sewer gas,” Western Journal of Medicine, vol. 175,
no. 6, pp. 427–428, Dec. 2001.
2. R. J. Lewis, Sax’s Dangerous Properties of Industrial Materials, 12th ed. Wiley, 2010.
3. N. Gromicko, “Sewer gases in the home,” 2006, http://www.nachi.org/sewer-gaseshome.html.
4. T. Hindu, “Deaths in the drains,” 2014, http://www.thehindu.com/opinion/oped/deaths-in-the-drains/article5868090.ece?homepage=true., Accessed on 15 Dec 2015.
5. NDTV, “He died on diwali inside a sewage pipe,” 2014, http://www.ndtv.com/opinion/he-died-on-diwali-inside-a-sewage-pipe-1245559, Accessed on 15 Dec 2015.
6. S. Anand, “Dying in the gutters,” Tehelka Magazine, vol. 4, no. 47, Dec 2007, http://archive.tehelka.com/story main36.asp?filename=Ne081207DYING.asp, Accessed on: 15 Dec 2015.
7. T. Hindu, “Provide safety gear to sewer workers who enter manholes, says court,”
2011,
http://www.thehindu.com/todays-paper/tp-national/provide-safety-gear-tosewer-workers-who-enter-manholes-says-court/article2228688.ece, Accessed on 15 Dec
2015.
8. ——, “Sewer deaths,” 2014, http://www.thehindu.com/opinion/letters/sewer-deaths/
article5873493.ece, Accessed on 15 Dec 2015.
9. ——, “Supreme court orders states to abolish manual scavenging,” 2014,
http://www.thehindu.com/news/national/supreme-court-orders-states-to-abolishmanual-scavenging/article5840086.ece, Accessed on 15 Dec 2015.
10. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE
Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
11. J. Li, “A mixed gas sensor system based on thin film saw sensor array and neural
network,” in Proceedings of the Twelfth Southern Biomedical Engineering Conference,
1993, pp. 179–181.
12. A. Srivastava, S. Srivastava, and K. Shukla, “On the design issue of intelligent electronic nose system,” in Proceedings of IEEE International Conference on Industrial
Technology 2000., vol. 2. IEEE, 2000, pp. 243–248.
13. ——, “In search of a good neuro-genetic computational paradigm,” in Proceedings of
IEEE International Conference on Industrial Technology 2000., vol. 1. IEEE, 2000,
pp. 497–502.
14. E. Llobet, R. Ionescu, S. Al-Khalifa, J. Brezmes, X. Vilanova, X. Correig, N. Barsan, and
J. W. Gardner, “Multicomponent gas mixture analysis using a single tin oxide sensor
and dynamic pattern recognition,” IEEE Sensors Journal, vol. 1, no. 3, pp. 207–213,
2001.
15. D.-S. Lee, S.-W. Ban, M. Lee, and D.-D. Lee, “Micro gas sensor array with neural
network for recognizing combustible leakage gases,” IEEE Sensors Journal, vol. 5, no. 3,
pp. 530–536, 2005.
16. M. Ambard, B. Guo, D. Martinez, and A. Bermak, “A spiking neural network for gas
discrimination using a tin oxide sensor array,” in 4th IEEE International Symposium
on Electronic Design, Test and Applications. IEEE, 2008, pp. 394–397.
17. H. Baha and Z. Dibi, “A novel neural network-based technique for smart gas sensors
operating in a dynamic environment,” Sensors, vol. 9, no. 11, pp. 8944–8960, 2009.
18. W. Pan, N. Li, and P. Liu, “Application of electronic nose in gas mixture quantitative
detection,” in IEEE International Conference on Network Infrastructure and Digital
Content. IEEE, 2009, pp. 976–980.
19. C. Wongchoosuk, A. Wisitsoraat, A. Tuantranont, and T. Kerdcharoen, “Portable electronic nose based on carbon nanotube- SnO2 gas sensors and its application for detection
of methanol contamination in whiskeys,” Sensors and Actuators B: Chemical, vol. 147,
no. 2, pp. 392–399, 2010.
20. Q. Zhang, H. Li, and Z. Tang, “Knowledge-based genetic algorithms data fusion and its
application in mine mixed-gas detection,” in Chinese Control and Decision Conference
(CCDC). IEEE, 2010, pp. 1334–1338.
21. W. So, J. Koo, D. Shin, and E. S. Yoon, “The estimation of hazardous gas release
rate using optical sensor and neural network,” Computer Aided Chemical Engineering,
vol. 28, pp. 199–204, 2010.
22. V. K. Ojha, P. Dutta, and H. Saha, “Performance analysis of neuro genetic algorithm
applied on detecting proportion of components in manhole gas mixture,” International
Journal of Artificial Intelligence & Applications, vol. 3, no. 4, pp. 83–98, 2012.
23. V. K. Ojha and P. Dutta, “Performance analysis of neuro swarm optimization algorithm
applied on detecting proportion of components in manhole gas mixture,” Artificial Intelligence Research, vol. 1, no. 1, pp. 31–45, 2012.
24. V. K. Ojha, P. Dutta, A. Chaudhuri, and H. Saha, “Convergence analysis of backpropagation algorithm for designing an intelligent system for sensing manhole gases,” in
Hybrid Soft Computing Approaches. Springer India, 2016, pp. 215–236.
25. P. Dutta and V. K. Ojha, “Conjugate gradient trained neural network for intelligent
sensing of manhole gases to avoid human fatality,” in Advances in Secure Computing,
Internet Services, and Applications. IGI Global, 2013, pp. 257–280.
26. V. K. Ojha, P. Dutta, A. Chaudhuri, and H. Saha, “Understating continuous ant colony
optimization for neural network training: A case study on intelligent sensing of manhole
gas components,” International Journal of Hybrid Intelligent Systems, vol. 12, no. 4,
pp. 185–202, 2016.
27. ——, “A multi-agent concurrent neurosimulated annealing algorithm: A case study
on intelligent sensing of manhole gases,” International Journal of Hybrid Intelligent
Systems, vol. 12, no. 4, pp. 203–217, 2016.
28. S. Ghosh, A. Roy, S. Singh, H. Saha, V. K. Ojha, and P. Dutta, “Sensor array for
manhole gas analysis,” in 1st International Symposium on Physics and Technology of
Sensors (ISPTS). IEEE, 2012, pp. 9–12.
29. S. Ghosh, H. Saha, C. RoyChaudhuri, V. K. Ojha, and P. Dutta, “Portable sensor array
system for intelligent recognizer of manhole gas,” in Sixth International Conference on
Sensing Technology (ICST). IEEE, 2012, pp. 589–594.
30. C. Cantalini, L. Valentini, I. Armentano, L. Lozzi, J. Kenny, and S. Santucci, “Sensitivity
to NO2 and cross-sensitivity analysis to NH3 , ethanol and humidity of carbon nanotubes
thin film prepared by PECVD,” Sensors and Actuators B: Chemical, vol. 95, no. 1, pp.
195–202, 2003.
31. K. D. Mitzner, J. Sternhagen, and D. W. Galipeau, “Development of a micromachined
hazardous gas sensor array,” Sensors and Actuators B: Chemical, vol. 93, no. 1, pp.
92–99, 2003.
32. J. Liu, Y. Zhang, Y. Zhang, and M. Cheng, “Cross sensitivity reduction of gas sensors
using genetic algorithm neural network,” in Optical Methods for Industrial Processes,
S. Farquharson, Ed., vol. 4201. Proceedings of SPIE, 2001.
33. K. J. Donham, “Exposure limits related to air quality and risk assessment,” Iowa Concentrated Animal Feeding Operations Air Quality Study, p. 164, 2002.
34. L. K. Weaver, “Carbon monoxide poisoning,” New England Journal of Medicine, vol.
360, no. 12, pp. 1217–1225, 2009.
35. S. Simonton, “Human health effects from exposure to low-level concentrations of hydrogen sulfide,” Occupational Health & Safety, Nov. 2007.
36. G. Shilpa, “New insight into panic attacks: Carbon dioxide is the culprit,” Journal
of Young Investigators, Nov. 2007, http://www.jyi.org/issue/new-insight-into-panicattacks-carbon-dioxide-is-the-culprit/.
37. D. W. Fahey and M. I. Hegglin, “Twenty questions and answers about the ozone layer:
2010 update,” Scientific assessment of ozone depletion, pp. 4–1, 2010.
38. A. S. Weigend, B. A. Huberman, and D. E. Rumelhart, “Predicting the future: A
connectionist approach,” International journal of neural systems, vol. 1, no. 03, pp.
193–209, 1990.
39. D. Lowe and D. Broomhead, “Multivariable functional interpolation and adaptive networks,” Complex System, vol. 2, pp. 321–355, 1988.
40. C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3,
pp. 273–297, 1995.
41. L. Olshen, C. J. Stone et al., “Classification and regression trees,” Wadsworth International Group, vol. 93, no. 99, p. 101, 1984.
42. J. R. Quinlan, C4.5: programs for machine learning. Elsevier, 2014.
43. F. Esposito, D. Malerba, G. Semeraro, and V. Tamma, “The effects of pruning methods
on the predictive accuracy of induced decision trees,” Applied Stochastic Models in
Business and Industry, vol. 15, no. 4, pp. 277–299, 1999.
44. W. N. H. W. Mohamed, M. N. M. Salleh, and A. H. Omar, “A comparative study
of reduced error pruning method in decision tree algorithms,” in IEEE International
Conference on Control System, Computing and Engineering (ICCSCE), 2012. IEEE,
2012, pp. 392–397.
45. S. H. Walker and D. B. Duncan, “Estimation of the probability of an event as a function
of several independent variables,” Biometrika, vol. 54, no. 1-2, pp. 167–179, 1967.
46. D. R. Cox, “The regression analysis of binary sequences,” Journal of the Royal Statistical Society. Series B (Methodological), pp. 215–242, 1958.
47. N. Landwehr, M. Hall, and E. Frank, “Logistic model trees,” Machine Learning, vol. 59,
no. 1-2, pp. 161–205, 2005.
48. R. Kohavi, “The power of decision tables,” in Machine Learning: ECML-95. Springer,
1995, pp. 174–189.
49. E. Frank and I. H. Witten, “Generating accurate rule sets without global optimization,”
1998.
50. D. W. Aha, D. Kibler, and M. K. Albert, “Instance-based learning algorithms,” Machine
learning, vol. 6, no. 1, pp. 37–66, 1991.
51. J. G. Cleary, L. E. Trigg et al., “K*: An instance-based learner using an entropic distance
measure,” in Proceedings of the 12th International Conference on Machine learning,
vol. 5, 1995, pp. 108–114.
52. E. Frank, M. Hall, and B. Pfahringer, “Locally weighted naive bayes,” in Proceedings of
the Nineteenth conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann
Publishers Inc., 2002, pp. 249–256.
53. C. G. Atkeson, A. W. Moore, and S. Schaal, “Locally weighted learning,” Artificial
Intelligence Review, vol. 11, no. 5, pp. 11–73, 1997.
54. R. Polikar, “Ensemble based systems in decision making,” IEEE Circuits and Systems
Magazine, vol. 6, no. 3, pp. 21–45, 2006.
55. L. Breiman, “Bagging predictors,” Machine learning, vol. 24, no. 2, pp. 123–140, 1996.
56. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning
and an application to boosting,” Journal of computer and system sciences, vol. 55,
no. 1, pp. 119–139, 1997.
57. T. K. Ho, “The random subspace method for constructing decision forests,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–
844, 1998.
58. J. J. Rodriguez, L. I. Kuncheva, and C. J. Alonso, “Rotation forest: A new classifier
ensemble method,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 28, no. 10, pp. 1619–1630, 2006.
59. R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes, “Ensemble selection from
libraries of models,” in Proceedings of the twenty-first international conference on Machine learning. ACM, 2004, p. 18.
60. L. I. Kuncheva, Combining pattern classifiers: methods and algorithms. John Wiley
& Sons, 2004.
61. “Weka 3: Data mining software in java,” accessed: 2016-05-01. [Online]. Available:
http://www.cs.waikato.ac.nz/ml/index.html
62. “Matlab: Statistics and machine learning toolbox,” accessed: 2016-05-01. [Online].
Available: http://www.mathworks.com/products/matlab/
The Generic Model of Computation
Nachum Dershowitz
School of Computer Science
Tel Aviv University
Tel Aviv, Israel
[email protected]
Over the past two decades, Yuri Gurevich and his colleagues have formulated axiomatic foundations
for the notion of algorithm, be it classical, interactive, or parallel, and formalized them in the new
generic framework of abstract state machines. This approach has recently been extended to suggest
a formalization of the notion of effective computation over arbitrary countable domains. The central
notions are summarized herein.
1 Background
Abstract state machines (ASMs), invented by Yuri Gurevich [24], constitute a most general model of
computation, one that can operate on any desired level of abstraction of data structures and native operations. All (ordinary) models of computation are instances of this one generic paradigm. Here, we
give an overview of the foundational considerations underlying the model (cobbled together primarily
from [18, 3, 12]).1
Programs (of the sequential, non-interactive variety) in this formalism are built from three components:
• There are generalized assignments f (s1 , . . . , sn ) := t, where f is any function symbol (in the vocabulary of the program) and the si and t are arbitrary terms (in that vocabulary).
• Statements may be prefaced by a conditional test, if C then P or if C then P else Q, where C is a
propositional combination of equalities between terms.
• Program statements may be composed in parallel, following the keyword do, short for do in parallel.
An ASM program describes a single transition step; its statements are executed repeatedly, as a unit,
until no assignments have their conditions enabled. (Additional constructs beyond these are needed for
interaction and large-scale parallelism, which are not dealt with here.)
As a simple example, consider the program shown as Algorithm 1, describing a version of selection
sort, where F(0), . . . , F(n − 1) contain values to be sorted, F being a unary function symbol. Initially,
n ≥ 1 is the quantity of values to be sorted, i is set to 0, and j to 1. The brackets indicate statements that
are executed in parallel. The program proceeds by repeatedly modifying the values of i and j, as well
as of locations in F, referring to terms F(i) and F( j). When all conditions fail, that is, when j = n and
i + 1 = n, the values in F have been sorted vis-à-vis the black-box relation “>”. The program halts, as
there is nothing left to do. (Declarations and initializations for program constants and variables are not
shown.)
This sorting program is not partial to any particular representation of the natural numbers 1, 2, etc.,
which are being used to index F. Whether an implementation uses natural language, or decimal numbers,
1 For a video lecture of Gurevich’s on this subject, see http://www.youtube.com/v/7XfA5EhH7Bc.
E. Kashefi, J. Krivine, F. van Raamsdonk (Eds.): DCM 2011, EPTCS 88, 2012, pp. 59–71, doi:10.4204/EPTCS.88.5. © N. Dershowitz. This work is licensed under the Creative Commons Attribution License.
Algorithm 1 An abstract-state-machine program for sorting.
if j = n then if i + 1 ≠ n then do
    { i := i + 1
      j := i + 2 }
else do
    { if F(i) > F(j) then do
        { F(i) := F(j)
          F(j) := F(i) }
      j := j + 1 }
Algorithm 2 An abstract-state-machine program for bisection search.
if |b − a| > ε then do
    { if sgn f((a + b)/2) = sgn f(a) then a := (a + b)/2
      if sgn f((a + b)/2) = sgn f(b) then b := (a + b)/2 }
or binary strings is immaterial, as long as addition behaves as expected (and equality and disequality, too).
Furthermore, the program will work regardless of the domain from which the values of F are drawn (be
they integers, reals, strings, or what not), so long as means are provided for evaluating the inequality (>)
relation.
Another simple ASM program is shown in Algorithm 2. This is a standard bisection search for the
root of a function, as described in [22, Algorithm #4]. The point is that this abstract formulation is, as the
author of [22] wrote, “applicable to any continuous function” over the reals—including ones that cannot
be programmed.
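For concreteness, here is one way to transcribe Algorithm 2 into executable form (a Python sketch added for illustration; it is not part of the original paper). Each pass of the loop evaluates both guarded assignments in the current state and applies them in parallel, and the machine halts when the guard |b − a| > ε fails.

import math

def sgn(x):
    return (x > 0) - (x < 0)

def bisection(f, a, b, eps):
    while abs(b - a) > eps:          # the enclosing guard of the ASM step
        m = (a + b) / 2
        new_a = m if sgn(f(m)) == sgn(f(a)) else a   # first guarded assignment
        new_b = m if sgn(f(m)) == sgn(f(b)) else b   # second guarded assignment
        a, b = new_a, new_b                          # both updates applied in parallel
    return (a + b) / 2

print(bisection(math.cos, 1.0, 2.0, 1e-6))   # approximates pi/2 = 1.5707...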
What is remarkable about ASMs is that this very simple model of computation suffices to precisely
capture the behavior of the whole class of ordinary algorithms over any domain. The reason is that,
by virtue of the abstract state machine (ASM) representation theorem of [25] (Theorem 2 below), any
algorithm that satisfies three very natural “Sequential Postulates” can be step-by-step, state-for-state
emulated by an ASM. Those postulates, articulated in Section 2, formalize the following intuitions: (I)
an algorithm is a state-transition system; (II) given the algorithm, state information determines future
transitions and can be captured by a logical structure; and (III) state transitions are governed by the
values of a finite and input-independent set of terms.
The significance of the Sequential Postulates lies in their comprehensiveness. They formalize which
features exactly characterize a classical algorithm in its most abstract and generic manifestation. Programs of all models of effective, sequential computation satisfy the postulates, as do idealized algorithms
for computing with real numbers (e.g. Algorithm 2), or for geometric constructions with compass and
straightedge (see [34] for examples of the latter).
Abstract state machines are a computational model that is not wedded to any particular data representation, in the way, say, that Turing machines manipulate strings using a small set of tape operations. The
Representation Theorem, restated in Section 3, establishes that ASMs can express and precisely emulate
any and all algorithms satisfying the premises captured by the postulates. For any such algorithm, there
is an ASM program that describes precisely the same state-transition function, state after state, as does
the algorithm. In this sense, ASMs subsume all other computational models.
It may be informative to note the similarity between the form of an ASM, namely, a single repeated
loop of a set of generalized assignments nested within conditionals with the “folk theorem” to the effect
that any flowchart program can be converted to a single loop composed of conditionals, sequencing, and
assignments, with the aid of some auxiliary variables (see [29]). Parallel composition gives ASMs the
ability to perform multiple actions sans extra variables, and to capture all that transpires in a single step
of any algorithm.
This versatility of ASMs is what makes them so ideal for both specification and prototyping. Indeed,
ASMs have been used to model all manner of programming applications, systems, and languages, each
on the precise intended level of abstraction. See [13] and the ASM website (http://www.eecs.umich.
edu/gasm) for numerous exemplars. ASMs provide a complete means of describing algorithms, whether
or not they can be implemented effectively. On account of their abstractness, one can express generic
algorithms, like our bisection search for arbitrary continuous real-valued functions, or like Gaussian
elimination, even when the field over which it is applied is left unspecified. AsmL [26], an executable
specification language based on the ASM framework, has been used in industry, in particular for the
behavioral specification of interfaces (see, for example, [1]).
Church’s Thesis asserts that the recursive functions are the only numeric functions that can be effectively computed. Similarly, Turing’s Thesis stakes the claim that any function on strings that can
be mechanically computed can be computed, in particular, by a Turing machine. More generally, one
additional natural hypothesis regarding the describability of initial states of algorithms, as explained in
Section 5, characterizes the effectiveness of any model of computation, operating over any (countable)
data domain (Theorem 4).
On account of the ability of ASMs to precisely capture single steps of any algorithm, one can infer
absolute bounds on the complexity of algorithms under arbitrary effective models of computation, as will
be seen (Theorem 6) at the end of Section 5.
2 Sequential Algorithms
The Sequential Postulates of [25] regarding algorithmic behavior are based on the following key observations:
• A state should contain all the relevant information, apart from the algorithm itself, needed to
determine the next steps. For example, the “instantaneous description” of a Turing machine computation is just what is needed to pick up a machine’s computation from where it has been left
off; see [38]. Similarly, the “continuation” of a Lisp program contains all the state information
needed to resume its computation. First-order structures suffice to model all salient features of
states. Compare [32, pp. 420–429].
• The values of programming variables, in and of themselves, are meaningless to an algorithm,
which is implementation independent. Rather, it is relationships between values that matter to the
algorithm. It follows that an algorithm should work equally well in isomorphic worlds. Compare
[19, p. 128]. An algorithm can—indeed, can only—determine relations between values stored in
a state via terms in its vocabulary and equalities (and disequalities) between their values.
• Algorithms are expressed by means of finite texts, making reference to only finitely many terms
and relations among them. See, for example, [31, p. 493].
The three postulates given below (from [25], modified slightly as in [4, 5, 6, 3]) assert that a classical
algorithm is a state-transition system operating over first-order structures in a way that is invariant under
isomorphisms. An algorithm is a prescription for updating states, that is, for changing some of the
interpretations given to symbols by states. The essential idea is that there is a fixed finite set of terms
that refer (possibly indirectly) to locations within a state and which suffice to determine how the state
changes during any transition.
2.1 Sequential Time
To begin with, algorithms are deterministic state-transition systems.
Postulate I (Sequential Time) An algorithm determines the following:
• A nonempty set2 S of states and a nonempty subset S0 ⊆ S of initial states.
• A partial next-state transition function τ : S ⇀ S .
Terminal states S‡ ⊆ S are those states X for which no transition τ (X ) is defined.
Having the transition depend only on the state means that states must store all the information needed
to determine subsequent behavior. Prior history is unavailable to the algorithm unless stored in the current
state.
State-transitions are deterministic. Classical algorithms in fact never leave room for choices, nor
do they involve any sort of interaction with the environment to determine the next step. To incorporate
nondeterministic choice, probabilistic choice, or interaction with the environment, one would need to
modify the above notion of transition.
This postulate is meant to exclude formalisms, such as [21, 33], in which the result of a
computation—or the continuation of a computation—may depend on (the limit of) an infinite sequence
of preceding (finite or infinitesimal) steps. Likewise, processes in which states evolve continuously (as
in analog processes, like the position of a bouncing ball), rather than discretely, are eschewed.
Though it may appear at first glance that a recursive function does not fit under the rubric of a
state-transition system, in fact the definition of a traditional recursive function comes together with a
computation rule for evaluating it. As Rogers [36, p. 7] writes, “We obtain the computation uniquely by
working from the inside out and from left to right”.
2.2 Abstract State
Algorithm states are comprehensive: they incorporate all the relevant data (including any “program
counter”) that, when coupled with the program, completely determine the future of a computation. States
may be regarded as structures with (finitely many) functions, relations, and constants. To simplify matters, relations will be treated as truth-valued functions and constants as nullary functions. So, each state
consists of a domain (base set, universe, carrier) and interpretations for its symbols. All relevant information about a state is given explicitly in the state by means of its interpretation of the symbols appearing
in the vocabulary of the structure. The specific details of the implementation of the data types used by
the algorithm cannot matter. In this sense states are “abstract”. This crucial consideration leads to the
second postulate.
Postulate II (Abstract State) The states S of an algorithm are (first-order) structures over a finite
vocabulary F , such that the following hold:
• If X is a state of the algorithm, then any structure Y that is isomorphic to X is also a state, and Y
is initial or terminal if X is initial or terminal, respectively.
• Transitions preserve the domain; that is, Dom τ (X ) = Dom X for every non-terminal state X .
2 Or class; the distinction is irrelevant for our purposes.
• Transitions respect isomorphisms, so, if ζ : X ≅ Y is an isomorphism of non-terminal states X, Y, then also ζ : τ(X) ≅ τ(Y).
State structures are endowed with Boolean truth values and standard Boolean operations, and vocabularies include symbols for these. As a structure, a state interprets each of the function symbols in its
vocabulary. For every k-ary symbol f in the vocabulary of a state X and values a1 , . . . , ak in its domain,
some domain value b is assigned to the location f (a1 , . . . , ak ), for which we write f (ā) ↦ b. In this way,
X assigns a value [[t]]X in Dom X to (ground) terms t.
Vocabularies are finite, since an algorithm must be describable in finite terms, so can only refer
explicitly to finitely many operations. Hence, an algorithm can not, for instance, involve all of Knuth’s
arrow operations, ↑, ↑↑, ↑↑↑, etc. Instead one could employ a ternary operation λx, y, z. x ↑^z y.
This postulate is justified by the vast experience of mathematicians and scientists who have faithfully
and transparently presented every kind of static mathematical or scientific reality as a logical structure.
In restricting structures to be “first-order”, we are limiting the syntax to be first-order. This precludes
states with infinitary operations, like the supremum of infinitely many objects, which would not make
sense from an algorithmic point of view. This does not, however, limit the semantics of algorithms to
first-order notions. The domain of states may have sequences, or sets, or other higher-order objects, in
which case, the state would also need to provide operations for dealing with those objects.
Closure under isomorphism ensures that the algorithm can operate on the chosen level of abstraction.
The states’ internal representation of data is invisible and immaterial to the program. This means that
the behavior of an algorithm, in contradistinction with its “implementation” as a C program—cannot, for
example, depend on the memory address of some variable. If an algorithm does depend on such matters,
then its full description must also include specifics of memory allocation.
It is possible to liberalize this postulate somewhat to allow the domain to grow or shrink, or for the
vocabulary to be infinite or extensible, but such “enhancements” do not materially change the notion of
algorithm. An extension to structures with partial operations is given in [3]; see Section 4.
2.3 Effective Transitions
The actions taken by a transition are describable in terms of updates of the form f (ā) ↦ b, meaning
that b is the new interpretation to be given by the next state to the function symbol f for values ā. To
program such an update, one can use an assignment f (s̄) := t such that [[s̄]]X = ā and [[t]]X = b. We view
a state X as a collection of the graphs of its operations, each point of which is a location-value pair also
denoted f (ā) ↦ b. Thus, we can define the update set ∆(X) as the changed points, τ(X) \ X. When X is
a terminal state and τ (X ) is undefined, we indicate that by setting ∆(X ) = ⊥.
The point is that ∆ encapsulates the state-transition relation τ of an algorithm by providing all the
information necessary to update the interpretation given by the current state. But to produce ∆(X ) for a
particular state X , the algorithm needs to evaluate some terms with the help of the information stored in
X . The next postulate will ensure that ∆ has a finite representation and its updates can be determined and
performed by means of only a finite amount of work. Simply stated, there is a fixed, finite set of ground
terms that determines the stepwise behavior of an algorithm.
Postulate III (Effective Transitions) 3 For every algorithm, there is a finite set T of (ground) critical
terms over the state vocabulary, such that states that agree on the values of the terms in T also share the
same update sets. That is, ∆(X ) = ∆(Y ), for any two states X ,Y such that [[t]]X = [[t]]Y for all t ∈ T . In
particular, if one of X and Y is terminal, so is the other.
3 Or Bounded Exploration.
The intuition is that an algorithm must base its actions on the values contained at locations in the
current state. Unless all states undergo the same updates unconditionally, an algorithm must explore
one or more values at some accessible locations in the current state before determining how to proceed.
The only means that an algorithm has with which to reference locations is via terms, since the values
themselves are abstract entities. If every referenced location has the same value in two states, then the
behavior of the algorithm must be the same for both of those states.
This postulate—with its fixed, finite set of critical terms—precludes programs of infinite size (like
an infinite table lookup) or which are input-dependent.
A careful analysis of the notion of algorithm in [25] and an examination of the intent of the founders
of the field of computability in [18] demonstrate that the Sequential Postulates are in fact true of all
ordinary, sequential algorithms, the (only) kind envisioned by the pioneers of the field. In other words,
all classical algorithms satisfy Postulates I, II, and III. In this sense, the traditional notion of algorithm is
precisely captured by these axioms.
Definition 1 (Classical Algorithm) An object satisfying Postulates I, II, and III shall be called a classical algorithm.
2.4 Equivalent Algorithms
It makes sense to say that two algorithms have the same behavior, or are behaviorally equivalent, if they
operate over the same states and have the same transition function.
Two algorithms are syntactically equivalent if their states are the same up to renaming of symbols
(α -conversion) in their vocabularies, and if transitions are the same after renaming.
For a wide-ranging discussion of algorithm equivalence, see [2].
3 Abstract State Machines
Abstract state machines (ASMs) are an all-powerful description language for the classical algorithms we
have been characterizing.
3.1 Programs
The semantics of the ASM statements, assignment, parallel composition, and conditionals, are as expected, and are formalized below. The program, as such, defines a single step, which is repeated forever
or until there is no next state.
For convenience, we show only a simple form of ASMs. Bear in mind, however, that much richer
languages for ASMs are given in [24] and are used in practice [27].
Programs are expressed in terms of some vocabulary. By convention, ASM programs always include
symbols for the Boolean values (true and false), undef for a default, “undefined” value, standard Boolean
operations (¬, ∧, ∨), and equality (=, 6=). The vocabulary of the sorting program, for instance, contains
F = {1, 2, +, >, F, n, i, j} in addition to the standard symbols. Suppose that its states have integers and
the three standard values for their domain. The nullary symbols 0 and n are fixed programming constants
and serve as bounds of F. The nullary symbols i and j are programming “variables” and are used as
array indices. All its states interpret the symbols 1, 2, +, >, as well as the standard symbols, as usual.
Unlike i, j, and F, these are static; their interpretation will never be changed by the program. Initial
states have n ≥ 0, i = 0, j = 1, some integer values for F(0), . . . , F(n − 1), plus undef for all other points
Table 1: Update sets for sorting program.

Row | States X such that | Update set ∆(X)
0 | [[j]] = [[n]] = [[i]] + 1 | ⊥
1 | [[j]] = [[n]] ≠ [[i]] + 1 | i ↦ [[i]] + 1, j ↦ [[i]] + 2
2 | [[j]] ≠ [[n]], [[F(i)]] > [[F(j)]] | F([[i]]) ↦ [[F(j)]], F([[j]]) ↦ [[F(i)]], j ↦ [[j]] + 1
3 | [[j]] ≠ [[n]], [[F(i)]] ≯ [[F(j)]] | j ↦ [[j]] + 1
of F. This program always terminates successfully, with j = n = i + 1 and with the first n elements of F
in nondecreasing order.
There are no hidden variables in ASMs. If some steps of an algorithm are intended to be executed in
sequence, say, then the ASM will need to keep explicit track of where in the sequence it is up to.
3.2 Semantics
Unlike algorithms, which are observed to either change the value of a location in the current state, or
not, an ASM might “update” a location in a trivial way, giving it the same value it already has. Also,
an ASM might designate two conflicting updates for the same location, what is called a clash, in which
case the standard ASM semantics are to cause the run to fail (just as real-world programs might abort).
An alternative semantics is to imagine a nondeterministic choice between the competing values. (Both
were considered in [24].) Here, we prefer to ignore both nondeterminism and implicit failure, and tacitly
presume that an ASM never involves clashes, albeit this is an undecidable property.
To take the various possibilities into account, a proposed update set ∆+_P(X) (cf. [4]) for an ASM P may be defined in the following manner:
\begin{align*}
\Delta^+_{f(s_1,\ldots,s_n) := t}(X) &= \{\, f([[s_1]]_X, \ldots, [[s_n]]_X) \mapsto [[t]]_X \,\} \\
\Delta^+_{\mathbf{do}\,\{P_1 \cdots P_n\}}(X) &= \Delta^+_{P_1}(X) \cup \cdots \cup \Delta^+_{P_n}(X) \\
\Delta^+_{\mathbf{if}\ C\ \mathbf{then}\ P\ \mathbf{else}\ Q}(X) &=
  \begin{cases} \Delta^+_P(X) & \text{if } X \models C \\ \Delta^+_Q(X) & \text{otherwise} \end{cases} \\
\Delta^+_{\mathbf{if}\ C\ \mathbf{then}\ P}(X) &=
  \begin{cases} \Delta^+_P(X) & \text{if } X \models C \\ \varnothing & \text{otherwise.} \end{cases}
\end{align*}
Here X |= C means, of course, that Boolean condition C holds true in X . When the condition C of a
conditional statement does not evaluate to true, the statement does not contribute any updates.
When ∆+ (X ) = ∅ for ASM P, its execution halts with success, in terminal state X . (Since no
confusion will arise, we are dropping the subscript P.) Otherwise, the updates are applied to X to yield
the next state by replacing the values of all locations in X that are referred to in ∆+ (X ). So, if the latter
contains only trivial updates, P will loop forever.
For terminal states X , the update set ∆(X ) is ⊥, to signify that there is no next state. For non-terminal
X , ∆(X ) is the set of non-trivial updates in ∆+ (X ). The update sets for the sorting program (Algorithm 1)
are shown in Table 1, with the subscript in [[·]]X omitted. For example, if state X is such that n = 2, i = 0,
j = 1, F(0) = 1, and F(1) = 0, then (per row 2) ∆⁺(X) = {F(0) ↦ 0, F(1) ↦ 1, j ↦ 2}. For this X, ∆(X) = ∆⁺(X), and the next state X′ = τ(X) has i = 0 (as before), j = 2, F(0) = 0 and F(1) = 1. After one more step (per row 1), in which F is unchanged, the algorithm reaches a terminal state, X′′ = τ(X′), with j = n = i + 1 = 2. Then (by row 0), ∆⁺(X′′) = ∅ and ∆(X′′) = ⊥.
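The following Python sketch (an illustration added here, not part of the original paper; the encoding of locations as (symbol, arguments) pairs is ours) computes the update set ∆(X) of the sorting program exactly as in Table 1 and iterates the step function until a terminal state is reached.

def update_set(X):
    # Rows 0-3 of Table 1; returns None when X is terminal (Delta(X) = bottom).
    i, j, n, F = X['i'], X['j'], X['n'], X['F']
    if j == n and i + 1 == n:                       # row 0: terminal state
        return None
    if j == n and i + 1 != n:                       # row 1
        return {('i', ()): i + 1, ('j', ()): i + 2}
    updates = {('j', ()): j + 1}                    # rows 2 and 3
    if F[i] > F[j]:                                 # row 2: parallel swap of F(i), F(j)
        updates[('F', (i,))] = F[j]
        updates[('F', (j,))] = F[i]
    return updates

def run(values):
    # Initial state of Algorithm 1 with F(0..n-1) = values, n >= 1.
    X = {'n': len(values), 'i': 0, 'j': 1, 'F': dict(enumerate(values))}
    while True:
        delta = update_set(X)
        if delta is None:
            return [X['F'][k] for k in range(X['n'])]
        for (f, args), v in delta.items():          # apply all updates in parallel
            if f == 'F':
                X['F'][args[0]] = v
            else:
                X[f] = v

print(run([3, 1, 2]))   # -> [1, 2, 3]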
4 The Representation Theorem
Abstract state machines clearly satisfy the three Sequential Postulates: ASMs define a state-transition
function; they operate over abstract states; and they depend critically on the values of a finite set of
terms appearing in the program (and on the unchanging values of parts of the state not modified by the
program). For example, the critical terms for our sorting ASM are all the terms appearing in it, except
for the left-hand sides of assignments, which contribute their proper subterms instead. These are j 6= n,
( j = n) ∧ (i + 1 6= n), F(i) > F( j), i + 2, j + 1, and their subterms. Only the values of these affect the
computation. Thus, any ASM describes a classical algorithm over structures with the same vocabulary
(similarity type).
The converse is of greater significance:
Theorem 2 (Representation [25, Theorem 6.13]) Every classical algorithm, in the sense of Definition 1, has a behaviorally equivalent ASM, with the exact same states and state-transition function.
The proof of this representation theorem constructs an ASM that contains conditions involving equalities and disequalities between critical terms. Closure under isomorphisms is an essential ingredient for
making it possible to express any algorithm in the language of terms.
A typical ASM models partial functions (like division or tangent) by using the special value, undef, denoting that the argument is outside the function’s domain of definition, and arranging that most
operations be strict, so a term involving an undefined subterm is likewise undefined. The state of such
an ASM would return true when asked to evaluate an expression c/0 = undef, and it can, therefore, be
programmed to work properly, despite the partiality of division.
In [3], the analysis and representation theorem have been refined for algorithms employing truly
partial operations, operations that cause an algorithm to hang when an operation is attempted outside its
domain of definition (rather than return undef). The point is that there is a behaviorally equivalent ASM
that never attempts to access locations in the state that are not also accessed by the given algorithm. Such
partial operations are required in the next section.
5 Effective Algorithms
The Church-Turing Thesis [30, Thesis I† ] asserts that standard models capture effective computation.
Specifically:
All effectively computable numeric (partial) functions are (partial) recursive.
All (partial) string functions can be computed by a Turing machine.
We say that an algorithm computes a partial function f : D^k ⇀ D if there are input states I ⊆ S0,
with particular locations for input values, such that running the algorithm results in the correct output
values of f . Specifically:
• The domain of each input state is D. There are k terms such that their values in input states cover
all tuples in D^k. Other than that, input states all agree on the values of all other terms.
• For all input values ā, the corresponding input state leads, via a sequence of transitions τ , to a
terminal state in which the value of a designated term t (in the vocabulary of the algorithm) is f (ā)
whenever the latter is defined, and leads to an infinite computation whenever it is not.
To capture what it is that makes a sequential algorithm mechanically computable, we need for input
states to be finitely representable. Accordingly, we insist that they harbor no information beyond the
means to reach domain values, plus anything that can be derived therefrom.
We say that function symbols C construct domain D in state X if X assigns each value in D to exactly
one term over C , so restricting X to C gives a free Herbrand algebra. For example, the domain of the
sorting algorithm, consisting of integers and Booleans, can be constructed from 0, true, false, undef, and
a “successor” function (call it c) that takes non-negative integers (n) to the predecessor of their negation
(−n − 1) and negative integers (−n) to their absolute value (n).
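As a quick sanity check (an added illustration, not from the paper), iterating c from 0 enumerates every integer exactly once, so the terms 0, c(0), c(c(0)), … form a free set of constructor terms for the integers:

def c(x):
    # The "successor" constructor described above.
    return -x - 1 if x >= 0 else -x

terms, value = [], 0
for _ in range(7):
    terms.append(value)
    value = c(value)
print(terms)   # -> [0, -1, 1, -2, 2, -3, 3]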
Postulate III ensures that the transition function is describable by a finite text, and—in particular–by
the text of ASM. For an algorithm to be effective, its states must also be finitely describable.
Definition 3 (Effectiveness)
1. A state is effective if it includes constructors for its domain, plus operations that are almost everywhere the same, meaning that all but finitely-many locations (these can hold input values) have
the same default value (such as undef ).
2. A classical algorithm is effective if its initial states are.
3. Moreover, effective algorithms can be bootstrapped: A state is effective also if its vocabulary can
be enriched to C ⊎ G so that C constructs its domain, while every (total or partial) operation in
G is computed by an effective algorithm over those constructors.
4. A model (of computation), that is, a set of algorithms with shared domain(s), is effective if all its
algorithms are, via the same constructors.
This effectiveness postulate excludes algorithms with ineffective oracles, such as the halting function. Having only free constructors at the foundation precludes the hiding of potentially uncomputable
information by means of equalities between distinct representations of the same domain element.
This is the approach to effectiveness advocated in [11], extended to include partial functions in states,
as in [3]. For each n ≥ 1, our sorting algorithm is effective in this sense, since addition (+) of the natural
numbers and comparisons (>) of integers, operations that reside in its initial states, can be programmed
from the above-mentioned constructors (0, true, false, undef, c).
In particular, partial-recursion for natural numbers and Turing machines for strings form effective
models [11]. Furthermore, it is shown in [12] that three prima facie different definitions of effectiveness
over arbitrary domains, as proposed in [11, 18, 35], respectively, comprise exactly the same functions,
strengthening the conviction that the essence of the underlying notion of computability has in fact been
captured.
Theorem 4 (Church-Turing Thesis [11]) For every effective model, there is a representation of its domain values as strings, such that its algorithms are each simulated by some Turing machine.
Call an effective computational model maximal if adding any function to those that it computes
results in a set of functions that cannot be simulated by any effective model. Remarkably (or perhaps
not), there is exactly one such model:
Theorem 5 (Effectiveness [12, Theorem 4]) The set of partial recursive functions (and likewise the set
of Turing-computable string functions) is the unique maximal effective model, up to isomorphism, over
any countable domain.
We have recently extended the proof of the Church-Turing Thesis and demonstrated the validity of
the widely believed Extended Church-Turing Thesis:
Theorem 6 (Extended Church-Turing Thesis [17]) Every effective algorithm can be polynomially
simulated by a Turing machine.
6 Conclusion
We have dealt herein with the classical type of algorithms, that is to say, with the “small-step” (meaning,
only bounded parallelism) “sequential-time” (deterministic, no intra-step interaction with the outside
world) case. Abstract state machines can faithfully emulate any algorithm in this class, as we have seen
in Theorem 2. Furthermore, we have characterized the distinction between effective algorithms and their
more abstract siblings in Theorem 4.
There are various “declarative” styles of programming for which the state-transition relation is implicit, rather than explicit as it is for our notion of algorithm. For such programs to be algorithms in the
sense of Definition 1, they would have to be equipped with a specific execution mechanism, like the one
for recursion mentioned above. For Prolog, for example, the mechanism of unification and the mode of
search would need to be specified [14].
The abstract-state-machine paradigm can be extended to handle more modern notions:
• When desired, an algorithm can make an explicit distinction between successful and failing terminal states by storing particular values in specific locations of the final state. Alternatively, one may
declare failure when there is a conflict between two or more enabled assignments. See [24].
• There is no difficulty in allowing for nondeterminism, that is, for a multivalued transition function.
If the semantics are such that a choice is made between clashing assignment statements, then
transitions are indeed nondeterministic. See [24, 28].
• More general forms of nondeterminism can be obtained by adding a choice command of some sort
to the language. See [24].
• Nothing needs to be added to the syntax of ASMs to apply to cases where the environment provides
input incrementally. One need only imagine that the environment is allowed to modify the values
of some (specified) set of locations in the state between machine steps. See [24].
• In [4, 5, 6], the analysis of algorithms was extended to the case when an algorithm interacts with
the outside environment during a step, and execution waits until all queries of the environment
have been responded to.
• In [8, 9], all forms of interaction are handled.
• In [7], the analysis was extended to massively parallel algorithms.
• Distributed algorithms are handled in [24, 20].
• The fact that ASMs can emulate algorithms step-for-step facilitates reasoning about the complexity
of algorithms, as for Theorem 6 above. Parallel ASMs have been used for studying the complexity
of algorithms over unordered structures. See [10, 37].
• Quantum algorithms have been modeled by ASMs in [23].
• Current research includes an extension of the framework for hybrid systems, combining discrete
(sequential steps) and analog (evolving over time) behaviors [15, 16].
Acknowledgements
I thank Yuri Gurevich and Nikolaj Bjørner for their perspicacious suggestions, the referees for their
questions, and Evgenia Falkovich for her help.
References
[1] Mike Barnett & Wolfram Schulte (2001): The ABCs of Specification: AsmL, Behavior, and Components. Informatica (Slovenia) 25(4), pp. 517–526. Available at http://research.microsoft.com/pubs/73061/
TheABCsOfSpecification(Informatica2001).pdf (viewed June 7, 2009).
[2] Andreas Blass, Nachum Dershowitz & Yuri Gurevich (2009): When are Two Algorithms the Same? Bulletin
of Symbolic Logic 15(2), pp. 145–168, doi:10.2178/bsl/1243948484. Available at http://nachum.
org/papers/WhenAreTwo.pdf (viewed Mar. 27, 2011).
[3] Andreas Blass, Nachum Dershowitz & Yuri Gurevich (2010): Exact Exploration and Hanging Algorithms. In: Proceedings of the 19th EACSL Annual Conferences on Computer Science Logic (Brno,
Czech Republic), Lecture Notes in Computer Science, Springer, Berlin, Germany, pp. 140–154, doi:10.
1007/978-3-642-15205-4_14. Available at http://nachum.org/papers/HangingAlgorithms.
pdf (viewed May 27, 2011); longer version at http://nachum.org/papers/ExactExploration.pdf
(viewed May 27, 2011).
[4] Andreas Blass & Yuri Gurevich (2006): Ordinary Interactive Small-Step Algorithms, Part I. ACM Transactions on Computational Logic 7(2), pp. 363–419, doi:10.1145/1131313.1131320. Available at http://
tocl.acm.org/accepted/blass04.ps (viewed May 21, 2009).
[5] Andreas Blass & Yuri Gurevich (2007): Ordinary Interactive Small-Step Algorithms, Part II. ACM Transactions on Computational Logic 8(3), doi:10.1145/1243996.1243998. Article 15. Available at http://
tocl.acm.org/accepted/blass2.pdf (viewed May 21, 2009).
[6] Andreas Blass & Yuri Gurevich (2007): Ordinary Interactive Small-Step Algorithms, Part III. ACM Transactions on Computational Logic 8(3), doi:10.1145/1243996.1243999. Article 16. Available at http://
tocl.acm.org/accepted/250blass.pdf (viewed May 21, 2009).
[7] Andreas Blass & Yuri Gurevich (2008): Abstract State Machines Capture Parallel Algorithms: Correction
and Extension. ACM Transactions on Computation Logic 9(3), doi:10.1145/1352582.1352587. Article
19. Available at http://research.microsoft.com/en-us/um/people/gurevich/Opera/157-2.pdf
(viewed Aug. 11, 2010).
[8] Andreas Blass, Yuri Gurevich, Dean Rosenzweig & Benjamin Rossman (2007): Interactive Small-Step Algorithms, Part I: Axiomatization. Logical Methods in Computer Science 3(4), doi:10.2168/LMCS-3(4:
3)2007. Paper 3. Available at http://research.microsoft.com/~gurevich/Opera/176.pdf (viewed
June 5, 2009).
[9] Andreas Blass, Yuri Gurevich, Dean Rosenzweig & Benjamin Rossman (2007): Interactive Small-Step Algorithms, Part II: Abstract State Machines and the Characterization Theorem. Logical Methods in Computer
Science 4(4), doi:10.2168/LMCS-3(4:4)2007. Paper 4. Available at http://arxiv.org/pdf/0707.
3789v2 (viewed July 17, 2011).
[10] Andreas Blass, Yuri Gurevich & Saharon Shelah (2002): On Polynomial Time Computation over Unordered
Structures. Journal of Symbolic Logic 67(3), pp. 1093–1125, doi:10.2178/jsl/1190150152. Available at
http://research.microsoft.com/en-us/um/people/gurevich/Opera/150.pdf (viewed July 13,
2011).
[11] Udi Boker & Nachum Dershowitz (2008): The Church-Turing Thesis over Arbitrary Domains. In Arnon
Avron, Nachum Dershowitz & Alexander Rabinovich, editors: Pillars of Computer Science, Essays Dedicated to Boris (Boaz) Trakhtenbrot on the Occasion of His 85th Birthday, Lecture Notes in Computer Science
4800, Springer, pp. 199–229, doi:10.1007/978-3-540-78127-1_12. Available at http://nachum.org/
papers/ArbitraryDomains.pdf (viewed Aug. 11, 2010).
[12] Udi Boker & Nachum Dershowitz (2010): Three Paths to Effectiveness. In Andreas Blass, Nachum Dershowitz & Wolfgang Reisig, editors: Fields of Logic and Computation: Essays Dedicated to Yuri Gurevich on the Occasion of His 70th Birthday, Lecture Notes in Computer Science 6300, Springer, Berlin,
Germany, pp. 36–47, doi:10.1007/978-3-642-15025-8_7. Available at http://nachum.org/papers/
ThreePathsToEffectiveness.pdf (viewed Aug. 11, 2010).
[13] Egon Börger (2002): The Origins and the Development of the ASM Method for High Level System Design
and Analysis. Journal of Universal Computer Science 8(1), pp. 2–74, doi:10.3217/jucs-008-01-0002.
Available at http://www.jucs.org/jucs_8_1/the_origins_and_the/Boerger_E.pdf (viewed June
17, 2009).
[14] Egon Börger & Dean Rosenzweig (1995): A Mathematical Definition of Full Prolog. Science of Computer
Programming 24, pp. 249–286, doi:10.1016/0167-6423(95)00006-E. Available at ftp://www.eecs.
umich.edu/groups/gasm/prolog.pdf (viewed July 17, 2011).
[15] Olivier Bournez & Nachum Dershowitz (2010): Foundations of Analog Algorithms. In: Proceedings of the
Third International Workshop on Physics and Computation (P&C), Nile River, Egypt, pp. 85–94. Available
at http://nachum.org/papers/Analog.pdf (viewed May 27, 2011).
[16] Olivier Bournez, Nachum Dershowitz & Evgenia Falkovich (2012): Towards an Axiomatization of Simple
Analog Algorithms. In Manindra Agrawal, S. Barry Cooper & Angsheng Li, editors: Proceedings of the 9th
Annual Conference on Theory and Applications of Models of Computation (TAMC 2012, Beijing, China),
Lecture Notes in Computer Science 7287, Springer Verlag, pp. 525–536. Available at http://dx.doi.
org/10.1007/978-3-642-29952-0_49. Available at http://nachum.org/papers/SimpleAnalog.
pdf (viewed July 11, 2012).
[17] Nachum Dershowitz & Evgenia Falkovich (2011): A Formalization and Proof of the Extended Church-Turing
Thesis. In: Proceedings of the Seventh International Workshop on Developments in Computational Models (DCM 2011, July 2012, Zurich, Switzerland), Electronic Proceedings in Theoretical Computer Science.
Available at http://nachum.org/papers/ECCT.pdf (viewed July 15, 2011).
[18] Nachum Dershowitz & Yuri Gurevich (2008): A Natural Axiomatization of Computability and Proof of
Church’s Thesis. Bulletin of Symbolic Logic 14(3), pp. 299–350, doi:10.2178/bsl/1231081370. Available
at http://nachum.org/papers/Church.pdf (viewed Apr. 15, 2009).
[19] Robin Gandy (1980): Church’s Thesis and Principles for Mechanisms. In: The Kleene Symposium,
Studies in Logic and the Foundations of Mathematics 101, North-Holland, pp. 123–148, doi:10.1016/
S0049-237X(08)71257-6.
[20] Andreas Glausch & Wolfgang Reisig (2009): An ASM-Characterization of a Class of Distributed Algorithms. In Jean-Raymond Abrial & Uwe Glässer, editors: Rigorous Methods for Software Construction and Analysis, Lecture Notes in Computer Science 5115, Springer, Berlin, pp. 50–64, doi:10.1007/
978-3-642-11447-2_4. Available at http://www2.informatik.hu-berlin.de/top/download/
publications/GlauschR2007_dagstuhl.pdf (viewed Aug. 11, 2010).
[21] E. Mark Gold (1965): Limiting Recursion. J. Symbolic Logic 30(1), pp. 28–48, doi:10.2307/2270580.
[22] Saul Gorn (1960): Algorithms: Bisection Routine. Communications of the ACM 3(3), p. 174, doi:10.1145/
367149.367173.
[23] Erich Grädel & Antje Nowack (2003): Quantum Computing and Abstract State Machines. In: Proceedings
of the 10th International Conference on Abstract State Machines: Advances in Theory and Practice (ASM
’03; Taormina, Italy), Springer-Verlag, Berlin, pp. 309–323, doi:10.1007/3-540-36498-6_18. Available
at http://www.logic.rwth-aachen.de/pub/graedel/GrNo-asm03.ps (viewed July 13, 2011).
[24] Yuri Gurevich (1995): Evolving Algebras 1993: Lipari Guide. In Egon Börger, editor: Specification and
Validation Methods, Oxford University Press, pp. 9–36. Available at http://research.microsoft.com/
~gurevich/opera/103.pdf (viewed Apr. 15, 2009).
[25] Yuri Gurevich (2000): Sequential Abstract State Machines Capture Sequential Algorithms. ACM Transactions on Computational Logic 1(1), pp. 77–111, doi:10.1145/343369.343384. Available at http://
research.microsoft.com/~gurevich/opera/141.pdf (viewed Apr. 15, 2009).
[26] Yuri Gurevich, Benjamin Rossman & Wolfram Schulte (2005): Semantic Essence of AsmL. Theoretical Computer Science 343(3), pp. 370–412, doi:10.1016/j.tcs.2005.06.017. Available at http://research.
microsoft.com/~gurevich/opera/169.pdf (viewed June 7, 2009).
[27] Yuri Gurevich, Wolfram Schulte & Margus Veanes (2001): Toward Industrial Strength Abstract State
Machines. Technical Report MSR-TR-2001-98, Microsoft Research. Available at http://research.
microsoft.com/en-us/um/people/gurevich/opera/155.pdf (viewed Aug. 11, 2010).
[28] Yuri Gurevich & Tatiana Yavorskaya (2006): On Bounded Exploration and Bounded Nondeterminism. Technical Report MSR-TR-2006-07, Microsoft Research. Available at http://research.microsoft.com/
~gurevich/opera/177.pdf (viewed Apr. 15, 2009).
[29] David Harel (1980): On Folk Theorems. Communications of the ACM 23(7), pp. 379–389, doi:10.1145/
358886.358892.
[30] Stephen C. Kleene (1967): Mathematical Logic. Wiley, New York.
[31] Stephen C. Kleene (1987): Reflections on Church’s Thesis. Notre Dame Journal of Formal Logic 28(4), pp.
490–498, doi:10.1305/ndjfl/1093637645.
[32] Emil L. Post (1994): Absolutely Unsolvable Problems and Relatively Undecidable Propositions: Account of
an Anticipation. In M. Davis, editor: Solvability, Provability, Definability: The Collected Works of Emil L.
Post, Birkhaüser, Boston, MA, pp. 375–441. Unpublished paper, 1941.
[33] Hilary Putnam (1965): Trial and Error Predicates and the Solution to a Problem of Mostowski. J. Symbolic
Logic 30(1), pp. 49–57, doi:10.2307/2270581.
[34] Wolfgang Reisig (2003): On Gurevich’s Theorem on Sequential Algorithms. Acta Informatica 39(4), pp.
273–305, doi:10.1007/s00236-002-0106-3. Available at http://www2.informatik.hu-berlin.de/
top/download/publications/Reisig2003_ai395.pdf (viewed Aug. 11, 2010).
[35] Wolfgang Reisig (2008): The Computable Kernel of Abstract State Machines. Theoretical Computer Science
409(1), pp. 126–136, doi:10.1016/j.tcs.2008.08.041. Draft available at http://www2.informatik.
hu-berlin.de/top/download/publications/Reisig2004_hub_tr177.pdf (viewed Aug. 11, 2010).
[36] Hartley Rogers, Jr. (1966): Theory of Recursive Functions and Effective Computability. McGraw-Hill, New
York.
[37] Marc Spielmann (2000): Abstract State Machines: Verification Problems and Complexity. Ph.D. thesis, RWTH Aachen, Aachen, Germany. Available at http://www-mgi.informatik.rwth-aachen.de/
~spielmann/diss.pdf (viewed July 13, 2011).
[38] Alan M. Turing (1936–37): On Computable Numbers, With an Application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society 42, pp. 230–265, doi:10.1112/plms/s2-42.1.230. Corrections in vol. 43 (1937), pp. 544-546. Reprinted in M. Davis (ed.), The Undecidable, Raven Press, Hewlett,
NY, 1965. Available at http://www.abelard.org/turpap2/tp2-ie.asp.
Commonsense L OCATED N EAR Relation Extraction
arXiv:1711.04204v2 [cs.CL] 16 Nov 2017
Frank F. Xu∗, Bill Y. Lin∗ and Kenny Q. Zhu
{frankxu, yuchenlin}@sjtu.edu.cn, [email protected]
Department of Computer Science and Engineering
Shanghai Jiao Tong University, Shanghai, China
1 Introduction
Artificial Intelligent systems can benefit from incorporating commonsense knowledge as background,
such as ice is cold (H AS P ROPERTY), chewing is a sub-event of eating (H AS S UBEVENT), chair and
table are typically found near each other (LOCATEDNEAR), etc. Such commonsense facts
have been utilized in many downstream tasks, such as textual entailment [4, 1] and visual recognition
tasks [29]. Commonsense knowledge is often represented as relation triples in commonsense
knowledge bases, such as ConceptNet by MIT [20], one of the largest commonsense knowledge
graphs available today. However, such knowledge bases are usually manually
curated or crowd-sourced by community efforts and thus do not scale well.
This paper aims at automatically extracting the commonsense L OCATED N EAR relation between
physical objects from textual corpora, which is defined as two objects typically found near each
other in real life. We focus on L OCATED N EAR relation for these reasons: (i) L OCATED N EAR facts
are helpful prior knowledge for object detection in complex image scenes; Figure 1 illustrates two
motivating examples; (ii) such commonsense knowledge can potentially benefit general reasoning in
reading comprehension, question answering as well as many other AI tasks; (iii) existing knowledge
bases have very few facts for this relation (ConceptNet 5 has only 49 triples of L OCATED N EAR).
Figure 1: L OCATED N EAR relation facts assist the detection of vague objects: in a dimly lit room
with settings shown in the left sub-figure, if a bright laptop is present on a table, one may guess that a
lamp, a photo frame or books maybe nearby. Similarly in the right sub-figure, if a set of knife, fork
and plate is on the table, one may believe there could be a glass beside based on the commonsense,
even though these objects are hardly visible due to low light.
We propose two novel tasks for extracting the LOCATEDNEAR relation from textual corpora. One is
a binary relation classification problem which judges whether or not a sentence describes two
objects that are physically close to each other. The other task is to produce a ranked list of LOCATEDNEAR facts from
the classification results over a large number of sentences. We believe both tasks can help the
community further automatically complete and populate existing commonsense knowledge bases.
∗
The first two authors contribute equally.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Additionally, we create two benchmark datasets for evaluating LOCATEDNEAR relation extraction systems on the two tasks: one consists of 5,000 sentences, each describing a scene of two physical objects
and carrying a label indicating whether the two objects are co-located in the scene; the other consists of 500
pairs of objects with human-annotated scores indicating the confidence that a certain pair of objects is
commonly located near each other in real life.
We propose several methods to solve the tasks including feature-based and LSTM-based neural
architecture. The proposed neural architecture compares favorably with the current state-of-the-art
method for general-purpose relation classification problem. From our relatively smaller proposed
datasets, we extract in total 2,067 new L OCATED N EAR triples that are not in ConceptNet.
2 Sentence-level LOCATEDNEAR Relation Classification
Given a sentence s mentioning a pair of physical objects <ei , ej >, we call <s, ei , ej > an instance. In
this section, we aim to determine whether ei and ej are located near each other in a physical scene
described in the sentence s. For example, suppose ei is “dog", ej is “cat”, and s = “The King puts
his dog and cat on the table.”. As it is true that the two objects are located near in this sentence, a
successful classification model is expected to label this instance as True. While if s2 = “My dog is
older than her cat.”, then the answer to the instance <s2 , ei , ej > is False, for s2 is just talking about a
general comparison. In the following subsections, we present two different kinds of baseline methods
for this binary classification task: feature-based methods and LSTM-based neural architectures.
2.1 Feature-based Methods
Our first baseline is an SVM classifier based on the following features. Such semantic and
syntactic features are widely utilized in existing relation classification models [2, 6, 28, 17]. Note
that we put special focus on adverbs and prepositions, based on the assumption that these lexical units,
which describe directions and positions in the physical world, help identify LOCATEDNEAR relations.
Proposed features:
- Bag of Words (BW) The set of words that ever appeared in the sentence.
- Bag of Path Words (BPW) The set of words that appeared on the shortest dependency path between
objects ei and ej in the dependency tree of the sentence s, plus the words in the two subtrees rooted
at ei and ej in the parse tree.
- Bag of Adverbs and Prepositions (BAP) The existence of adverbs and prepositions in the sentence as
binary features.
- Global Features (GF) The length of the sentence, the number of nouns, verbs, adverbs, adjectives,
determiners, prepositions and punctuations in the whole sentence.
- Shortest Dependency Path Features (SDP) From the dependency parse tree of the sentence and the
shortest path between the two objects ei and ej .
- Semantic Similarity Features (SS) The cosine similarity between the pre-trained GloVe word
embeddings [16] of the two object words.
After obtaining such features for every instance, we feed the processed data into an SVM classifier.
We evaluate linear and RBF kernels with different parameter settings, and the RBF kernel with
{C = 100, γ = 10−3} performs the best overall.
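A minimal sketch of this baseline is given below (our illustration, not the authors' code); `extract_features` is a hypothetical placeholder standing in for the BW/BPW/BAP/GF/SDP/SS feature extraction described above.

```python
# Sketch of the feature-based baseline, assuming feature vectors have already been
# built from the features described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_fscore_support

def extract_features(instances):
    # Hypothetical placeholder: map each <sentence, e_i, e_j> instance to a vector.
    return np.array([inst["features"] for inst in instances])

def train_and_eval(train_insts, train_labels, test_insts, test_labels):
    X_train, X_test = extract_features(train_insts), extract_features(test_insts)
    clf = SVC(kernel="rbf", C=100, gamma=1e-3)   # the setting reported to work best
    clf.fit(X_train, train_labels)
    pred = clf.predict(X_test)
    p, r, f1, _ = precision_recall_fscore_support(test_labels, pred, average="binary")
    return clf, (p, r, f1)
```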
2.2 LSTM-based Neural Architectures
Long Short Term Memory based recurrent neural architectures (LSTMs) [8] are widely used in
relation classification [19, 5, 22, 24]. We observe that the existence of L OCATED N EAR relation in
an instance <s,e1 ,e2 > depends on two major information sources: one is from the semantic and
syntactical features of sentence s and the other is from the object pair <e1 ,e2 >. By this intuition, we
design our LSTM-based model with two parts, shown in Figure 2. The left part is for encoding the
syntactical and semantic information of the sentence s, while the right part is encoding the semantic
similarity between the pre-trained word embeddings of e1 and e2 .
[Figure 2 depicts the two-branch architecture: the normalized token sequence, together with position features (distances to E1 and E2), is encoded by an LSTM; the pre-trained word vectors of E1 and E2 pass through a dense layer; the two outputs are concatenated and fed to a sigmoid output producing the confidence.]

Figure 2: The proposed LSTM-based model
2.2.1 Sentence Normalization
Using the original word sequence of a sentence s as input has two problems: (i) the irrelevant
words in the sentence can introduce noise into the model; (ii) the large vocabulary of original words induces too
many parameters, which may cause over-fitting. For example, consider the two sentences “The king led the
dog into his nice garden.” and “A criminal led the dog into a poor garden.”. The object pair is <dog,
garden> in both sentences. The two words “lead” and “into” are essential for determining whether
the object pair is located near each other, but they are not given more weight than other words. Also, the semantic
differences between irrelevant words, such as “king” and “criminal”, “nice” and “poor”, are not
useful to the co-location relation between “dog” and “garden”, and thus tend to act as noise.
Level             Examples
Objects           E1, E2
Lemma             open, lead, into, ...
Dependency Role   open#s, open#o, into#o, ...
POS Tag           DT, PR, CC, JJ, ...

Table 1: Examples of four types of tokens during sentence normalization. (#s represents the subject
of given verb or preposition, and #o represents the object)
Considering above problems, we propose utilizing POS (Part-of-Speech) tags instead to capture more
syntactical information and reduce the vocabulary size. However, solely doing this loses too much
semantic dependency between the words. Thus, we propose a normalized sentence representation
method merging the three most important and relevant kinds of information about each instance:
lemma, POS tags and dependency role 2 .
We first replace the two nouns in the object pair as E1 and E2 , keep the lemmatized form of the
original words for all the verbs, adverbs and prepositions, which are highly relevant to describing
physical scenes. Then, we replace the subjects and direct objects of the verbs and prepositions
(nsubj, dobj for verbs and case for prepositions in dependency parse tree) with special tokens
indicating their dependency roles. For the remaining words, we simply use their POS tags to replace
the originals. The four kinds of tokens are illustrated in Table 1. Table 2 is a real example of our
normalized sentence representation, where the object pair of interest is <dog, garden>.
Original:    The  king    opened  the  door    and  led   the  dog  into  his  nice  garden.
Normalized:  DT   open#s  open    DT   open#o  CC   lead  DT   E1   into  PR   JJ    E2 .

Table 2: Sentence Normalization Example

2 We utilize Stanford CoreNLP tool: https://stanfordnlp.github.io/CoreNLP/
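The following Python sketch (ours) illustrates the normalization rules on pre-parsed tokens; the token annotations are hard-coded here rather than produced by CoreNLP, and the rule set is a simplification of the description above.

```python
# Simplified sketch of sentence normalization: objects -> E1/E2,
# verbs/adverbs/prepositions -> lemma, their subjects/objects -> lemma#s / lemma#o,
# everything else -> POS tag. Token annotations are hard-coded for illustration.

KEEP_LEMMA_POS = {"VB", "VBD", "VBZ", "VBP", "RB", "IN"}   # verbs, adverbs, prepositions

def normalize(tokens, e1_idx, e2_idx):
    out = []
    for i, tok in enumerate(tokens):
        word, lemma, pos, dep, head = tok
        if i == e1_idx:
            out.append("E1")
        elif i == e2_idx:
            out.append("E2")
        elif pos in KEEP_LEMMA_POS:
            out.append(lemma)
        elif dep == "nsubj":                    # subject of its head verb
            out.append(tokens[head][1] + "#s")
        elif dep in ("dobj", "case"):           # object of a verb / preposition
            out.append(tokens[head][1] + "#o")
        else:
            out.append(pos)
    return out

# (word, lemma, POS, dependency relation, head index)
sent = [("The", "the", "DT", "det", 1), ("king", "king", "NN", "nsubj", 2),
        ("opened", "open", "VBD", "root", 2), ("the", "the", "DT", "det", 4),
        ("door", "door", "NN", "dobj", 2), ("and", "and", "CC", "cc", 2),
        ("led", "lead", "VBD", "conj", 2), ("the", "the", "DT", "det", 8),
        ("dog", "dog", "NN", "dobj", 6), ("into", "into", "IN", "case", 11),
        ("his", "his", "PRP$", "poss", 11), ("garden", "garden", "NN", "nmod", 6)]
print(normalize(sent, e1_idx=8, e2_idx=11))
# ['DT', 'open#s', 'open', 'DT', 'open#o', 'CC', 'lead', 'DT', 'E1', 'into', 'PRP$', 'E2']
```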
2.2.2 Model Training
As shown in Figure 2, the bottom of the figure shows the original sentence, which is transformed to
normalized sequence described above. Apart from the normalized tokens of the original sequence, to
capture more structural information, we also encode the distance from each token to E1 and E2 . Such
word position embeddings (position/distance features) are proposed by [27] with the intuition that
information needed to determine the relation between two target nouns normally comes from words
which are close to the target nouns. Then, we leverage LSTM to encode the whole sequence of the
tokens of normalized representation plus position embedding.
In the meantime, two pretrained GloVe word embeddings [16] of the original two physical object
words are fed into a hidden dense layer. Finally, we concatenate both outputs and then use sigmoid
activation function to obtain the final prediction.
We use the standard binary cross-entropy as our loss function, and RMSProp [7]
as the optimizer. Following [26], we apply dropout of 0.5 in the LSTM as well as the embedding layer, and
utilize batch normalization [10, 3] to mitigate over-fitting on the relatively small dataset.
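A compact sketch of this two-branch model is shown below (our illustration; the vocabulary size, sequence length, and hidden dimensions are assumptions, not values reported by the paper).

```python
# Sketch of the two-branch model: an LSTM over normalized tokens plus position
# features, and a dense layer over the pre-trained embeddings of the two objects.
from tensorflow.keras import layers, Model

VOCAB, MAX_LEN, EMB, POS_RANGE, GLOVE_DIM = 500, 40, 64, 81, 100  # assumed sizes

tok_in  = layers.Input(shape=(MAX_LEN,), dtype="int32", name="normalized_tokens")
pos1_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="distance_to_E1")
pos2_in = layers.Input(shape=(MAX_LEN,), dtype="int32", name="distance_to_E2")
pair_in = layers.Input(shape=(2 * GLOVE_DIM,), name="glove_of_E1_E2")

tok_emb  = layers.Embedding(VOCAB, EMB)(tok_in)
pos1_emb = layers.Embedding(POS_RANGE, 8)(pos1_in)       # word position embeddings
pos2_emb = layers.Embedding(POS_RANGE, 8)(pos2_in)
seq = layers.Concatenate()([tok_emb, pos1_emb, pos2_emb])
seq = layers.Dropout(0.5)(seq)
seq_vec = layers.LSTM(64, dropout=0.5)(seq)              # sentence branch

pair_vec = layers.Dense(32, activation="relu")(pair_in)  # object-pair branch

merged = layers.Concatenate()([seq_vec, pair_vec])
out = layers.Dense(1, activation="sigmoid")(merged)

model = Model([tok_in, pos1_in, pos2_in, pair_in], out)
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
```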
3 LOCATEDNEAR Relation Extraction
Figure 3 shows the overall workflow of our automatic framework to mine LocatedNear relations from
raw text. We first construct a vocabulary of physical objects and generate all candidate instances. For
each sentence in the corpus, if a pair of physical objects ei and ej appear as nouns in a sentence s,
then we apply our L OCATED N EAR relation classifier on this instance. The relation classifier yields a
probabilistic score s indicating the confidence of the existence of L OCATED N EAR relation. Finally,
all scores of <s,ei ,ej > instances from the corpus are grouped by the object pairs and aggregated,
where each object pair is associated with a final score. Such mined physical pairs with scores can
easily be integrated into existing commonsense knowledge base.
More specifically, for each object pair <ei , ej >, we find all the m sentences in our corpus mentioning
both objects. We classify the m instances with the sentence-level relation classifier and get confidences
for each instance, feed them into a function f to obtain the final score of the object pair. There are
five variants of the scoring functions:
m
m
X
1 X
f0 = m, f1 =
conf(sk , ei , ej ), f2 =
conf(sk , ei , ej )
m
k=1
f3 =
m
X
k=1
m
1 X
f4 =
1{conf(sk ,ei ,ej )>0.5}
m
1{conf(sk ,ei ,ej )>0.5} ,
k=1
Object Pairs
< 𝑠𝑚 , 𝑒𝑖 , 𝑒𝑗 >
...
𝑓
...
𝑐𝑜𝑛𝑓 < 𝑠𝑚 , 𝑒𝑖 , 𝑒𝑗 >
...
𝑐𝑜𝑛𝑓 < 𝑠1 , 𝑒𝑖 , 𝑒𝑗 >
Object
co-location
classifier
...
Corpus
Classification
Confidence
.. ..
.. ..
< 𝑒𝑖 , 𝑒𝑗 >
...
...
< 𝑠1 , 𝑒𝑖 , 𝑒𝑗 >
k=1
𝑠𝑐𝑜𝑟𝑒 < 𝑒𝑖 , 𝑒𝑗 >
LocatedNear
Relation Scores
Figure 3: Computing the L OCATED N EAR scores of object pairs
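The aggregation step is straightforward; a small sketch (ours) of the five scoring functions applied to one object pair's classifier confidences:

```python
# Sketch of the five aggregation functions f0..f4 applied to the per-sentence
# classifier confidences of one object pair <e_i, e_j>.
def aggregate(confidences):
    m = len(confidences)
    hits = [c for c in confidences if c > 0.5]
    return {
        "f0": m,                                   # number of co-occurring sentences
        "f1": sum(confidences),                    # accumulated confidence
        "f2": sum(confidences) / m if m else 0.0,  # average confidence
        "f3": len(hits),                           # number of confident sentences
        "f4": len(hits) / m if m else 0.0,         # fraction of confident sentences
    }

print(aggregate([0.9, 0.2, 0.7, 0.6]))   # e.g. {'f0': 4, 'f1': 2.4, ...}
```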
4 Datasets
Our proposed vocabulary of single-word physical objects is constructed by the intersection of all
entities that belong to “physical object” class in Wikidata and all ConceptNet concepts. We then
manually filtered out some words that have the meaning of an abstract concept, which results in 1169
physical objects in total.
Afterwards, we utilize a cleaned subset of the Project Gutenberg corpus [11], which contains 3,036
English books written by 142 authors. An assumption here is that sentences in fictions are more
Method       Acc.   P      R      F1
Random       0.500  0.551  0.500  0.524
Majority     0.551  0.551  1.000  0.710
SVM          0.584  0.606  0.702  0.650
SVM(-BW)     0.577  0.579  0.675  0.623
SVM(-BPW)    0.556  0.567  0.681  0.619
SVM(-BAP)    0.563  0.573  0.811  0.672
SVM(-GF)     0.605  0.616  0.751  0.677
SVM(-SDP)    0.579  0.597  0.728  0.656
SVM(-SS)     0.584  0.605  0.708  0.652
DRNN [22]    0.635  0.658  0.702  0.679
LSTM+Word    0.637  0.635  0.800  0.708
LSTM+POS     0.641  0.650  0.751  0.697
LSTM+Norm    0.653  0.654  0.784  0.713

Table 3: Performance of baselines on co-location classification task with ablation. (Acc.=Accuracy,
P=Precision, R=Recall, “-” means without certain feature)
likely to describe real life scenes. We sample and investigate the density of L OCATED N EAR relations
in Gutenberg with other widely used corpora, namely Wikipedia, used by Mintz et al. (2009) and
New York Times corpus, created by Riedel et al. (2010) and used by Lin et al. (2016), Hoffmann
et al. (2011), Surdeanu et al. (2012). In the English Wikipedia dump, out of all sentences which
mentions at least two physical objects, 32.4% turn out to be positive. In the New York Times
corpus, the percentage of positive sentences is only 25.1%. In contrast, that percentage in the
Gutenberg corpus is 55.1%, much higher than the other two corpora, making it a good choice for
L OCATED N EAR relation extraction.
From this corpus, we identify 15,193 pairs that co-occur in more than 10 sentences. Among these
pairs, we randomly select 500 object pairs and 10 sentences with respect to each pair for annotators
to label their commonsense L OCATED N EAR. Each instance is labeled by at least three annotators
who are college students and proficient with English. The final truth label of a sentence is decided by
a majority vote from the four annotators. The Cohen’s Kappa among the three annotators is 0.711
which suggests substantial agreement. We randomly choose 4000 instances as the training set and
1000 as the test set for evaluating the first sentence-level relation classification task. For the second
task, we further ask the annotators to label whether each pair of objects are likely to locate near each
other in the real world. Majority votes determine the final truth labels. The inter-annotator agreement
here is 0.703. Both datasets are made publicly available.3
5 Evaluation
5.1 Sentence-level LOCATEDNEAR Relation Classification
We evaluate the proposed methods against the state-of-the-art general domain relation classification
model (DRNN) [23]. The results are shown in Table 3. For feature-based SVM, we do feature
ablation on each of the 6 feature types (Section 2.1). For LSTM-based model, we experiment on
variants of input sequence of original sentence. “LSTM+Word” uses the original words as the input
tokens, while “LSTM+POS” uses just the POS tag sequence as the input tokens. “LSTM+Norm”
uses the tokens of sequence after sentence normalization. 4
From the results, we find that the SVM model without the Global Features performs best, which
indicates that bag-of-word features benefit more in shortest dependency paths than on the whole
sentence. We find that DRNN performs best (0.658) on precision but not significantly higher than
LSTM+Norm (0.654). The experiment also shows that LSTM+Word enjoys the highest recall score.
In terms of overall performance, LSTM+Norm is the best. One possible reason is that our
proposed normalized representation reduces the token vocabulary of the input sequences while
preserving important syntactical and semantic information. While LSTM+POS also reduces the
vocabulary size, it loses too much information.
Another reason is that the LOCATEDNEAR relation is mostly expressed through the prepositions/adverbs attached to the objects, which are descendants of the object words in the dependency tree
rather than words lying on the shortest dependency path. Thus, DRNN cannot capture the information from the words belonging to the descendants of the two object words in the tree, while this
3 https://adapt.seiee.sjtu.edu.cn/~frank/location_relation_data.zip
4 Besides, we added two naive baselines: “Random” baseline classifies the instances into two classes with
equal probability; “Majority” baseline considers all the instances to be positive.
f     MAP    P@50   P@100  P@200  P@300
f0    0.42   0.40   0.44   0.42   0.38
f1    0.58   0.70   0.60   0.53   0.44
f2    0.48   0.56   0.52   0.49   0.42
f3    0.59   0.68   0.63   0.55   0.44
f4    0.56   0.40   0.48   0.50   0.42

Table 4: Ranking performances of the 5 scoring methods.
information is captured by LSTM+Norm. For the rest of the experiments, we will use LSTM+Norm
as the classifier of our choice.
5.2 LOCATEDNEAR Relation Extraction
Once we have classified the sentences using LSTM+Norm, we can extract LOCATEDNEAR relations
using the five scoring functions in Section 3. We first present the quantitative results. We use each of
the scoring functions to rank the 500 commonsense LOCATEDNEAR object pairs described in Section
4. Table 4 shows the ranking results using Mean Average Precision (MAP) and Precision at K as
metrics. Accumulative scores (f1 and f3) generally do better.
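For completeness, a small sketch of the ranking metrics used here (our code, following the standard definitions of MAP and Precision@K):

```python
# Standard Mean Average Precision and Precision@K over a ranked list of object
# pairs, given a set of pairs judged to be true LOCATEDNEAR facts.
def precision_at_k(ranked_pairs, relevant, k):
    top = ranked_pairs[:k]
    return sum(1 for p in top if p in relevant) / float(k)

def mean_average_precision(ranked_pairs, relevant):
    hits, precisions = 0, []
    for i, p in enumerate(ranked_pairs, start=1):
        if p in relevant:
            hits += 1
            precisions.append(hits / float(i))
    return sum(precisions) / len(relevant) if relevant else 0.0

ranked = [("door", "room"), ("dog", "cat"), ("ship", "sea")]
gold = {("door", "room"), ("ship", "sea")}
print(precision_at_k(ranked, gold, 2), mean_average_precision(ranked, gold))
```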
(door, room)    (ship, sea)      (fire, wood)     (fire, smoke)    (book, table)
(boy, girl)     (house, garden)  (house, fire)    (door, hall)     (fruit, tree)
(cup, tea)      (arm, leg)       (horse, saddle)  (door, street)   (table, chair)

Table 5: Top object pairs returned by best performing scoring function f3
Qualitatively, we show 15 object pairs with some of the highest f3 scores in Table 5. Setting a
threshold of 40.0 for f3 , which is the minimum non-zero f3 score for all true object pairs in the
L OCATED N EAR object pairs data set (500 pairs), we obtain a total of 2,067 L OCATED N EAR relations,
with a precision of 68% by human inspection.
6 Related Work
Classifying relations between entities in a sentence plays a key role in NLP applications
and has been an active research topic. Feature-based methods [6] and neural network
techniques [19, 5] are the most common. Xu et al. (2015) introduced a multi-channel SDP-based LSTM
model that classifies relations by incorporating several different kinds of information about a sentence; it was later improved
by Xu et al. (2016), performed best on SemEval-2010 Task 8, and is one of our baseline methods.
The most related work to ours is the extraction of visual commonsense knowledge by Yatskar
et al. (2016). This work learns the textual representation of seven types of fine-grained visual relations
using textual caption for the image in MS-COCO dataset [13]. Another important related work
is from Li et al. (2016), which enriches several popular relations in ConceptNet with little textual
information from real large corpora. However, L OCATED N EAR relation was not studied in this work,
while this relation is extremely scarce in ConceptNet and has its own distinctiveness.
7 Conclusion
We presented a novel study on enriching the LOCATEDNEAR relation from textual corpora. Based on
our two newly collected benchmark datasets, we proposed several methods to solve the sentence-level
relation classification problem. We showed that existing methods do not work as well on this task,
and found that the LSTM-based model does not have a significant edge over the simpler feature-based
model, whereas our multi-level sentence normalization turns out to be useful.
Future directions include: 1) better utilizing distant supervision, 2) incorporating knowledge graph
embedding techniques, 3) applying the L OCATED N EAR knowledge into downstream applications of
Computer Vision and Natural Language Processing.
References
[1] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning
natural language inference. arXiv preprint arXiv:1508.05326, 2015.
[2] R. C. Bunescu and R. J. Mooney. A shortest path dependency kernel for relation extraction. In
HLT/EMNLP, 2005.
[3] T. Cooijmans, N. Ballas, C. Laurent, Ç. Gülçehre, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[4] I. Dagan, B. Dolan, B. Magnini, and D. Roth. Recognizing textual entailment: Rational,
evaluation and approaches. Natural Language Engineering, 15(4):i–xvii, 2009.
[5] J. Ebrahimi and D. Dou. Chain based rnn for relation classification. In HLT-NAACL, pages
1244–1249, 2015.
[6] I. Hendrickx, S. N. Kim, Z. Kozareva, P. Nakov, D. Ó. Séaghdha, S. Padó, M. Pennacchiotti,
L. Romano, and S. Szpakowicz. Semeval-2010 task 8: Multi-way classification of semantic
relations between pairs of nominals. In Proceedings of the 5th International Workshop on
Semantic Evaluation, SemEval@ACL 2010, Uppsala University, Uppsala, Sweden, July 15-16,
2010, pages 33–38, 2010.
[7] G. Hinton, N. Srivastava, and K. Swersky. Neural networks for machine learning lecture 6a
overview of mini–batch gradient descent. Lecture 6.5, Coursera, 2012.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997.
[9] R. Hoffmann, C. Zhang, X. Ling, L. Zettlemoyer, and D. S. Weld. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual
Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics, 2011.
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] S. Lahiri. Complexity of Word Collocation Networks: A Preliminary Structural Analysis. In
Proceedings of the Student Research Workshop at EACL, pages 96–105, April 2014.
[12] X. Li, A. Taheri, L. Tu, and K. Gimpel. Commonsense knowledge base completion. In
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL),
Berlin, Germany, August. Association for Computational Linguistics, 2016.
[13] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick.
Microsoft coco: Common objects in context. In European Conference on Computer Vision,
pages 740–755. Springer, 2014.
[14] Y. Lin, S. Shen, Z. Liu, H. Luan, and M. Sun. Neural relation extraction with selective attention
over instances. In ACL, 2016.
[15] M. Mintz, S. Bills, R. Snow, and D. Jurafsky. Distant supervision for relation extraction without
labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and
the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume
2-Volume 2, pages 1003–1011. Association for Computational Linguistics, 2009.
[16] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In
Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014. URL
http://www.aclweb.org/anthology/D14-1162.
[17] X. Ren, Z. Wu, W. He, M. Qu, C. R. Voss, H. Ji, T. F. Abdelzaher, and J. Han. Cotype: Joint
extraction of typed entities and relations with knowledge bases. In WWW, 2017.
[18] S. Riedel, L. Yao, and A. McCallum. Modeling relations and their mentions without labeled
text. Machine learning and knowledge discovery in databases, pages 148–163, 2010.
[19] R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. Semi-supervised recursive
autoencoders for predicting sentiment distributions. In Proceedings of the conference on empirical methods in natural language processing, pages 151–161. Association for Computational
Linguistics, 2011.
[20] R. Speer and C. Havasi. Representing general relational knowledge in conceptnet 5. In LREC,
pages 3679–3686, 2012.
[21] M. Surdeanu, J. Tibshirani, R. Nallapati, and C. D. Manning. Multi-instance multi-label learning
for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in
natural language processing and computational natural language learning, pages 455–465.
Association for Computational Linguistics, 2012.
[22] Y. Xu, L. Mou, G. Li, Y. Chen, H. Peng, and Z. Jin. Classifying relations via long short term
memory networks along shortest dependency paths. In EMNLP, pages 1785–1794, 2015.
[23] Y. Xu, R. Jia, L. Mou, G. Li, Y. Chen, Y. Lu, and Z. Jin. Improved relation classification by
deep recurrent neural networks with data augmentation. In COLING, 2016.
[24] Y. Xu, R. Jia, L. Mou, G. Li, Y. Chen, Y. Lu, and Z. Jin. Improved relation classification by deep
recurrent neural networks with data augmentation. arXiv preprint arXiv:1601.03651, 2016.
[25] M. Yatskar, V. Ordonez, and A. Farhadi. Stating the obvious: Extracting visual common sense
knowledge. In Proceedings of NAACL-HLT, pages 193–198, 2016.
[26] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv
preprint arXiv:1409.2329, 2014.
[27] D. Zeng, K. Liu, S. Lai, G. Zhou, J. Zhao, et al. Relation classification via convolutional deep
neural network. In COLING, pages 2335–2344, 2014.
[28] G. Zhou, J. Su, J. Zhang, and M. Zhang. Exploring various knowledge in relation extraction. In
ACL, 2005.
[29] Y. Zhu, A. Fathi, and L. Fei-Fei. Reasoning about object affordances in a knowledge base
representation. In European conference on computer vision, pages 408–424. Springer, 2014.
Continuous-time GARCH process driven by semi-Lévy
process
arXiv:1803.00733v1 [math.ST] 2 Mar 2018
M. Mohammadi∗, S. Rezakhah∗, N. Modarresi†
March 5, 2018
Abstract
In this paper we study the simple semi-Lévy driven continuous-time generalized
autoregressive conditionally heteroscedastic (SS-COGARCH) process. The statistical properties of this process are characterized. This process has the potential
to approximate any semi-Lévy driven COGARCH processes. We show that the
state representation of such SS-COGARCH process can be described by a random
recurrence equation with periodic random coefficients. The almost sure absolute
convergence of the state process is proved. The periodically stationary solution of
the state process is shown which cause the volatility to be periodically stationary
under some suitable conditions. Also it is shown that the increments with constant
length of such SS-COGARCH process is itself a periodically correlated (PC) process.
Finally, we apply some test to investigate the PC behavior of the increments (with
constant length) of the simulated samples of proposed SS-COGARCH process.
Keywords: Continuous-time GARCH process; Semi-Lévy process; Periodically
correlated; Periodically stationary.
1 Introduction
Many financial data and indices have a heteroscedastic structure. Examples of this kind are
stock returns, network traffic and natural data; see [4, 18, 16]. Popular models for such
data are the autoregressive conditionally heteroscedastic (ARCH) model proposed by Engle
[13] and the generalized ARCH (GARCH) model of Bollerslev [3]. GARCH-type processes have
become the most popular tools to model heteroscedasticity in discrete time.
∗
Faculty of Mathematics and Computer Science, Amirkabir University of Technology, 424 Hafez
Avenue, Tehran 15914, Iran.
E-mail: [email protected](M. Mohammadi) and [email protected](S. Rezakhah).
†
Department of Mathematics and computer science, Allameh Tabatabai University, Tehran, Iran.
E-mail: [email protected](N. Modarresi).
In practice, for various reasons such as high-frequency data, many time series are irregularly spaced and this has created a demand for continuous-time models, [8]. For the first
time, Kluppelberg et al. [17] introduced a continuous-time version of the GARCH(1,1)
(COGARCH(1,1)) process, which preserves the essential features of the discrete-time
GARCH(1,1) processes. They replaced the noise of the discrete-time GARCH(1,1) process with the increments of some Lévy process. The volatility of this process satisfies a
stochastic differential equation. They proved the stationarity property and also second
order properties under some regularity conditions on the corresponding Lévy process.
Brockwell et al. [8] generalized the Lévy driven COGARCH(1,1) process to the Lévy
driven COGARCH(p, q) process for q ≥ p ≥ 1 when its volatility is a continuous-time
ARMA (CARMA) process [7]. They showed that the state representation of the volatility
can be expressed as a stochastic recurrence equation with random coefficients.
Periodic behavior is common in many real-world time series such as power market prices,
car accident claims for an insurance company and sales with seasonal interest. The term
periodically correlated (PC) was introduced by Gladyshev [14], but the same property
was introduced by Bennett [1] who called them cyclostationary ([15]). Properties of PC
processes are studies by Hurd and Miamee [15]. Bibi and Lescheb [2] studied the class of
bilinear processes with periodic time-varying coefficients of periodic ARMA and periodic
GARCH models.
Lévy processes, introduced by Lévy, have stationary and independent increments and right-continuous paths with left limits [21]. Such processes have the potential to be applied to
financial data following a stochastic volatility structure. A generalization of the Lévy process is
the semi-Lévy process, which has periodically stationary increments, studied by Maejima and
Sato [19]. We consider this process as the underlying process in CARMA [7] and COGARCH [8, 17] processes, which can be applied when there is evidence that the underlying
process has PC increments. The observations of such processes depend significantly on those of previous periods, so semi-Lévy processes are more appropriate than
Lévy processes in such cases.
In this paper we introduce a COGARCH process driven by some simple semi-Lévy process, which we call SS-COGARCH process. The simple semi-Levy process is defined as
a compound Poisson process with periodic time-varying intensity with period τ . This
process enables us to provide the statistical properties of the SS-COGARCH process.
Moreover, we find a random recurrence equation with periodic random coefficients for the
state representation of such process. By some regularity condition we show the absolute
convergence of the state equation. We also show that the volatility of the SS-COGARCH
process is strictly periodically stationary. The increments of the SS-COGARCH process
with constant length h = τ/ϱ, where ϱ is some integer, form a discrete-time PC process with
period ϱ. Such an SS-COGARCH process has the potential to provide an approximation
for every semi-Lévy driven COGARCH process. Finally, we investigate the theoretical
results concerning the PC structure of the increment process by simulation. We show that
the increments of the SS-COGARCH process with length h are PC with some period ϱ
and the support of the squared coherence statistics consists of lines parallel to the main
diagonal and having spacing of 2π/%.
This paper is organized as follows. In section 2 we introduce the simple semi-Levy driven
COGARCH processes. For this, we present the simple semi-Levy process and obtain
the characteristic function of it. Section 3 is devoted to some sufficient conditions which
make the volatility process strictly periodically stationary. We obtain the mean, covariance function of the state process and volatility process in section 4. We also investigate
second order properties of the squared increments of the COGARCH process in this section. In section 5 we illustrate the results with simulations. All proofs are contained in
Section 6.
2 Simple semi-Lévy driven COGARCH processes
In this section we study the preliminaries such as the additive processes and their characteristic functions and semi-Lévy process in subsection 2.1. We also describe the structure
of simple semi-Lévy process and characteristics it in subsection 2.2. Then we introduce
the simple semi-Lévy driven COGARCH (SS-COGARCH) process in subsection 2.3.
2.1 Preliminaries
Let (Ω, F, (Ft )t≥0 , P) be a filtered probability space, where Ft is the smallest rightcontinuous filtration such that F0 contains all the P-null sets of F. A process (Xt )t≥0
defined on the probability space (Ω, F, (Ft )t≥0 , P) is called an additive process if X0 = 0
a.s., it is stochastically continuous, it has independent increments and its sample paths
are right-continuous and have left limits in t > 0. Further, if (Xt )t≥0 has stationary increments, it is a Lévy process [11, 21]. The characteristic function of the additive process
(Xt )t≥0 has a following Lévy-Khinchin representation [21, Theorems 9.1-9.8].
Theorem 2.1 Let (Xt )t≥0 be an additive process on Rd . Then (Xt )t≥0 has infinitely
divisible distribution for t ≥ 0. The law of (Xt )t≥0 is uniquely determined by its spot
characteristic triplet (Γt , Πt , ψt )t≥0
E[ei<w,Xt > ] = eϕt (w) ,
1
ϕt (w) = i < w, Γt > − < w, Πt w > +
2
Z
w ∈ Rd ,
(ei<w,x> − 1 − i < w, x > I{||x||≤1} )ψt (dx).
Rd
where < ·, · > is inner product and || · ||Ris Euclidean vector norm. The spot Lévy measure
ψt satisfies the integrability condition Rd min{1, ||x||2 }ψt (dx) < ∞ for t ≥ 0.
Remark 2.1 By [11, p.458-459], the spot characteristic triplet (Γt , Πt , ψt )t∈[0,T ] can be
defined by
Γt = ∫_0^t γs ds,   Πt = ∫_0^t σs^2 ds,   ψt(B) = ∫_0^t υs(B) ds,   ∀ B ∈ ℛ^d,
where ℛ^d is the σ-field on R^d. The triplet (γt, σt^2, υt)_{t∈[0,T]} is called the local characteristic
triplet of (Xt)_{0≤t≤T}, which satisfies the following conditions:
• γt : [0, T] → R^d is a deterministic function with finite variation.
• σt : [0, T] → M_{d×d}(R) is a symmetric, continuous, matrix-valued function which verifies ∫_0^T σt^2 dt < ∞.
• (υt)_{t∈[0,T]} is a family of Lévy measures which verifies ∫_0^T ∫_{R^d} min{1, ||x||^2} υt(dx) dt < ∞.
As an extension of Lévy process, we present the definition of semi-Lévy processes [19].
Definition 2.1 A subclass of additive processes (Xt )t≥0 is called semi-Lévy process with
period τ > 0, if for any 0 ≤ s ≤ t,
Xt − Xs =ᵈ X_{t+τ} − X_{s+τ},
where =ᵈ denotes equality in distribution.
2.2
Structure of simple semi-Lévy process
For describing the structure of the simple semi-Lévy process, we define the general structure of the intensities function of the Poisson process with periodically stationary increments. We also characterize this pure jump process by representation the characteristic
function and introduce the corresponding semi-Lévy measure.
Definition 2.2 : Poisson process with periodically stationary increment
A process N (t) t≥0 is a Poisson process with periodically stationary increment where
E[N(t)] = Λ(t),   Λ(t) = ∫_0^t λ(u) du,    (2.1)
and the intensity λ(·) is a periodic non-negative function with some period τ > 0, so
λ(t) = λ(t + kτ ) for t ≥ 0, k ∈ N.
Definition 2.3 : Simple compound Poisson process
Let 0 = t_0 < t_1 < ··· be a partition of the positive real line. Also assume that A_j = [t_{j−1}, t_j), j ∈ N, and |A_j| = |A_{j+l}| for some integer l ∈ N, and τ = Σ_{j=1}^{l} |A_j|. Let (N(t))_{t≥0} be a Poisson process which has periodically stationary increments with period τ > 0 and intensity function Λ(t) defined by (2.1). Then the simple compound Poisson process (St)_{t≥0} is defined as
St = Dt + Σ_{n=1}^{N(t)} Zn,    (2.2)
where Zn = Σ_{j=1}^{l} Zn^j I_{{Υn ∈ Dj}}, Υn is the arrival time of the nth jump Zn, Dj = ∪_{k=0}^{∞} A_{j+kl}, and the Zn^j are independent and have distribution Fj, j = 1, ···, l, such that ∫_R z^2 Fj(dz) < ∞ for j = 1, ···, l. Also Dt, t > 0, is a deterministic drift function with period τ, say Dt = D_{t+τ}, and D0 = 0. One can easily verify that (St)_{t≥0} has independent increments.
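As an illustration (ours, not the paper's code), the following sketch simulates one path of the process (2.2) for a piecewise-constant periodic intensity via thinning; the intensity values and jump distributions are placeholder choices.

```python
# Sketch: simulate the simple semi-Lévy process S_t of (2.2) on [0, T] with zero drift,
# a piecewise-constant periodic intensity lambda(t) on the sub-intervals A_1,...,A_l,
# and Gaussian jump distributions F_j. All numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
edges = np.array([0.0, 0.4, 1.0])          # t_0 < t_1 < t_2: one period tau = 1, l = 2
lam   = np.array([8.0, 2.0])               # intensity on A_1, A_2 (periodic)
F_std = np.array([0.5, 1.5])               # jump std for F_1, F_2 (mean zero)
tau, lam_max, T = edges[-1], lam.max(), 10.0

def interval_index(t):
    return np.searchsorted(edges, t % tau, side="right") - 1   # which A_j contains t

def simulate_S(T):
    times, jumps, t = [], [], 0.0
    while True:                              # thinning for the inhomogeneous Poisson part
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            break
        j = interval_index(t)
        if rng.uniform() < lam[j] / lam_max:          # accept with prob lambda(t)/lam_max
            times.append(t)
            jumps.append(rng.normal(0.0, F_std[j]))   # Z_n ~ F_j since the arrival lies in D_j
    return np.array(times), np.cumsum(jumps)          # jump times and S at those times

times, S_vals = simulate_S(T)
```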
Now we find characteristic function of the simple compound Poisson process (St )t≥0 by
the following Lemma.
Lemma 2.2 Let (N (t))t≥0 be a Poisson process with periodically stationary increment
and mean Λ(t), defined by (2.1). Then the process (St )t≥0 defined by (2.2) has the following
characteristic function for t ≥ 0
E[e^{iwSt}] = e^{ϕt(w)},
ϕt(w) = iwΓt + ∫_R ( e^{iwz} − 1 − iwz I_{{|z|≤1}} ) ψt(dz),
where
Γt = Dt + Σ_{k=0}^{m−1} Σ_{r=1}^{l} ∫_{|z|≤1} z [Λ(t_{kl+r}) − Λ(t_{kl+r−1})] Fr(dz)
     + Σ_{r=1}^{j−1} ∫_{|z|≤1} z [Λ(t_{ml+r}) − Λ(t_{ml+r−1})] Fr(dz)
     + ∫_{|z|≤1} z [Λ(t) − Λ(t_{ml+j−1})] Fj(dz),
and
ψt(dz) = Σ_{k=0}^{m−1} Σ_{r=1}^{l} [Λ(t_{kl+r}) − Λ(t_{kl+r−1})] Fr(dz)
     + Σ_{r=1}^{j−1} [Λ(t_{ml+r}) − Λ(t_{ml+r−1})] Fr(dz)
     + [Λ(t) − Λ(t_{ml+j−1})] Fj(dz),
where m = [t/τ] and (t − mτ) ∈ A_j for some j = 1, ···, l.
Proof: see Appendix, P1.
Remark 2.2 By Remark 2.1, Lemma 2.2 and (2.1), the spot characteristic triplet of
process (St)_{0≤t≤T}, namely (Γt, 0, ψt)_{t∈[0,T]}, has the local characteristic triplet (γs, 0, υs)_{s∈[0,T]}, which
has the following form:
γs = dDs + Σ_{k=0}^{m−1} Σ_{r=1}^{l} ∫_{|z|≤1} z λ(s) I_{[t_{kl+r−1}, t_{kl+r})}(s) Fr(dz)
     + Σ_{r=1}^{j−1} ∫_{|z|≤1} z λ(s) I_{[t_{ml+r−1}, t_{ml+r})}(s) Fr(dz)
     + ∫_{|z|≤1} z λ(s) I_{[t_{ml+j−1}, t]}(s) Fj(dz),
and
υs(dz) = Σ_{k=0}^{m−1} Σ_{r=1}^{l} λ(s) I_{[t_{kl+r−1}, t_{kl+r})}(s) Fr(dz)
     + Σ_{r=1}^{j−1} λ(s) I_{[t_{ml+r−1}, t_{ml+r})}(s) Fr(dz)
     + λ(s) I_{[t_{ml+j−1}, t]}(s) Fj(dz).    (2.3)
It follows from definition 2.3 and Remark 2.2 that the family (υs )s∈[0,T ] of semi-Lévy
measures verify
∫_0^T ∫_R |z|^2 υs(dz) ds < ∞.    (2.4)
This implies that (St )t≥0 is semi-martingale, so it has Lévy-Ito decomposition and has
quadratic variation process [11, p.459-460].
Corollary 2.3 By lemma 2.2, the stochastic process (St )t≥0 defined by (2.2) is a semiLévy process with period τ.
Proof: see Appendix, P2.
2.3 Structure of simple semi-Lévy driven COGARCH process
Let (St)_{t≥0} be a simple semi-Lévy process with period τ defined by (2.2). A process (Gt)_{t≥0}
with parameters α0 > 0, α1, ···, αp ∈ R, β1, ···, βq ∈ R, αp ≠ 0, βq ≠ 0, and α_{p+1} = ··· = αq = 0 is a simple semi-Lévy driven COGARCH(p,q) process (SS-COGARCH(p,q)), q ≥ p ≥ 1, defined by dGt = √Vt dSt, or equivalently
Gt = ∫_0^t √Vu dSu,   t > 0,  G0 = 0,    (2.5)
in which the left-continuous volatility process (Vt )t≥0 is defined by
Vt = α0 + a′Y_{t−},   t > 0,    V0 = α0 + a′Y0,    (2.6)
where the state process (Yt)_{t≥0} is the unique càdlàg solution of the stochastic differential
equation
dYt = B Y_{t−} dt + e (α0 + a′Y_{t−}) d[S, S]t,   t > 0,    (2.7)
d denotes differentiation with respect to t. The initial value Y0 is F0 -measurable and
independent of the driving semi-Lévy process (St)_{t≥0}, and

B = [   0       1       0      ···    0
        0       0       1      ···    0
        ⋮       ⋮       ⋮       ⋱     ⋮
        0       0       0      ···    1
      −β_q   −β_{q−1}  −β_{q−2} ···  −β_1 ] ,
a = (α1, α2, ···, αq)′,   e = (0, 0, ···, 0, 1)′ ∈ R^q.    (2.8)
3 Periodic stationarity conditions
In this section we provide some conditions to prove that the volatility process (Vt )t≥0
defined by (2.6) is strictly periodically stationary with period τ. As a result of the main
theorem, we prove that the increments with constant length of the process (Gt)_{t≥0} form a
periodically correlated (PC) process, which is the main aim of this paper. We also give a
sufficient and necessary condition by which we can determine whether the volatility is non-negative.
In part (b) of the following theorem, an L_r matrix norm of a (q × q) matrix C is defined as
||C||_r = sup_{c ∈ C^q \ {0}} ||Cc||_r / ||c||_r.
Theorem 3.1 (a) Let (Yt )t≥0 be the state process of the SS-COGARCH(p,q) process
with parameters B, a and α0, defined by (2.5). Suppose that (St)_{t≥0} is a simple semi-Lévy
process defined by (2.2). Then for all 0 ≤ s ≤ t,
Yt = J_{s,t} Ys + K_{s,t},    (3.1)
where (J_{s,t}, K_{s,t})_{0≤s≤t} is a family of random (q × q) matrices J_{s,t} and random vectors K_{s,t} in R^q. In addition, (J_{s+kτ,t+kτ}, K_{s+kτ,t+kτ})_{k∈N0} are independent and identically distributed.
(b) Let ηi, i = 1, ···, q, be the eigenvalues of the invertible matrix B, which have strictly
negative real parts. Also suppose that there exists some r ∈ [1, ∞] such that
∫_R log( 1 + ||P^{-1} e a′ P||_r z^2 ) dνt(z) < − νt(R) η τ / (Λ(t+τ) − Λ(t)),   ∀ t ∈ [0, τ),    (3.2)
where P is a matrix in which P −1 BP is diagonal and η := maxi=1,··· ,q ηi and (νt )t≥0 is
the semi-Lévy measure defined by (2.3). Then Y_{t+mτ} converges in distribution to a finite
random vector U^{(t)} for fixed t ∈ [0, τ), as m goes to infinity. The distribution of the
vector U^{(t)} is the unique solution of the random equation
U^{(t)} =ᵈ J_{t,t+τ} U^{(t)} + K_{t,t+τ},    (3.3)
where U^{(t)} is independent of (J_{t,t+τ}, K_{t,t+τ}).
(c) Let the conditions of (b) hold and Y0 =ᵈ U^{(0)}. Then (Yt)_{t≥0} and (Vt)_{t≥0} are strictly
periodically stationary with period τ. In other words, for any s1, s2, ···, sn ≥ 0, Borel sets E1, E2, ···, En of R^d, Borel sets J1, J2, ···, Jn of R, and k ∈ N,
P(Y_{s1} ∈ E1, Y_{s2} ∈ E2, ···, Y_{sn} ∈ En) = P(Y_{s1+kτ} ∈ E1, Y_{s2+kτ} ∈ E2, ···, Y_{sn+kτ} ∈ En),
and
P(V_{s1} ∈ J1, V_{s2} ∈ J2, ···, V_{sn} ∈ Jn) = P(V_{s1+kτ} ∈ J1, V_{s2+kτ} ∈ J2, ···, V_{sn+kτ} ∈ Jn).
Proof: see Appendix P3.
In the following remark we describe the non-negativity of the Lyapunov exponent which
leads to the absolutely convergence of the state process (Yt )t≥0 in Theorem 3.1.
Remark 3.1 (a) The proof of Theorem 3.1 will be based on the use of the general theory
of multivariate random recurrence equations, as discussed by Bougerol and Picard [5],
Brandt [6] and Vervaat [22] in the one dimensional case. The state vector (Yt )t≥0 defined
by (2.7) satisfies multivariate random recurrence equation.
(b) Condition (3.2) provides the stability of the model, based on the existence
of a vector norm || · ||_r such that J_{t,t+τ} and K_{t,t+τ} satisfy, for all t ∈ [0, τ), the conditions
E[log⁺ ||K_{t,t+τ}||_r] < ∞,   E[log ||J_{t,t+τ}||_r] < 0,    (3.4)
where log⁺(x) = log(max{1, x}). The condition E[log ||J_{t,t+τ}||_r] < 0 is equivalent to the assertion that
the Lyapunov exponent of (J_{t+kτ,t+(k+1)τ})_{k∈N0} is strictly negative almost surely, i.e.,
lim sup_{k→∞} (1/k) log ||J_{t,t+τ} ··· J_{t+kτ,t+(k+1)τ}||_r < 0,   a.s.
(c) The conditions of Theorem 3.1 imply (3.4) with the natural matrix norm ||A||_{B,r} = ||P^{-1}AP||_r for a matrix A, which corresponds to the natural vector norm
||c||_{B,r} := ||P^{-1}c||_r,   c ∈ C^q,
where P is a matrix for which P^{-1}BP is diagonal.
c ∈ Cq
Corollary 3.2 If (Vt )t≥0 is a strictly periodically stationary process with period τ , then
increments with constant length of the process (Gt )t≥0 make a PC process. In the other
words, for any t ≥ 0 and h ≥ p > 0 and k ∈ N,
(p)
(p)
E Gt = E Gt+kτ ,
(p)
(p)
(p)
(p)
cov Gt , Gt+h = cov Gt+kτ , Gt+h+kτ ,
(p)
where Gt :=
R t+p √
t
Vs dSs .
Proof: see Appendix P4.
Theorem 3.3 Let (Yt)_{t≥0} be the state process of the SS-COGARCH(p,q) process (Gt)_{t≥0}
with parameters B, a and α0 > 0. Suppose that γ ≥ −α0 is a real constant and the
following two conditions hold:
a′ e^{Bt} e ≥ 0,   ∀ t ≥ 0,    (3.5)
a′ e^{Bt} Y0 ≥ γ   a.s.,   ∀ t ≥ 0.    (3.6)
Then, with probability one,
Vt ≥ α0 + γ ≥ 0,   ∀ t ≥ 0.
Conversely, if either (3.6) fails, or (3.6) holds with γ > −α0 and (3.5) fails, then there
exists a simple semi-Lévy process (St )t≥0 and t0 ≥ 0 such that P (Vt0 < 0) > 0.
The proof of the non-negativity of the volatility process (Vt)_{t≥0} is similar to the proof of Theorem
5.1 in [8] for the Lévy-driven case.
4 Characterization of the state process
The aim of this section is to study the expected value and covariance function of the state
process {Yt : t ≥ 0} and the volatility process {Vt : t ≥ 0}. First, we prove that under some
sufficient conditions the expected value and covariance of Yt exist. Then, by presenting
the first and second moments of the random vector U^{(0)}, we find the expected value and
covariance function of the state process. Furthermore, a closed form for the squared increments
of the COGARCH process is characterized.
Lemma 4.1 Let the assumptions of Theorem 3.1 hold. If E(||Y0||_r^c) < ∞ for c = 1, 2, then
(a) if E(St^2) < ∞, then E(Yt) < ∞ and E(U^{(0)}) < ∞;
(b) if E(St^4) < ∞, then cov(Yt) < ∞ and cov(U^{(0)}) < ∞;
where {St : t ≥ 0} is the simple semi-Lévy process.
9
Proof: see Appendix P5.
Remark 4.1 By Theorem 3.1(b),
(a) we find that E(U(0)) is the solution of the following random equation

    (I − E(J_{0,τ})) E(U(0)) = E(K_{0,τ}),

and
(b) E((U(0))(U(0))′) is the solution of the following equation

    (I_{q^2} − E(J_{0,τ} ⊗ J_{0,τ})) vec(E((U(0))(U(0))′)) = (E(K_{0,τ} ⊗ J_{0,τ}) + E(J_{0,τ} ⊗ K_{0,τ})) E(U(0))
                                                           + vec(E(K_{0,τ} K′_{0,τ})),

where ⊗ is the Kronecker product of two matrices and, for a matrix C, vec(C) is the
column vector in C^{q^2} constructed by stacking the columns of C.
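In practice, once Monte Carlo estimates of E(J_{0,τ}), E(K_{0,τ}) and the corresponding Kronecker moments are available, both relations are plain linear systems. The following sketch (an illustration only; the inputs EJ, EK, EJJ, EKJ, EJK, EKK stand for such estimates and are not quantities defined in the text) shows the linear-algebra step in Python.

```python
import numpy as np

def first_two_moments_of_U(EJ, EK, EJJ, EKJ, EJK, EKK):
    """Solve the moment equations of Remark 4.1.

    EJ  : estimate of E(J_{0,tau}),               shape (q, q)
    EK  : estimate of E(K_{0,tau}),               shape (q,)
    EJJ : estimate of E(J_{0,tau} (x) J_{0,tau}), shape (q*q, q*q)
    EKJ : estimate of E(K_{0,tau} (x) J_{0,tau}), shape (q*q, q)
    EJK : estimate of E(J_{0,tau} (x) K_{0,tau}), shape (q*q, q)
    EKK : estimate of E(K_{0,tau} K_{0,tau}'),    shape (q, q)
    """
    q = EJ.shape[0]
    # (I - E(J)) E(U) = E(K)
    EU = np.linalg.solve(np.eye(q) - EJ, EK)
    # (I_{q^2} - E(J (x) J)) vec(E(UU')) = (E(K (x) J) + E(J (x) K)) E(U) + vec(E(KK'))
    rhs = (EKJ + EJK) @ EU + EKK.flatten(order="F")   # vec() stacks columns
    vecEUU = np.linalg.solve(np.eye(q * q) - EJJ, rhs)
    return EU, vecEUU.reshape((q, q), order="F")

if __name__ == "__main__":
    # toy 2x2 illustration (not parameters from the paper)
    q = 2
    EJ = 0.3 * np.eye(q)
    EK = np.array([1.0, 0.5])
    EJJ = np.kron(EJ, EJ)                      # would be Monte Carlo estimates in practice
    EKJ = np.kron(EK.reshape(-1, 1), EJ)
    EJK = np.kron(EJ, EK.reshape(-1, 1))
    EKK = np.outer(EK, EK)
    EU, EUU = first_two_moments_of_U(EJ, EK, EJJ, EKJ, EJK, EKK)
    print(EU, EUU, sep="\n")
```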
The following lemmas establish the mean and covariance function of the state process.

Lemma 4.2 Suppose that {Yt : t ≥ 0} is the state process and that the conditions of
Theorem 3.1 and Lemma 4.1 hold. Then for t, h ≥ 0 there exist m, n ∈ N and t1, t2 ∈ [0, τ)
such that t ∈ [mτ, (m + 1)τ), t + h ∈ [nτ, (n + 1)τ), t = t1 + mτ, t + h = t2 + nτ, and

    E(Yt) = E(J_{0,t1}) E(U) + E(K_{0,t1}),                                       (4.1)

    cov(Yt, Yt+h) = E(J_{0,t2}) E(J_{0,τ})^{n−m−1} [ E(J_{0,τ} E(UU′) J′_{0,t1}) − E(J_{0,τ}) E(U) E(U′) E(J′_{0,t1})
                    + E(J_{0,τ} E(U) K′_{0,t1}) − E(J_{0,τ}) E(U) E(K′_{0,t1})
                    + E(K_{0,τ} E(U′) J′_{0,t1}) − E(K_{0,τ}) E(U′) E(J′_{0,t1})
                    + E(K_{0,τ} K′_{0,t1}) − E(K_{0,τ}) E(K′_{0,t1}) ].            (4.2)
Proof: see Appendix P6.
Corollary 4.3 Let {Vt : t ≥ 0} be the volatility process. Then for t, h ≥ 0 the expected
value and covariance function of Vt have the following forms:

    E(Vt) = α0 + a′ E(Yt),
    cov(Vt, Vt+h) = a′ cov(Yt, Yt+h) a.
Proof: see Appendix P7.
In financial time series, the returns have negligible correlation while the squared returns
are significantly correlated; therefore we investigate the behavior of the second-order
properties of the increments of the COGARCH process. We assume that the volatility
process is strictly periodically stationary and non-negative.
Now we present the first and second order moments of the increment process G_t^{(p)} that is
defined in Corollary 3.2.
Proposition 4.4 Let G be a zero-mean simple semi-Lévy driven COGARCH process.
Then for t ≥ 0 and h ≥ p > 0,
(a)

    E(G_t^{(p)}) = 0,                                                             (4.3)
    cov(G_t^{(p)}, G_{t+h}^{(p)}) = 0.                                            (4.4)

(b) There exist m, m′ ∈ N0 such that t ∈ A_{ml+i}, i = 1, · · · , l, and t + p ∈ A_{(m+m′)l+i′},
i′ = i, · · · , l; then

    E((G_t^{(p)})^2) = E((Z_i)^2) ∫_t^{t_{ml+i}} E(V_s) λ(s) ds + E((Z_{i′})^2) ∫_{t_{(m+m′)l+i′−1}}^{t+p} E(V_s) λ(s) ds
                     + Σ_{r=i}^{m′l+i′−2} E((Z_{r+1})^2) ∫_{t_{ml+r}}^{t_{ml+r+1}} E(V_s) λ(s) ds.           (4.5)

Moreover, there exist n, n′ ∈ N0 (n ≥ m and n′ ≥ m′) such that t + h ∈ A_{nl+j}, j = 1, · · · , l,
and t + h + p ∈ A_{(n+n′)l+j′}, j′ = j, · · · , l; then

    cov((G_t^{(p)})^2, (G_{t+h}^{(p)})^2) = E((Z_j)^2) a′ ∫_{t+h}^{t_{nl+j}} λ(s) E(J_{t+p,s−}) cov((G_t^{(p)})^2, Y_{t+p}) ds
        + E((Z_{j′})^2) a′ ∫_{t_{(n+n′)l+j′−1}}^{t+h+p} λ(s) E(J_{t+p,s−}) cov((G_t^{(p)})^2, Y_{t+p}) ds
        + Σ_{r=j}^{n′l+j′−2} E((Z_{r+1})^2) a′ ∫_{t_{nl+r}}^{t_{nl+r+1}} λ(s) E(J_{t+p,s−}) cov((G_t^{(p)})^2, Y_{t+p}) ds.   (4.6)
Proof: see Appendix P8.
Remark 4.2 If we assume that ∫_R z^3 ν_s(dz) = 0 for all s ≥ 0, then

    cov((G_t^{(p)})^2, Y_{t+p}) = 2E(I_{t+p} Y_{t+p}) − 2E(J_{t,t+p}) E(I_t Y_t)
                                − cov(Y_{t+p}) + cov(Y_{t+p}, Y_t) − ∫_{t+}^{t+p} cov(Y_{t+p}, Y_{s−}) ds B′ e,

where I_t := ∫_0^t G_{s−} √V_s dS_s and

    E(I_t Y_t) = B ∫_0^t E(I_s Y_s) ds + α0 ∫_0^t ∫_R E(I_s Y_s) z^2 ν_s(dz) ds.
5   Simulation

In this section we simulate the simple semi-Lévy process defined by (2.2). This process is
a compound Poisson process with time-varying arrival rate determined by Λ(t) defined by (2.1).
Then we verify by simulation the theoretical results concerning the PC structure of the
increments of the SS-COGARCH(p,q) process (Gt)t≥0 defined by (2.5). For this we simulate
the state process (Yt)t≥0 defined by (2.7) at jump time points and non-jump time points
using its random recurrence equation (3.1). Then we evaluate the discretized version of the
volatility process (Vt)t≥0 defined by (2.6) and the corresponding SS-COGARCH(p,q) process
(Gt)t≥0. Finally, we verify the PC structure of the increments of the SS-COGARCH
process by following the method of [12].
For simulating the simple semi-Lévy process defined by (2.2) with the underlying Poisson
process (N(t))_{t≥0}, we consider T1 as the time of the first jump and Tn, n = 2, 3, · · ·, the
time intervals between the (n − 1)th and nth jumps. Then Υn = Σ_{j=1}^{n} Tj, n ∈ N, are the
arrival times and Υ0 = 0. Therefore, for n = 1, 2, · · ·,

    F_{Tn}^{s}(x) := P(Tn ≤ x | Υ_{n−1} = s)
                  = 1 − P(N(s + x) − N(s) = 0)
                  = 1 − e^{−Λ(s+x)+Λ(s)},                                         (5.1)

where Λ(t) is defined by (2.1). The arrival times Υ1, Υ2, · · · are generated by the following
algorithm.
1. Generate the independent and identically distributed (iid) sequence U1, U2, · · · from
Uniform(0,1). By (5.1) and Υ0 = 0, the first arrival time Υ1 = T1 has distribution
F_{T1}^{0}(x) = 1 − e^{−Λ(x)}. Therefore

    Λ(Υ1) =_d −ln(1 − U),

where U denotes a Uniform(0,1) random variable. So by generating U1, Υ1 = Λ^{−1}(−ln(1 − U1))
can be considered as a generated sample of the first arrival time. If Υ_{n−1}, n = 2, 3, · · ·,
is the (n − 1)th evaluated arrival time, then by (5.1) Tn has distribution F_{Tn}^{Υ_{n−1}}(x) =
1 − e^{−Λ(x+Υ_{n−1})+Λ(Υ_{n−1})}. Therefore

    Λ(Υn) =_d Λ(Υ_{n−1}) − ln(1 − U).

So by generating Un, Υn = Λ^{−1}(Λ(Υ_{n−1}) − ln(1 − Un)) is a generated sample of the
nth arrival time. Thus, applying the iid sample U1, U2, · · ·, we can successively evaluate
the nth arrival time by (5.1); for details see [10, p.99]. Having the periodic
intensity function λ(u) in (2.1), one can evaluate Λ^{−1}(·) with available software.
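A minimal sketch of this step, assuming the periodic rate λ(t) = −cos(πt/6) + 4 used later in Example 5.1; Λ is accumulated on a fine grid and Λ^{−1} is obtained by interpolation, standing in for the "available software" mentioned above.

```python
import numpy as np

def make_Lambda(lam, grid_step=1e-3, horizon=1000.0):
    """Numerically accumulate Lambda(t) = int_0^t lam(u) du and provide its inverse."""
    grid = np.arange(0.0, horizon + grid_step, grid_step)
    increments = 0.5 * (lam(grid[1:]) + lam(grid[:-1])) * grid_step   # trapezoid rule
    Lam = np.concatenate(([0.0], np.cumsum(increments)))
    Lambda = lambda t: np.interp(t, grid, Lam)
    Lambda_inv = lambda y: np.interp(y, Lam, grid)   # Lam is increasing, so this inverts it
    return Lambda, Lambda_inv

def simulate_arrival_times(lam, n_jumps, seed=0):
    """Generate Ups_1, ..., Ups_n via Lambda(Ups_n) = Lambda(Ups_{n-1}) - ln(1 - U_n)."""
    _, Lambda_inv = make_Lambda(lam)
    rng = np.random.default_rng(seed)
    arrivals, lam_cum = [], 0.0
    for _ in range(n_jumps):
        lam_cum -= np.log(1.0 - rng.uniform())
        arrivals.append(Lambda_inv(lam_cum))
    return np.array(arrivals)

if __name__ == "__main__":
    lam = lambda t: -np.cos(np.pi * t / 6.0) + 4.0
    print(simulate_arrival_times(lam, 10))
```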
2. Consider some periodic drift function Ht and generate the successive jump sizes Zn
independently, where Zn has distribution Fj(·) if the corresponding arrival time belongs to
Dj = ∪_{k=0}^{∞} A_{j+kl}, j = 1, 2, · · · , l. Now evaluate the simple semi-Lévy process (St)t≥0
from (2.2) as

    St = Ht + Σ_{n=1}^{N(t)} Σ_{j=1}^{l} Z_n^j I_{{Υn ∈ Dj}}.                      (5.2)
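Step 2 can be sketched as follows; the subinterval lengths and jump distributions are the ones assumed in Example 5.1, the drift Ht is taken to be zero for illustration, and the arrival times are passed in as a given array.

```python
import numpy as np

# lengths of the subintervals A_1,...,A_l of one period (Example 5.1), summing to tau = 12
SUB_LENGTHS = [2, 2, 2, 3, 3]
JUMP_PARAMS = [(3.0, 1.0), (0.0, 1.0), (1.25, 1.25), (4.0, 1.0), (0.0, 1.5)]   # (mu, sigma^2)

def subinterval_index(t, sub_lengths=SUB_LENGTHS):
    """Return j (0-based) such that t mod tau falls in A_{j+1}."""
    edges = np.cumsum(sub_lengths)
    return int(np.searchsorted(edges, t % edges[-1], side="right"))

def simulate_jump_sizes(arrivals, seed=1):
    """Draw Z_n from the distribution attached to the subinterval containing Ups_n."""
    rng = np.random.default_rng(seed)
    sizes = []
    for t in arrivals:
        mu, var = JUMP_PARAMS[subinterval_index(t)]
        sizes.append(rng.normal(mu, np.sqrt(var)))
    return np.array(sizes)

def S_of_t(t, arrivals, sizes, drift=lambda t: 0.0):
    """Evaluate (5.2): S_t = H_t + sum of jump sizes with arrival time <= t."""
    return drift(t) + sizes[arrivals <= t].sum()

if __name__ == "__main__":
    arrivals = np.array([0.7, 1.9, 4.2, 7.5, 11.3, 13.0])   # toy arrival times
    sizes = simulate_jump_sizes(arrivals)
    print(S_of_t(12.0, arrivals, sizes))
```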
Now we consider the following steps for the simulation of the SS-COGARCH(p,q) process
defined by (2.5)-(2.7).
1. Consider p and q as integers such that q ≥ p ≥ 1.
2. Choose real parameters β1, · · · , βq, α1, · · · , αp and α0 > 0 such that the eigenvalues
of the matrix B defined by (2.8) have strictly negative real parts and conditions (3.2),
(3.5) and (3.6) are satisfied.
3. Having evaluated the arrival times Υn by the above algorithm, generate the state
process (Y_{Υn})_{n∈N} by the following recurrence equation, after assuming some initial
value for Y_{Υ0}:

    Y_{Υn} = e^{B(Υn − Υ_{n−1})} Y_{Υ_{n−1}} + e (α0 + a′ e^{B(Υn − Υ_{n−1})} Y_{Υ_{n−1}}) Z_n^2,    n ∈ N.

This recurrence equation is obtained by replacing s = Υ_{n−1} and t = Υn in (3.1). The jump
size Zn can be simulated by (5.2) for predefined distributions µ1, µ2, · · · , µl.
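A sketch of step 3; scipy's expm stands in for the matrix exponential e^{BΔ}, and the choice a = (α1, 0, 0)′ and e = (0, 0, 1)′ follows the usual COGARCH(p,q) state-space convention, which is an assumption here rather than a restatement of (2.8).

```python
import numpy as np
from scipy.linalg import expm

def simulate_state_at_jumps(B, a, alpha0, arrivals, jump_sizes, Y0):
    """Iterate Y_{Ups_n} = e^{B dt} Y_{n-1} + e (alpha0 + a' e^{B dt} Y_{n-1}) Z_n^2."""
    q = B.shape[0]
    e = np.zeros(q); e[-1] = 1.0                    # e = (0, ..., 0, 1)'
    Y = np.asarray(Y0, dtype=float)
    out, prev = [], 0.0
    for t, z in zip(arrivals, jump_sizes):
        drifted = expm(B * (t - prev)) @ Y          # e^{B(Ups_n - Ups_{n-1})} Y_{Ups_{n-1}}
        Y = drifted + e * (alpha0 + a @ drifted) * z**2
        out.append(Y.copy())
        prev = t
    return np.array(out)

if __name__ == "__main__":
    # SS-COGARCH(1,3) parameters of Example 5.1 (beta = (5, 9, 5), alpha1 = 0.03)
    B = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-5.0, -9.0, -5.0]])
    a = np.array([0.03, 0.0, 0.0])
    Y = simulate_state_at_jumps(B, a, alpha0=1.0,
                                arrivals=np.array([0.7, 1.9, 4.2]),
                                jump_sizes=np.array([2.1, -0.3, 1.4]),
                                Y0=np.array([8.3580, 2.3377, 0.9040]))
    print(Y)
```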
4. Since the simple semi-Lévy process (St)t≥0 in (2.2) has no jump over [Υ_{n−1}, Υ_n^−], n ∈ N,
it follows from (2.7) that dYt = B Yt dt for t ∈ [Υ_{n−1}, Υ_n^−]. Therefore, for t ∈ [Υ_{n−1}, Υ_n^−],

    e^{−Bt} dYt = B e^{−Bt} Yt dt,

so that

    d(e^{−Bt} Yt) = 0.

From this it follows that for t ∈ [Υ_{n−1}, Υ_n^−]

    ∫_{Υ_{n−1}}^{t} d(e^{−Bu} Yu) = 0,

hence

    Yt = e^{B(t−Υ_{n−1})} Y_{Υ_{n−1}}.                                              (5.3)

By (2.6) and (5.3), the discrete-time version of the process (Vt)t≥0 is given by

    V_{Υn} = α0 + a′ Y_{Υ_n^−} = α0 + a′ e^{B(Υn − Υ_{n−1})} Y_{Υ_{n−1}},             (5.4)
and using that the process (St)t≥0 in (2.2) has one jump, at time Υn, over [Υ_{n−1}, Υn], it
follows from (2.5) that

    G_{Υn} − G_{Υ_{n−1}} = ∫_0^{Υn} √V_u dS_u − ∫_0^{Υ_{n−1}} √V_u dS_u
                         = ∫_{Υ_{n−1}}^{Υn} √V_u dS_u
                         = √V_{Υn} Z_n.                                             (5.5)
5. Having evaluated the values of the process (Y_{Υn})_{n∈N} and G0 = 0, generate the process
(V_{Υn})_{n∈N} by (5.4) and the corresponding process (G_{Υn})_{n∈N} by (5.5).
6. Finally, using the values of V_{Υn} and G_{Υn} provided by the previous step, evaluate the
sampled processes (V_{ih})_{i∈N} and (G_{ih})_{i∈N} for some h > 0 as follows:
(i) Suppose that ih ∈ [Υ_{n−1}, Υn), for i, n ∈ N. Since the simple semi-Lévy process
(St)t≥0 in (2.2) has no jump over [Υ_{n−1}, Υn), it follows from (2.6) and (5.3) that for
ih ∈ [Υ_{n−1}, Υn)

    V_{ih} = α0 + a′ Y_{ih^−} = α0 + a′ e^{B(ih−Υ_{n−1})} Y_{Υ_{n−1}};

note that if ih = Υ_{n−1}, then it follows from step 4 that

    V_{ih} = α0 + a′ Y_{Υ_{n−1}^−} = α0 + a′ e^{B(Υ_{n−1}−Υ_{n−2})} Y_{Υ_{n−2}}.

(ii) Using that the process (St)t≥0 in (2.2) has no jump over [Υ_{n−1}, ih], it follows from
(2.5) that

    G_{ih} − G_{Υ_{n−1}} = ∫_0^{ih} √V_u dS_u − ∫_0^{Υ_{n−1}} √V_u dS_u = ∫_{Υ_{n−1}}^{ih} √V_u dS_u = 0,

hence

    G_{ih} = G_{Υ_{n−1}}.
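Steps 4-6 can then be sketched as below; the state values at jump times are assumed to come from the previous sketch, and the boundary convention at ih = Υ_{n−1} follows the note in (i).

```python
import numpy as np
from scipy.linalg import expm

def sample_V_and_G(B, a, alpha0, arrivals, jump_sizes, Y_at_jumps, Y0, h, n_samples):
    """Discretize the volatility and price processes on the grid {h, 2h, ...} using (5.3)-(5.5)."""
    # volatility and cumulative G at jump times
    V_jump = np.empty(len(arrivals))
    G_jump = np.empty(len(arrivals))
    g, prev_t, prev_Y = 0.0, 0.0, np.asarray(Y0, dtype=float)
    for n, (t, z) in enumerate(zip(arrivals, jump_sizes)):
        V_jump[n] = alpha0 + a @ (expm(B * (t - prev_t)) @ prev_Y)   # (5.4)
        g += np.sqrt(max(V_jump[n], 0.0)) * z                        # (5.5)
        G_jump[n] = g
        prev_t, prev_Y = t, Y_at_jumps[n]
    # values on the sampling grid
    V_grid, G_grid = [], []
    for i in range(1, n_samples + 1):
        t = i * h
        n = np.searchsorted(arrivals, t)        # jumps strictly before t (previous jump if t is a jump time)
        t0 = arrivals[n - 1] if n > 0 else 0.0
        Yn = Y_at_jumps[n - 1] if n > 0 else np.asarray(Y0, dtype=float)
        V_grid.append(alpha0 + a @ (expm(B * (t - t0)) @ Yn))
        G_grid.append(G_jump[n - 1] if n > 0 else 0.0)
    return np.array(V_grid), np.array(G_grid)

if __name__ == "__main__":
    B = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-5.0, -9.0, -5.0]])
    a = np.array([0.03, 0.0, 0.0])
    arrivals = np.array([0.7, 1.9, 4.2])
    jumps = np.array([2.1, -0.3, 1.4])
    Y_jumps = np.tile(np.array([8.0, 2.0, 1.0]), (3, 1))   # placeholder state values at jumps
    V, G = sample_V_and_G(B, a, 1.0, arrivals, jumps, Y_jumps, np.array([8.0, 2.0, 1.0]), 1.0, 5)
    print(V); print(G)
```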
5.1   Test for the PC Structure of the increments process

To detect the PC structure of a process, Hurd and Miamee [15] and Dudek et al. [12]
showed that their proposed spectral coherence can be used to test whether a discrete-time
process is PC. Their method is based on the fact that the support of the spectral coherence
of a PC process with period % is contained in the subset of parallel lines λs = λr + 2jπ/%
for j = −(% − 1), · · · , −1, 0, 1, · · · , (% − 1). The squared coherence statistic for the series
X1, X2, · · · , XN is computed as follows:

    |γ̂(λr, λs, M)|^2 = | Σ_{m=1}^{M−1} X̃N(λ_{r−M/2+m}) conj(X̃N(λ_{s−M/2+m})) |^2
                       / [ Σ_{m=1}^{M−1} |X̃N(λ_{r−M/2+m})|^2  Σ_{m=1}^{M−1} |X̃N(λ_{s−M/2+m})|^2 ],

where X̃N(λj) = Σ_{k=1}^{N} Xk e^{−iλj k} is the discrete Fourier transform of Xk for
j = 0, 1, · · · , N − 1, λj = 2πj/N and λj ∈ (0, 2π]. This statistic satisfies 0 ≤ |γ̂(λr, λs, M)|^2 ≤ 1.
Under the null hypothesis that the X̃N(λj) are complex Gaussian with uncorrelated real and
imaginary parts for each j, the squared coherence statistic has probability density [15]

    p(|γ|^2) = (M − 1)(1 − |γ|^2)^{M−2},      0 ≤ |γ|^2 ≤ 1.

For type I error α, the squared coherence α-threshold is determined by [15]

    x_α := |γ|^2_α = 1 − e^{log(α)/(M−1)}.

The values of the statistic |γ̂(λr, λs, M)|^2 are computed for all r and s such that the pair
(λr, λs) ∈ [0, 2π) × [0, 2π). By plotting the values of the statistic that exceed the α-threshold,
if there are some significant values that lie along the parallel, equally spaced diagonal lines,
then Xk is PC. The graph of these significant values indicates the presence of the subset of
parallel lines s = r + jN/% for j = −(% − 1), · · · , −1, 0, 1, · · · , (% − 1).
To ensure that the periodic structure of the series Xk is not a consequence of a periodic mean,
it is recommended to remove the periodic mean from the series first.
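The squared coherence statistic and its α-threshold can be computed as in the following sketch (a direct transcription of the formulas above; wrapping out-of-range frequency indices modulo N is an implementation choice, not something specified in the text).

```python
import numpy as np

def squared_coherence(x, M):
    """|gamma_hat(lambda_r, lambda_s, M)|^2 for all frequency pairs (r, s)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    X = np.fft.fft(x)                                  # X[j] ~ tilde X_N(lambda_j), lambda_j = 2*pi*j/N
    idx = (np.arange(N)[:, None] - M // 2 + np.arange(1, M)[None, :]) % N
    W = X[idx]                                         # W[r, m] = tilde X_N(lambda_{r - M/2 + m})
    num = np.abs(np.einsum("rm,sm->rs", W, W.conj())) ** 2
    den = np.sum(np.abs(W) ** 2, axis=1)
    return num / np.outer(den, den)

def coherence_threshold(alpha, M):
    """alpha-threshold x_alpha = 1 - exp(log(alpha) / (M - 1))."""
    return 1.0 - np.exp(np.log(alpha) / (M - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=480) * (1.0 + 0.5 * np.cos(2 * np.pi * np.arange(480) / 12))  # toy PC-like series
    gamma2 = squared_coherence(x, M=240)
    print((gamma2 > coherence_threshold(0.05, 240)).mean())
```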
Example 5.1 Let (St)t≥0 be a simple semi-Lévy process with rate function λ(t) =
−cos(πt/6) + 4, for t ≥ 0. Furthermore, τ = 12, l = 5 and the lengths of the successive
partitions of each period interval are 2, 2, 2, 3, 3. Moreover, the distributions of the jump
sizes on these subintervals are assumed to be N(3, 1), N(0, 1), N(1.25, 1.25), N(4, 1), and
N(0, 1.5), where N(µ, σ^2) denotes a Normal distribution with mean µ and variance σ^2.
In this example we consider the SS-COGARCH(1,3) process with parameters α0 = 1,
α1 = 0.03, β1 = 5, β2 = 9 and β3 = 5. Thus, the matrix B is

    B = [  0   1   0
           0   0   1
          −5  −9  −5 ],

and conditions (3.2), (3.5) and (3.6) are satisfied. For this SS-COGARCH process we
simulate G_{Υn} for the duration of 40 period intervals with the parameters specified above,
Y0 = (8.3580, 2.3377, 0.9040)′ and G0 = 0. Then, using step 6, we sample from this process
on an equally spaced partition with distance one (h = 1). So we get 480 discretized samples
of these 40 period intervals. Then we proceed to verify that the increments of the sampled
process form a PC process.
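As a quick illustration, the companion matrix B of this example and the eigenvalue requirement of step 2 can be checked numerically; building B from (β1, β2, β3) as below assumes the usual companion layout of (2.8).

```python
import numpy as np

def companion_B(betas):
    """Companion matrix built from beta_1, ..., beta_q (assumed layout of (2.8))."""
    q = len(betas)
    B = np.zeros((q, q))
    B[:-1, 1:] = np.eye(q - 1)
    B[-1, :] = -np.asarray(betas, dtype=float)[::-1]   # last row: -beta_q, ..., -beta_1
    return B

if __name__ == "__main__":
    B = companion_B([5.0, 9.0, 5.0])       # beta_1 = 5, beta_2 = 9, beta_3 = 5
    eig = np.linalg.eigvals(B)
    print(B)
    print("eigenvalues:", eig)
    print("all real parts negative:", bool(np.all(eig.real < 0)))
```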
Figure 1: Top: the increments of the simulated process {Gi : i ∈ N} of size 480; bottom left: the sample
autocorrelation plot of G_t^{(1)}; bottom right: the significant values of the sample spectral coherence with α = 0.05.
In Figure 1, the graph of the increments of the sampled process of size 480 (top) and the
sample autocorrelation graph of this process (bottom left) are presented. The bottom
right graph shows the sample coherence statistic values, for a specified collection of pairs
(λr, λs) ∈ [0, 2π) × [0, 2π) and M = 240, that exceed the threshold corresponding to
α = 0.05. The parallel lines in the sample spectral coherence confirm that the increments of
the sampled process are PC. Also in this graph, the significant off-diagonal is at |r − s| = 40,
which matches the first peak at 40 and shows that there is a second-order periodic structure
with period % = 480/40 = 12.
Table 1: Some values of |γ̂(λr, λs, M)|^2 for selected r, s with α = 0.05 and x_α = 0.0125.

Some values of the sample coherence statistic, for the test that the increments of the
sampled process have period 12, are presented in Table 1. As the corresponding α = 0.05
threshold shows, the test is significant on the corresponding parallel lines of Figure 1.
6   Appendix

P1: Proof of Lemma 2.2
For any t ≥ 0 there exist j = 1, · · · , l and m ∈ N0 such that t ∈ A_{j+ml}. Thus, using
Definition 2.2, Definition 2.3 and the fact that (St)t≥0 has independent increments, we have

    E(e^{iwSt}) = E(e^{iw(Dt + Σ_{n=1}^{N(t)} Zn)}) = e^{iwDt} E(e^{iw Σ_{n=1}^{N(t)} Zn})
        = e^{iwDt} E(e^{iw Σ_{n=1}^{N(t1)} Z_n^1}) × E(e^{iw Σ_{n=N(t1)+1}^{N(t2)} Z_n^2}) × · · · × E(e^{iw Σ_{n=N(t_{ml+j−1})+1}^{N(t)} Z_n^j})
        = e^{iwDt} Π_{k=0}^{m−1} Π_{r=1}^{l} E(e^{iw Σ_{n=N(t_{kl+r−1})+1}^{N(t_{kl+r})} Z_n^r})
          × Π_{r=1}^{j−1} E(e^{iw Σ_{n=N(t_{ml+r−1})+1}^{N(t_{ml+r})} Z_n^r}) × E(e^{iw Σ_{n=N(t_{ml+j−1})+1}^{N(t)} Z_n^j}).

Since for r = 1, · · · , l the Z_n^r are independent and have distribution Fr, it follows from
Definition 2.2 and the conditional expected value, for k = 0, · · · , m and r = 1, · · · , l, that

    E(e^{iw Σ_{n=N(t_{kl+r−1})+1}^{N(t_{kl+r})} Z_n^r})
        = E( E( e^{iw Σ_{n=1}^{N(t_{kl+r})−N(t_{kl+r−1})} Z_n^r} | N(t_{kl+r}) − N(t_{kl+r−1}) = N ) )
        = Σ_{N=0}^{∞} E(e^{iw Σ_{n=1}^{N} Z_n^r}) P(N(t_{kl+r}) − N(t_{kl+r−1}) = N)
        = Σ_{N=0}^{∞} (E(e^{iwZ_n^r}))^N e^{−(Λ(t_{kl+r})−Λ(t_{kl+r−1}))} (Λ(t_{kl+r}) − Λ(t_{kl+r−1}))^N / N!
        = e^{−(Λ(t_{kl+r})−Λ(t_{kl+r−1}))} Σ_{N=0}^{∞} [ (Λ(t_{kl+r}) − Λ(t_{kl+r−1})) ∫_R e^{iwz} Fr(dz) ]^N / N!
        = exp{ (Λ(t_{kl+r}) − Λ(t_{kl+r−1})) ∫_R (e^{iwz} − 1) Fr(dz) }.

Therefore, writing

    Θt(dz) := Σ_{k=0}^{m−1} Σ_{r=1}^{l} (Λ(t_{kl+r}) − Λ(t_{kl+r−1})) Fr(dz)
              + Σ_{r=1}^{j−1} (Λ(t_{ml+r}) − Λ(t_{ml+r−1})) Fr(dz) + (Λ(t) − Λ(t_{ml+j−1})) Fj(dz),

we obtain

    E(e^{iwSt}) = e^{iwDt} exp{ ∫_R (e^{iwz} − 1) Θt(dz) }
                = exp{ iw( Dt + ∫_{|z|≤1} z Θt(dz) ) + ∫_R (e^{iwz} − 1 − iwz I_{{|z|≤1}}) Θt(dz) }.
P2: Proof of Corollary 2.3
It is sufficient to prove that for any 0 ≤ s < t and K ∈ N,

    St − Ss =_d S_{t+Kτ} − S_{s+Kτ}.

For any 0 ≤ s < t there exist m, m′ ∈ N0 and j, j′ = 1, · · · , l such that s ∈ A_{j+ml} and
t ∈ A_{j′+(m+m′)l}. Thus

    E(e^{iw(St−Ss)}) = E(e^{iw(Dt−Ds + Σ_{n=1}^{N(t)} Zn − Σ_{n=1}^{N(s)} Zn)}) = e^{iw(Dt−Ds)} E(e^{iw Σ_{n=N(s)+1}^{N(t)} Zn})
        = e^{iw(Dt−Ds)} E(e^{iw Σ_{n=N(s)+1}^{N(t_{ml+j})} Z_n^j}) × Π_{r=j+1}^{l} E(e^{iw Σ_{n=N(t_{ml+r−1})+1}^{N(t_{ml+r})} Z_n^r})
          × Π_{k=m+1}^{m+m′−1} Π_{r=1}^{l} E(e^{iw Σ_{n=N(t_{kl+r−1})+1}^{N(t_{kl+r})} Z_n^r})
          × Π_{r=1}^{j′−1} E(e^{iw Σ_{n=N(t_{(m+m′)l+r−1})+1}^{N(t_{(m+m′)l+r})} Z_n^r}) × E(e^{iw Σ_{n=N(t_{(m+m′)l+j′−1})+1}^{N(t)} Z_n^{j′}}).

By a similar method as in the proof of Lemma 2.2, we have

    E(e^{iw(St−Ss)}) = e^{iw(Dt−Ds)} exp{ ∫_R (e^{iwz} − 1) [ (Λ(t_{j+ml}) − Λ(s)) Fj(dz) + · · · + (Λ(t_{(m+1)l}) − Λ(t_{(m+1)l−1})) Fl(dz)
        + Σ_{k=1}^{m′−1} (Λ(t_{(m+k)l+1}) − Λ(t_{(m+k)l})) F1(dz) + · · · + Σ_{k=1}^{m′−1} (Λ(t_{(m+k+1)l}) − Λ(t_{(m+k+1)l−1})) Fl(dz)
        + (Λ(t_{(m+m′)l+1}) − Λ(t_{(m+m′)l})) F1(dz) + · · · + (Λ(t) − Λ(t_{(m+m′)l+j′−1})) F_{j′}(dz) ] }.

Since s + Kτ ∈ A_{j+(m+K)l} and t + Kτ ∈ A_{j′+(m+m′+K)l}, it follows by the same method used
in the computation of the characteristic function of St − Ss that

    E(e^{iw(S_{t+Kτ}−S_{s+Kτ})}) = e^{iw(D_{t+Kτ}−D_{s+Kτ})} exp{ ∫_R (e^{iwz} − 1) [ (Λ(t_{j+(m+K)l}) − Λ(s+Kτ)) Fj(dz) + · · ·
        + (Λ(t_{(m+K+1)l}) − Λ(t_{(m+K+1)l−1})) Fl(dz)
        + Σ_{k=1}^{m′−1} (Λ(t_{(m+K+k)l+1}) − Λ(t_{(m+K+k)l})) F1(dz) + · · · + Σ_{k=1}^{m′−1} (Λ(t_{(m+K+k+1)l}) − Λ(t_{(m+K+k+1)l−1})) Fl(dz)
        + (Λ(t_{(m+m′+K)l+1}) − Λ(t_{(m+m′+K)l})) F1(dz) + · · · + (Λ(t+Kτ) − Λ(t_{(m+m′+K)l+j′−1})) F_{j′}(dz) ] }.

By Definition 2.2 and Definition 2.3, we have for partition points 0 ≤ ti < tj and K ∈ N

    Λ(tj) − Λ(ti) = Λ(tj + Kτ) − Λ(ti + Kτ) = Λ(t_{j+Kl}) − Λ(t_{i+Kl}),            (6.1)

and Dt = D_{t+Kτ} for t ≥ 0. Thus

    E(e^{iw(St−Ss)}) = E(e^{iw(S_{t+Kτ}−S_{s+Kτ})}).
P3: Proof of Theorem 3.1
Proof of (a): Let (St)t≥0 be the simple semi-Lévy process defined by (2.2) and Zi be the ith
jump size. Furthermore, let T1 denote the time that the first jump occurs and Tj, j = 2, 3, · · ·,
the time intervals between the (j − 1)th and jth jumps, and Υn := Σ_{j=1}^{n} Tj for n ∈ N,
with Υ0 = 0. For n ∈ N define

    Qn := I + (Zn)^2 e a′ e^{B(Υn − Υ_{n−1})},        Rn := α0 (Zn)^2 e.

It follows from [8] that Yt satisfies

    Yt = J_{s,t} Ys + K_{s,t},      0 ≤ s ≤ t,                                     (6.2)

where

    J_{s,t} = e^{B(t−Υ_{N(t)})} Q_{N(t)} · · · Q_{N(s)+2} (I + (Z_{N(s)+1})^2 e a′ e^{B(Υ_{N(s)+1}−s)}),
    K_{s,t} = e^{B(t−Υ_{N(t)})} (R_{N(t)} + Q_{N(t)} R_{N(t)−1} + · · · + Q_{N(t)} · · · Q_{N(s)+2} R_{N(s)+1}).
In order to prove that the sequence (J_{s+kτ,t+kτ}, K_{s+kτ,t+kτ})_{k≥0} is independent and
identically distributed, let mτ ≤ s ≤ t ≤ (m + 1)τ with m ∈ N0. We define

    Υ_1^{(m+1)} := Υ_{N(s)+1} − s,            Z_1^{(m+1)} := Z_{N(s)+1},
    ...
    Υ_{N(t)−N(s)}^{(m+1)} := Υ_{N(t)} − s,     Z_{N(t)−N(s)}^{(m+1)} := Z_{N(t)}.                   (6.3)

Therefore (J_{s,t}, K_{s,t}) is a function of the random vector (N(t) − N(s), Υ_1^{(m+1)}, · · · ,
Υ_{N(t)−N(s)}^{(m+1)}, Z_1^{(m+1)}, · · · , Z_{N(t)−N(s)}^{(m+1)}). Using the fact that the increments of the
Poisson process {N(t) : t ≥ 0} are independent and that the density function of a random
vector (X1, · · · , Xn) can be computed as

    f_{X1,···,Xn}(x1, · · · , xn) = lim_{δ1,···,δn→0} P(x1 < X1 ≤ x1 + δ1, · · · , xn < Xn ≤ xn + δn) / (δ1 · · · δn),

we can give the conditional density of (Υ_1^{(m+1)}, · · · , Υ_n^{(m+1)}) given N(t) − N(s) = n as follows:

    f_{Υ_1^{(m+1)},···,Υ_n^{(m+1)} | N(t)−N(s)=n}(s1, · · · , sn) = n! λ(s1) × · · · × λ(sn) / (Λ(t) − Λ(s))^n,    0 < s1 < · · · < sn < τ.

Since the increment process N(t) − N(s) is Poisson with mean Λ(t) − Λ(s) such that
Λ(t) − Λ(s) = Λ(t + kτ) − Λ(s + kτ) for all k ∈ N0, it follows from Definition 2.1 and the
conditional density that

    (J_{s,t}, K_{s,t}) =_d (J_{s+kτ,t+kτ}, K_{s+kτ,t+kτ}).

The independence of the sequence (J_{s+kτ,t+kτ}, K_{s+kτ,t+kτ}) is clear, since J_{s+kτ,t+kτ} and
K_{s+kτ,t+kτ} are constructed only from the segment {Su : (s + kτ) ≤ u ≤ (t + kτ)} of the
semi-Lévy process S.
If 0 ≤ s ≤ t, there are n, m ∈ N0 such that s ∈ [nτ, (n + 1)τ) and t ∈ [(n + m)τ, (n + m + 1)τ).
By iterating (3.1) we obtain

    Yt = J_{(n+m)τ,t} J_{(n+m−1)τ,(n+m)τ} · · · J_{(n+1)τ,(n+2)τ} J_{s,(n+1)τ} Ys + [ K_{(n+m)τ,t}
       + J_{(n+m)τ,t} K_{(n+m−1)τ,(n+m)τ} + · · · + J_{(n+m)τ,t} J_{(n+m−1)τ,(n+m)τ} · · · J_{(n+1)τ,(n+2)τ} K_{s,(n+1)τ} ].

It follows from [8] and (6.2) that

    J_{s,t} = J_{(n+m)τ,t} J_{(n+m−1)τ,(n+m)τ} · · · J_{(n+1)τ,(n+2)τ} J_{s,(n+1)τ},
    K_{s,t} = K_{(n+m)τ,t} + J_{(n+m)τ,t} K_{(n+m−1)τ,(n+m)τ} + · · · + J_{(n+m)τ,t} · · · J_{(n+1)τ,(n+2)τ} K_{s,(n+1)τ},

therefore, for all k ∈ N0,

    (J_{s,t}, K_{s,t}) =_d (J_{s+kτ,t+kτ}, K_{s+kτ,t+kτ}).
(b) By iterating (3.1) we obtain

    Y_{t+mτ} = J_{t+(m−1)τ,t+mτ} · · · J_{t,t+τ} Yt + [ K_{t+(m−1)τ,t+mτ} + J_{t+(m−1)τ,t+mτ} K_{t+(m−2)τ,t+(m−1)τ}
             + · · · + J_{t+(m−1)τ,t+mτ} · · · J_{t+τ,t+2τ} K_{t,t+τ} ].

Since (J_{t+(m−1)τ,t+mτ}, K_{t+(m−1)τ,t+mτ}), m ∈ N, are independent and identically distributed,
it follows immediately that

    Y_{t+mτ} =_d Π_{k=1}^{m} J_{t+(k−1)τ,t+kτ} Yt + K_{t,t+τ} + Σ_{k=1}^{m−1} J_{t,t+τ} · · · J_{t+(k−1)τ,t+kτ} K_{t+kτ,t+(k+1)τ}.

Note that K_{t,t+τ} + Σ_{k=1}^{m−1} J_{t,t+τ} · · · J_{t+(k−1)τ,t+kτ} K_{t+kτ,t+(k+1)τ} is the partial sum of the
infinite series

    U^{(t)} := K_{t,t+τ} + Σ_{k=1}^{∞} J_{t,t+τ} · · · J_{t+(k−1)τ,t+kτ} K_{t+kτ,t+(k+1)τ}.             (6.4)

Thus, using the general theory of random recurrence equations (see Bougerol and Picard [5],
Brandt [6] and Vervaat [22]) and condition (3.2), we prove the almost sure absolute
convergence of the series (6.4). Let P be such that ∆ := P^{−1} B P is diagonal. Then we have,
for t ≥ 0,

    ||e^{Bt}||_{B,r} = ||P e^{∆t} P^{−1}||_{B,r} = ||P^{−1} P e^{∆t} P^{−1} P||_r = ||e^{∆t}||_r = e^{ηt}.

Using (6.2), (6.3) and condition (3.2),

    E log||J_{t,t+τ}||_{B,r} ≤ E[ ητ + log(1 + (Z^{(2)}_{N(t+τ)−N(τ)})^2 ||ea′||_{B,r}) + · · · + log(1 + (Z^{(2)}_1)^2 ||ea′||_{B,r})
                               + log(1 + (Z^{(1)}_{N(τ)−N(t)})^2 ||ea′||_{B,r}) + · · · + log(1 + (Z^{(1)}_1)^2 ||ea′||_{B,r}) ] < 0,

and it follows from [8], (6.2) and (2.4) that

    E log||K_{t,t+τ}||_{B,r} ≤ E[ Σ_{j=1}^{N(t+τ)−N(t)} ( log(1 + (Z_{N(t+τ)−j+1})^2 ||ea′||_{B,r}) + (Z_{N(t+τ)−j+1})^2 ) ]
                               + log(α0 ||e||_{B,r}) < ∞.

Hence, the strong law of large numbers yields

    lim sup_{k→∞} (1/k) [ Σ_{j=1}^{k} log||J_{t+(j−1)τ,t+jτ}||_{B,r} + log||K_{t+kτ,t+(k+1)τ}||_{B,r} ] < 0   a.s.,

i.e.

    lim sup_{k→∞} ( ||J_{t,t+τ} · · · J_{t+(k−1)τ,t+kτ} K_{t+kτ,t+(k+1)τ}||_{B,r} )^{1/k} < 1   a.s.

From Cauchy's root criterion it follows that the series (6.4) is almost surely absolutely
convergent. Since the state process Y has cadlag paths, it follows that ||Yt||_{B,r} is almost
surely finite. Therefore

    ||Y_{t+mτ} − U^{(t)}||_{B,r} ≤ || Π_{k=1}^{m} J_{t+(k−1)τ,t+kτ} ||_{B,r} ||Yt − U_m^{(t)}||_{B,r} −→ 0   a.s.,

where U_m^{(t)} := K_{t+mτ,t+(m+1)τ} + Σ_{k=m}^{∞} J_{t+mτ,t+(m+1)τ} · · · J_{t+kτ,t+(k+1)τ} K_{t+(k+1)τ,t+(k+2)τ}.
It follows from [9] that Y_{t+mτ} converges in distribution to U^{(t)}, for fixed t ∈ [0, τ). That
U^{(t)} satisfies (3.3) and is the unique solution is clear by the general theory of random
recurrence equations.
(c) It suffices to show that for any s1, s2, · · · , sn and k ∈ N0

    (Y_{s1}, Y_{s2}, · · · , Y_{sn}) =_d (Y_{s1+kτ}, Y_{s2+kτ}, · · · , Y_{sn+kτ}).

Using the recursion equation (6.2) and the analysis used in (a), we obtain the above relation.
We give the proof for s1 ∈ [0, τ) and s2 ∈ [τ, 2τ); the general case is similar. We have

    Y_{s2} = J_{τ,s2} J_{s1,τ} Y_{s1} + K_{τ,s2} + J_{τ,s2} K_{s1,τ},
and
    Y_{s1} = J_{0,s1} Y0 + K_{0,s1}.

The random vector (Y_{s1}, Y_{s2}) is a function of (J_{0,s1}, K_{0,s1}, J_{s1,τ}, K_{s1,τ}, J_{τ,s2}, K_{τ,s2}, Y0),
and a similar argument shows that the random vector (Y_{s1+kτ}, Y_{s2+kτ}) is a function of
(J_{kτ,s1+kτ}, K_{kτ,s1+kτ}, J_{s1+kτ,(k+1)τ}, K_{s1+kτ,(k+1)τ}, J_{(k+1)τ,s2+kτ}, K_{(k+1)τ,s2+kτ}, Y_{kτ}).
Using (a) and the assumption Y0 =_d U^{(0)}, it follows that (Y_{s1}, Y_{s2}) =_d (Y_{s1+kτ}, Y_{s2+kτ}).
P4: Proof of Corollary 3.2
Since for all s ∈ [t, t + p] the process dSs is independent of Fs, it follows from (2.5) and
Corollary 3. that

    E(G_t^{(p)}) = ∫_t^{t+p} E(√V_s) E(S_{s+ds} − S_s)
                =_d ∫_t^{t+p} E(√V_{s+kτ}) E(S_{s+ds+kτ} − S_{s+kτ}) = E(G_{t+kτ}^{(p)}).

In order to prove that the covariance function of G_t^{(p)} is periodic, it suffices to show that

    E(G_t^{(p)} G_{t+h}^{(p)}) = E(G_{t+mτ}^{(p)} G_{t+h+mτ}^{(p)}).

Let E_{t+p} denote conditional expectation with respect to the σ-algebra F_{t+p}. Since the
increments of S on the interval (t + h, t + h + p] are independent of F_{t+p} and the increment
process G_t^{(p)} is F_{t+p}-measurable, we have

    E(G_t^{(p)} G_{t+h}^{(p)}) = E( G_t^{(p)} E_{t+p}(G_{t+h}^{(p)}) )
                              = E( ∫_{t+h}^{t+h+p} ∫_t^{t+p} √V_s √V_u dS_u E(dS_s) ).

Since √V_s √V_u dS_u is a function of (J_{u+du,s−}, K_{u+du,s−}, J_{u,u+du}, K_{u,u+du}, J_{u−,u}, K_{u−,u},
J_{t,u−}, K_{t,u−}, Yt) and this vector has the same distribution as (J_{u+du+kτ,s−+kτ}, K_{u+du+kτ,s−+kτ},
J_{u+kτ,u+du+kτ}, K_{u+kτ,u+du+kτ}, J_{u−+kτ,u+kτ}, K_{u−+kτ,u+kτ}, J_{t+kτ,u−+kτ}, K_{t+kτ,u−+kτ}, Y_{t+kτ}),
it follows that

    cov(G_t^{(p)}, G_{t+h}^{(p)}) = cov(G_{t+kτ}^{(p)}, G_{t+h+kτ}^{(p)}).
P5: Proof of Lemma 4.1
(a) Let Ỹt be the state process of a semi-Lévy driven COGARCH(1,1) process. Then

    Ỹt = J̃_{0,t} Ỹ0 + K̃_{0,t},                                                    (6.5)

where

    J̃_{0,t} = exp( ηt + Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2) ),
    K̃_{0,t} = α0 ||e||_{B,r} × exp( Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2) ) Σ_{i=1}^{N(t)} Z_i^2.

It follows from [8] that for all t ≥ 0

    ||J_{0,t}||_{B,r} ≤ exp( ηt + Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2) ),                      (6.6)
    ||K_{0,t}||_{B,r} ≤ α0 ||e||_{B,r} × exp( Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2) ) Σ_{i=1}^{N(t)} Z_i^2.   (6.7)

Now define a cadlag process {Xt : t ≥ 0} by

    Xt = −ηt − Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2),      t ≥ 0.

Then Xt is a negative simple pure-jump semi-Lévy process. It follows from Definition 2.1
and Remark 3.2 that

    E(e^{−cXt}) = E exp( cηt + c Σ_{i=1}^{N(t)} log(1 + ||ea′||_{B,r} Z_i^2) )
                = exp( cηt + ∫_0^t ∫_R ((1 + ||ea′||_{B,r} z^2)^c − 1) ν_s(dz) ds ).

Using an analysis similar to the one used in the proof of Proposition 3.2 of [17], it follows
from [8] that

    K̃_{0,t} = ||ea′||_{B,r}^{−1} α0 ||e||_{B,r} [ e^{−Xt} − η ∫_0^t e^{−(Xt−Xu)} du − 1 ].

It follows from (6.5), (6.6) and (6.7) that ||Yt||_{B,r} < Ỹt for all t ≥ 0. Thus E(St^2) < ∞
and E(St^4) < ∞ imply E(Yt) < ∞ and cov(Yt) < ∞, respectively.
In the proof of Theorem 3.1(a) we have seen that (3.2) implies that the sequence Ỹ_{mτ}
converges in distribution to a finite random vector Ũ, and the vector Ũ is the unique
solution of the random equation

    Ũ =_d J̃_0 Ũ + K̃_0,

where (J̃_0, K̃_0) =_d (J̃_{0,τ}, K̃_{0,τ}) and Ũ is independent of (J̃_0, K̃_0). It follows from (6.4),
(6.6) and (6.7) that U ≤ Ũ. Thus E(St^2) < ∞ and E(St^4) < ∞ imply E(U) < ∞ and
cov(U) < ∞, respectively.
P6: Proof of Lemma 4.2
Using (6.2) and the independence of Y_{mτ} and (J_{mτ,s1+mτ}, K_{mτ,s1+mτ}), we obtain

    E(Yt) = E(J_{mτ,s1+mτ}) E(Y_{mτ}) + E(K_{mτ,s1+mτ})
          = E(J_{0,s1}) E(U) + E(K_{0,s1}),

where the last equality follows from (J_{mτ,s1+mτ}, K_{mτ,s1+mτ}) =_d (J_{0,s1}, K_{0,s1}) and the
assumption of part (c) of Theorem 3.1.
For computing cov(Yt, Yt+h) it is sufficient to obtain E(Yt+h Yt′). It follows from the
recursion equations used in the proof of Theorem 3.1 that

    Yt = J_{mτ,t} Y_{mτ} + K_{mτ,t},
    Y_{t+h} = J_{nτ,t+h} J_{(n−1)τ,nτ} · · · J_{mτ,(m+1)τ} Y_{mτ}
            + [ K_{nτ,t+h} + J_{nτ,t+h} K_{(n−1)τ,nτ} + Σ_{i=1}^{n−m−1} J_{nτ,t+h} · · · J_{(n−i)τ,(n−i+1)τ} K_{(n−i−1)τ,(n−i)τ} ].

The relation (4.2) follows from the independence of the sequence (J_{kτ,s+kτ}, K_{kτ,s+kτ}) for
any k ∈ N0 and s ∈ [0, τ], and also from the independence of Y_{mτ} from this sequence for any
k ≥ m.
P7: Proof of Corollary 4.3
Since for fixed t, almost surely Vt = V_{t+} = α0 + a′ Yt, we obtain the expected value and
covariance function of the volatility process from (2.6).
P8: Proof of Proposition 4.4
(a) We imitate the proof of Theorem 6.1 of Brockwell, Chadraa, and Lindner [8]. Since S is
a martingale with zero mean, we have (4.3). It follows from the Ito isometry for square
integrable martingales as integrators (e.g. [20], IV.27) that

    E(G_t^{(p)} G_{t+h}^{(p)}) = E( ∫_0^{t+h+p} V_s I_{[t,t+p)}(s) I_{[t+h,t+h+p)}(s) d[S, S]_s ) = 0,

and hence (4.4) follows.
(b) It follows from partial integration that

    (G_t^{(p)})^2 = 2 ∫_{t+}^{t+p} G_{s−} dG_s + [G, G]_{t+p}
                 = 2 ∫_t^{t+p} G_{s−} √V_s dS_s + Σ_{t<s≤t+p} V_s (∆S_s)^2.

By an analysis similar to that used in (a), the compensation formula and [11], we have

    E((G_t^{(p)})^2) = E( Σ_{t<s≤t+p} V_s (∆S_s)^2 ) = ∫_t^{t+p} ∫_R E(V_s) z^2 ν_s(dz) ds.           (6.8)

From Remark 3.2 the relation (4.5) follows.
For the proof of (4.6): since the increments of S on the interval (t, t + p] are independent of
F_{t+p} and S has expectation 0, it follows that

    E_{t+p}( ∫_t^{t+p} G_{s−} √V_s dS_s ) = 0.

Thus it follows from the compensation formula and (6.2) that

    E_{t+p}((G_{t+h}^{(p)})^2) = E_{t+p}( Σ_{t+h<s≤t+h+p} (α0 + a′ J_{t+p,s−} Y_{t+p} + a′ K_{t+p,s−}) (∆S_s)^2 )
                              = ∫_{t+h}^{t+h+p} ∫_R ( α0 + a′ E(J_{t+p,s−}) Y_{t+p} + a′ E(K_{t+p,s−}) ) z^2 ν_s(dz) ds,

therefore

    cov((G_t^{(p)})^2, (G_{t+h}^{(p)})^2) = E( (G_t^{(p)})^2 E_{t+p}((G_{t+h}^{(p)})^2) ) − E((G_t^{(p)})^2) E((G_{t+h}^{(p)})^2),

and by Remark 3.2 and (2.6) we have (4.6). To calculate cov(Y_{t+p}, (G_t^{(p)})^2), apply the
partial integration (6.8) to get

    cov(Y_{t+p}, (G_t^{(p)})^2) = 2 cov( Y_{t+p}, ∫_t^{t+p} G_{s−} √V_s dS_s ) + cov( Y_{t+p}, ∫_{t+}^{t+p} V_s d[S, S]_s ).

To calculate the first term, let I_t := ∫_0^t G_{s−} √V_s dS_s. We know E(I_t) = 0 for all t ≥ 0.
Therefore

    cov( Y_{t+p}, ∫_t^{t+p} G_{s−} √V_s dS_s ) = E(I_{t+p} Y_{t+p}) − E(J_{t,t+p}) E(I_t Y_t) − E(I_t) E(K_{t,t+p}).

From [8], partial integration and substituting dV_{t+} = a′ B Y_t dt + α0 V_t d[S, S]_t, it follows
that

    E(I_t V_{t+}) = E( ∫_0^t I_{s−} dV_{s+} ) + E( ∫_0^t V_s dI_s ) + E([V_+, I]_t)
                 = a′ B ∫_0^t E(I_{s−} Y_s) ds + α_q ∫_0^t ∫_R E(I_{s−} V_s) z^2 ν_s(dz) ds
                   + E( ∫_0^t G_{s−} √V_s V_s dS_s ) + α_q E( ∫_0^t G_{s−} √V_s V_s dM_s ),

where M_s := Σ_{0<u≤s} (∆S_u)^3 is a locally integrable martingale with mean zero, as a result
of the assumption that ∫_R z^3 ν_s(dz) = 0 for all s ≥ 0. Thus, using the fact that
E( ∫_0^t G_{s−} √V_s V_s dS_s ) = 0, E(I_t V_{t+}) = a′ E(I_t Y_t) and that I_s Y_s = I_{s−} Y_s = I_{s−} Y_{s−}
almost surely for fixed s, we have

    a′ E(I_t Y_t) = a′ B ∫_0^t E(I_s Y_s) ds + α0 a′ ∫_0^t ∫_R E(I_s Y_s) z^2 ν_s(dz) ds.

The equality holds for any vector a, hence

    E(I_t Y_t) = B ∫_0^t E(I_s Y_s) ds + α0 ∫_0^t ∫_R E(I_s Y_s) z^2 ν_s(dz) ds.

To calculate the second term of the covariance, it follows from [8] and (2.7) that

    cov( Y_{t+p}, ∫_{t+}^{t+p} V_s d[S, S]_s ) = cov( Y_{t+p}, (Y_{t+p} − Y_t − ∫_{t+}^{t+p} Y_{s−} ds B′) e )
                                               = cov(Y_{t+p}) − cov(Y_{t+p}, Y_t) − ∫_{t+}^{t+p} cov(Y_{t+p}, Y_{s−}) ds B′ e.
References
[1] W. R. Bennett (1958). Statistics of regenerative digital transmission. Bell System
Technical Journal, 37, 1501-1542.
[2] A. Bibi and I. Lescheb (2012). On general periodic time-varying bilinear processes.
Economics Letters, 114, 353–357.
[3] T. Bollerslev (1986). Generalized Autoregressive Conditional Heteroskedasticity.
Journal of Econometrics, 31, 307–327.
[4] T. Bollerslev, A.J. Patton and W. Wang (2016). Daily House Price Indices: Construction, Modeling, and Longer-run Predictions. Journal of Applied Econometrics,
31(6), 1005-1025.
[5] P. Bougerol and N. Picard (1992). Stationarity of GARCH processes and of some
nonnegative time series. Journal of Econometrics, 52, 115-127.
[6] A. Brandt (1986). The Stochastic Equation Yn+1 = An Yn + Bn with Stationary
Coefficients. Advances in Applied Probability, 18(1), 211-220.
[7] P.J. Brockwell (2009). Levy-driven Continuous time ARMA processes. Handbook of
Financial Time Series 457-480.
[8] P.J. Brockwell, E. Chadraa and A. Lindner (2006). Continuous-time GARCH processes. Ann. Appl. Probab., 16(2), 790-826.
[9] P.J. Brockwell and R.A. Davis (1991). Time series: Theory and Methods. 2nd edition,
Springer, New York.
[10] E. Cinlar (1975). Introduction to stochastic processes. Prentice Hall, Englewood
Cliffs, New Jersey.
[11] R. Cont and P. Tankov (2004). Financial Modelling With Jump Processes. Chapman
and Hall/CRC Financial Mathematics Series.
[12] A.E. Dudek, H. L. Hurd, W. Wojtowicz (2015). PARMA models with applications in
R, Applied Condition Monitoring, Vol 3 (Cyclostationarity: Theory and Methods-II)
Springer, Switzerland, 131-154.
[13] R.F. Engle (1982). Autoregressive Conditional Heteroscedasticity with Estimates of
the Variance of United Kingdom Inflation. Econometrica, 50, 987–1008.
[14] E.G. Gladyshev (1961). Periodically correlated random sequences. Soviet Math.
Dokl., 2, 385-388.
[15] H. L. Hurd and A. G. Miamee (2007). Periodically Correlated Random Sequences:
Spectral Theory and Practice. New York: Wiley.
[16] Jeon, J., Taylor, J.W. (2016). Short-term Density Forecasting of Wave Energy Using
ARMA-GARCH Models and Kernel Density Estimation. International Journal of
Forecasting, 32, 991-1004.
[17] C. Kluppelberg, A. Lindner and R. Maller (2004). A continuous time GARCH process
driven by a Levy process: Stationarity and second order behaviour. J.Appl. Probab.,
41, 601-622.
[18] B. Krithikaivasan, Y. Zeng, K. Deka, and D. Medhi (2007). based Traffic Forecasting and Dynamic Bandwidth Provisioning for Periodically Measured Nonstationary
Traffic. IEEE/ACM Transactions on Networking, 15(3), 683-696.
[19] M. Maejima and K. Sato (1999). Semi-selfsimilar processes. Journal of Theoretical
Probability, 11, 347-373.
[20] L.C.G. Rogers and D. Williams (2000). Diffusions, Markov Processes, and Martingales,
Volume 2. Ito calculus. Cambridge University Press. Cambridge.
[21] K. Sato (1999a). Levy Processes and Infinitely Divisible Distributions, Cambridge
University Press, Cambridge, U.K.
[22] W. Vervaat (1979). On a Stochastic Difference Equation and a Representation of
Non-Negative Infinitely Divisible Random Variables, Advances in Applied Probability,
11(4), 750-783.
| 10 |
AdaDNNs: Adaptive Ensemble of Deep Neural Networks
for Scene Text Recognition
Chun Yang†, Xu-Cheng Yin†∗, Zejun Li†, Jianwei Wu†, Chunchao Guo‡, Hongfa Wang‡, and Lei Xiao‡
arXiv:1710.03425v1 [cs.CV] 10 Oct 2017
† Department of Computer Science and Technology, University of Science and Technology Beijing, Beijing, China
‡ TEG, Tencent Co. LTD, Shenzhen, China
∗ Corresponding author: [email protected]
Abstract
Recognizing text in the wild is a really challenging task because of complex backgrounds, various illuminations and diverse distortions, even with deep neural networks (convolutional neural networks and recurrent neural networks). In
the end-to-end training procedure for scene text recognition,
the outputs of deep neural networks at different iterations
are always demonstrated with diversity and complementarity for the target object (text). Here, a simple but effective
deep learning method, an adaptive ensemble of deep neural networks (AdaDNNs), is proposed to simply select and
adaptively combine classifier components at different iterations from the whole learning system. Furthermore, the ensemble is formulated as a Bayesian framework for classifier
weighting and combination. A variety of experiments on several typical acknowledged benchmarks, i.e., ICDAR Robust
Reading Competition (Challenge 1, 2 and 4) datasets, verify
the surprised improvement from the baseline DNNs, and the
effectiveness of AdaDNNs compared with the recent state-ofthe-art methods.
Scene text is widely used as visual indicators for navigation and notification, and text recognition from scene images
and videos is one key factor for a variety of practical applications with reading in the wild (Ye and Doermann 2015;
Yin et al. 2016; Tian et al. 2017), such as assisting for visually impaired people (Goto and Tanaka 2009; Sanketi, Shen,
and Coughlan 2011), real-time translation (Shi and Xu 2005;
Fragoso et al. 2011), user navigation (Minetto et al. 2011),
driving assistance systems (Wu, Chen, and Yang 2005), and
autonomous mobile robots (Létourneau et al. 2003).
Scene text (cropped word) recognition methods can be
generally grouped into segmentation-based word recognition and holistic word recognition. Typical segmentationbased approaches over-segment the word image into small
segments, combine adjacent segments into candidate characters, classify them using convolutional neural networks
(CNNs) or gradient feature-based classifiers, and find an
approximately optimal word recognition result (Bissacco et
al. 2013; Jaderberg, Vedaldi, and Zisserman 2014). Because
of complex backgrounds and diverse distortions, character
segmentation is another more challenging task. Thereby,
holistic word recognition approaches with deep neural networks are more impressive for text reading in the wild.
Word spotting, the direct holistic approach, usually calculates a similarity measure between the candidate word image and a query word (Jaderberg et al. 2016; Gordo 2015).
Sequence matching, the indirect holistic approach, recognizes the whole word image by embedding hidden segmentation strategies. For example, Shi et al. constructed an endto-end training deep neural network for image-based sequence recognition (scene text recognition) (Shi, Bai, and
Yao 2017).
However, there are a variety of grand challenges for scene
text recognition (see samples in Fig. 1), even with recent
deep neural networks (DNNs), where additional characters
will be probably identified for text distortions and complex
backgrounds, some characters are wrongly recognized for
changing illuminations and complex noises, and characters
are sometimes missed for low resolutions and diverse distortions.
Figure 1: Some challenging examples (from 2015 ICDAR Robust Reading Competition Challenge 4 dataset) of scene text images which are incorrectly recognized by the baseline DNNs (see
related descriptions in Experiments). The captions show the recognized text (left) versus the ground truth (right): additional characters, wrong characters and missing characters in target words.
Stochastic Gradient Descent (SGD) (Bottou 2010) and its
variants have become the defacto techniques for optimizing DNNs, where SGD always leads to local minima, even
though the popularity of SGD can be attributed to its ability
to avoid spurious saddle-points and local minima (Dauphin
et al. 2014). There are a plenty number of (more than million) possible local minima in DNNs (Kawaguchi 2016), and
local minima with flat basins are supposed to generalize better in the learning system (Keskar et al. 2017). As a result, although different local minima often have similar error rates,
the corresponding neural networks in DNNs tend to make
different mistakes. This diversity and complementarity can
be exploited via classifier ensemble (Huang et al. 2017).
There are two major ways for ensemble of deep neural
networks. On the one hand, different learning systems with
DNNs are first trained independently, and then the final system is a trivial ensemble of these different deep learning architectures via majority voting or averaging. For example,
most high profile competitions in ImageNet 1 and Kaggle 2
are won by such ensemble techniques. Because of the huge
computation complexity, this ensemble becomes uneconomical and impossible for most researchers in the universities
and even in the small companies. On the other hand, one
learning system with DNNs is first trained, and then the final ensemble selects and combines neural network components 3 in this only one system without incurring any
additional training cost. Huang et al. proposed such an ensemble technique, called as Snapshot Ensembling, where a
specific optimization strategy is designed to train DNNs and
“model snapshots” (neural network components) in all cycles are combined for the final ensemble in the learning procedure (Huang et al. 2017). However, how to design the specific and effective optimization algorithms for DNNs is also
a challenge.
In this paper, we propose a new and adaptive ensemble
of deep neural networks (AdaDNNs) in the most simplest
way, i.e., given trained neural networks (of all iterations)
from a learned DNNs system 4 , a subset of neural network
components are simply selected and adaptively combined to
perform the final predictions. And the ensemble is formally
formulated as a Bayesian framework for classifier weighting and combination. We argue that because of the diversity
and complementarity in DNNs with SGD, AdaDNNs via
ensembling with diversity can improve robust performance
of the final learning system. On the same time, because
of the high accuracy of components in DNNs, AdaDNNs
via combination with accurate neural network components
can improve precision performance of the final classification system. A variety of experiments on several acknowledged benchmarks, i.e., ICDAR Robust Reading Competition (Challenge 1, 2 and 4) datasets, have shown that the
simple but effective AdaDNNs improves largely from the
baseline DNNs. Moreover, our proposed approach has the
1
www.image-net.org.
www.kaggle.com.
3
The neural network component means the resulting DNN of
each iteration in the whole training procedure.
4
Here, the DNNs system can be trained with conventional optimization algorithms (Bottou, Curtis, and Nocedal 2016), or even
with the specific algorithms, e.g., Snapshot Ensembling (Huang et
al. 2017).
2
top performance compared with the latest state-of-the-art
methods.
Related Work
Recognizing text in scene videos attracts more and more
interests in the fields of document analysis and recognition, computer vision, and machine learning. The existing methods for scene text (cropped word) recognition can
be grouped into segmentation-based word recognition and
holistic word recognition. In general, segmentation-based
word recognition methods integrate character segmentation
and character recognition with language priors using optimization techniques, such as Markov models (Weinman et
al. 2014) and CRFs (Mishra, Alahari, and Jawahar 2012; Shi
et al. 2013). In recent years, the mainstream segmentationbased word recognition techniques usually over-segment the
word image into small segments, combine adjacent segments into candidate characters and classify them using
CNNs or gradient feature-based classifiers, and find an approximately optimal word recognition result using beam
search (Bissacco et al. 2013), Hidden Markov Models (Alsharif and Pineau 2014), or dynamic programming (Jaderberg, Vedaldi, and Zisserman 2014).
Word spotting (Manmatha, Han, and Riseman 1996), a
direct holistic word recognition approach, is to identify specific words in scene images without character segmentation, given a lexicon of words (Wang and Belongie 2010).
Word spotting methods usually calculate a similarity measure between the candidate word image and a query word.
Impressively, some recent methods design a proper CNN architecture and train CNNs directly on the holistic word images (Jaderberg et al. 2014; Jaderberg et al. 2016), or use label embedding techniques to enrich relations between word
images and text strings (Almazan et al. 2014; Gordo 2015).
Sequence matching, an indirect holistic word recognition
approach, recognizes the whole word image by embedding
hidden segmentation strategies. Shi et al. constructed an endto-end train deep neural network for image-based sequence
recognition (scene text recognition), where a convolutional
recurrent neural networks framework (CRNN) is designed
and utilized (Shi, Bai, and Yao 2017). In this paper, a similar CRNN architecture is used in AdaDNNs for recognizing
scene text sequently and holistically.
Classifier ensemble can be mainly divided into two categories. The first one aims at learning multiple classifiers at
the feature level, where multiple classifiers are trained and
combined in the learning process, e.g., Boosting (Freund and
Schapire 1997), Bagging (Breiman 1996), and Rotation Forest (Rodriguez, Kuncheva, and Alonso 2006). The second
tries to combine classifiers at the output level, where the results of multiple available classifiers are combined to solve
the targeted problem, e.g., multiple classifier systems (classifier combination) (Zhou 2012; Yin et al. 2014). AdaDNNs
in this paper follows the second one. Namely, given multiple
classifiers (neural network components sequently learned in
DNNs), AdaDNNs is constructed by combining intelligently
these component classifiers within a Bayesian-based formulation framework.
Adaptive Ensemble of Deep Neural Networks
As we have known, both SGD and batch optimization can
lead to different local minima in DNNs, and neural network
components are always with diversity and complementarity.
Conventionally, there are tens of thousands of iterations and
also neural network components in the learning system of
DNNs. Considering the acceptable computation complexity
in the testing procedure, one thing is to quickly select a small
subset of neural network components in different training iterations. At the same time, considering the high accuracy requirement, another thing is to adaptively combine this subset
of neural network components and construct a final classification system. In the following, the unified framework of
AdaDNNs is first formulated. Next, the detail procedure of
AdaDNNs is then described.
There are two key issues for optimizing Eq. 3. The first
one is the calculation of W (y, hi (x)). As mentioned above,
P (y|hi , x) is the distribution of describing the correlation
between decision y and hi (x). Thus, W (y, hi (x)) can be
derived from y, hi (x) and the distance between y and hi (x).
Here, W (y, hi (x)) is assumed to be computed as
W (y, hi (x)) = I(y = hi (x)) + U (y) ∗ V (y, hi (x)) (4)
where both U (∗) and V (∗, •) are functions. I(y = hi (x))
returns 1 when y = hi (x); otherwise, I(y = hi (x)) = 0.
For the scene text recognition task, on the one hand, with
a given dictionary 5 , U (y) can be calculated as
    U(y) = { 1   if y ∈ Dict
           { 0   if y ∉ Dict                                        (5)
Unified Framework
To formulate the ensemble decision, the individual classifier
decisions can be combine by majority voting, which sums
the votes for each class and selects the class that receives
most of the votes. While the majority voting is the most popular combination rule, a major limitation of majority voting
is that only the decision of each model is taken into account
without considering the distribution of decisions.
In particular, all the possible models in the hypothesis
space could be exploited by considering their individual decisions and the correlations with other hypotheses. Here,
we use a Bayesian-based framework to combine classifiers.
Given a sample x and a set H of independent classifiers, the
probability of label y can be estimated by a Bayesian Model
(BM) as
X
P (y|H, x) =
P (y|hi , x)P (hi |x)
(1)
On the other hand, the correlation between y and hi (x) can
be assumed by the function V of Cost Levenshtein Distance (CLD). In the traditional Levenshtein Distance, the
cost of any two different characters is always 1. However,
in spelling correction, the cost of two characters with similar shape tends to have a smaller distance. In this paper,
we compute the frequencies of different character pairs at
the same location from the label and the hypothesis on the
validation set (bootstrapped from the training set in Experiments), and calculate the cost of two different characters (a
and b) as
cost(a, b) = 1 − P (a|b)
(6)
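A sketch of how such costs, and the resulting cost-weighted edit distance (CLD), might be computed; confusion counts are gathered by same-position comparison of label/hypothesis pairs as described above, and all names here are illustrative rather than the authors' implementation.

```python
from collections import Counter

def character_costs(aligned_pairs):
    """Estimate cost(a, b) = 1 - P(a | b) from same-position (label_char, hyp_char) pairs."""
    pair_counts, hyp_counts = Counter(), Counter()
    for label, hyp in aligned_pairs:
        for a, b in zip(label, hyp):              # same-location comparison only (simplification)
            pair_counts[(a, b)] += 1
            hyp_counts[b] += 1
    return {(a, b): 1.0 - pair_counts[(a, b)] / hyp_counts[b] for (a, b) in pair_counts}

def cld(y, h, costs, default=1.0):
    """Levenshtein distance in which substituting hypothesis char b by label char a costs cost(a, b)."""
    m, n = len(y), len(h)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if y[i - 1] == h[j - 1] else costs.get((y[i - 1], h[j - 1]), default)
            d[i][j] = min(d[i - 1][j] + 1.0, d[i][j - 1] + 1.0, d[i - 1][j - 1] + sub)
    return d[m][n]

if __name__ == "__main__":
    costs = character_costs([("HOUSE", "H0USE"), ("BOOK", "B00K")])   # toy validation pairs
    print(cld("HOUSE", "H0USE", costs))
```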
Note that if both y and hi (x) are from the given dictionary,
then they will have a competitive relationship with each
other. Thus, V (y, hi (x)) can be calculated with
hi ∈H
where P (y|hi , x) is the distribution of describing the correlation between decision y and hi (x), and P (hi |x) denotes
the posterior probability of model hi . The posterior P (hi |x)
can be computed as
P (hi |x) = P
P (D|hi )P (hi )
hi ∈H P (D|hi )P (hi )
(2)
where P (hi ) is the prior probability of classifier hi and
P (D|hi ) isP
the model likelihood on the training set D. Here,
P (hi ) and hi ∈H P (D|hi )P (hi ) are assumed to be a constant in Eq. 2. Therefore, BM assigns the optimal label y to
y ∗ according to the following decision rule, i.e.,
P (y ∗ )
= argmaxy PP(y|H, x)
= argmaxy Phi ∈H P (y|hi , x)P (hi |x)
= argmaxy P
hi ∈H P (y|hi , x)P (D|hi )
= argmaxy 2P hi ∈H P (y|hi , x)P (D|hi ) − P (D)
= argmaxy Phi ∈H (2P (y|hi , x) − 1)P (D|hi )
= argmaxy hi ∈H W (y, hi (x))P (D|hi )
(3)
where W (y, hi (x)) is a function of y and hi (x). By multiplying the scaling factor λ > 0, W (y, hi (x)) can have a
different range in R.
hi (x) ∈ Dict
hi (x) ∈
/ Dict
(7)
where F is a function of the CLD between y and hi (x). By
a heuristic approach, the values of F can be empirically assigned at the multiple integral points, and the values at other
points can be calculated by the piecewise linear interpolation. An example of F is shown in Fig. 2. In general, F
has a small range, e.g., [−1.5, 1.5] in Fig. 2. So, the obtained
weights from Eq. 4 are convenient for linear combination of
classifiers.
The second issue is about generating voting candidates
(more probable labels of the hypotheses). Obviously, the
ground truth doesn’t always appear in the decisions made
by H. It is necessary to find an effective way to generate
good candidates from all the decisions, i.e., to find a more
probable label yi (x) from the existed initial label yi0 (x) of
hypothesis hi . Generally speaking, a good candidate means
it has a small edit distance with most of the hypotheses. Following this idea, we propose an algorithm to semantically
generate voting candidates (see Algorithm 1).
V (y, hi (x)) = {
F (−CLD(y, hi (x)))
F (CLD(y, hi (x)))
5
In our experiments, a 90k word dictionary from (Jaderberg et
al. 2016) is used as the given dictionary.
batch orderings will converge to different solutions. Those
snapshots often have the similar error rates, but make different mistakes. This diversity can be exploited by ensembling,
in which multiple snapnots are average sampling and then
combined with majority voting.
Focusing on scene text recognition, the CRNN
model (Shi, Bai, and Yao 2017) is used to generate
base classifiers (neural network components) as our text
recognizer. CRNN uses CTC (Graves et al. 2006) as its
output layer, which estimates the sequence probability
conditioned on the input image, i.e. P (h|x), where x is the
input image and h represents a character sequence.
Figure 2: An example of F of describing the relationship between
V (in Y-axis) and the Cost Levenshtein Distance (in X-axis).
Algorithm 1: Generating Voting Candidates.
Input:
H = {h1 , h2 , ..., hL }: the base classifier set, |H| = L.
Y0 : the initial decisions made by H.
ED: the measurement function of the pairwise distance.
θ: the upper bound of the distance between the candidate
and the hypothesis.
Output:
Y : the voting candidate set.
Parameter:
H ? : a subset of H, ∀h?i , h?j ∈ H ? , ED(h?i , h?j ) ≤ 2θ.
Procedure:
1: Y = ∅.
2: For each H ? ⊂ H;
3: For each y ∈ Y0 :
4: If maxh?i ∈H ? ED(y, h? (x)) ≤ θ:
5:
Y = Y ∪ {y}.
6: End
7: End
In Algorithm 1, the searching process of H ? is an implicit
computational way for P (D|hi ). In our experiments, a special simple case of algorithm 1 is used, where during the
voting candidates generation process, Y0 is initialized only
by H, the upper bound is set from θ to inf, and P (D|hi ) is
assumed to be a constant.
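This simplified special case can be sketched as follows, with a plain edit distance standing in for the measurement function ED; it is an illustration of the idea, not the exact implementation.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance, used here as the measurement function ED."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev, d[j] = d[j], cur
    return d[n]

def generate_voting_candidates(hypotheses, theta=float("inf")):
    """Keep every initial label whose distance to all hypotheses is at most theta (Y_0 = H)."""
    y0 = set(hypotheses)
    return [y for y in y0 if max(edit_distance(y, h) for h in hypotheses) <= theta]

if __name__ == "__main__":
    H_outputs = ["UNITED", "UN1TED", "UNITED", "UNITFD"]   # outputs of the snapshot classifiers
    print(generate_voting_candidates(H_outputs, theta=2))
```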
AdaDNNs Algorithm
Within the above framework, the procedure of AdaDNNs for
scene text recognition includes three major steps, i.e., base
classifiers generation, classifier combination, and ensemble
pruning.
Base Classifiers Generation Ensembles work best if the
base models have high accuracy and do not overlap in the set
of examples they misclassify. Deep neural networks (DNNs)
are naturally used as a base classifier generator for ensembles. On the one hand, DNNs have dramatically improved
the state-of-the-art in many domains, such as speech recognition, visual object recognition and object detection, by being composed of multiple processing layers to learn representations of data with multiple levels of abstraction. On
the other hand, during the training phase of one individual deep neural network, two snapshots with different mini-
Classifier Combination for AdaDNNs The core of
AdaDNNs is to calculate y ∗ (by Eq. 3), i.e., the calculation
of F , which is a function of distance between y and hi (x).
Here, F is represented by the set of values at the multiple
integral points. These values are assigned with the highest
recognition rate on the validation set. The detail procedure
of the AdaDNNs ensemble is shown in Algorithm 2.
Algorithm 2: AdaDNNs (classifier combination).
Input:
H = {h1 , h2 , ..., hL }: the base classifier set, |H| = L.
Dict: the given dictionary.
F : a function of distance between y and hi (x).
Parameter:
Y : the voting candidates set generated by Algorithm 1.
Output:
y ∗ : the label of prediction.
Procedure:
1:
Initialize Y by H and Dict.
3: For y ∈ Y :
4:
Calculate P (y|H, x) through Eq. 2.
5: End
6:
Calculate y ∗ through Eq. 3.
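In the simplified setting where P(D|hi) is constant, the decision rule reduces to a weighted vote over the candidates, as in the sketch below; the piecewise-linear F and the toy distance are illustrative stand-ins for the validated F and the CLD.

```python
import numpy as np

def F_example(x, xs=(-3.0, 0.0, 3.0), ys=(1.5, 0.0, -1.5)):
    """Piecewise-linear F; the knot values here are illustrative, in practice tuned on a validation set."""
    return float(np.interp(x, xs, ys))

def adadnns_combine(candidates, hypotheses, dictionary, dist_fn, F=F_example, lam=1.0):
    """Score each candidate y by sum_i W(y, h_i(x)), with W as in Eq. (4)-(7) and P(D|h_i) constant."""
    def W(y, h):
        indicator = 1.0 if y == h else 0.0
        U = 1.0 if y in dictionary else 0.0
        d = dist_fn(y, h)
        V = F(-d) if h in dictionary else F(d)
        return lam * (indicator + U * V)
    scores = {y: sum(W(y, h) for h in hypotheses) for y in candidates}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    def toy_dist(a, b):   # stand-in for the cost Levenshtein distance
        return sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))
    H_outputs = ["UNITED", "UN1TED", "UNITED", "UNITFD"]
    best, scores = adadnns_combine(set(H_outputs), H_outputs, {"UNITED"}, toy_dist)
    print(best, scores)
```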
AdaDNNs Pruning In classifier ensemble, pruning can
generally improve the ensemble performance. Here, we use
Genetic Algorithm (GA) to pruning the ensemble. GA is a
meta heuristic inspired by the process of natural selection
that belongs to the larger class of evolutionary algorithms.
GAs are commonly used to generate high-quality solutions
for optimization and search problems by relying on bioinspired operators such as mutation, crossover and selection.
In AdaDNNs pruning, firstly, a population of binary
weight vectors is randomly generated, where 1 means the
classifier is remained. Secondly, the population is iteratively
evolve where the fitness of a vector w is measured on the
V
validation set V , i.e., f (w) = Rw
(R stands for the recognition rate). Finally, the ensemble is correspondingly pruned
by the evolved best weight vector w∗ .
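A compact sketch of this pruning step; fitness(mask) is a placeholder for the validation recognition rate of the pruned ensemble.

```python
import numpy as np

def ga_prune(n_classifiers, fitness, pop_size=20, generations=30, p_mut=0.1, seed=0):
    """Evolve binary masks over the snapshot classifiers; fitness(mask) -> validation score."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_classifiers))
    for _ in range(generations):
        scores = np.array([fitness(mask) for mask in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                 # selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(0, len(parents), size=2)
            cut = rng.integers(1, n_classifiers)              # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            flip = rng.random(n_classifiers) < p_mut          # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(mask) for mask in pop])
    return pop[int(np.argmax(scores))]

if __name__ == "__main__":
    useful = {0, 3, 7}                                        # toy: pretend these snapshots help
    def fitness(mask):
        return sum(mask[i] for i in useful) - 0.01 * mask.sum()
    print(ga_prune(10, fitness))
```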
Experiments
To evaluate the effectiveness of the proposed AdaDNNs
method, a variety of experiments for text (cropped word)
recognition are conducted on acknowledged benchmark
datasets. We first focused on the most challenging task,
i.e., incidental scene text recognition (ICDAR Robust Reading Competition Challenge 4), trained our AdaDNNs learning system (on both the synthetic dataset from (Jaderberg
et al. 2016) and the training set of Challenge 4), and performed comparative experiments. Then, we also conducted
experiments of this learned AdaDNNs on other text recognition tasks, i.e., focused scene text recognition and borndigital text recognition (ICDAR Robust Reading Competition Challenge 1 and 2), and checked the generalization
of AdaDNNs. Here, the baseline DNNs model, CRNN, is
same to the one in (Shi, Bai, and Yao 2017). The official
metrics in ICDAR 2011/2013/2015 Robust Reading Competition (Shahab, Shafait, and Dengel 2011; Karatzas et al.
2013; Karatzas et al. 2015) are used.
Figure 3: Challenging samples of scene text from COCO-text
which are correctly recognized (with C.R.W. upper) by AdaDNNs:
“GEMS”, “mgennisgal”, “RGAO”, “RAILROAD”, “UNITED”,
“Kappa”, “XMAS”, “ZOOM”, “YouTube”, “YORK”, “WALK”,
“WPRD”, “WHEN”, “YEAR”, and “WISCONSIN”.
Experiments with Incidental Scene Text
Recognition
The ICDAR 2015 Robust Reading Competition Challenge 4
database (Karatzas et al. 2015) is a widely used and highly
competitive benchmark database for scene text recognition
within complex situations in the recent 3 years. The public dataset includes a training set of 1, 000 images and a
test set of 500, with more than 10, 000 annotated text regions (cropped words). Because of complex backgrounds,
various illuminations and diverse distortions, this incidental
scene text recognition topic is a very challenging task. In our
experiments, a variety of methods are conducted and compared, i.e., the baseline DNNs, AdaDNNs, AdaDNNs pruning, the winning participation method in the official competition (marked as bold words), and the latest top submissions of the Robust Reading Competition (RRC) website 6
in 2017 (marked as italic words).
Table 1: Comparative results on 2015 ICDAR Challenge 4 dataset (incidental scene text
recognition), where the comparative results are from the RRC website.

Date       | Method           | T.E.D    | C.R.W (%) | T.E.D. (upper) | C.R.W. (upper)
2017/7/6   | Baidu IDL v3     | 211.59   | 80.02     | 171.15         | 82.33
2017/7/6   | HIK OCR v3       | 191.25   | 78.29     | 158.84         | 80.12
2017/6/29  | HKU-VisionLab    | 258.59   | 72.03     | 212.17         | 74.19
2015/4/1   | MAPS             | 1,128.01 | 32.93     | 1,068.72       | 33.90
-          | Baseline DNNs    | 384.76   | 60.18     | 303.77         | 64.90
-          | AdaDNNs          | 251.98   | 76.31     | 185.36         | 80.55
-          | AdaDNNs Pruning  | 224.7    | 79.78     | 147.11         | 84.21
As can be seen from Table 1, our proposed AdaDNNs is
much better than the baseline DNNs. For example, for the
measure of “C.R.W (upper)”, AdaDNNs has a surprised improvement, i.e., from 64.90% to 80.55%. That is to say, the
adaptive ensemble of DNNs in a simple but effective strategy can largely improved the performance from the original
baseline DNNs. Moreover, compared with the latest top submissions (e.g., “Baidu IDL v3” and “HIK OCR v3”), our
method, AdaDNNs Pruning, has the best performance with
“C.R.W (upper)”, i.e., 84.21%.
We also perform experiments on the COCO-text
dataset (Veit et al. 2016), a similar challenging but largescale incidental scene text dataset. Images in this dataset are
from the MS COCO dataset that contain text (63, 686 images with 173, 589 text regions). ICDAR2017 Robust Reading Challenge on COCO-Text is holding and will be released
6
http://rrc.cvc.uab.es.
in ICDAR 2017. So, the comparative results of AdaDNNs,
AdaDNNs Pruning and the baseline DNNs are only on the
validation set; they are 58.08%, 66.07%, and 66.27%, respectively. Some scene text recognition samples for COCOtext are shown in Fig. 3.
Experiments with Focused Scene Text Recognition
and Born-Digital Text Recognition
In order to investigate the generalization of AdaDNNs, we
directly use the trained AdaDNNs system above (for 2015
ICDAR Challenge 4), and perform experiments on 2013
ICDAR Challenge 2 (cropped word recognition) dataset.
The Challenge 2 dataset contains 1, 015 ground truths
cropped word images. In our experiments, a variety of methods are conducted and compared, i.e., the baseline DNNs,
AdaDNNs, AdaDNNs pruning, the winning participation
method in the official competition (marked as bold words),
the top three results in published papers, and the latest top
submissions of the RRC website in 2017 (marked as italic
words).
Table 2: Comparative results on 2013 ICDAR Challenge 2
dataset (focused scene text recognition), where the comparative results without publications are from the RRC website.
Date
Method
T.E.D
2017/8/14
2017/7/28
2017/2/24
2016
2016
2017
2013/4/6
-
TencentAILab
Tencent Youtu
HIK OCR
CNN (Jaderberg et al. 2016)
RARE (Shi et al. 2016)
CRNN (Shi, Bai, and Yao 2017)
PhotoOCR
Baseline DNNs
AdaDNNs
AdaDNNs Pruning
42
48.12
64.95
–
–
–
122.75
306.43
193.51
182.7
C.R.W
(%)
95.07
92.42
90.78
–
–
–
82.83
75.34
83.20
85.21
T.E.D.
(upper)
39.35
40.37
42.31
–
–
–
109.9
282.35
170.13
164.38
C.R.W.
(upper)
95.34
93.42
93.33
90.8
88.6
89.6
85.30
78.63
86.67
88.13
Similarly, AdaDNNs is much better than the baseline
DNNs, e.g., the measure of “C.R.W (upper)” increases from
78.63% to 86.67%. Surprisedly, only trained for another
task (Challenge 4), the AdaDNNs (AdaDNNs Pruning) has
a competitive performance on a new dataset (Challenge 2
dataset), compared with the recent published methods (e.g.,
CRNN (Shi, Bai, and Yao 2017)), and even with the latest
submission results.
Apart from the above experiments on text recognition from scene images (ICDAR Robust Reading Competition Challenges 2 and 4), we also directly apply the learned AdaDNNs to the born-digital images track (Challenge 1). Though born-digital images are not scene images, they present similar challenges for text recognition, e.g., complex backgrounds, low resolution and various colors. We also compare AdaDNNs (AdaDNNs Pruning) with the baseline DNNs, the winning participation method in the official competition (marked in bold), and the latest top submissions on the RRC website (marked in italics). Similar conclusions are drawn. Firstly, AdaDNNs improves largely upon the baseline DNNs (from 84.50% to 92.22% for “C.R.W (upper)”). Secondly, AdaDNNs has comparable performance to the latest submission results (e.g., “Dahua OCR v1” with 92.49% on 2017/9/1).
Table 3: Comparative results on ICDAR Challenge 1 dataset (born-digital text recognition), where the comparative results are from the RRC website.

Date        Method            T.E.D    C.R.W (%)  T.E.D. (upper)  C.R.W. (upper)
2017/8/22   Tencent Youtu     17.51    96.80      13.67           97.29
2017/7/21   TencentAILab      18.77    96.18      12.91           97.22
2017/9/1    Dahua OCR v1      57.47    91.31      42.87           92.49
2013/4/6    PhotoOCR          103.41   82.21      87.19           85.41
-           Baseline DNNs     87.7     82.42      72.32           84.50
-           AdaDNNs           55.44    89.58      39.61           92.22
-           AdaDNNs Pruning   55.64    89.53      39.81           92.17
We believe that if AdaDNNs (AdaDNNs Pruning) is re-trained on the ICDAR Challenge 1 and Challenge 2 datasets, the performance will improve correspondingly and yield even more impressive results compared with the latest submission systems. This is a near-term topic for our future work.
Conclusion and Discussion
A variety of DNN-based methods have been proposed and are still being investigated in the literature for scene text recognition because of its grand challenges, e.g., complex backgrounds, various illuminations and diverse distortions. In order to fully take advantage of the complementary diversity and the high accuracy of the neural network components in DNNs, an adaptive ensemble of deep neural networks (AdaDNNs) is proposed to simply select and adaptively combine neural networks from the whole training procedure. Comparative experiments on scene text (cropped word) recognition showed that AdaDNNs achieves a remarkable increase in the final performance (more than 10%) compared with the baseline DNNs.
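The combination step can be pictured with the following minimal sketch, which assumes that each selected network produces per-class output scores, that the ensemble weights per network are already given (e.g., learned on a validation set), and that pruning simply drops low-weight members; the data layout, threshold and function names are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

# Hypothetical adaptive ensemble of network snapshots: each network i
# produces a score matrix (time steps x character classes) for an image,
# and the ensemble output is a weighted combination of these scores.
# Networks whose weight falls below a threshold are pruned.

def combine_snapshots(score_maps, weights, prune_threshold=0.0):
    """score_maps: list of (T x C) arrays, one per selected network.
    weights: non-negative weight per network (assumed given/learned).
    Returns the combined (T x C) score map of the pruned ensemble
    (at least one network is assumed to survive pruning)."""
    kept = [(s, w) for s, w in zip(score_maps, weights) if w > prune_threshold]
    total = sum(w for _, w in kept)
    return sum(w * s for s, w in kept) / total

def greedy_decode(combined, alphabet):
    """Greedy per-step decoding of the combined scores into a string
    (CTC blank handling is omitted for brevity)."""
    return "".join(alphabet[c] for c in combined.argmax(axis=1))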
Note that DNN-based methods have dramatically improved the state-of-the-art in object detection, object recognition, speech recognition and many other domains. Consequently, a near-future task is to evaluate the efficacy of AdaDNNs with state-of-the-art DNNs on object recognition and speech recognition. For example, experiments on object detection and recognition with AdaDNNs combined with Snapshot Ensembling (Huang et al. 2017), ResNet (He et al. 2016), and DenseNet (Huang, Liu, and Weinberger 2017) can be performed and compared in the next step.
References
[Almazan et al. 2014] Almazan, J.; Gordo, A.; Fornes, A.;
and Valveny, E. 2014. Word spotting and recognition with
embedded attributes. IEEE Trans. Pattern Analysis and Machine Intelligence 36(12):2552–2566.
[Alsharif and Pineau 2014] Alsharif, O., and Pineau, J. 2014.
End-to-end text recognition with hybrid HMM maxout models. In Proceedings of International Conference on Learning
Representations (ICLR’14).
[Bissacco et al. 2013] Bissacco, A.; Cummins, M.; Netzer,
Y.; and Neven, H. 2013. PhotoOCR: Reading text in uncontrolled conditions. In Proceedings of International Conference on Computer Vision (ICCV’13), 4321–4328.
[Bottou, Curtis, and Nocedal 2016] Bottou, L.; Curtis, F. E.;
and Nocedal, J. 2016. Optimization methods for large-scale
machine learning. CoRR abs/1606.04838.
[Bottou 2010] Bottou, L. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of the
19th International Conference on Computational Statistics
(COMPSTAT’10), 177–186.
[Breiman 1996] Breiman, L. 1996. Bagging predictors. Machine Learning 24:122–140.
[Dauphin et al. 2014] Dauphin, Y. N.; Pascanu, R.; Gülçehre,
Ç.; Cho, K.; Ganguli, S.; and Bengio, Y. 2014. Identifying
and attacking the saddle point problem in high-dimensional
non-convex optimization. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014 (NIPS’14), 2933–
2941.
[Fragoso et al. 2011] Fragoso, V.; Gauglitz, S.; Zamora, S.;
Kleban, J.; and Turk, M. 2011. Translatar: A mobile augmented reality translator. In Proceedings of 2011 IEEE
Workshop on Applications of Computer Vision (WACV’11),
497–502.
[Freund and Schapire 1997] Freund, Y., and Schapire, R.
1997. A decision-theoretic generalization of on-line learning and an application to Boosting. Journal of Computer
and System Sciences 55(1):119–139.
[Gordo 2015] Gordo, A. 2015. Supervised mid-level features
for word image representation. In Proceedings of 2015 IEEE
International Conference on Computer Vision and Pattern
Recognition (CVPR’15), 2956–2964.
[Goto and Tanaka 2009] Goto, H., and Tanaka, M. 2009.
Text-tracking wearable camera system for the blind. In Proceedings of International Conference on Document Analysis
and Recognition (ICDAR’09), 141–145.
[Graves et al. 2006] Graves, A.; Fernández, S.; Gomez, F. J.;
and Schmidhuber, J. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent
neural networks. In Machine Learning, Proceedings of the
Twenty-Third International Conference (ICML 2006), Pittsburgh, Pennsylvania, USA, June 25-29, 2006, 369–376.
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016.
Deep residual learning for image recognition. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16), 770–778.
[Huang et al. 2017] Huang, G.; Li, Y.; Pleiss, G.; Liu, Z.;
Hopcroft, J. E.; and Weinberger, K. Q. 2017. Snapshot ensembles: Train 1, get M for free. In Proceedings of International Conference on Learning Representations (ICLR’17).
[Huang, Liu, and Weinberger 2017] Huang, G.; Liu, Z.; and
Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of 2017 IEEE Conference
on Computer Vision and Pattern Recognition (CVPR’17),
4700–4708.
[Jaderberg et al. 2014] Jaderberg, M.; Simonyan, K.;
Vedaldi, A.; and Zisserman, A. 2014. Synthetic data and
artificial neural networks for natural scene text recognition.
CoRR abs/1406.2227.
[Jaderberg et al. 2016] Jaderberg, M.; Simonyan, K.;
Vedaldi, A.; and Zisserman, A. 2016. Reading text in the
wild with convolutional neural networks. International
Journal of Computer Vision 116:1 – 20.
[Jaderberg, Vedaldi, and Zisserman 2014] Jaderberg, M.; Vedaldi, A.; and Zisserman, A. 2014. Deep features for text spotting. In Proceedings of the 13th European Conference on Computer Vision (ECCV’14), 512–528.
[Karatzas et al. 2013] Karatzas, D.; Shafait, F.; Uchida, S.;
Iwamura, M.; i Bigorda, L. G.; Mestre, S. R.; Mas, J.; Mota,
D. F.; Almazán, J.; and de las Heras, L. 2013. ICDAR 2013
robust reading competition. In Proceedings of 12th International Conference on Document Analysis and Recognition
(ICDAR’13), 1484–1493.
[Karatzas et al. 2015] Karatzas, D.; Gomez-Bigorda, L.;
Nicolaou, A.; Ghosh, S.; Bagdanov, A.; Iwamura, M.;
Matas, J.; Neumann, L.; Chandrasekhar, V. R.; Lu, S.;
Shafait, F.; Uchida, S.; and Valveny, E. 2015. ICDAR 2015
competition on robust reading. In Proceedings of 13th International Conference on Document Analysis and Recognition
(ICDAR’15), 1156–1160.
[Kawaguchi 2016] Kawaguchi, K. 2016. Deep learning
without poor local minima. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural
Information Processing Systems 2016 (NIPS’16), 586–594.
[Keskar et al. 2017] Keskar, N. S.; Mudigere, D.; Nocedal,
J.; Smelyanskiy, M.; and Tang, P. T. P. 2017. On largebatch training for deep learning: Generalization gap and
sharp minima. In Proceedings of International Conference
on Learning Representations (ICLR’17).
[Létourneau et al. 2003] Létourneau, D.; Michaud, F.; Valin,
J.-M.; and Proulx, C. 2003. Textual message read by a
mobile robot. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS’03), volume 3, 2724–2729.
[Manmatha, Han, and Riseman 1996] Manmatha, R.; Han,
C.; and Riseman, E. M. 1996. Word spotting: A new
approach to indexing handwriting. In Proceedings of
Conference on Computer Vision and Pattern Recognition
(CVPR’96), 631–637.
[Minetto et al. 2011] Minetto, R.; Thome, N.; Cord, M.;
Leite, N. J.; and Stolfi, J. 2011. Snoopertrack: Text detection and tracking for outdoor videos. In Proceedings of the
18th IEEE International Conference on Image Processing
(ICIP’11), 505–508.
[Mishra, Alahari, and Jawahar 2012] Mishra, A.; Alahari,
K.; and Jawahar, C. V. 2012. Top-down and bottom-up
cues for scene text recognition. In Proceedings of 2012
IEEE Conference on Computer Vision and Pattern Recognition (CVPR’12), 2687–2694.
[Rodriguez, Kuncheva, and Alonso 2006] Rodriguez, J. J.;
Kuncheva, L. I.; and Alonso, C. J. 2006. Rotation Forest: A new classifier ensemble method. IEEE Trans. Pattern
Analysis Machine Intelligence 28(10):1619–1630.
[Sanketi, Shen, and Coughlan 2011] Sanketi, P.; Shen, H.; and Coughlan, J. M. 2011. Localizing blurry and low-resolution text in natural images. In Proceedings of 2011 IEEE Workshop on Applications of Computer Vision (WACV’11), 503–510.
[Shahab, Shafait, and Dengel 2011] Shahab, A.; Shafait, F.;
and Dengel, A. 2011. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images. In Proceedings of International Conference on Document Analysis and
Recognition (ICDAR’11), 1491–1496.
[Shi and Xu 2005] Shi, X., and Xu, Y. 2005. A wearable
translation robot. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA’05),
4400–4405.
[Shi, Bai, and Yao 2017] Shi, B.; Bai, X.; and Yao, C. 2017.
An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Analysis and Machine Intelligence. published online.
[Shi et al. 2013] Shi, C.; Wang, C.; Xiao, B.; Zhang, Y.; Gao,
S.; and Zhang, Z. 2013. Scene text recognition using partbased tree-structured character detection. In Proceedings
of 2013 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR’13), 2961–2968.
[Shi et al. 2016] Shi, B.; Wang, X.; Lyu, P.; Yao, C.; and
Bai, X. 2016. Robust scene text recognition with automatic rectification. In Proceedings of 2016 IEEE Conference
on Computer Vision and Pattern Recognition (CVPR’16),
4168–4176.
[Tian et al. 2017] Tian, S.; Yin, X.-C.; Su, Y.; and Hao, H.W. 2017. A unified framework for tracking based text detection and recognition from web videos. IEEE Trans. Pattern
Analysis and Machine Intelligence. published online.
[Veit et al. 2016] Veit, A.; Matera, T.; Neumann, L.; Matas,
J.; and Belongie, S. J. 2016. Coco-text: Dataset and benchmark for text detection and recognition in natural images.
CoRR abs/1601.07140.
[Wang and Belongie 2010] Wang, K., and Belongie, S. 2010.
Word spotting in the wild. In Proceedings of European Conference on Computer Vision (ECCV’10), 591–604.
[Weinman et al. 2014] Weinman, J. J.; Butler, Z.; Knoll, D.;
and Feild, J. 2014. Toward integrated scene text reading. IEEE Trans. Pattern Analysis and Machine Intelligence
36(2):375–387.
[Wu, Chen, and Yang 2005] Wu, W.; Chen, X.; and Yang, J.
2005. Detection of text on road signs from video. IEEE
Trans. Intelligent Transportation Systems 6(4):378–390.
[Ye and Doermann 2015] Ye, Q., and Doermann, D. 2015.
Text detection and recognition in imagery: A survey.
IEEE Trans. Pattern Analysis and Machine Intelligence
37(7):1480–1500.
[Yin et al. 2014] Yin, X.-C.; Huang, K.; Yang, C.; and Hao,
H.-W. 2014. Convex ensemble learning with sparsity and
diversity. Information Fusion 20:49–59.
[Yin et al. 2016] Yin, X.-C.; Zuo, Z.-Y.; Tian, S.; and Liu, C.L. 2016. Text detection, tracking and recognition in video:
A comprehensive survey. IEEE Trans. Image Processing
25(6):2752–2773.
[Zhou 2012] Zhou, Z.-H. 2012. Ensemble Methods: Foundations and Algorithms. Boca Raton, FL: Chamman &
Hall/CRC.
Yang et al. BMC Systems Biology 2014, 8(Suppl 4):S7
http://www.biomedcentral.com/1752-0509/8/S4/S7
RESEARCH
Open Access
Microbial community pattern detection in human
body habitats via ensemble clustering framework
Peng Yang1, Xiaoquan Su2, Le Ou-Yang3, Hon-Nian Chua1, Xiao-Li Li1, Kang Ning2*
From Asia Pacific Bioinformatics Network (APBioNet) Thirteenth International Conference on Bioinformatics
(InCoB2014)
Sydney, Australia. 31 July - 2 August 2014
Abstract
Background: The human body is a host where microbial species live, function, and continue to evolve. Elucidating how microbial communities respond to human habitats is a fundamental and critical task, as establishing baselines of the human microbiome is essential for understanding its role in human disease and health. Recent studies on the healthy human microbiome focus on particular body habitats, assuming that microbiomes develop similar structural patterns to perform similar ecosystem functions under the same environmental conditions. However, current studies usually overlook the complex and interconnected landscape of the human microbiome and limit their analyses to particular body habitats with learning models built on a single specific criterion. Therefore, these methods cannot capture the real-world underlying microbial patterns effectively.
Results: To obtain a comprehensive view, we propose a novel ensemble clustering framework to mine the structure of microbial community patterns in large-scale metagenomic data. In particular, we first build a microbial similarity network by integrating 1920 metagenomic samples from three body habitats of healthy adults. Then a novel symmetric Nonnegative Matrix Factorization (NMF) based ensemble model is proposed and applied to the network to detect clustering patterns. Extensive experiments are conducted to evaluate the effectiveness of our model in deriving microbial communities with respect to body habitat and host gender. From the clustering results, we observe that each body habitat exhibits a strongly bound but non-unique microbial structural pattern. Meanwhile, the human microbiome reveals different degrees of structural variation across body habitats and host genders.
Conclusions: In summary, our ensemble clustering framework can efficiently explore integrated clustering results to accurately identify microbial communities, and provides a comprehensive view of a set of microbial communities. The clustering results indicate that the structure of the human microbiome varies systematically across body habitats and host genders. Such trends depict an integrated biogeography of microbial communities, which offers new insight towards uncovering the pathogenic model of the human microbiome.
Background
Metagenomic background
The human body is a habitat in which, and on which, complex microbial communities live. This microbiome occupies body habitats and endows us with ecosystem functions, such as nutrition, pathogen resistance and
* Correspondence: [email protected]
2 Computational Biology Group of Single Cell Center, Shandong Key Laboratory of Energy Genetics and CAS Key Laboratory of Biofuels, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Science, Qingdao 266101, China
Full list of author information is available at the end of the article
immune system development [1,2], to help maintain our health. Hence, systematically defining the “normal” states of the human microbiome is an important step towards understanding the role of microbiota in pathogenesis [3]. However, the majority of microbiomes have been poorly investigated.
To understand the principle of human microbiome,
prior research concentrated on particular body habitats
[3-8]. For example, Turnbaugh et al. [9] investigated the
gut microbiome in obese and lean twins to address how
host, environmental condition and diet influence the
microbial components. Grice et al. [10] targeted the human skin microbiome to characterize its topographical and personal variations across multiple sites. The study of Bik et al. [11] indicated the distinctness of microbial structures in the oral cavity and on the tongue.
However, human microbial habitats are not isolated from one another; instead, they reveal community structure correlations across body habitats [12]. In this case, an ensemble of samples from different habitats could bring global and full-scale insights into the microbiome. Recent studies have aggregated microbial samples from different body habitats to perform comprehensive analyses. Costello et al. [13] surveyed the microbiomes gathered from 27 body habitats of nine adults. Mitreva [12] carried out extensive sampling of 18 body habitats from 242 individuals. In order to establish a global insight into the human microbiome, they built a “whole-body” microbial similarity network, where the nodes consisted of metagenomic samples from multiple human body sites and the edges were the pair-wise phylogenetic similarities of samples, measured in terms of their shared evolutionary history. Clustering approaches [14] have been applied to this large-scale similarity network to group samples that share more similar phylogenetic structures with each other within clusters than with other samples. From these clusters, researchers could infer how microbial patterns are affected by body habitat, host gender and environmental conditions over time. Costello et al. [13] applied a hierarchical clustering algorithm to a microbial community network and found that personal microbiota remained relatively stable within habitats over time. Turnbaugh et al. [9] identified two distinct functional modules in the gut microbiome via principal components analysis (PCA) and hierarchical clustering, and their experimental results disclosed that microbiomes within the same clusters carried out similar ecosystem-level functions. Mitreva [12] adopted a centroid-based clustering algorithm and discovered the co-variation and co-exclusion of microbiomes between different habitats.
Current Limitations
Clustering approaches aim to group metagenomic samples with similar phylogenetic patterns. Clustering can be achieved by various algorithms that differ significantly in terms of computational principles and measures, and each generated clustering result can be viewed as taking a different “look” through the data (as shown in Table 1). However, most prior studies employ one particular clustering approach, so the clustering outputs tend to be specific to the criterion of the chosen approach. For example, density-based clustering groups samples that are densely connected in the similarity network. However, true microbial
communities are not limited to densely connected structures; samples with sparsely connected microbial structures are also widespread, for example in lake communities [15]. Graph partition-based clustering, such as MCL [16] and K-means clustering [17], explores the best partition of a network. However, these algorithms do not allow overlaps between clusters. Therefore, they are unable to discover microbes shared between two communities, such as species that can adapt to multiple environmental conditions, like those in microbial mats and biofilms. The hierarchical clustering algorithm [18] learns the hierarchical structure of a network, and has been used in [13], but the hierarchical structure is determined by a local optimization criterion, so there is no global objective function, which might lead to small clusters containing only part of the similar samples. Distribution-based clustering approaches, like expectation-maximization (EM) [19], identify clusters that follow assumed statistical distributions. However, the statistical model for microbial communities remains largely unknown, and it is therefore difficult to evaluate the reliability of the results.
Advantage of proposed Ensemble clustering framework
Ideally, a clustering algorithm should be able to exploit clustering patterns as comprehensively as possible. However, as we have mentioned above, few algorithms are capable of taking all factors into consideration. Different clustering algorithms may produce different partitions of the network. Given multiple clustering results, we need to explore their information and output more robust results that exploit the complementary nature of these patterns.
Ensemble clustering has recently been proposed and successfully used to solve many community detection problems [20-23]. Thus, we use an ensemble clustering framework to integrate the various kinds of clusters (here we call them base clustering results) and output more comprehensive results. In this study, we first construct a
consensus matrix which measures similarity of samples
based on co-occurrence of samples in base clustering
results [24]. Next we apply Symmetric Nonnegative
Matrix Factorization (NMF) [25] on the consensus
matrix to derive clusters. Symmetric NMF provides a
lower rank approximation of a nonnegative matrix,
which could be easily related to the clustering of the nonnegative data. As mentioned in [25], the factorization of
the consensus matrix will generate a clustering assignment matrix that could capture the cluster structure
inherent in the network.
Unlike prior research that applied a single clustering algorithm to the microbiome of a particular habitat, our framework assembles multiple clustering algorithms over human microbiome samples from different body habitats. We carried out experiments to demonstrate its capability in capturing microbial communities. Experimental
Table 1 Summary of four particular clustering approaches

Clustering approach: Density-based clustering
Characteristics: Clusters are defined as connected dense regions in the network.
Limitations on microbial pattern: True microbial communities are not limited to densely connected structures; sparsely connected microbial structures still exist.

Clustering approach: Graph partition-based clustering
Characteristics: Clusters are generated via graph partitioning techniques.
Limitations on microbial pattern: Partition-based algorithms do not allow overlaps between clusters. Therefore, they are unable to discover microbes shared among clusters, such as species that can adapt to multiple environmental conditions, like those in microbial mats and biofilms.

Clustering approach: Hierarchical clustering
Characteristics: Clusters are built based on an agglomerative clustering model that shows relations between the members and groups.
Limitations on microbial pattern: The hierarchical structure is determined by a local optimization criterion, so there is no global objective function, which might lead to small clusters containing only part of the similar samples.

Clustering approach: Distribution-based clustering
Characteristics: Clusters are modelled using statistical distributions.
Limitations on microbial pattern: Statistical models of microbial communities are still unknown and need to be further explored.
results showed that the predicted clusters were capable of revealing the spatial and gender roles of human microbiota and thereby elaborating the biogeography of the human microbiome, which provides new insights into the disease pathogenesis of the human microbiome [9,12,13].
Material and methods
In this section, we first briefly introduced the experimental
data, the similarity measurements of metagenomic samples and GPU based fast similarity matrix computing.
Then we described the schema of ensemble clustering framework and its phases to structure microbial community.
Experimental data
In this work, we used 1920 metagenomic samples from the project “Moving pictures of the human microbiome” [26] to build the microbial matrix and similarity network (refer to the section “Similarity measurements of metagenomic samples” for details). A sample metagenomic matrix and network were illustrated in Figure 1, and the similarity matrices of all datasets were shown in Additional file 2: Table S1. GPU-Meta-Storms [27] was used to measure the structural similarity of metagenomic samples (the efficiency of GPU-Meta-Storms is shown in Additional file 2: Figure S1). Metagenomic samples were annotated by two meta-labels: Habitat = {gut, skin, oral cavity} defined the human body habitat the samples live in, while Gender = {male, female} defined the gender of the host the samples inhabit. Combining the two meta-labels, each sample was partitioned into one of six meta-classes: {male & gut, male & skin, male & oral cavity, female & gut, female & skin, female & oral cavity}. Table 2 summarized the distribution of the 1920 metagenomic samples over the three body habitats and two host genders.
Similarity measurements of metagenomic samples
The scoring function of Meta-Storms [27] compared two
microbial samples’ structure by calculating the maximum
common component of their common phylogenetic tree
Figure 1 An example of (A) similarity matrix and (B) its similarity network. In the matrix, each tile indicates a similarity value between 2
samples by colour gradient from red (high) to green (low). In the network, each node represents a sample, and edges represent similarity values
in the matrix.
Table 2 1920 microbial samples on six human body habitats

          Gut    Skin   Oral   Total
Male      331    698    366    1395
Female    130    262    133     525
Total     461    960    499    1920
considering the β-diversity, phylogenetic distance and abundance of each species (Formula 1). The scoring function first evaluated the common abundance of each species at the leaf nodes, taken as the smaller of the abundance values in the two samples. These abundance values were propagated to their ancestors iteratively, and the accumulated common abundance value at the root node reflected the overall similarity between the two metagenomic samples, which could be computed using Similarity(Root) defined in Formula 1.
Similarity(X) =
    Common Abundance(X),                                                 if X is a leaf node
    Common Abundance(X) + Similarity(X.Left) + Similarity(X.Right),      if X is an internal node      (1)
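As an illustration of Formula (1), the following Python sketch implements the recursive accumulation over a binary common phylogenetic tree; how abundances are attached to internal nodes, and the phylogenetic-distance weighting of the full Meta-Storms score, are simplified away here, and the class and field names are illustrative.

# Sketch of the recursive scoring in Formula (1). Each node of the common
# (binary) phylogenetic tree carries, for the two samples being compared,
# an abundance value; the common abundance at a node is taken here as the
# smaller of the two samples' values. This omits the phylogenetic-distance
# weighting and the exact propagation scheme of the real Meta-Storms score.

class Node:
    def __init__(self, abund_a=0.0, abund_b=0.0, left=None, right=None):
        self.abund_a = abund_a   # abundance at this node for sample A
        self.abund_b = abund_b   # abundance at this node for sample B
        self.left = left
        self.right = right

def similarity(node):
    if node is None:
        return 0.0
    common = min(node.abund_a, node.abund_b)
    if node.left is None and node.right is None:     # leaf node
        return common
    # internal node: common abundance plus the similarity of both subtrees
    return common + similarity(node.left) + similarity(node.right)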
Then we constructed the similarity matrix based on the pair-wise similarities among all sample pairs (Figure 1(A)). Exploiting the multi-core architecture of the GPU
[28], Formula 1 could be invoked in parallel using a
large number of threads to compute similarity between
different pairs of metagenomic samples. To compute the
pair-wise similarity matrix for N samples, we spawned N
* N threads in the GPU such that each similarity value
in the matrix was processed by an independent thread.
Figure 2 Overview of the GPU based similarity matrix computing.
Figure 2 illustrated the GPU computing workflow: to build the common phylogenetic tree, we first loaded and initialized the species abundance data from the file system into main memory; these data were then loaded onto the GPU for computing. When all threads of the GPU kernel had completed (Figure 2, step 3, the key step), the resulting values were returned to RAM to populate the similarity matrix, which was then stored in the file system.
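The pair-wise computation is embarrassingly parallel. The following CPU-side Python sketch mimics the one-task-per-pair layout with a process pool purely for illustration (the actual implementation is the GPU kernel described above); score_pair stands in for the Meta-Storms similarity of two samples, and all names are illustrative.

import numpy as np
from itertools import combinations
from multiprocessing import Pool

# Sketch of the pair-wise similarity matrix construction. In the paper this
# is done on a GPU with one thread per (i, j) pair; here a process pool is
# used only to illustrate that every pair can be scored independently.
# score_pair(sample_i, sample_j) must be a picklable top-level function.

def build_similarity_matrix(samples, score_pair, processes=4):
    n = len(samples)
    sim = np.eye(n)                            # self-similarity set to 1
    pairs = list(combinations(range(n), 2))
    with Pool(processes) as pool:
        values = pool.starmap(score_pair,
                              [(samples[i], samples[j]) for i, j in pairs])
    for (i, j), v in zip(pairs, values):
        sim[i, j] = sim[j, i] = v              # the matrix is symmetric
    return sim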
Ensemble clustering framework
In this subsection, we proposed a novel ensemble clustering framework, namely Meta-EC, to perform microbial community pattern detection. The framework
consisted of two stages: a generation phase where a consensus matrix was constructed based on base clustering
results and an identification phase in which a symmetric
NMF-based clustering was used to detect reliable clusters from the consensus matrix. The schema of our
Meta-EC algorithm was presented in Figure 3.
Terminology: After computing the pair-wise similarity matrix of the metagenomic samples, we used it to construct the microbial similarity network, formatted as a simple undirected graph G = (V, E), where V was a vertex set containing |V| = N vertices and E was an edge set. A vertex v ∈ V represented a metagenomic sample, and a weighted edge e ∈ E represented the phylogenetic structure similarity of two samples (Figure 1(B)). A cluster C_i = (V_i^c, E_i^c) was a subnetwork of G such that V_i^c ⊂ V and E_i^c was the set of edges induced by V_i^c from G. A microbial community of G was a set of predicted microbial clusters, defined as {C_1, ..., C_m}.
Generation phase: When the similarity network was
ready, a set of base clustering results were calculated by
Figure 3 The schema of Meta-EC algorithm.
applying four clustering algorithms (the base clustering algorithms) to the similarity network with different initializations, as shown in Figure 3(A). The base clustering algorithms included the EM algorithm, K-means clustering, hierarchical clustering and density-based clustering, as presented in Table 1 and Additional file 1: Section 1. A consensus matrix W was introduced to measure the co-occurrence of samples in clusters of the base clustering results. Each W_ij indicated the number of base clustering results in which sample i and sample j were assigned to the same cluster, divided by the total number of base clustering results. Therefore, the matrix W took into consideration all generated clusters and reflected the co-cluster similarity between each pair of samples based on different clustering criteria. The higher the value of W_ij, the more likely sample i and sample j belonged to the same cluster.
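A minimal Python sketch of this consensus matrix is given below, assuming each base clustering result is provided as a vector of cluster labels (one label per sample); the variable names are illustrative and not from the original implementation.

import numpy as np

# Sketch of the consensus matrix used in the generation phase. Each base
# clustering result is a label vector of length N; W_ij is the fraction of
# base results in which samples i and j fall into the same cluster.

def consensus_matrix(base_labelings):
    base_labelings = np.asarray(base_labelings)        # shape: (R, N)
    r, n = base_labelings.shape
    w = np.zeros((n, n))
    for labels in base_labelings:
        w += (labels[:, None] == labels[None, :])      # co-occurrence in this result
    return w / r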
Identification phase: When the consensus matrix was
constructed, we applied a symmetric NMF-based clustering algorithm on this matrix to derive the clusters.
The flowchart of this algorithm was shown in Figure 3
(B). The main idea of this algorithm was outlined as
follows:
The symmetric NMF defined in Equation (2) was suitable for network clustering based on similarity matrix W:
min_{H≥0} D(W | HH^T)                                    (2)

Here D(W | HH^T) was a predefined cost function measuring the divergence between W and HH^T, and K was the predefined number of clusters. H was an N × K cluster indicator matrix in which each entry h_{i,k} denoted the real-valued membership of sample i belonging to cluster k. So we could easily infer the clustering assignment of sample i from the i-th row of H. In this study, we used the Kullback-Leibler (KL) divergence [28] as the cost function, which could be represented as:

D(W | HH^T) = D_KL(W | HH^T) = Σ_{i,j=1}^{N} [ W_ij log( W_ij / (HH^T)_ij ) − W_ij + (HH^T)_ij ]        (3)
We chose KL-divergence as the cost function since it
was free of noise parameter and had been widely used
in NMF.
A sample may belong to more than one cluster, but it
seldom belonged to all clusters. Thus, the cluster indicator matrix H should be sparse. To achieve sparsity of
the solution of H, an L1-norm regularization for H was integrated. Neglecting constants and adding the L1-norm regularization for H, the modified formulation was as follows:
min_{H≥0}  − Σ_{i=1}^{N} Σ_{j=1}^{N} [ W_{i,j} log (HH^T)_{i,j} − (HH^T)_{i,j} ] + Σ_{i=1}^{N} Σ_{z=1}^{K} β h_{i,z}        (4)
where the hyper-parameter β > 0 controlled the sparsity of H, and H ≥ 0 was the cluster indicator matrix.
Solution to NMF-based Ensemble Clustering: Minimization of the cost function in Equation (4) with constraints formed a constrained nonlinear optimization problem. Similar to [29,30], we adopted the multiplicative update rule [31] to estimate H, which is widely accepted as a useful algorithm for solving nonnegative matrix factorization problems. By the multiplicative update rule, we obtained the following update rule for h_{i,z}:
h_{i,z} ← (1/2) h_{i,z} + (1/2) h_{i,z} [ Σ_{j=1}^{|V|} W_{i,j} h_{j,z} / Σ_{l=1}^{K} h_{i,l} h_{j,l} ] / [ Σ_{j=1}^{|V|} h_{j,z} + 0.5 β ]        (5)
We iteratively updated H according to the update rule (5) until it satisfied a stopping criterion. Let H_l be the cluster indicator matrix at iteration l (l > 1). The algorithm was stopped whenever ||H_l − H_{l−1}||_1 < r, where r was a predefined tolerance parameter. Here we set r = 10^{-6} as the default value of the tolerance parameter. In addition, the maximum number of iterations was limited to 200 if the stopping criterion was not satisfied. In order to avoid local minima, for random initialization we repeated the algorithm 10 times with random initial conditions and chose the result with the lowest value of the cost function (4).
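The identification phase can be sketched as follows in Python, implementing the damped multiplicative update of Equation (5) as reconstructed above together with the stopping rule and random restarts described in the text; this is an illustrative re-implementation under those assumptions, not the authors' code, and all names are illustrative.

import numpy as np

# Sketch of the symmetric NMF identification phase (Equations (4)-(5)):
# given the consensus matrix W (N x N) and a cluster number K, iterate the
# damped multiplicative update for H until the L1 change of H falls below
# the tolerance or 200 iterations are reached; keep the best of several
# random restarts according to the cost (4).

def sym_nmf_kl(W, K, beta=1.0, tol=1e-6, max_iter=200, restarts=10, seed=None):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    eps = 1e-12

    def cost(H):
        V = H @ H.T + eps
        return -(W * np.log(V) - V).sum() + beta * H.sum()

    best_H, best_cost = None, np.inf
    for _ in range(restarts):
        H = rng.random((n, K))
        for _ in range(max_iter):
            V = H @ H.T + eps                            # (HH^T)_{ij}
            ratio = (W / V) @ H                          # sum_j W_ij h_jz / (HH^T)_ij
            denom = H.sum(axis=0)[None, :] + 0.5 * beta  # sum_j h_jz + 0.5*beta
            H_new = 0.5 * H + 0.5 * H * ratio / denom    # damped update (5)
            converged = np.abs(H_new - H).sum() < tol
            H = H_new
            if converged:
                break
        c = cost(H)
        if c < best_cost:
            best_H, best_cost = H, c
    return best_H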
From the cluster indicator matrix to microbial clusters: Similar to [32], we obtained microbial clusters from the cluster indicator matrix H by taking a threshold τ and assigning a sample to a cluster whenever its weight for that cluster exceeded τ. In this way, we obtained the resultant sample-cluster membership matrix H* = (h*_{i,z}), where h*_{i,z} = 1 if h_{i,z} ≥ τ and h*_{i,z} = 0 if h_{i,z} < τ. Here, h*_{i,z} = 1 means that sample i was assigned to the detected cluster z, and H* is the final binarized output of H. After completing these steps, we obtained the refined clusters E_K that satisfied the following condition:

E_K = {C_1, ..., C_K} : v_i ∈ C_z  if  h*_{i,z} = 1        (6)

where i = 1, ..., N and z = 1, ..., K.
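A short sketch of Equation (6) in Python: the indicator matrix is binarized at the threshold τ and the clusters are read off column by column; the default value of τ used below is an arbitrary illustrative choice, not the value used in the paper.

import numpy as np

# Sketch of Equation (6): a sample i is assigned to cluster z whenever its
# membership h_{i,z} reaches the threshold tau, so overlapping clusters are
# allowed and a sample with all memberships below tau stays unassigned.

def clusters_from_indicator(H, tau=0.3):
    H_star = (np.asarray(H) >= tau).astype(int)        # binarized indicator H*
    return [np.flatnonzero(H_star[:, z]).tolist() for z in range(H_star.shape[1])]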
We summarized the whole algorithm in Figure 4.
Results
In this section, we focused on evaluating the effectiveness of Meta-EC algorithm. Before presenting the
experimental results, we first introduced our experiment
design: evaluation metrics and experimental settings in
Figure 4 The algorithm of Meta-EC for microbial community
pattern detection.
our study. Then we conducted experimental comparison
between Meta-EC and base clustering approaches, and
comparison between constructed consensus network
and original metagenomic similarity network. Finally,
from clustering results, we investigated how human
microbial community was influenced by body habitat
and host gender.
Evaluation metrics
In this work, we evaluated the effectiveness of clustering
algorithms by observing how well detected clusters corresponded to the sampling information of habitats and genders (six meta-classes, refer to subsection Terminology
for details). Since the true number of cluster patterns for
habitat and gender was unknown, and no literature clearly describes how to determine the number of cluster patterns for either body habitat or host gender, we empirically defined reference clusters based on the six meta-classes. Assuming that metagenomic samples with identical meta-classes are likely to have similar microbial structures [13], we grouped the metagenomic samples with identical meta-classes into one reference cluster. Typically, the quality of the predicted clusters can be evaluated by the following three quantitative measures, f-measure [33], PR metrics and F-score, which measure how well the detected clusters correspond to the reference clusters.
Among these three measures, f-measure which was the
harmonic mean of Precision and Recall, aimed at assessing how well the detected clusters matched reference
ones at cluster level (Precision measured what fraction of
the detected clusters were matched with reference ones
and Recall measured what fraction of reference clusters
were matched to detected clusters). PR-based metric
took into account the overlap between detected and
reference clusters. F-score focused on measuring whether
samples within identical habitats were grouped together
in the detected clusters. The value of each measure varied from 0 to 1, and a higher value indicated a better match.
For more details of f-measure, PR metrics and F-score,
please refer to Additional file 1: Section 2. And the parameter setting in the experiments is introduced in Additional file 1: Section 3.
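Since the precise matching rules are given in Additional file 1, the following Python sketch shows only one common way to compute cluster-level precision, recall and f-measure, assuming a detected cluster matches a reference cluster when their Jaccard overlap reaches a threshold; both the criterion and the 0.5 threshold are illustrative assumptions.

# Sketch of a cluster-level f-measure: a detected cluster matches a
# reference cluster when their Jaccard overlap reaches a threshold (the
# exact matching rule of the paper is in Additional file 1).

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_f_measure(detected, reference, overlap=0.5):
    matched_det = sum(any(jaccard(d, r) >= overlap for r in reference) for d in detected)
    matched_ref = sum(any(jaccard(r, d) >= overlap for d in detected) for r in reference)
    precision = matched_det / len(detected) if detected else 0.0
    recall = matched_ref / len(reference) if reference else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0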
Evaluation of clustering results generated by Meta-EC
algorithm
In this subsection, to evaluate the performance of the Meta-EC algorithm, we presented a performance comparison of the proposed Meta-EC algorithm with the base clustering approaches, and a comparison of the constructed consensus matrix with the original microbial similarity matrix.
Comparison against four base clustering
approaches: To evaluate the performance of ensemble
clustering approach, the accuracy of the clustering results
derived from our proposed approach was compared with
the ones derived from these base clustering algorithms.
Figure 5 illustrated the performance of different clustering algorithms in terms of three metrics (PR, f-measure
and F-score) with respect to the reference clusters. From
Figure 5, we could observe that our ensemble-based
approach had competitive performance compared with
the base clustering algorithms with regard to all three measures. Among the base clustering algorithms, K-means
with cluster number set to 6 had better performance in
terms of PR, while K-means and Density-based clustering
with cluster number set to 6 had better performance in
terms of f-measure, and Hierarchical clustering with cluster number set to 9 and 10 had comparable performance
with K-means and Density-based clustering with cluster
number set to 6 in terms of F-score. But none of them
achieved superior performance over the others with regard to all three measures. However, our ensemble-based approach obtained the best performance in terms of all three measures. This may be owing to the fact that the
ensemble-based approach could make use of clusters
derived from different base clustering algorithms and
extract more reliable results. In addition, we conducted a sensitivity study of phylogenetic structure similarity on the microbial network. We ran the Meta-EC algorithm with the threshold value of metagenomic similarity in the matrix tuned from 0.7 to 0.9 with a step size of 0.1; the results in Additional file 2: Figure S4 showed that Meta-EC outperformed other state-of-the-art clustering techniques over this wide range of edge thresholds, indicating that our
Figure 5 Performance comparison of ensemble clustering
framework to base clustering algorithms with respect to fmeasure, PR and F-score. Note that ensemble-based approach
with random initialization is denoted as “Ensemble_random”, while
ensemble-based approach with a base clustering result as initial
input is denoted as “Ensemble_initial”. The result of
“Ensemble_random” is obtained with β = 1.
algorithm is robust and insensitive to noise in the similarity network and to data coverage. In addition, we have compared the computational time with the base clustering
approaches in Table 3 and results show that Meta-EC
Table 3 Comparison with base clustering approaches on computational time

Method    EM       K-Means   Hierarchical   Density-based   Meta-EC
Time (s)  221.17   11.57     55.04          12.24           72.8
spends more time than K-means and hierarchical clustering but less than EM clustering; the total time cost of Meta-EC is thus the sum over all base clustering algorithms plus 72.8 seconds. With the rapid development of computational capability, we could further improve the time efficiency of this large number of operations.
Comparison of constructed consensus network with
original similarity network: To demonstrate the benefits of combining different base clustering results, we
applied symmetric NMF to the original metagenomic similarity network and evaluated its performance. To be fair, the results of symmetric NMF on the original metagenomic similarity network were obtained with the best tuned parameters. The comparison of the two tested similarity networks is presented in Figure 6 with regard to F-score, PR and f-measure.
The results in Figure 6 showed that applying symmetric NMF on consensus matrix achieved better performance than that on the original similarity network.
These results demonstrated the benefits of combining
different base clustering results. If the similarity matrix
was well constructed (each element reflected the cocluster similarity), the factorization of the similarity
matrix would generate a clustering assignment matrix
Figure 6 Performance comparison of Bayesian NMF based
clustering algorithm applied on ensemble clustering similarity
network and original microbial similarity network. Additive
values of three measures are present for each data source. For
random initialization case, the value of β is set to 1 and the result
corresponds to “Original_random”. We also choose the base
clustering results which presents the best performance as the initial
input of symmetric NMF and the result corresponds to
“Original_initial”.
that could well capture the cluster structure inherent in
the network representation. However, the original network weighted the interaction via measuring the phylogenetic structure of samples. In this way, metagenomic
samples with higher phylogenetic similarity were more
likely to be involved in one cluster. If the actual microbial pattern was uncorrelated with phylogenetic similarity, the community detected by symmetric NMF may be
unreliable. In ensemble clustering framework, we generated a consensus matrix that integrated the clustering
results derived from different clustering algorithms.
Each element in consensus matrix indicated the frequency of the corresponding sample pair being clustered
together in these base clustering results. Thus, applying
symmetric NMF on consensus matrix could take into
consideration the co-cluster strength of multiple clustering patterns and output a more comprehensive and
robust result.
Interpretation of Microbial community patterns on
human body habitats based on clustering results
Recall that metagenomic samples were clustered in terms of their co-occurrence frequency in the base clustering results. Hence, the final output clusters assembled samples to represent unique microbial patterns that are the consensus of the base clustering approaches. Next, from the clustering results, we infer how microbial patterns are influenced by body habitats and host genders.
Structural variation across body habitats: Through analyzing the enrichment of body habitat and host gender over the six predicted clusters, the results in Figure 7 revealed a stronger coherence by body habitat than by host gender. The clusters dominated by particular body habitats suggested that these body habitats harbour distinctive microbial patterns, which was also observed in the base clustering results in Additional file 2: Figure S3. Although the four base clustering algorithms generate clustering patterns with different criteria, most clusters in Additional file 2: Table S2 were enriched with particular habitats.
Meanwhile, we observed that microbial communities at different body habitats exhibited different degrees of compositional structural variation. Figure 7 showed that
microbial structure remained relatively stable in oral
cavity, compared with diverse microbial structures harboured in skin. It was biologically reasonable to detect
diverse patterns on skin, since there were quite different
places where skin microbial communities could be
sampled.
Different extents of habitat structural variation were also observed in the base clustering results. In Additional file 2: Figure S2, the gut and oral cavity microbial community patterns each fit only one clustering criterion, gut being consistent with K-means and the oral cavity with
Figure 7 Sample distribution on predicted clusters with respect to body habitat and host gender.
hierarchical clustering. In contrast to gut and oral cavity, skin-enriched clusters could be recognized by all four clustering criteria in all experimental settings, suggesting that skin samples have many cluster patterns with diverse microbial structures.
Note that the proposed Meta-EC generates more comprehensive community patterns with respect to the meta-data, since our result is an agreement by consensus of multiple base clustering approaches. For example, compared with the hierarchical and EM clustering results in Additional file 2: Figure S3, which only capture a male-gut cluster, ensemble clustering is able to uncover female-gut specific clusters (shown in Figure 7), indicating that Meta-EC can reveal the degree of structural variation over body habitats more comprehensively than the base clustering results.
Structural variation across host gender: We further assessed microbial structure variation with respect to host gender. Meta-Storms [27] was used to measure the similarity of two metagenomic samples. The results in Figure 8 indicated that, over all habitats, variation was significantly less within same-gender samples than between opposite-gender samples. However, the habitats exhibit different degrees of structural variation with respect to host gender. The oral cavity microbiome exhibited a stable structure both among same-gender and opposite-gender individuals (both above 92% phylogenetic structure similarity), and skin communities had no unique structural variation pattern with regard to host gender. Gut community structure was highly variable between samples from opposite-gender hosts (less than 90% similarity for opposite-gender samples of gut cluster 3), but exhibited strong coherence for same-gender hosts. On the other hand, the enrichment study in Figure 7 showed that the two gut clusters were distinct with respect to host gender, indicating that individuals of opposite sex may exhibit distinct microbial compositions in the gut.
Microbial interconnection over habitats: Although
microbial communities reflected unique structures (distributions) over body habitats, the interconnected
microbial components among the body habitats were
still observed in the clustering results. For example,
cluster 1 in Figure 7 contained 10 skin samples that
shared similar microbial compositions with oral cavity
communities, while skin cluster 2, 4 and 6 harboured 6,
15 and 2 oral cavity samples respectively. Since skin
microbial pattern was closely associated with external
Figure 8 Structural variation over host gender in oral cavity,
gut and skin-dominated clusters.
environment [34] and oral cavity was an open system
where microbiome from external environment was
imported by breathing, eating food and drinking water
[35], oral cavity and skin would respond to outside
environmental conditions, and gradually evolve similar
microbiomes.
Conclusions and discussions
The human microbiome comprises the microbial communities hosted in the gut, the oral mucosa, the multiple layers of the skin, and other habitats. These organisms perform ecosystem-level functions that help the human host maintain health, yet the detailed factors that shape microbial community structures across human body habitats and host genders remain poorly conceptualized. To fully understand the roles of the human microbiome in disease and health, prior studies focused on particular body habitats of healthy individuals with specific clustering approaches, based on the assumption that metagenomic samples of the same body habitats would develop similar microbial structural patterns. However, human habitats are not isolated; they interact and correlate to form an integrated and complex system. Moreover, identified structures might be unreliable due to noisy sample similarities and the specific topological structure of the metagenomic network. Hence, a single clustering algorithm rarely achieves an optimal outcome. To uncover a global and comprehensive landscape of the human microbiome, we applied an ensemble clustering framework, Meta-EC, to large-scale metagenomic samples.
In this study, our proposed Meta-EC algorithm has four main advantages for microbial pattern detection: (1) Meta-EC can effectively identify more reliable microbial communities by integrating many base clustering results; (2) with regard to the modularity of microbial communities, defined as the clustering of microbial communities according to the effects of their related environments or treatments (meta-data), the consensus clustering network is much clearer at showing such modularity (such as how environments, or meta-data, shape microbial communities in body habitats, which is critical to healthcare and diagnosis) than the original metagenomic similarity network [25]; (3) the ensemble framework is robust to the coverage of the metagenomic similarity network (as shown in Additional file 2: Figure S4); and (4) compared with the base clustering results in Additional file 2: Figure S3, the Meta-EC algorithm can reveal the spatial and gender patterns of the microbiome (as shown in Figure 7) more comprehensively, as the ensemble clustering result is a general agreement among multiple base clustering approaches.
Nevertheless, it should be acknowledged that the performance of our algorithm depends on the base clustering results and on the quality of the original metagenomic similarity network. If all of these base results were generated by poor clustering algorithms, the ensemble outputs would be far from the real microbial community similarity patterns. If the original similarity network is unreliable for capturing the modularity of metagenomic samples, none of the clustering approaches could work. To address this problem, we have to integrate more base clustering approaches with diverse optimization criteria and pattern assumptions, to reduce the bias generated by the base approaches. We assume these algorithms can capture a wide variety of clustering patterns in the similarity network and thereby alleviate the effect of unreliable clustering results. On the other hand, the proposed NMF-based model, which could also be used in association studies in the bioinformatics domain [36-39], is a more complex method to implement, and convergence can be slow, as shown in Table 3. With the rapid development of computational capability, we could improve the time efficiency of these operations. Moreover, the nonnegative constraints on the cluster indicator matrix H may be an insufficient condition for achieving sparseness in some cases [20]; one may then set appropriate thresholds to enforce sparseness. In summary, Meta-EC is an ensemble clustering framework for large-scale metagenomic data analysis and microbial community pattern detection. In the future, the NMF-based model could be exploited to offer potential applications to bipartite models of drug-target association [40] and to disease gene prediction [41].
Availability
The data sets and supporting experimental results of
this article are available for download from http://
datam.i2r.a-star.edu.sg/MetaEC.
Additional material
Additional file 1: Experimental Design. The file shows the experimental design of this paper, including: (1) an introduction to the four base clustering approaches; (2) the evaluation of microbial clusters; (3) the parameter settings.
Additional file 2: Supplementary Material. The file presents several figures, tables and additional experimental results mentioned in this paper, including: (1) the efficiency of the GPU-Meta-Storms algorithm; (2) the evaluation of the four base clustering results; (3) the sensitivity study of phylogenetic structure similarity on the microbial network.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Conceptualized and designed the method and drafted manuscript: PY KN.
Responsible for the implementation: PY XS LOY. Provided raw data: KN XS.
Participated in discussion and improved the method as well as revised the
draft: XS LOY H-NC X-LL. Read and approved the manuscript: PY XS LOY HNC X-LL KN.
Acknowledgements
This work is supported in part by Chinese Academy of Sciences’ e-Science
grant INFO-115-D01-Z006, Ministry of Science and Technology’s high-tech
(863) grant 2009AA02Z310 and 2014AA21502, as well as National Science
Foundation of China grant 61103167 and 31271410.
Declarations
Publication costs for this article were partially funded by Chinese Academy
of Sciences’ e-Science grant INFO-115-D01-Z006, Ministry of Science and
Technology’s high-tech (863) grant 2009AA02Z310 and 2014AA21502, as
well as National Science Foundation of China grant 61103167 and 31271410
and by the Institute for Infocomm Research, Agency for Science, Technology
& Research (A*STAR), Singapore.
This article has been published as part of BMC Systems Biology Volume 8
Supplement 4, 2014: Thirteenth International Conference on Bioinformatics
(InCoB2014): Systems Biology. The full contents of the supplement are
available online at http://www.biomedcentral.com/bmcsystbiol/supplements/
8/S4.
Authors’ details
1 Institute for Infocomm Research, Agency for Science, Technology & Research (A*STAR), Singapore, 138632, Singapore. 2 Computational Biology Group of Single Cell Center, Shandong Key Laboratory of Energy Genetics and CAS Key Laboratory of Biofuels, Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Science, Qingdao 266101, China. 3 Center for Computer Vision and Department of Mathematics, Sun Yat-Sen University, Guangzhou, 510275, China.
Published: 8 December 2014
References
1. Wilson M: Bacteriology of humans: an ecological perspective. John Wiley &
Sons 2009.
2. Dethlefsen L, McFall-Ngai M, Relman DA: An ecological and evolutionary
perspective on human-microbe mutualism and disease. Nature 2007,
449(7164):811-818.
3. Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett CM, Knight R, Gordon JI:
The human microbiome project: exploring the microbial part of
ourselves in a changing world. Nature 2007, 449(7164):804-810.
4. Lederberg J: Infectious history. Science 2000, 288(5464):287-293.
5. Eckburg PB, Bik EM, Bernstein CN, Purdom E, Dethlefsen L, Sargent M, Gill SR, Nelson KE, Relman DA: Diversity of the human intestinal microbial flora. Science 2005, 308(5728):1635-1638.
6. Fierer N, Hamady M, Lauber CL, Knight R: The influence of sex, handedness, and washing on the diversity of hand surface bacteria. Proceedings of the National Academy of Sciences 2008, 46:17994-17999.
7. Aas JA, Paster BJ, Stokes LN, Olsen I, Dewhirst FE: Defining the normal bacterial flora of the oral cavity. Journal of Clinical Microbiology 2005, 43(11):5721-5732.
8. Nasidze I, Quinque D, Li J, Li M, Tang K, Stoneking M: Comparative analysis of human saliva microbiome diversity by barcoded pyrosequencing and cloning approaches. Analytical biochemistry 2009, 391(1):64-68.
9. Turnbaugh PJ, Hamady M, Yatsunenko T, Cantarel BL, Duncan A, Ley RE, Sogin ML, Jones WJ, Roe BA, Affourtit JP, et al: A core gut microbiome in obese and lean twins. Nature 2009, 7228:480-484.
10. Grice EA, Kong HH, Conlan S, Deming CB, Davis J, Young AC, NISC Comparative Sequencing Program, Bouffard GG, Blakesley RW, Murray PR, et al: Topographical and temporal diversity of the human skin microbiome. Science 2009, 324(5931):1190-1192.
11. Bik EM, Long CD, Armitage GC, Loomer P, Emerson J, Mongodin EF, Nelson KE, Gill SR, Fraser-Liggett CM, Relman DA: Bacterial diversity in the oral cavity of 10 healthy individuals. The ISME journal 2010, 4(8):962-974.
12. Mitreva M: Structure, function and diversity of the healthy human microbiome. Nature 2012, 486:207-214.
13. Costello EK, Lauber CL, Hamady M, Fierer N, Gordon JI, Knight R: Bacterial community variation in human body habitats across space and time. Science 2009, 5960:1694-1697.
14. Lozupone C, Hamady M, Knight R: UniFrac-an online tool for comparing microbial community diversity in a phylogenetic context. BMC Bioinformatics 2006, 7:371.
15. Kent AD, Yannarell AC, Rusak JA, Triplett EW, McMahon KD: Synchrony in aquatic microbial community dynamics. The ISME journal 2007, 1(1):38-47.
16. Zinger L, Coissac E, Choler P, Geremia RA: Assessment of microbial communities by graph partitioning in a study of soil fungi in two alpine meadows. Applied and environmental microbiology 2009, 75(18):5863-5870.
17. Lloyd SP: Least squares quantization in PCM. IEEE Transactions on Information Theory 1982, 28:129-137.
18. Szekely GJ, Rizzo ML: Hierarchical clustering via Joint Between-Within Distances: Extending Ward’s Minimum Variance Method. Journal of classification 2005, 22(2):151-183.
19. Moon TK: The expectation-maximization algorithm. IEEE Signal processing magazine 1996, 13(6):47-60.
20. Devarajan K: Nonnegative matrix factorization: an analytical and interpretive tool in computational biology. PLoS computational biology 2008, 4(7):e1000029.
21. Qi Q, Zhao Y, Li M, Simon R: Non-negative matrix factorization of gene expression profiles: a plug-in for BRB-ArrayTools. Bioinformatics 2009, 25(4):545-547.
22. Zhang S, Li Q, Liu J, Zhou XJ: A novel computational framework for simultaneous integration of multiple types of genomic data to identify microRNA-gene regulatory modules. Bioinformatics 2011, 27(13):i401-i409.
23. Ou-Yang L, Dai DQ, Zhang XF: Protein complex detection via weighted ensemble clustering based on Bayesian nonnegative matrix factorization. PloS One 2013, 8(5):e62158.
24. Lancichinetti A, Fortunato S: Consensus clustering in complex networks. Scientific reports 2012, 2.
25. Kuang D, Park H, Ding CH: Symmetric Nonnegative Matrix Factorization for Graph Clustering. SDM 2012, 12:106-117.
26. Caporaso JG, Lauber CL, Costello EK, Berg-Lyons D, Gonzalez A, Stombaugh J, Knights D, Gajer P, Ravel J, Fierer N, et al: Moving pictures of the human microbiome. Genome Biol 2011, 12:R50.
27. Su X, Xu J, Ning K: Meta-Storms: Efficient Search for Similar Microbial Communities Based on a Novel Indexing Scheme and Similarity Score for Metagenomic Data. Bioinformatics 2012, 28(19):2493-2501.
28. Kullback S: Letter to the Editor: The Kullback-Leibler distance. The American Statistician 1987, 41(4):340-341.
29. Psorakis I, Roberts S, Sheldon B: Soft partitioning in networks via bayesian non-negative matrix factorization. Adv Neural Inf Process Syst 2010.
30. Tan VY, Févotte C: Automatic relevance determination in nonnegative matrix factorization. In SPARS’09-Signal Processing with Adaptive Sparse Structured Representations 2009.
| 5 |
Dynamic Loop Parallelisation
Adrian Jackson∗ and Orestis Agathokleous∗
∗EPCC, The University of Edinburgh, Kings Buildings, Mayfield Road, Edinburgh, EH9 3JZ, UK
arXiv:1205.2367v1 [cs.PL] 10 May 2012
Abstract—Regions of nested loops are a common feature of
High Performance Computing (HPC) codes. In shared memory
programming models, such as OpenMP, these structures are
the most common source of parallelism. Parallelising these
structures requires the programmers to make a static decision
on how parallelism should be applied. However, depending on
the parameters of the problem and the nature of the code,
static decisions on which loop to parallelise may not be optimal,
especially as they do not enable the exploitation of any runtime
characteristics of the execution. Changes to the iterations of the
loop which is chosen to be parallelised might limit the number of processors that can be utilised.
We have developed a system that allows a code to make a
dynamic choice, at runtime, of what parallelism is applied to
nested loops. The system works using a source to source compiler,
which we have created, to perform transformations to the user's code automatically, through a directive-based approach (similar
to OpenMP). This approach requires the programmer to specify
how the loops of the region can be parallelised and our runtime
library is then responsible for making the decisions dynamically
during the execution of the code.
Our method for providing dynamic decisions on which loop
to parallelise significantly outperforms the standard methods for
achieving this through OpenMP (using if clauses) and further
optimisations were possible with our system when addressing
simulations where the number of iterations of the loops change
during the runtime of the program or loops are not perfectly
nested.
I. I NTRODUCTION
High Performance Computing (HPC) codes, and in particular scientific codes, require parallel execution in order to
achieve large performance increases. Depending on
the underlying parallel platform which is used, programmers
use different programming models in order to achieve parallel
execution. In distributed memory systems, the message passing
programming model is the most commonly used approach for
applying parallelism in the codes. In shared memory systems, however, an attractive choice for parallel programming is OpenMP [16].
The parallelisation of codes with OpenMP is often achieved
with loop parallelisation. As long as the iterations of a loop are
independent, they can be distributed to the available processors
of the system in order to execute them in parallel. A programmer is required to specify a loop that can be parallelised
by placing compiler directives before the loop, resolving any
dependency issues between the iterations beforehand. HPC
codes often consist of regions with nested loops of multiple
levels. In order to parallelise these regions, a choice must be
made on how parallelism should be applied on the loops.
Even though OpenMP supports a variety of strategies for
parallelising nested loops, only a single one can be used to
parallelise the code.
A static choice, however, cannot exploit any runtime characteristics during the execution of the program. Changes
in the input parameters of the executable which affect the
iterations of the loops may render the parallelisation decision
suboptimal. In addition to this, the iterations of a loop can
change at runtime due to the nature of the code. A common
feature of HPC codes is to organise the data into hierarchies,
for example blocks of multi-dimensional arrays. Depending
on the problem, the blocks can have different shapes and
sizes. These parameters affect the loops that are responsible
for accessing this data. In some situations, a static decision
has the potential to impose a limitation on the number of processors that can be used for the parallel execution of the loops. With the current trend of chip manufacturers to increase the number of cores in the processors in each generation, leading to larger and larger shared memory systems being
readily available to computational scientists on the desktop
and beyond, a more dynamic approach must be considered
for taking such decisions.
This report outlines our investigations into various strategies
that can be applied at runtime in order to make a dynamic
decision on how to parallelise a region with nested loops.
Our approach is to try to automatically perform modifications
to user code before compilation in order to enable the code
to make these decisions dynamically at runtime. Specifically,
we investigated the possibility of having multiple versions
of a loop within a region of nested loops in order to make
a dynamic choice on whether a loop should be executed sequentially or in parallel.
II. OPENMP
OpenMP [1] is, arguably, the dominant parallel programming model currently used for writing parallel programs for use on shared memory parallel systems. Now at version
3.1, and supported by C and FORTRAN, OpenMP operates
using compiler directives. The programmer annotates their
code specifying how it should be parallelised. The compiler
then transforms the original code into a parallel version
when the code is compiled. By providing this higher level
of abstraction, OpenMP codes tend to be easier to develop,
debug and maintain. Moreover, with OpenMP it is very easy
to develop the parallel version of a serial code without any major modifications.

TABLE I
STRATEGIES FOR PARALLELISING NESTED LOOP REGIONS

Name              Description
Outermost Loop    Parallelisation of the outermost loop
Inner Loop        Parallelisation of one of the inner loops
Nested            Parallelisation of multiple loops with nested parallel regions
Loop Collapsing   Collapsing the loops into a single big loop
Loop Selection    Runtime loop selection using if clauses
Whilst there are a number of different mechanisms that
OpenMP provides for adding parallel functionality to programs, the one that is generally used most often is loop
parallelisation. This involves taking independent iterations of
loops and distributing them to a group of threads that perform
these sets of independent operations in parallel. Since each of
the threads can access shared data, it is generally straightforward to parallelise any loop with no structural changes to the
program.
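As a minimal, self-contained illustration of this (our own example, not one of the paper's listings), the independent iterations of a simple loop can be distributed across a thread team as follows:

#include <stdio.h>
#include <omp.h>

int main(void) {
    double sum = 0.0;
    /* Independent iterations are shared out between the threads; the
       reduction clause combines the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000000; i++) {
        sum += 0.5 * i;
    }
    printf("sum = %f, using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}

Compiled with an OpenMP-enabled compiler (for example gcc -fopenmp), the directive causes the loop to execute in parallel with no structural changes to the serial code.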
III. NESTED LOOPS
HPC codes, and particularly scientific codes, deal with numerical computations based on mathematical formulas. These
formulas are often expressed in the form of nested loops,
where a set of computations is applied to a large amount
of data (generally stored in arrays) and parallelisation can
be applied to each loop individually. The arrays often consist of multiple dimensions and the access on the data is
achieved with the presence of nested loops. Furthermore it
is not uncommon that the arrangement of the data is done
in multiple hierarchies, most commonly in blocks with multidimensional arrays, where additional loops are require in order
to traverse all the data. When such code is presented, a choice
must be made on which loop level to parallelise (where the
parallelisation should occur) [2]. A summary of the available
strategies is presented in Table I.
A. Outermost loop
The most commonly used approach is to parallelise the
outermost loop of a nested loop region, as shown in Listing
1. Using this strategy, the iterations of the loop are distributed
to the members of the thread team. The threads operate in
parallel by executing the portion of iterations assigned to them individually. The nested loops of the parallel region
are executed in a sequential manner.
#pragma omp parallel for private (j)
for(i = 0; i < I; i++){
  for(j = 0; j < J; j++){
    work();
  }
}
Listing 1. Outer loop parallelisation of a nested loop region
Parallelising the outermost loop is often a good choice, as
it minimises the parallel overheads of the OpenMP implementation (such as the initialisation of the parallel region, the
scheduling of loop iterations to threads and the synchronisation
which takes place at the end of the Parallel loops). More
extensive work on the overheads of various OpenMP directives
can be found in [3].
Despite the advantages of the Outermost Loop parallelisation strategy in this context, there are drawbacks of this choice.
The maximum amount of available parallelism is limited by
the number of iterations of the outermost loop. Considering
the example code in Listing 1, it is only possible to have I
tasks being executed in parallel. This restricts the number of
threads the code can utilise upon execution, and therefore the
number of processors or cores that can be exploited.
B. Inner loop
This is a variant on the outermost loop strategy, with the
difference that one of the inner loops of the region is chosen
to be parallelised. This approach will only be required or
beneficial if the outer loop does not have enough iterations
to parallelise efficiently as this variant on the parallelisation
strategy introduces parallelisation overheads by requiring the
parallelisation to be performed for each loop of the outerloop
rather than once for all the loops (as shown in Listing 2).
Further nesting of the parallelisation (at deeper loop levels)
will further increase the performance problems; the parallel
overheads appear a lot more times, whereas the amount of
work of each iteration becomes finer.
for(i = 0; i < I; i++){
  #pragma omp parallel for shared (i)
  for(j = 0; j < J; j++){
    work();
  }
}
Listing 2. Inner loop parallelisation of a nested loop region
Another issue with this strategy is the scenario where loops
are not perfectly nested. In this situation, when there are
computations in-between the loops, as shown in Listing 3,
parallelising a loop of a deeper level will result in sequential
execution of that work. Depending on the amount of the
execution time which is now serialised, this approach has the
potential to increase the execution time of the code.
for(i = 0; i < I; i++){
  somework();
  for(j = 0; j < J; j++){
    otherwork();
  }
}
Listing 3. Poorly nested loop region example
C. Nested
The Nested parallelisation strategy exploits the fact that
more than one loop can be executed in parallel. By opening
multiple nested parallel regions at different levels of loops, as
presented in Listing 4, more threads can be utilised during the
parallel execution of the code.
Unlike the Outermost Loop and the Inner Loop approaches,
which can only utilise as many threads as the iterations of the
loop with the biggest number of iterations, this strategy can
exploit further parallelisation opportunities. Other studies have
shown that nested parallelism can give good results on systems
with a large number of processors [4] [5].
#pragma omp parallel for private (j)
for(i = 0; i < I; i++){
  #pragma omp parallel for shared (i)
  for(j = 0; j < J; j++){
    work();
  }
}
Listing 4. Nested loop parallelisation of a nested loop region
D. Loop Collapsing
The loop collapsing strategy takes a different approach for
exposing additional parallelism within nested loop regions. By
performing code transformations, multiple nested loops are
combined, or collapsed, into a single loop. The newly created
loop has a larger amount of iterations, which can be distributed
to the threads.
As of version 3.0, OpenMP supports loop collapsing by
using the COLLAPSE clause in the Loop Construct, requiring
the programmer to provide the number of loop levels to
collapse. To be able to use the COLLAPSE clause the loops
have to be perfectly nested (i.e. no code between the loops)
and the number of loop iterations (when multiplied together)
need to be able to be divided regularly. Loop collapsing can produce better results than both the inner loop and nested loop strategies, since the parallel overheads are minimal; however, it is not always available, either because not all compilers support OpenMP version 3.0, or because the conditions outlined
above cannot be met.
#pragma omp parallel for collapse (3)
for(i = 0; i < I; i++){
  for(j = 0; j < J; j++){
    for(k = 0; k < K; k++){
      work();
    }
  }
}
Listing 5. Parallelisation of a nested loop region with loop collapsing
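Where the COLLAPSE clause is unavailable or its conditions cannot be met, a similar effect can sometimes be obtained by collapsing the loops by hand. The following sketch is ours (not one of the paper's listings) and assumes the two loops are perfectly nested with fixed bounds I and J:

#pragma omp parallel for
for(n = 0; n < I * J; n++){
  int i = n / J;   /* recover the original outer index */
  int j = n % J;   /* recover the original inner index */
  work();          /* i and j would be used by the loop body as before */
}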
E. Loop Selection

OpenMP already provides a way of forcing a parallel region to execute sequentially with the use of the if clause on OpenMP directives. The if clause, of the form if (scalar-expression), is used to determine at runtime whether the code enclosed in the parallel region should execute sequentially or in parallel. When the scalar expression of the clause evaluates to 0, the region is executed sequentially; any other value will result in parallel execution. However, a new parallel region is always created in either case. The presence of the if clause only affects the number of threads that get assigned to the parallel region. When sequential execution is triggered, the code is only executed by the master thread; for parallel execution, all threads execute the code.

Furthermore, with the if clause, programmers are still required to manually write code which makes the decision, construct sensible scalar-expressions to be evaluated, and manually parallelise each loop that is a potential target for parallelisation.
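For illustration (our own example, not from the paper), the outer loop of Listing 1 could be parallelised only when it has at least as many iterations as there are threads available:

#pragma omp parallel for private (j) if (I >= omp_get_max_threads())
for(i = 0; i < I; i++){
  for(j = 0; j < J; j++){
    work();
  }
}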
IV. DYNAMIC LOOP PARALLELISATION
One of the motivators for this work was a parallelisation
that was undertaken of a finite-volume cell-centred structured
Navier-Stokes code for undertaking Computational Fluid Dynamics (CFD) simulation. It is a structured mesh, multigrid,
code which works with multiblock grids, and includes a range
of CFD solvers including: steady state, time-domain dual time-stepping, frequency-domain harmonic balance, and time-domain Runge-Kutta. The general pattern for the computations
within the code is shown in Listing 6. Whilst this type of
computational pattern is not uncommon for scientific codes
one of the challenges in the parallelisation is that as the
code can use a range of different methods, as previously
outlined, the range of these loops can vary. For instance
when performing a time domain simulation the harmonic loop
has a single iteration. However, when performing a harmonic
balance simulation it can have a range of values, generally
between 2 and 16. Furthermore, it is not uncommon to run
large simulations with a single block, or a small number of
blocks, meaning that the block loop has a very small number
of iterations. Finally, each block in the simulation can have
different values for its dimensions.
In theory, the loop collapsing strategy would be ideal for this
type of simulation code as this would enable parallelisation
without having to deal with the varying sizes of the nested
loops. However, it cannot be guaranteed that for all input
datasets the loop iterations can be regularly divided, and there
are also particular areas of the code where the loops are not
perfectly nested.
for(iter = 0; iter < n_iters; iter++){
  for(block = 0; block < n_blocks; block++){
    for(harmonics = 0; harmonics < n_harmonics; harmonics++){
      for(j_cell = 0; j_cell < n_cells_j; j_cell++){
        for(i_cell = 0; i_cell < n_cells_i; i_cell++){
          perform computations;
        }
      }
    }
  }
}
Listing 6. Example scientific code loops
Given the different techniques that can be used to parallelise
nested loops, the occurrence of nested loops in many scientific
simulation codes, and the fact that the loop iterations of nested
loops can change for different input datasets of a code or
when performing different functions with a code, we wanted
a system that enabled the selection of different parallelisation
choices to be available to code at runtime when the specific
ranges of the nested loops are known.
Our strategy for providing this functionality is to create
code, based on the provided user code, that can perform a
parallelisation of any of the nested loops and add decision
making algorithms to dynamically choose, at runtime, which
parallelisation is used. Specifically, we have created tools that
create multiple versions of a loop within a region of nested
loops in order to make a dynamic choice on whether a loop
should execute sequentially or in parallel.
In general, code duplication is considered bad programming practice as it can, amongst other issues, lead to update
anomalies (where not all instances of the functionality are
modified when modifications occur) and thus damage the
maintainability of the code. However, if the duplicate code
(in our instance the serial and parallel versions of each loop
in the nested loop structure) can be generated automatically
for standard user code then it will not adversely affect the
maintainability of the user program.
We created a source-to-source compiler that recognises
compiler directives within user’s source code and uses them to
pre-process the source code and generate a program that has
the alternative parallelisation strategies encapsulated within it.
By exposing a simple interface to the programmers through
compiler directives, which are similar to the already familiar
OpenMP compiler directives, we can automatically provide
the dynamic parallelisation functionality for users without
requiring significant changes to the original source code.
Furthermore, this approach provides the users the choice of
enabling or disabling our functionality with minimum effort.
To complement the code duplication we have also implemented functionality (in a small runtime library) that produces
the code which is responsible for deciding what parallelisation
to perform automatically. The decision functionality considers
the number of iterations of a loop in order to choose a parallelisation strategy that makes best use of the processors or cores available. Our implementation is currently limited to parallelising a single loop of a nested loop region, taking advantage of only the Outermost and Inner Loop strategies.
Other authors [2] have already taken a similar approach
by modifying the OpenMP runtime library in order to make
these decisions dynamically. However, applying this logic in
the OpenMP runtime library would have limited the implementation to a specific compiler. Using our source-to-source compiler approach we are aiming to transfer the logic into user code in order to maintain the portability of our solution.
In addition to simple heuristics, we also explored the idea
of a profile-based approach at runtime in order to detect the
best possible parallelisation strategy with time measurements.
A heuristics based approach alone cannot capture any information on the amount of the actual computations when making
a decision on parallelising a loop. Whilst this is generally
irrelevant for perfectly nested loops (as all the work is in
the lowest loop), it may have more of an impact where there
is work between the different loops as well. There may also
be situations where a different inner loop has slightly more
iterations than an outer loop so could be chosen by a simple
heuristic as the place where the parallelisation occurs but the
overheads associated with parallelising that inner loop actually
make this a suboptimal choice. Providing a profiling based
decision mechanism may help with both these scenarios, and
enable us to identify situations where, for instance, using less
threads to parallelise an outer loop might provide a better
execution time. The idea of an auto-tuning code has already been proposed by other compiler-related researchers [6], [7] for producing optimised code; we apply similar logic.

Fig. 1. Compilation process using the source-to-source compiler
V. SOURCE-TO-SOURCE COMPILER
Our source-to-source compiler acts as a preprocessor to C
code which can contain OpenMP directives, as well as our
own directives. The compiler parses the code, and creates
an internal representation of the code in the form of an
Abstract Syntax Tree (AST). The regions of the input code
that contain our directives are translated into the semantics of
the C programming language and OpenMP directives during
the parse phase, and appropriate nodes for these regions are
placed in the AST. The created AST is then translated back
to C code with OpenMP directives. This generated code is
then compiled using a standard, OpenMP enabled, C compiler
to produce a parallel executable (this process is illustrated in
Figure 1).
Our compiler, implemented using the Lua [8] programming
language along with the Lpeg [9] parsing library, recognises
a number of our own bespoke compiler directives of the
form #pragma preomp. A loop that is preceded by a
#pragma preomp f or directive is considered by our compiler as a suitable candidate for applying parallelisation. When
such a loop is found, our compiler performs the necessary code
transformations so that a decision can be made at runtime
whether the loop should run sequentially or in parallel (and to
ensure that both the sequential and parallel versions of the loop
are available in the executable at runtime). In addition to this,
a simple analysis of the loop is performed in order to facilitate
the computation of a loop's iterations during the making of the
decision. An example of such a code is presented in Listing
7.
#pragma preomp parallel for private(j)
for(i=0; i<I; i++){
  #pragma preomp parallel for shared(i)
  for(j=0; j<J; j++){
    work();
  }
}
Listing 7. A nested loop region with preomp
Furthermore we also extend the grammar to support an additional clause, the parallel threshold(expression) clause.
This is optional, and when it is not present the compiler will
assume a default value of 1.0. This clause is used to allow
control over when a loop is parallelised, and will be discussed
further in Section VI.
A. Code Duplication
The main function of the source-to-source compiler is to
take the original user code and duplicate the loops to be
parallelised so that there are both serial and parallel versions
of those loops that can be selected at runtime. As previously
mentioned our system only allows one loop to be parallelised
at any given time (although which loop is parallelised can
change over the runtime of a program as the parameters of
the loop change), but both the serial and parallel versions of
all the loops to be parallelised must appear in the executable
to enable a selection at runtime to take place.
When a loop is preceded by a #pragma preomp for directive, the loop is duplicated and wrapped in a normal if-else statement which evaluates a decision function from our runtime library and selects the if or else branch based on the outcome of the evaluation.
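For example, the outer loop of Listing 7 might be transformed into something like the sketch below; the decision function name and its arguments are our own illustration, as the paper does not show the exact interface that is generated:

if (preomp_decide(0 /* hypothetical loop id */, (long)I /* iterations */)) {
  /* parallel version of the loop */
  #pragma omp parallel for private (j)
  for(i = 0; i < I; i++){
    for(j = 0; j < J; j++){
      work();
    }
  }
} else {
  /* serial version of the same loop */
  for(i = 0; i < I; i++){
    for(j = 0; j < J; j++){
      work();
    }
  }
}

In the full transformation the inner loop would be wrapped in the same way, so that either level can be selected at runtime.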
B. OpenMP if
As a comparison to our code duplication approach we also
implemented the same functionality using the existing if clause
of the OpenMP Parallel Construct. Our custom directive is
translated into an OpenMP Parallel For directive, with an
attached if clause in order to decide whether to execute the
loop in parallel or not (rather than a serial and parallel version
of the loop). The expression of the if clause consists of a
call to a decision function of our runtime library, which takes
the evaluated expressions of the loops information in order
to make a decision. This functionality was included to allow
a comparison of our approach to the standard method that
developers could currently use to provide dynamic selection
of parallelism with OpenMP.
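In this mode the region of Listing 7 would be translated along the following lines (again a sketch with a hypothetical decision function, not the compiler's literal output):

#pragma omp parallel for private (j) if (preomp_decide(0, (long)I))
for(i = 0; i < I; i++){
  #pragma omp parallel for shared (i) if (preomp_decide(1, (long)J))
  for(j = 0; j < J; j++){
    work();
  }
}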
However, a major drawback of this approach (and the reason
we do not use it for our functionality) is that a parallel region
will be created regardless of whether a loop is parallelised or
not. Considering the example in Figure 2, parallelising the
outer loop of two nested loops with two threads will result
in three parallel regions. Each thread of the outer region will
create a new parallel region and become its master. In the
case of the inner loop being parallelised, two parallel regions
are created. For nested regions with a larger number of loops
this method has the potential to produce excessive parallel
overheads.

Fig. 2. An example of using the if clause to parallelise (a) the outer and (b) the inner loop of two nested loops with two threads
VI. DECISION FUNCTIONS AND THE RUNTIME LIBRARY
The runtime library implements the logic for deciding which
version of a loop is chosen during execution. Once a code has
been processed by the source-to-source compiler it must then
be linked with our runtime library to enable this functionality
to be used.
A. Decision Based On Heuristics
Here we use heuristics, based on information collected at
runtime, to decide whether a loop should execute sequentially
or in parallel. The idea of this approach is to look for the first
loop that has enough iterations to utilise all of the available
threads, based on the assumption that parallelising outer loops
is more efficient than parallelising inner loops as the amount
of parallel overheads should be lower (as the OpenMP parallel
regions are encountered less frequently).
Before the execution of a loop, the decider checks whether
a loop of an outer level is already running in parallel. If this
condition is met, then the loop is serialised. In the case that
no outer loop is running in parallel the number of iterations
of the loop is calculated and it is divided with the available
number of threads. If this results in a value that is greater than
or equal to a specified threshold, then the parallel version of a
loop is chosen, otherwise the loop is serialised. As discussed
in Section V, the default value of the threshold is 1 (there
must be no idle threads) although this can be controlled by
the user.
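A sketch of this rule in C is given below; the function name and signature are illustrative only, as the paper does not list the runtime library's actual interface:

#include <omp.h>

/* Returns 1 if the loop should run in parallel, 0 if it should be serialised. */
int heuristic_decide(long iterations, double parallel_threshold)
{
    if (omp_in_parallel())      /* an outer loop is already running in parallel */
        return 0;
    int threads = omp_get_max_threads();
    return ((double)iterations / threads) >= parallel_threshold;
}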
The calculation of the iterations is based on the parameters
of the loop which are extracted by the source to source compiler and are provided as arguments to the decision function.
In the case that the original code of the loop uses variables for
its boundaries, any change in their value will also be captured
by the decision function during the calculation. This design
allows constant monitoring of any changes in the iterations
of the loops which also results in dynamic adaptation of the
parallelisation strategy during the execution of the program.
The algorithm is very simple and has minimal overheads.
Moreover, there is no need to maintain any state for the
loops. However, the logic which is used by the function is
based on optimism. It only considers the amount of parallelism
exposed by the loop regardless of whether the amount of work
of the loop is big enough to justify any overheads of the
parallelisation or whether there is any work between loops.
B. Decision Based On Heuristics With Profiling
To address the potential issue with the basic decision based
on heuristics previously discussed we also implemented a
more complex decision function based on both the size of
loops and some evaluation of the work in the loops. In
the same manner as the heuristics decider, it uses the same
information extracted by the source to source compiler in order
to determine whether the loop should be parallelised or not.
However, if a loop does not meet the conditions, then the
function reverts to a profiling mode in order to decide which
version of the loop, serial or parallel, to choose from based
on timings.
The first time a loop is executed, the heuristics decider
determines if the loop should be parallelised. If the conditions
are not met, the sequential version of the loop is chosen and
profiling is enabled for this loop. At the next execution of
the loop, the evaluation of the heuristics is still performed.
If the conditions are still not met (for example there were no changes in the iterations of the loop), the loop is now
parallelised since at this point we only have timing information
for the serial version. Consecutive executions of the loop will
first check the heuristics conditions, falling back to profiling
mode if the condition is not satisfied. However, the function
will detect that timings for both versions are available and
utilise the information gathered from profiling to decide what
loop to parallelise (providing the number of iterations of the
loop have not changed), with the fastest version chosen as the
final decision. In contrast to this, if the amount of work is not
the same (i.e. the number of loop iterations has changed) the
timings get invalidated, and profiling is re-initiated.
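A rough sketch of this logic is shown below; the data structure and function are our own reconstruction of the description, not the paper's actual runtime library code:

#include <omp.h>

typedef struct {
    double serial_time;    /* measured time of the serial version    */
    double parallel_time;  /* measured time of the parallel version  */
    long   last_iters;     /* iteration count the timings belong to  */
} loop_profile;

/* Returns 1 to run the parallel version of the loop, 0 to run the serial one. */
int profile_decide(loop_profile *p, long iterations, double parallel_threshold)
{
    if (!omp_in_parallel() &&
        (double)iterations / omp_get_max_threads() >= parallel_threshold)
        return 1;                              /* heuristic satisfied: parallelise     */

    if (iterations != p->last_iters) {         /* loop bounds changed: restart profiling */
        p->serial_time = p->parallel_time = 0.0;
        p->last_iters  = iterations;
    }
    if (p->serial_time == 0.0)   return 0;     /* first visit: time the serial version   */
    if (p->parallel_time == 0.0) return 1;     /* second visit: time the parallel version */
    return p->parallel_time < p->serial_time;  /* afterwards: pick the faster version    */
}

The timings themselves would be recorded around whichever version is executed, for example with omp_get_wtime().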
To implement this functionality requires additional code,
when compared to the basic heuristic decision function. This
will impose an extra overhead to the produced program,
although if the loop iterations are static throughout the run
of a program, the profiling overhead will only be imposed in the first few iterations of the program. Figure 3 outlines this with an example of three nested loops.

Fig. 3. An example of the Heuristics With Profiling Decider on three loops

VII. PERFORMANCE EVALUATION

To evaluate the performance of our new functionality we aimed to benchmark it against standard, static, OpenMP parallelisations with a range of different configurations. In particular, we focussed on varying the number of loop iterations, the amount of work between and within loops, and the number of changes that occur to loop bounds during execution, to evaluate whether and when our approach is beneficial compared to a static parallelisation.

To undertake these benchmarks we used two different codes. The first is a synthetic, configurable, benchmark C code, shown in Listing 8, which we constructed for this evaluation. The number of iterations of each loop can be configured, as can the amount of work that is simulated (by calling the delay function) between the second and third loops, and within the third loop.

for(i=0; i<num_iters; i++){
  for(j=0; j<outer_iters; j++){
    delay(outer_delayreps);
    for(k=0; k<inner_iters; k++){
      delay(inner_delayreps);
    }
  }
}
Listing 8. Synthetic benchmark code
The second benchmark code was an extract from the CFD
code outlined in Listing 6. This code is more complex than
the synthetic benchmark and more representative of realistic
scientific simulation codes. This code is used to explore the
performance of our solution when the loop iterations vary and
when the bounds of loops are dynamic during the course of
the execution of the benchmark (i.e. one or more loops change
their loop bound as the outer loops are progressed).
A. Benchmark Environment
The platform used to evaluate the dynamic loop parallelisation functionality was Ness [10], at EPCC. The system is
composed of two parts: a front-end for development and job
submission and a back-end for job execution. The management
of the two parts is handled by the Sun Grid Engine which
allows submission of jobs from the front-end that must be
executed on the back-end nodes in isolation.
The back-end part of the system is composed of two SUN X4600 Shared Memory nodes. The central processing unit (CPU) of each node is an AMD Opteron processor with 16 2.6GHz processing cores and 32 GB of main memory. Each
core has 64K of L1 cache for data and 64K L1 cache for
instructions. In addition there is also 1 MB of L2 available to
each core (combined for data and instructions).
We used the Portland Group (PGI) C compiler for the
majority of the benchmarks, with the following compiler flags:
-O4,-c99,-mp. For the benchmarking involving the OpenMP
if functionality we used the GNU C compiler instead, as the version of the PGI compiler we used does not support a thread team of a nested parallel region having more than one thread
when an outer region is serialised with the if clause (this seems
contrary to the OpenMP specification where the if clause only
affects the number of threads that get assigned to a particular
parallel region, not the thread teams of its nested regions).
When using the GNU C compiler we used the following compiler flags: -O3, -std=c99, -fopenmp.
Timing information was collected using the omp_get_wtime() function, with each benchmark executed three times and the worst time taken (since this is the limiting factor for the execution time).
B. Synthetic benchmark results
If we consider the example code in Listing 8, the execution time of the two internal nested loops when only the outer loop is parallelised with a certain number of threads (outer_threads) can be calculated as shown in Equation 1. T_p_Outer is the execution time when parallelising the outer loop, T_outer_work is the time needed for the work in-between the loops and T_inner_work is the time needed for the amount of work within the innermost loop.
T_p_Outer = [outer_iters * (T_outer_work + inner_iters * T_inner_work)] / outer_threads    (1)
In a similar fashion, when parallelising the inner loop using
inner threads, the execution time of the loops is shown in
Equation 2.
T_p_Inner = outer_iters * (T_outer_work + (inner_iters * T_inner_work) / inner_threads)    (2)
If we want to have a reduction in the overall execution time by parallelising the inner loop, the constraint T_p_Inner < T_p_Outer must be satisfied. Solving this constraint in terms of T_outer_work we can get the maximum allowed threshold of the execution time for the work of the outer loop, as shown in Equation 3. It is worth mentioning that this is an ideal performance model, where the work is evenly distributed to the threads. In reality, the time of T_outer_work might be affected by the presence of parallel overheads.
T_outer_work < [inner_iters * T_inner_work * (1/outer_threads - 1/inner_threads)] / (1 - 1/outer_threads)    (3)
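As a worked check of Equation 3 (our own back-calculation, not given explicitly in the paper): with inner_iters = 16, outer_threads = 8 and inner_threads = 16, the right-hand side becomes 16 * T_inner_work * (1/8 - 1/16) / (1 - 1/8) ≈ 1.14 * T_inner_work, so the threshold of approximately 0.0468 seconds quoted below would correspond to an innermost-loop work time T_inner_work of roughly 0.041 seconds.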
In order to test our hypothesis, we measured the amount
of time which is required by the delay function for various
values, with the results shown in Figure 4. The graphs in
Figure 4 show the performance of four different parallelisation
strategies. OpenMP Outer (1) and OpenMP Inner (2) are the results from manual, static, parallelisations of the individual loops in the benchmark. Heuristics are the results from our basic decision function using a value of one (i.e. only parallelise the loop if there are more iterations than threads available), and Heuristic Profiler are the results from our system using the profiling functionality where appropriate.
Fig. 4. Synthetic benchmark results with varying levels of work between the loops: (a) outer loop work 0s; (b) 0.022s; (c) 0.079s; (d) 0.15s.
From the results it is evident that when the loops are
perfectly nested, and regular (i.e. the loop bounds are not
changing), then there is no benefit from using the profiling
functionality. The basic heuristics will choose the optimal loop
to parallelise apart from when we are using 6 threads. The
variation in outcomes for 6 threads is a consequence of
the number of loop iterations chosen for the benchmark (8
iterations of the outer loop and 16 iterations of the inner loop).
The distribution of 8 iterations to 6 threads results in all of the threads being assigned 1 iteration of the outer loop each, with 2 of the threads getting an extra iteration. The total execution time in this case is limited by the slowest threads, which is the
time of 32 iterations; 2 iterations of the outer loop multiplied
by 16 iterations of the inner one. When parallelising the inner loop with 6 threads, however, 2 of the threads get 2 iterations whereas the rest of the threads get 3 iterations each. In this case,
the total execution time of the parallel loops is the amount of
time required for 24 iterations; 3 iterations of the inner loop
multiplied by 8 iterations of the outer loop. Since both decision
functions only utilise the heuristics decision (when the number
of threads is less than the number of iterations) they cannot
exploit this opportunity as no profiling is actually performed in
this case. This could be altered by setting the decision heuristic
to a value other than 1 (i.e. setting the heuristic to 1.5).
From the graphs we can observe that our threshold value calculations hold. For the parameters we used for this benchmark
the calculated threshold value is approximately T_outer_work < 0.0468 seconds. When the work of the outer loop is less than the calculated threshold (Figures 4a and 4b), parallelising the inner
loop with 16 threads is still faster than parallelising the outer
loop with 8 threads. As the amount of work increases, the
impact on the execution time when parallelising the inner loop
TABLE II
LOOP PARAMETERS USED FOR THE CFD CODE BENCHMARKING

Parameter    Value
iters        500
n_cell_j     2496 or 8
n_cell_i     8 or 2496
is increased, since more work is now being serialised. In these
cases, the heuristics decider makes the wrong choice (Figures
4c and 4d) since its decision only concerns the amount of
iterations of the loops and the available threads. In contrast to
this, when profiling is used in the decision function, it correctly
detected that the fastest execution time is achieved by not
parallelising the inner loop. In the case where the amount
of work of the outer loop exceeds the calculated threshold,
parallelising the inner loop, even with 16 threads, increases
the total execution time. The benefit from using 16 threads to
parallelise the inner loop is not enough to justify the work that
is serialised.
C. CFD benchmarking results
The first benchmark that we performed using the extract
from the CFD code was to compare the OpenMP if clause
with our basic heuristic functionality. We used, as a reference, the timings of manually parallelising the n_blocks, n_harmonics and n_cell_j loops, and compare the execution
time of the heuristics decision function for the two code
generation modes of our compiler. In order to avoid cases
of the iterations not being evenly distributed to the threads,
we only consider cases of 2, 4, 8, 12 and 16 threads. The
parameters used for the loop iterations are shown in Table II,
with varying amount of work in the inner loop.
We also consider cases where blocks do not have the same
shape by altering the values of the n_cell_j and n_cell_i loops. No alterations indicate that all of the blocks have a grid shape of 2496x8 (j_cell x i_cell). An alteration of 2 means that
the first and third blocks have a grid shape of 8x2496 whereas
the second and fourth blocks have a shape of 2496x8.
The performance results shown in Figure 5 highlight the fact
that there is a significant difference between our implemented
functionality and that provided by OpenMP (the if clause).
Not only is the if clause slower than the basic OpenMP
parallelisation, but it also increases the overall execution time
of the code. For Figure 5a, where 2 and 4 threads are available,
only the loop of the outer level is parallelised in both code
generation modes. However, the if clause mode produces a
slower execution time than the code duplication mode. When
more than 4 threads are used, the parallelisation is applied
on the n cell j loop. In contrast to the code duplication
mode which produces an execution time similar to the case
of statically parallelising the loop, the if clause mode is still
slower. A similar performance pattern is seen at 16 threads.
Moreover, in the presence of alterations in the shape of the
blocks, as shown in Figures 5b and 5d, the if clause mode
produces an even slower execution time. On the other hand,
the code duplication mode can exploit this opportunity in order to utilise all of the available threads by applying parallelism on the n_cell_i loop.

Fig. 5. CFD benchmark with n_blocks = 4 and n_harmonics = 4 with varied alterations in the i and j cell loops and varied amounts of work in the inner loop: (a) small work, no alterations; (b) small work, 2 alterations; (c) large work, no alterations; (d) large work, 2 alterations.
Increasing the amount of work in the core calculation has
a positive effect on the if clause code generation mode.
We can observe from Figure 5c that compared to Figure 5a
the difference between using the if clause and the static
parallelisation is not as large for small numbers of threads.
This is likely to be because the performance cost of executing
the if clause is proportionally smaller compared to the overall
execution time. However, the same performance degradation
is still observed when increasing the number of threads.
The execution times of the code using the OpenMP if
clause raised some concerns over whether the code was
operating correctly. After extensive testing and verification
we ascertained that both versions of the code (the if clause
and code duplication) were correct and producing the same
behaviour. Therefore, we investigated the parallel overheads
of the OpenMP runtime library of the GCC compiler.
Other authors [11] have already studied the overheads of
nested parallelism on various compilers, including a more
recent version of the GCC compiler than the one used in
this work. Their findings suggest that the implementation of
nested parallel regions of the GCC compiler has significant
overheads. What is not presented in their work is whether
or not the use of the if clause on nested parallel regions
produces the same overheads. In order to ensure that the
behaviour we observed in our results is caused by nested parallel regions and not by the presence of the if clause, we
have constructed a simple micro benchmark.
TABLE III
MICRO BENCHMARK RESULTS OF GNU'S C COMPILER'S IMPLEMENTATION OF NESTED PARALLELISM

Parallel loop                          Execution time (seconds)
Outer                                  38.845619
Inner                                  153.06809
Nested (with if clause)                163.05681
Nested (with num_threads clause)       162.85479
D. Nested parallel micro benchmark
We created four versions of a benchmark code with three
nested loops and the delay function of the EPCC Microbenchmark Suite in the block of the innermost loop. The first
version of the benchmark creates a parallel region on the loop
of the second level. The second version performs the same
operation on the innermost loop. The third version uses the if
clause on both loops by serialising the outer loop with a value
of 0 and parallelising the inner loop with a value of 1. Finally,
the last version creates a parallel region on both of these loops,
however we force the number of threads on the thread team of
the outer loop to 1 using the num threads clause. Through
this we manage to reproduce the same behaviour as with the
if clause code case when the inner loop is parallelised.
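For instance, the fourth version can be sketched as follows (our reconstruction of the description, not the authors' exact benchmark code; delay and its argument stand in for the EPCC delay routine):

#pragma omp parallel for num_threads(1)
for(i = 0; i < I; i++){
  #pragma omp parallel for
  for(j = 0; j < J; j++){
    delay(delayreps);
  }
}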
The number of iterations of the parallel loops are the same
as the number of available threads. Table III presents the
execution times of each case. We can see that parallelising
the inner loop with nested parallel regions takes 10 seconds
longer than parallelising the inner loop manually, even for this
small and simple benchmark. Moreover, the two versions that
contain nested parallel regions achieve very similar execution
times. From this test we can conclude that it is likely that
the behaviour we observed from the if clause code generation
mode is affected by the overheads of the implementation of
the GCC compiler for nested parallel regions.
E. Decision function benchmarking
Finally, we investigated the performance of our profiling
decision functionality for the CFD extract code. This code is
perfectly nested, so the basic heuristic decision function should
be optimal here as it should choose the best loop to parallelise
with very little overheads, whereas the profiling function has
extra functionality and therefore imposes extra overheads on
the performance of the code. The results from our experiments
are shown in Figure 6.
We can observe from Figure 6a that both the decision
functions make the correct choice of parallelisation strategy
up to 12 threads. However, the overheads of the profiling
functionality have a negative impact on the overall execution
time. Even when profiling is not actually being performed, the
functions which are inserted before and after the execution
of each loop to count the amount of work performed at each
loop level increase the overall time. Moreover, we can observe
that at 16 threads the profiler actually chooses to parallelise
the harmonics loop, whereas the heuristics decider produces
the correct behaviour of profiling the n_cell loop. The timings which are performed for each loop version during the profiling mode are sensitive to the presence of any overheads which ultimately affect the decision of the function (such as the overhead of taking the timings).

Fig. 6. CFD benchmark with varied alterations in the i and j cell loops and a large amount of work in the inner loop: (a) n_blocks = 4, n_harmonics = 8, no alterations; (b) n_blocks = 4, n_harmonics = 8, 2 alterations; (c) n_blocks = 8, n_harmonics = 4, no alterations; (d) n_blocks = 8, n_harmonics = 4, 4 alterations.
When alterations are present in the shape of the loops, as
shown in Figures 6b and 6d, the heuristics decider manages to
adapt its behaviour, parallelising the innermost loop in order
to utilise more threads, and can significantly outperform the
static parallelisation.
In all of the test cases, the decision function which is based
on profiling provides slower execution times than the decision
function which is based on heuristics. Moreover, the additional
logic which is included in the decision function with profiling
caused a suboptimal decision to be made in some situations.
VIII. IMPROVED PROFILING DECISIONS

The results from the previous benchmarks led to consideration of the reasons behind the poor performance of the
decision function which performs profiling. Comparing the
functionality of this function with the simple case of the
heuristics decision function there are two sources of additional
overheads.
The first one is the logic of profiling each version of a
loop. In order to make a choice between the two versions of a
loop, the slow version must also be executed. However, if an
actual simulation code runs for a significant amount of time
this overhead should be negligible (providing the loop bounds
do not alter and trigger the profiling functionality too many
times) as it should only be incurred infrequently.
Fig. 7. CFD benchmark with varied alterations in the i and j cell loops and a large amount of work in the inner loop: (a) small work, n_blocks = 4, n_harmonics = 4, 2 alterations; (b) large work, n_blocks = 8, n_harmonics = 4, 4 alterations.
The second source of overheads is the inclusion of additional function calls before and after each loop in order to
measure the time of the execution and count the amount of
work performed.
The elimination of the functionality for taking the slow
path is not possible since this is the essence of profiling.
Both versions of a loop must be executed in order to make
a comparison between their execution time. However, we can
relax the conditions on the validity of the timings.
If we only consider the number of the iterations of the
specific loop which is being profiled, then we can eliminate all
the logic that performs the counting of the work for the internal
loops. When the decision function decides that a version of
a loop should be profiled (after the failure of the heuristics
conditions) the number of the iterations of the version of the
loop that is going to be executed is saved in the state of the
loop at that point. This way, the code of the function calls
which are placed before and after each loop remains simple,
only adjusting the loop level counter of each thread as well as
marking the starting and ending times of the execution of a
loop which is being profiled, rather than counting the iterations
of internal loops as the initial profiling functionality does.
In order to test our theory we have created a new version
of the runtime library which includes the above modifications,
called the relaxed profiler.
From the graphs in Figure 7 we can see that the removal
of the additional logic which performs the counting benefits
the decision function with profiling. When no profiling is
performed (2 and 4 threads), the relaxed version of the decision
function is faster than the accurate version, and the same
performance pattern holds when the profiling is performed (8
threads and more for Figure 7a and 12 threads and more for
Figure 7b).
Comparing the execution time of the new version of the
decision function with profiling to the execution time of
the heuristics decision function, the latter still produces a
faster execution time; however, the difference is not large.
This behaviour is expected, since the presence of profiling
introduces additional computations within the code itself from
the functions which are placed before and after each loop.
Moreover, in the cases where the parallelisation is applied on
a nested loop, the decision function must execute both versions
of a loop, one of them being the slow version, in order to make
a decision.
Finally, we can see that the relaxed decision function rectifies the problem with the original profiling decision function
of choosing the wrong option in some cases. For Figure
7a we can see that at 12 and 16 threads the relaxed profiler
makes the correct choice, and the same for Figure 7b at 16
threads (where the performance of the relaxed profile decision
function is comparable to the heuristics decision function).
IX. CONCLUSION

The main focus of this work was to investigate the possibility of dynamically choosing, at runtime, the loop of a nested loop region which best utilises the available threads. We have successfully created a source-to-source compiler and a runtime library in order to automatically allow a dynamic choice to be made at runtime. As our solution uses a directive-based approach, similar to OpenMP, it requires minimal effort and code change from the user's point of view.
We have discovered that the current mechanism users can
exploit to perform this, the OpenMP if clause, does not
perform efficiently (at least for the implementation we tested).
Despite the fact that this behaviour is the result of the
inefficient implementation of the GCC compiler which was
used in this work, the same compiler with the code duplication
mode was able to provide additional speedup in the execution
time of the code. From this we conclude that by relying on the
OpenMP runtime library to perform loop nesting, the execution time is limited by the compiler's implementation of nested
parallel regions. Although code duplication is considered to be
a bad programming practice, when it is done automatically, it
can eliminate unnecessary parallel overheads.
We have also shown that some level of auto-tuning (using
profiling to select which loop to parallelise) can provide
performance benefits in certain circumstances, for instance
when loops are not perfectly nested.
OpenMP is currently generally used for small scale parallelisation of code, primarily because there are very few large
scale shared-memory HPC resources. However, the current
trend in multi-core processors suggests that in the near future
large scale shared-memory resources (of order 100-1000s of
cores) are likely to be commonly available. Therefore, shared-memory parallelisations are likely to become more utilised and
interesting for large scale scientific simulations.
REFERENCES
[1] “OpenMP: OpenMP application programming interface version 3.1,” 2011.
[2] R. Duran, R. Silvera, J. Corbalán, and J. Labarta, “Runtime adjustment of parallel nested loops,” in Proc. of the International Workshop on OpenMP Applications and Tools (WOMPAT 04), 2004.
[3] D.-K. Chen, H.-M. Su, and P.-C. Yew, “The impact of synchronization and granularity on parallel systems,” in Proceedings of the 17th Annual International Symposium on Computer Architecture, ser. ISCA ’90. New York, NY, USA: ACM, 1990, pp. 239–248. [Online]. Available: http://doi.acm.org/10.1145/325164.325150
[4] Y. Tanaka, K. Taura, M. Sato, and A. Yonezawa, “Performance evaluation of OpenMP applications with nested parallelism,” 2000.
[5] E. Ayguade, M. Gonzalez, X. Martorell, and G. Jost, “Employing nested OpenMP for the parallelization of multi-zone computational fluid dynamics applications,” J. Parallel Distrib. Comput., vol. 66, no. 5, pp. 686–697, May 2006. [Online]. Available: http://dx.doi.org/10.1016/j.jpdc.2005.06.019
[6] M. Hall, J. Chame, C. Chen, J. Shin, G. Rudy, and M. M. Khan, “Loop transformation recipes for code generation and auto-tuning.”
[7] “Pluto - an automatic parallelizer and locality optimizer for multicores.” [Online]. Available: http://pluto-compiler.sourceforge.net
[8] “The Lua programming language.” [Online]. Available: http://www.lua.org
[9] “The LPeg pattern-matching library.” [Online]. Available: http://www.inf.puc-rio.br/roberto/lpeg
[10] “Ness HPC machine.” [Online]. Available: http://www.epcc.ed.ac.uk/facilities/ness
[11] V. V. Dimakopoulos, P. E. Hadjidoukas, and G. C. Philos, “A microbenchmark study of OpenMP overheads under nested parallelism,” in Proceedings of the 4th International Conference on OpenMP in a New Era of Parallelism, ser. IWOMP’08. Berlin, Heidelberg: Springer-Verlag, 2008, pp. 1–12. [Online]. Available: http://dl.acm.org/citation.cfm?id=1789826.1789828
| 6 |
Avoiding Your Teacher’s Mistakes:
Training Neural Networks with Controlled Weak Supervision
Mostafa Dehghani¹, Aliaksei Severyn², Sascha Rothe², Jaap Kamps¹
¹University of Amsterdam   ²Google Research
[email protected], [email protected], [email protected], [email protected]
arXiv:1711.00313v2 [cs.LG] 7 Dec 2017
Abstract
In this paper, we propose a semi-supervised
learning method where we train two neural
networks in a multi-task fashion: a target
network and a confidence network. The target network is optimized to perform a given
task and is trained using a large set of unlabeled data that are weakly annotated. We
propose to weight the gradient updates to
the target network using the scores provided
by the second confidence network, which
is trained on a small amount of supervised
data. Thus we prevent the weight updates computed from noisy labels from harming the quality of the target network model. We evaluate
our learning strategy on two different tasks:
document ranking and sentiment classification. The results demonstrate that our approach not only enhances the performance
compared to the baselines but also speeds
up the learning process from weak labels.
1 Introduction
Deep neural networks have shown impressive
results in a lot of tasks in computer vision, natural
language processing, and information retrieval.
However, their success is conditioned on the
availability of exhaustive amounts of labeled data,
while for many tasks such data is not available.
Hence, unsupervised and semi-supervised methods
are becoming increasingly attractive.
Using weak or noisy supervision is a straightforward approach to increase the size of the training
data. For instance in web search, for the task of
ranking, the ideal training data would be rankings
of documents ordered by relevance for a large set of
queries. However, it is not practical to collect such data at a large scale, and only a small set of judged query-document pairs is available. For
this task, the output of heuristic methods (Dehghani
et al., 2017c) or clickthrough logs (Joachims, 2002)
can be used as weak or noisy signals along with
a small amount of labeled data to train learning to
rank models.
This is usually done by pre-training the network
on weak data and fine-tuning it with true labels (Dehghani et al., 2017c; Severyn and Moschitti, 2015a).
However, these two independent stages do not
leverage the full capacity of information from true
labels. For instance, in the pre-training stage there
is no handle to control the extent to which the data
with weak labels contribute in the learning process,
while they can be of different quality.
In this paper, we propose a semi-supervised
method that leverages a small amount of data with
true labels along with a large amount of data with
weak labels. Our proposed method has three main
components: a weak annotator, which can be a heuristic model, a weak classifier, or even humans via crowdsourcing, and is employed to annotate a massive amount of unlabeled data; a target network, which uses a large set of instances weakly annotated by the weak annotator to learn the main task; and a confidence network, which is trained on a small
human-labeled set to estimate confidence scores
for instances annotated by weak annotator. We
train target network and confidence network in a
multi-task fashion.
In a joint learning process, target network and
confidence network try to learn a suitable representation of the data and this layer is shared between
them as a two-way communication channel. The
target network tries to learn to predict the label of
the given input under the supervision of the weak annotator. At the same time, the output of the confidence network, i.e. the confidence scores, defines the magnitude of the weight updates to the target
network with respect to the loss computed based
on labels from weak annotator, during the back-
propagation phase of the target network. This way, the confidence network helps the target network to avoid the mistakes of its teacher, i.e. the weak annotator, by down-weighting the weight updates from weak labels that do not look reliable to the confidence network.
From a meta-learning perspective (Dehghani
et al., 2017b), the goal of the confidence network
trained jointly with the target network is to calibrate
the learning rate for each instance in the batch. I.e.,
the weights w of the target network fw at step t+1
are updated as follows:
w_{t+1} = w_t − (l_t/b) ∑_{i=1}^{b} cθ(τi, ỹi) ∇L(f_{w_t}(τi), ỹi) + ∇R(w_t)    (1)
where lt is the global learning rate, b is the batch size,
L(⋅) the loss of predicting ŷ = fw (τ ) for an input
τ when the target label is ỹ; cθ(⋅) is a scoring function learned by the confidence network, taking as input the instance τi and its noisy label ỹi ; and R(⋅) is the regularization term. Thus, we can effectively control
the contribution to the parameter updates for the target network from weakly labeled instances based on
how reliable their labels are according to the confidence network (learned on a small supervised data).
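To make the role of the confidence scores in Eq. (1) concrete, the following is a minimal NumPy sketch of one confidence-weighted update step. It uses a logistic-regression stand-in for the target network f_w, synthetic data, and random stand-in confidence scores; every name and value here is an illustrative assumption rather than the paper's actual implementation (which is in TensorFlow).

```python
import numpy as np

def confidence_weighted_update(w, X, y_weak, conf, lr=0.1, l2=1e-3):
    """One step of Eq. (1): per-instance gradients are scaled by the
    confidence scores before being averaged over the batch."""
    b = len(y_weak)
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # f_w(tau_i), logistic stand-in
    per_instance_grad = X * (p - y_weak)[:, None]    # grad of cross-entropy, one row per instance
    weighted = conf[:, None] * per_instance_grad     # c_theta(tau_i, y~_i) * grad L_i
    grad = weighted.sum(axis=0) / b + l2 * w         # (1/b) sum over the batch + grad of L2 regularizer
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y_weak = (rng.random(32) > 0.5).astype(float)        # noisy/weak labels
conf = rng.random(32)                                # stand-in confidence scores in [0, 1]
w = confidence_weighted_update(np.zeros(5), X, y_weak, conf)
```

Setting all confidence scores to 1 recovers the ordinary mini-batch gradient step, while scores near 0 effectively mute unreliable weak labels.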
Our setup requires running a weak annotator to
label a large amount of unlabeled data, which is
done at pre-processing time. For many tasks, it is
possible to use a simple heuristic, or implicit human
feedback to generate weak labels. This set is then
used to train the target network. In contrast, a small
expert-labeled set is used to train the confidence
network, which estimates how good the weak annotations are, i.e. controls the effect of weak labels
on updating the parameters of the target network.
Our method allows learning different types of
neural architectures and different tasks, where a
meaningful weak annotator is available. In this
paper, we study the performance of our proposed
model by focusing on two applications in information retrieval and natural language processing:
document ranking and sentiment classification.
Whilst these two applications differ considerably, as does the exact operationalization of our model for these cases, there are also some clear similarities. First, in both cases the human gold standard data is based on cognitively complex, or subjective, judgments that cause high inter-rater variation, increasing both the cost of obtaining labels and the need for larger sets of
labels. Second, also in both cases, the weak supervision signal is more systemic or objective, which
facilitates the learning of the data representation.
Our experimental results suggest that the
proposed method is more effective in leveraging
large amounts of weakly labeled data compared to
traditional fine-tuning in both tasks. We also show
that explicitly controlling the weight updates in the
target network with the confidence network leads
to faster convergence since the filtered supervision
signals are more solid and less noisy.
In the following, in Section 2, we introduce the
general architecture of our model and explain the
training process. Then, we describe the details of
the applications to which we apply our model in
Section 3. In Section 4 we present the experimental
setups for each of the tasks along with its results
and analysis. We then review related works and
conclude the paper.
2 The Proposed Method
In the following, we describe our recipe for
semi-supervised learning of neural networks, in a
scenario where along with a small human-labeled
training set a large set of weakly labeled instances
is leveraged. Formally, given a set of unlabeled
training instances, we run a weak annotator to
generate weak labels. This gives us the training set
U . It consists of tuples of training instances τi and
their weak labels ỹi , i.e. U ={(τi ,ỹi ),...}.
For a small set of training instances with true
labels, we also apply the weak annotator to
generate weak labels. This creates the training
set V , consisting of triplets of training instances
τj , their weak labels ỹj , and their true labels yj ,
i.e. V = {(τj , ỹj ,yj ),...}. We can generate a large
amount of training data U at almost no cost using the
weak annotator. In contrast, we have only a limited
amount of data with true labels, i.e. ∣V ∣<<∣U ∣.
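As a concrete illustration of how the two training sets are assembled, here is a small, hypothetical Python sketch; `weak_annotate`, the placeholder instances, and the labels are all made up for illustration.

```python
# Hypothetical helper: weak_annotate() stands in for the weak annotator
# (e.g. BM25 for ranking or a sentiment lexicon for classification).
def weak_annotate(instance):
    return 0.5  # stand-in weak label

unlabeled_instances = ["tau_1", "tau_2", "tau_3"]       # large, cheap to obtain
labeled_instances = [("tau_a", 1.0), ("tau_b", 0.0)]    # small, expert-labeled

# U: (instance, weak label) tuples
U = [(tau, weak_annotate(tau)) for tau in unlabeled_instances]

# V: (instance, weak label, true label) triplets -- the weak annotator is also
# run on the supervised instances so the two labels can later be compared.
V = [(tau, weak_annotate(tau), y) for tau, y in labeled_instances]

assert len(V) < len(U)   # in practice |V| << |U|
```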
2.1 General Architecture
In our proposed framework we train a multi-task
neural network that jointly learns the confidence
score of weak training instances and the main task
using controlled supervised signals. The high-level
representation of the model is shown in Figure 1:
it comprises a weak annotator and two neural
networks, namely the confidence network and the
target network.
The goal of the weak annotator is to provide
weak labels ỹi for all the instances τi ∈ U ∪ V. We assume that the labels ỹi provided by the weak annotator are imperfect estimates of the true labels yi, where yi are available for set V but not for set U.
(a) Full Supervision Mode: Training on batches of data with true labels.
(b) Weak Supervision Mode: Training on batches of data with weak labels.
Figure 1: Learning from controlled weak supervision: Our proposed multi-task network for learning a target task in a
semi-supervised fashion, using a large amount of weakly labeled data and a small amount of data with true labels. Faded parts of
the network are disabled during the training in the corresponding mode. Red-dotted arrows show gradient propagation. Parameters
of the parts of the network in red frames get updated in the backward pass, while parameters of the network in blue frames are
fixed during the training.
The goal of the confidence network is to estimate
the confidence score c̃j of training instances. It is
learned on triplets from training set V : input τj , its
weak label ỹj , and its true label yj . The score c̃j is
then used to control the effect of weakly annotated
training instances on updating the parameters of
the target network in its backward pass during
backpropagation.
The target network is in charge of handling
the main task we want to learn, or in other words,
approximating the underlying function that predicts
the correct labels. Given the data instance, τi and
its weak label ỹi from the training set U , the target
network aims to predict the label ŷi . The target
network parameter updates are based on noisy
labels assigned by the weak annotator, but the
magnitude of the gradient update is based on the
output of the confidence network.
Both networks are trained in a multi-task fashion
alternating between the full supervision and the
weak supervision mode. In the full supervision
mode, the parameters of the confidence network get
updated using batches of instances from training set
V . As depicted in Figure 1b, each training instance
is passed through the representation layer mapping
inputs to vectors. These vectors are concatenated
with their corresponding weak labels ỹj generated
by the weak annotator. The confidence network
then estimates c̃j , which is the probability of taking
data instance j into account for training the target
network.
In the weak supervision mode, the parameters of
the target network are updated using training set
U . As shown in Figure 1a, each training instance
is passed through the same representation learning
layer and is then processed by the supervision layer
which is a part of the target network predicting the
label for the main task. We also pass the learned
representation of each training instance along with
its corresponding label generated by the weak annotator to the confidence network to estimate the confidence score of the training instance, i.e. c̃i . The confidence score is computed for each instance from set
U . These confidence scores are used to weight the
gradient updating target network parameters or in
other words the step size during back-propagation.
It is noteworthy that the representation layer
is shared between both networks, so besides the
regularization effect of layer sharing which leads
to better generalization, sharing this layer lays the
ground for the confidence network to benefit from
the largeness of set U and the target network to
utilize the quality of set V .
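The following PyTorch-style sketch shows one possible layout of this architecture: a shared representation layer feeding both a supervision head (the target network's output layer) and a confidence head that also receives the weak label. The paper's implementation is in TensorFlow and its exact layer sizes differ; the class name and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ControlledWeakSupervisionModel(nn.Module):
    """Shared representation layer with two heads: a supervision head for the
    main task and a confidence head that scores (instance, weak label) pairs."""
    def __init__(self, input_dim=300, hidden_dim=128):
        super().__init__()
        self.representation = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.supervision_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())
        # the confidence head sees [shared representation ; weak label]
        self.confidence_head = nn.Sequential(
            nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def predict(self, x):
        return self.supervision_head(self.representation(x)).squeeze(-1)

    def confidence(self, x, y_weak):
        h = self.representation(x)
        return self.confidence_head(
            torch.cat([h, y_weak.unsqueeze(-1)], dim=-1)).squeeze(-1)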
2.2 Model Training
Our optimization objective is composed of two
terms: (1) the confidence network loss Lc , which
captures the quality of the output from the confidence network and (2) the target network loss Lt ,
which expresses the quality for the main task.
Both networks are trained by alternating between
the weak supervision and the full supervision mode.
In the full supervision mode, the parameters of
the confidence network are updated using training
instances drawn from training set V. We use a cross-entropy loss function for the confidence
network to capture the difference between the
predicted confidence score of instance j, i.e. c̃j and
the target score cj :
Lc = ∑_{j∈V} [ −cj log(c̃j) − (1−cj) log(1−c̃j) ],    (2)
The target score cj is calculated based on the
difference of the true and weak labels with respect
to the main task.
In the weak supervision mode, the parameters
of the target network are updated using training
instances from U . We use a weighted loss function,
Lt , to capture the difference between the predicted
label ŷi by the target network and target label ỹi :
Lt = ∑_{i∈U} c̃i Li ,    (3)
where Li is the task-specific loss on training instance i and c̃i is the confidence score of the weakly
annotated instance i, estimated by the confidence
network. Note that c̃i is treated as a constant
during the weak supervision mode and there is no
gradient propagation to the confidence network in
the backward pass (as depicted in Figure 1a).
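A minimal sketch of this weighted loss, reusing the illustrative model class from the earlier sketch (an assumption, not the paper's code): the confidence scores are detached so that, in this mode, no gradient reaches the confidence head.

```python
import torch.nn.functional as F

def weak_mode_loss(model, x, y_weak):
    """Weighted target loss of Eq. (3)/(6): c~_i is treated as a constant,
    so it is detached and no gradient flows into the confidence head here."""
    c = model.confidence(x, y_weak).detach()              # c~_i, constant in this mode
    y_hat = model.predict(x)
    per_instance = F.binary_cross_entropy(y_hat, y_weak, reduction="none")
    return (c * per_instance).sum()
```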
We minimize two loss functions jointly by
randomly alternating between full and weak
supervision modes (for example, using a 1:10 ratio).
During training and based on the chosen supervision
mode, we sample a batch of training instances from
V with replacement or from U without replacement (since we can generate as much training data as needed for set U).
Since in our setups usually ∣U ∣ >> ∣V ∣, the training
process oversamples the instance from V .
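Putting the two modes together, a possible training loop looks like the sketch below; the batch iterators, the approximation of the 1:10 ratio via a random choice, and the single shared optimizer are assumptions for illustration, and `weak_mode_loss` comes from the earlier sketch. Because the confidence scores are detached in the weak mode, and the supervision head does not appear in the full-mode loss, each mode only updates the parts of the network described above.

```python
import random
import torch
import torch.nn.functional as F

def train(model, U_loader, V_loader, steps=1000, weak_per_full=10, lr=1e-3):
    """Alternate roughly one full-supervision batch (fit the confidence head
    on V) per `weak_per_full` weak-supervision batches (confidence-weighted
    target loss on U). The loaders are assumed to yield batches of tensors."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        if random.random() < 1.0 / (1 + weak_per_full):
            # full supervision mode: fit c~_j to the target score c_j
            x, y_weak, y_true = next(V_loader)
            c_hat = model.confidence(x, y_weak)
            c_target = 1.0 - (y_true - y_weak).abs()        # e.g. c_j = 1 - |y_j - y~_j|
            loss = F.binary_cross_entropy(c_hat, c_target)  # Eq. (2)
        else:
            # weak supervision mode: confidence-weighted target loss, Eq. (3)
            x, y_weak = next(U_loader)
            loss = weak_mode_loss(model, x, y_weak)
        loss.backward()
        opt.step()
```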
The key point here is that the “main task” and the “confidence scoring” task are always defined to be closely related tasks, and sharing the representation benefits the confidence network as an implicit data augmentation that compensates for the small amount of data with true labels. Besides, we noticed that updating the representation layer with respect to the loss of the other network acts as a regularization for each of these networks and helps generalization for both the target and the confidence network, since the shared layer has to capture all of the (related) tasks, leaving less chance for overfitting.
We also investigated other possible setups and training scenarios. For instance, we tried updating the parameters of the supervision layer of the target network also using data with true labels, or, instead of using alternating sampling, training the target network using controlled weak supervision signals after the confidence network is fully trained. As shown in the experiments, the architecture and training strategy described above provide the best performance.
Figure 2: The target network for the document ranking.
3 Applications
In this section, we apply our semi-supervised
method to two different tasks: document ranking
and sentiment classification. For each task, we start
with an introduction of the task, followed by the
setup of the target network, i.e. description of the
representation learning layer and the supervision
layer.
3.1 Document Ranking
This task is the core information retrieval problem
which is challenging as it needs to capture the notion
of relevance between query and documents. We
employ a state-of-the-art pairwise neural ranker architecture as target network (Dehghani et al., 2017c).
In this setting, each training instance τ consists of a
query q, and two documents d+ and d− . The labels,
ỹ and y, are scalar values indicating the probability
of d+ being ranked higher than d− with respect to q.
The general schema of the target network is
illustrated in Figure 2.
The Representation Learning Layer is a setup
proposed in (Dehghani et al., 2017c). This layer is
a function ψ, which learns the representation of the
input data instances, i.e. (q,d+ ,d− ), and consists
of three components: (1) an embedding function
ε ∶ V → Rm (where V denotes the vocabulary set
and m is the number of embedding dimensions),
(2) a weighting function ω ∶ V → R, and (3) a
compositionality function ⊙ ∶ (Rm , R)n → Rm .
More formally, the function ψ is defined as:
ψ(q, d+, d−) = [ ⊙_{i=1}^{∣q∣} (ε(t_i^q), ω(t_i^q)) ∣∣ ⊙_{i=1}^{∣d+∣} (ε(t_i^{d+}), ω(t_i^{d+})) ∣∣ ⊙_{i=1}^{∣d−∣} (ε(t_i^{d−}), ω(t_i^{d−})) ],    (4)
where tqi and tdi denote the i-th term in query q and in document d, respectively. The embedding function ε maps each term to a dense m-dimensional real
value vector, which is learned during the training
phase. The weighting function ω assigns a weight
to each term in the vocabulary.
The compositionality function ⊙ projects a set of
n embedding-weighting pairs to an m-dimensional representation, independent of the value of n:
⊙_{i=1}^{n} (ε(ti), ω(ti)) = ( ∑_{i=1}^{n} exp(ω(ti)) ⋅ ε(ti) ) / ( ∑_{j=1}^{n} exp(ω(tj)) ),    (5)
which is in fact the normalized weighted element-wise summation of the terms’ embedding vectors. It has been shown that having a global term-weighting function along with the embedding function improves the performance of ranking, as it simulates the
effect of inverse document frequency (IDF), which
is an important feature in information retrieval (Dehghani et al., 2017c). In our experiments, we initialize the embedding function ε with word2vec embeddings (Mikolov et al., 2013) pre-trained on Google
News and the weighting function ω with IDF.
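A small NumPy sketch of the representation function defined by Eqs. (4)-(5) is given below; the random embeddings and weights stand in for word2vec vectors and IDF values and are purely illustrative.

```python
import numpy as np

def compose(embeddings, weights):
    """Eq. (5): softmax(weights)-weighted average of the term embeddings.
    embeddings: (n, m) array, weights: (n,) array."""
    a = np.exp(weights - weights.max())      # subtract max for numerical stability
    a = a / a.sum()
    return a @ embeddings                    # (m,) composed vector

def psi(q_emb, q_w, dpos_emb, dpos_w, dneg_emb, dneg_w):
    """Eq. (4): concatenation of the composed query, d+ and d- representations."""
    return np.concatenate([compose(q_emb, q_w),
                           compose(dpos_emb, dpos_w),
                           compose(dneg_emb, dneg_w)])

# toy usage with random "embeddings" and IDF-like "weights"
rng = np.random.default_rng(0)
vec = psi(rng.normal(size=(3, 8)), rng.normal(size=3),
          rng.normal(size=(10, 8)), rng.normal(size=10),
          rng.normal(size=(12, 8)), rng.normal(size=12))
assert vec.shape == (24,)
```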
The Supervision Layer receives the vector
representation of the inputs processed by the representation learning layer and outputs a prediction ỹ.
We opt for a simple fully connected feed-forward
network with l hidden layers followed by a softmax.
Each hidden layer zk in this network computes
zk = α(Wk zk−1 +bk ), where Wk and bk denote the
weight matrix and the bias term corresponding to
the k th hidden layer and α(.) is the non-linearity.
These layers are followed by a sigmoid output. We employ
the weighted cross entropy loss:
Lt = ∑_{i∈B_U} c̃i [ −ỹi log(ŷi) − (1−ỹi) log(1−ŷi) ],    (6)
where BU is a batch of instances from U , and c̃i
is the confidence score of the weakly annotated
instance i, estimated by the confidence network.
The Weak Annotator is BM25 (Robertson et al.,
2009) which is a well-performing unsupervised
retrieval method. In the pairwise document ranking setup, ỹi for a given instance τj =(q,d+ ,d− )
is the probability of document d+ being ranked
higher than d− , based on the scores obtained from
the annotator:
ỹi = P_{q,d+,d−} = s_{q,d+} / (s_{q,d+} + s_{q,d−}),    (7)
where sq,d is the score obtained from the weak
annotator. To train the confidence network, the
target label cj is calculated using the absolute
difference of the true label and the weak label:
cj =1−∣yj − ỹj ∣, where yj is calculated similar to ỹi ,
but sq,d comes from true labels created by humans.
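In code, generating a pairwise weak label and its confidence target is straightforward; the BM25 scores below are made-up numbers, and the helper names are not from the paper.

```python
def pairwise_weak_label(s_pos, s_neg):
    """Eq. (7): probability that d+ outranks d-, from weak-annotator scores
    (e.g. BM25 scores for (q, d+) and (q, d-))."""
    return s_pos / (s_pos + s_neg)

def confidence_target(y_true, y_weak):
    """Target score for the confidence network: c_j = 1 - |y_j - y~_j|."""
    return 1.0 - abs(y_true - y_weak)

# toy example with invented BM25 scores and a human-derived pairwise label
y_weak = pairwise_weak_label(s_pos=12.3, s_neg=7.1)   # ~0.63
y_true = 1.0                                          # humans rank d+ strictly higher
c = confidence_target(y_true, y_weak)                 # ~0.63 -> moderately reliable
```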
3.2 Sentiment Classification
This task aims to identify the sentiment (e.g.,
positive, negative, or neutral) underlying an
individual sentence. Our target network is a
convolutional model similar to (Deriu et al., 2017;
Severyn and Moschitti, 2015a,b; Deriu et al., 2016).
Each training instance τ consists of a sentence s
and its sentiment label ỹ. The architecture of the
target network is illustrated in Figure 3.
The Representation Learning Layer learns a
representation for the input sentence s and is shared
between the target network and confidence network.
It consists of an embedding function ε ∶ V → Rm ,
where V denotes the vocabulary set and m is the
number of embedding dimensions.
This function maps the sentence to a matrix
S ∈ Rm×∣s∣ , where each column represents the embedding of a word at the corresponding position in
the sentence. Matrix S is passed through a convolution layer. In this layer, a set of f filters is applied to
a sliding window of length h over S to generate a feature map matrix O. Each feature map oi for a given
filter F is generated by oi = ∑k,j S[i ∶ i+h]k,j Fk,j ,
where S[i∶i+h] denotes the concatenation of word
vectors from position i to i+h. The concatenation
of all oi produces a feature vector o∈R∣s∣−h+1 . The
vectors o are then aggregated over all f filters into
a feature map matrix O ∈Rf ×(∣s∣−h+1) .
We also add a bias vector b ∈ Rf to the result
of a convolution. Each convolutional layer is
followed by a non-linear activation function (we
use ReLU(Nair and Hinton, 2010)) which is applied
element-wise. Afterward, the output is passed to
the max pooling layer which operates on columns
of the feature map matrix O returning the largest
value: pool(oi ) ∶ R1×(∣s∣−h+1) → R (see Figure 3).
This architecture is similar to the state-of-the-art
model for Twitter sentiment classification from
Semeval 2015 and 2016 (Severyn and Moschitti,
2015b; Deriu et al., 2016).
We initialize the embedding matrix with
word2vec embeddings (Mikolov et al., 2013)
pretrained on a collection of 50M tweets.
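A compact PyTorch sketch of this convolutional representation layer is shown below; the vocabulary size, embedding dimension, number of filters, and filter width are illustrative placeholders rather than the tuned values reported later.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Sketch of the representation layer for sentiment classification:
    word embeddings -> 1D convolution -> ReLU -> max pooling over positions."""
    def __init__(self, vocab_size=50000, emb_dim=100, num_filters=200, width=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=width)
        self.act = nn.ReLU()

    def forward(self, token_ids):                        # (batch, |s|)
        S = self.embedding(token_ids).transpose(1, 2)    # (batch, m, |s|)
        O = self.act(self.conv(S))                       # (batch, f, |s|-h+1)
        return O.max(dim=2).values                       # max over positions -> (batch, f)

enc = SentenceEncoder()
pooled = enc(torch.randint(0, 50000, (4, 30)))           # 4 tweets of 30 tokens
assert pooled.shape == (4, 200)
```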
Figure 3: The target network for the sentiment classification.
The Supervision Layer is a feed-forward neural network similar to the supervision layer in the
ranking task (with different width and depth) but
with softmax instead of sigmoid as the output layer
which returns ŷi , the probability distribution over
all three classes. We employ the weighted cross
entropy loss:
Lt = ∑_{i∈B_U} c̃i ∑_{k∈K} −ỹ_i^k log(ŷ_i^k),    (8)
where BU is a batch of instances from U , and c̃i
is the confidence score of the weakly annotated
instance i, and K is a set of classes.
The Weak Annotator for the sentiment classification task is a simple unsupervised lexicon-based
method (Hamdan et al., 2013; Kiritchenko et al.,
2014). We use SentiWordNet03 (Baccianella et al.,
2010) to assign probabilities (positive, negative and
neutral) for each token in set U . Then a sentence-level distribution is derived by simply averaging
the distributions of the terms, yielding a noisy label
ỹi ∈ R∣K∣ , where ∣K∣ is the number of classes, i.e.
∣K∣=3. We empirically found that using soft labels
from the weak annotator works better than assigning a single hard label. The target label cj for the
confidence network is calculated by using the mean
absolute difference of the true label and the weak
label: cj = 1 − (1/∣K∣) ∑_{k∈K} ∣y_j^k − ỹ_j^k∣, where yj is the one-hot encoding of the sentence label over all classes.
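The following sketch illustrates the lexicon-based weak annotator and the confidence target for classification; the token-level distributions are invented stand-ins for SentiWordNet entries, and the helper names are not from the paper.

```python
import numpy as np

# Hypothetical token-level distributions over (positive, negative, neutral);
# the numbers are made up for illustration.
LEXICON = {"good": [0.7, 0.1, 0.2], "bad": [0.1, 0.8, 0.1]}
UNIFORM = [1/3, 1/3, 1/3]

def weak_sentiment_label(tokens):
    """Average the per-token distributions to get a soft sentence label y~."""
    return np.mean([LEXICON.get(t, UNIFORM) for t in tokens], axis=0)

def confidence_target_classification(y_true_onehot, y_weak):
    """c_j = 1 - (1/|K|) * sum_k |y_j^k - y~_j^k| (mean absolute difference)."""
    return 1.0 - np.mean(np.abs(np.asarray(y_true_onehot) - y_weak))

y_weak = weak_sentiment_label(["good", "movie", "not", "bad"])
c = confidence_target_classification([1, 0, 0], y_weak)   # true label: positive
```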
4 Experiments and Results
Here we first describe baselines. Afterward, we
present the experimental setups for each of our
tasks along with their results and analysis.
4.1 Baselines and General Setups
For both tasks, we evaluate the performance of our
method compared to the following baselines:
• (1.WA) Weak Annotator, i.e. the unsupervised
method that we used for annotating the unlabeled
data.
• (2.WSO) Weak Supervision Only, i.e. the target
network trained only on weakly labeled data.
• (3.FSO) Full Supervision Only, i.e. the target
network trained only on true labeled data.
• (4.WS+FT) Weak Supervision + Fine Tuning, i.e.
the target network trained on the weakly labeled
data and fine-tuned on true labeled data.
• (5.WS+SFT) Weak Supervision + Supervision
Layer Fine-Tuning, i.e. the target network trained
only on weakly labeled data and the supervision
layer is fine-tuned on true labeled data while the
representation learning layer is kept fixed.
• (6.WS+RFT) Weak Supervision + Representation Fine Tuning, i.e. like WS+SFT, except that the representation learning layer is fine-tuned on true labeled data while the supervision layer is kept fixed.
• (7.NLI) New Label Inference (Veit et al., 2017)
is similar to our proposed neural architecture
inspired by the teacher-student paradigm (Hinton
et al., 2015; Romero et al., 2014), but instead
of having the confidence network to predict the
“confidence score” of the training instance, there
is a label generator network which is trained on
set V to map the weak labels of the instances in
U to the new labels. The new labels are then used
as the target for training the target network.
• (8.CWSJT ) Controlled Weak Supervision with
Joint Training is our proposed neural architecture
in which we jointly train the target network and the
confidence network by alternating batches drawn
from sets V and U (as explained in Section 2.2).
• (9.CWSJT+ ) Controlled Weak Supervision + Full
Supervision with Joint Training is the same as
CWSJT , except that parameters of the supervision
layer in target network are also updated using
batches from V , with regards to the true labels.
Additionally, we compare the performance of
CWSJT , with other possible training setups:
• (a.CWSST ) Separate Training, i.e. we consider the
confidence network as a separate network, without sharing the representation learning layer, and
train it on set V . We then train the target network
on the controlled weak supervision signals.
• (b.CWSCT ) Circular Training, i.e. we train the
target network on set U . Then the confidence
network is trained on data with true labels, and the
target network is trained again but on controlled
weak supervision signals.
• (c.CWSPT ) Progressive Training is the mixture
of the two previous baselines. Inspired by
(Rusu et al., 2016), we transfer the learned
information from the converged target network to
the confidence network using progressive training.
We then train the target network again on the
controlled weak supervision signals.
The proposed architectures are implemented in
TensorFlow (Tang, 2016; Abadi et al., 2015). We
use the Adam optimizer (Kingma and Ba, 2014) and
the back-propagation algorithm. Furthermore, to
prevent feature co-adaptation, we use dropout (Srivastava et al., 2014) as a regularization technique
in all models.
In our setup, the confidence network that predicts c̃j is a fully connected feed-forward network. Given that the confidence network is learned only from a small set of true labels, and to speed up training, we initialize the representation learning layer with pre-trained parameters, i.e., pre-trained word
embeddings. We use ReLU (Nair and Hinton,
2010) as a non-linear activation function α in both
target network and confidence network. In the
following, we describe task-specific setups and the
experimental results.
4.2 Document Ranking Setup & Results
Collections. We use two standard TREC collections for the task of ad-hoc retrieval: The first
collection (Robust04) consists of 500k news articles
from different news agencies as a homogeneous
collection. The second collection (ClueWeb)
is ClueWeb09 Category B, a large-scale web
collection with over 50 million English documents,
which is considered as a heterogeneous collection.
Spam documents were filtered out using the Waterloo spam scorer (http://plg.uwaterloo.ca/˜gvcormac/clueweb09spam/) (Cormack et al., 2011) with the default threshold of 70%.
Data with true labels. We take query sets that
contain human-labeled judgments: a set of 250
queries (TREC topics 301–450 and 601–700)
for the Robust04 collection and a set of 200
queries (topics 1-200) for the experiments on the
ClueWeb collection. For each query, we take all
documents judged as relevant plus the same number
of documents judged as non-relevant and form
pairwise combinations among them.
Data with weak labels. We create a query set
Q using the unique queries appearing in the AOL
query logs (Pass et al., 2006). This query set
contains web queries initiated by real users in
the AOL search engine that were sampled from a
three-month period from March 2006 to May 2006.
We applied standard pre-processing (Dehghani
et al., 2017c,a) on the queries. We filtered out a large
volume of navigational queries containing URL
substrings (“http”, “www.”, “.com”, “.net”, “.org”,
“.edu”). We also removed all non-alphanumeric
characters from the queries. For each dataset, we
took queries that have at least ten hits in the target
corpus using our weak annotator method. Applying
all these steps, we collect 6.15 million queries to
train on in Robust04 and 6.87 million queries for
ClueWeb. To prepare the weakly labeled training set
U , we take the top 1000 retrieved documents using
BM25 for each query from training query set Q,
which in total leads to ∼∣Q∣×106 training instances.
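A possible implementation of the query preprocessing described above is sketched here; the exact filtering rules of the paper may differ, so treat the substring list and regular expressions as assumptions.

```python
import re

NAV_SUBSTRINGS = ("http", "www.", ".com", ".net", ".org", ".edu")

def clean_queries(raw_queries):
    """Drop navigational queries containing URL substrings and strip
    non-alphanumeric characters, as described in the text above."""
    cleaned = []
    for q in raw_queries:
        if any(s in q.lower() for s in NAV_SUBSTRINGS):
            continue                                   # navigational query, skip
        q = re.sub(r"[^0-9a-zA-Z ]+", " ", q)          # keep letters, digits, spaces
        q = re.sub(r"\s+", " ", q).strip()
        if q:
            cleaned.append(q)
    return cleaned

print(clean_queries(["best pizza nyc!", "www.example.com", "weather 10025"]))
# -> ['best pizza nyc', 'weather 10025']
```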
Parameters and Settings.
We conducted
a nested 3-fold cross validation with 80/20
training/validation split in each fold. All hyperparameters of all models and baselines were tuned
individually on the validation set using batched GP
bandits with an expected improvement acquisition
function (Desautels et al., 2014). The size and
number of hidden layers for the ranker and the
confidence network were separately selected from
{64, 128, 256, 512} and {1, 2, 3, 4}, respectively.
The initial learning rate and the dropout parameter
were selected from {10−3 ,10−5 } and {0.0,0.2,0.5},
respectively. We considered embedding sizes of
{300,500}. The batch size in our experiments was
set to 128.
In all experiments, the parameters of the
network are optimized employing the Adam
optimizer (Kingma and Ba, 2014) and using the
computed gradient of the loss to perform the
back-propagation algorithm. At inference time,
for each query, we take the top 2000 retrieved
documents using BM25 as candidate documents
and re-rank them using the trained models. We use the Indri implementation of BM25 (https://www.lemurproject.org/indri.php) with default parameters (i.e., k1 = 1.2, b = 0.75, and k3 = 1000).
Results and Discussions. We evaluate on set V
and report two standard evaluation metrics: mean
average precision (MAP) of the top-ranked 1000
documents and normalized discounted cumulative
gain calculated for the top 20 retrieved documents
(nDCG@20). Statistically significant differences of
MAP and nDCG@20 values are determined using
the two-tailed paired t-test with p value < 0.05, with Bonferroni correction.
Table 1: Performance of the proposed method and baseline models on different datasets. (▲ or ▼ indicates that the improvements or degradations are statistically significant, at the 0.05 level, using the paired two-tailed t-test. For all models, the improvement/degradation is with respect to the “weak supervision only” baseline (WSO). For CWSJT, the improvement over all baselines is considered and the Bonferroni correction is applied on the significance tests.)
Method | Robust04 MAP | Robust04 nDCG@20 | ClueWeb MAP | ClueWeb nDCG@20
1 WABM25 | 0.2503 | 0.4102 | 0.1021 | 0.2070
2 WSO | 0.2702 | 0.4290 | 0.1297 | 0.2201
3 FSO | 0.1790▼ | 0.3519▼ | 0.0782▼ | 0.1730▼
4 WS+FT | 0.2830▲ | 0.4355▲ | 0.1346▲ | 0.2346▲
5 WS+SFT | 0.2711 | 0.4203 | 0.1002▼ | 0.1940▼
6 WS+RFT | 0.2810▲ | 0.4316 | 0.1286 | 0.2240
7 NLI | 0.2421▼ | 0.4092▼ | 0.1010▼ | 0.2004▼
8 CWSJT | 0.3024▲ | 0.4507▲ | 0.1372▲ | 0.2453▲
9 CWS+JT | 0.2786▲ | 0.4367▲ | 0.1310 | 0.2244
Table 1 shows the performance on both datasets.
Based on the results, CWSJT provides a significant boost in performance over all datasets.
There are two interesting points we want to
highlight. First, among the fine-tuning experiments,
updating all parameters of the target network is
the best fine tuning strategy. Updating only the
parameters of the representation layer based on
the true labels works better than updating only
parameters of the supervision layer. This supports
our design choice of a shared embedding layer
which gets updated on set V .
Second, while it seems reasonable to make use
of true labels for updating all parameters of the
target network, CWS+JT achieves no better results
than CWSJT . It also performs mostly even worse
than WS+FT. This is because during training, the
direction of the parameter optimization is highly
affected by the type of supervision signal and while
we control the magnitude of the gradients, we do not
change their directions, so alternating between two
sets with different label qualities (i.e. different supervision signal types, weak and strong) confuses the supervision layer of the target network. In fine-tuning, we do not have this problem since we optimize the parameters with respect to the supervision from these two sets in two separate stages.
It is noteworthy that we have also tried CWS+JT
with another objective function for the target
network taking both weak and true labels into
account which was slightly better, but gives no
improvement over CWSJT.
Table 2: Performance of the variants of the proposed method on different datasets. (▲ or ▼ indicates that the improvements or degradations are statistically significant, at the 0.05 level, using the paired two-tailed t-test. For all models, the improvement/degradation is with respect to the “weak supervision only” baseline (WSO in Table 1). For CWSJT, the improvement over all baselines is considered and the Bonferroni correction is applied on the significance tests.)
Method | Robust04 MAP | Robust04 nDCG@20 | ClueWeb MAP | ClueWeb nDCG@20
a CWSST | 0.2716 | 0.4237 | 0.1320 | 0.2213
b CWSCT | 0.2961▲ | 0.4440▲ | 0.1378▲ | 0.2431▲
c CWSPT | 0.2784▲ | 0.4292 | 0.1314 | 0.2207
CWSJT | 0.3024▲ | 0.4507▲ | 0.1372▲ | 0.2453▲
In the ranking task, the target network is designed
in particular to be trained on weak annotations (Dehghani et al., 2017c), hence training the network
only on weak supervision performs better than FSO.
This is due to the fact that ranking is a complex task
requiring many training instances, while relatively
few true labels are available.
The performance of NLI is worse than CWSJT
as learning a mapping from imperfect labels to
accurate labels and training the target network
on new labels is essentially harder than learning
to filter out the noisy labels, hence needs a lot of
supervised data. The reason is that for the ranking,
due to the small number of training instances relative to the task complexity, NLI fails to generate better new labels; hence it directly misleads the target network
and completely fails to improve the performance.
Table 2 shows the performance of different
training strategies. As shown, CWSJT and CWSCT
perform better than other strategies. In CWSCT, the confidence network is trained separately, while still being able to enjoy shared learned
information from the target network. However, it
is less efficient as we need two rounds of training
on weakly labeled data.
CWSST performs poorly since the training data
V is too small to train a high-quality confidence network without taking advantage of the vast amount
of weakly annotated data in U . We also noticed that
this strategy leads to a slow convergence compared
to WSO. Also transferring learned information from
target network to confidence network via progressive training, i.e. CWSPT , performs no better than
full sharing of the representation learning layer.
Table 3: Performance of the baseline models as well as the proposed method on different datasets. (▲ or ▼ indicates that the improvements or degradations are statistically significant, at the 0.05 level, using the paired two-tailed t-test. For all models, the improvement/degradation is with respect to the “weak supervision only” baseline (WSO). For CWSJT, the improvement over all baselines is considered and the Bonferroni correction is applied on the significance tests.)
Method | SemEval-14 | SemEval-15
1 WALexicon | 0.5141 | 0.4471
2 WSO | 0.6719 | 0.5606
3 FSO | 0.6307 | 0.5811
4 WS+FT | 0.7080▲ | 0.6441▲
5 WS+SFT | 0.6875 | 0.6193▲
6 WS+RFT | 0.6932 | 0.6102▲
7 NLI | 0.7113▲ | 0.6433▲
8 CWSJT | 0.7362▲ | 0.6626▲
9 CWS+JT | 0.7310▲ | 0.6551▲
SemEval 1st | 0.7162▲ | 0.6618▲
Table 4: Performance of the variants of the proposed method for the sentiment classification task, on different datasets. (▲ or ▼ indicates that the improvements or degradations are statistically significant, at the 0.05 level, using the paired two-tailed t-test. For all models, the improvement/degradation is with respect to the “weak supervision only” baseline (WSO in Table 3). For CWSJT, the improvement over all baselines is considered and the Bonferroni correction is applied on the significance tests.)
Method | SemEval-14 | SemEval-15
a CWSST | 0.7183▲ | 0.6501▲
b CWSCT | 0.7363▲ | 0.6667▲
c CWSPT | 0.7009▲ | 0.6118▲
CWSJT | 0.7362▲ | 0.6626▲
4.3 Sentiment Classification Setup & Results
Collections.
We test our model on the twitter message-level sentiment classification of
SemEval-15 Task 10B (Rosenthal et al., 2015).
Datasets of SemEval-15 subsume the test sets from
previous editions of SemEval, i.e. SemEval-13 and
SemEval-14. Each tweet was preprocessed so that
URLs and usernames are masked.
Data with true labels. We use train (9,728 tweets)
and development (1,654 tweets) data from SemEval13 for training and SemEval-13-test (3,813 tweets)
for validation. To make our results comparable to
the official runs on SemEval we use SemEval-14
(1,853 tweets) and SemEval-15 (2,390 tweets) as
test sets (Rosenthal et al., 2015; Nakov et al., 2016).
Data with weak labels. We use a large corpus
containing 50M tweets collected during two months
for both, training the word embeddings and creating
the weakly annotated set U using the lexicon-based
method explained in Section 3.2.
Parameters and Settings. Similar to the document ranking task, we tuned the hyper-parameters
for each model, including baselines, separately with
respect to the true labels of the validation set using
batched GP bandits with an expected improvement
acquisition function (Desautels et al., 2014). The
size and number of hidden layers for the classifier
and the confidence network were separately selected
from {32,64,128} and {1,2,3}, respectively. We
tested the model with both, 1 and 2 convolutional
layers. The number of convolutional feature maps
and the filter width is selected from {200, 300}
and {3, 4, 5}, respectively. The initial learning
rate and the dropout parameter were selected from
{1E − 3,1E − 5} and {0.0,0.2,0.5}, respectively.
We considered embedding sizes of {100,200} and
the batch size in these experiments was set to 64.
Results and Discussion. We report the performance of our model and the baseline models in
terms of the official SemEval metric, Macro-F1, in Table 3. We also report the statistical significance of F1 improvements using a two-tailed paired t-test
with p value < 0.05, with Bonferroni correction.
Our method is the best performing among all the
baselines.
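For reference, a minimal sketch of this evaluation protocol using scikit-learn and SciPy is given below; the label arrays and per-fold scores are placeholders, and the number of comparisons used for the Bonferroni correction is an assumed example value.

```python
from sklearn.metrics import f1_score
from scipy.stats import ttest_rel

# Placeholder predictions over three sentiment classes (0/1/2).
y_true = [0, 1, 2, 1, 0, 2, 1]
pred_cws = [0, 1, 2, 1, 0, 1, 1]
pred_wso = [0, 2, 2, 1, 1, 1, 1]

macro_f1_cws = f1_score(y_true, pred_cws, average="macro")
macro_f1_wso = f1_score(y_true, pred_wso, average="macro")
print(round(macro_f1_cws, 3), round(macro_f1_wso, 3))

# Paired two-tailed t-test over matched score samples, followed by a
# Bonferroni correction for the number of baseline comparisons.
scores_cws = [0.71, 0.74, 0.73, 0.75]   # illustrative per-run Macro-F1 values
scores_wso = [0.66, 0.68, 0.67, 0.69]
stat, p = ttest_rel(scores_cws, scores_wso)
num_comparisons = 8                      # assumed number of tested baselines
significant = (p * num_comparisons) < 0.05
```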
Unlike the ranking task, training the network only on data with true labels, i.e. FSO, performs rather well. In the sentiment classification task, learning a representation of the input, which is a sentence (tweet), is simpler than in the ranking task, in which we try to learn representations for queries and long documents. Consequently, we need less data to be able to learn a suitable representation, and with the amount of available data with true labels we can already capture a rather good representation without the help of weak data, while this was impossible in the ranking task.
However, as the results suggest, we can still
gain improvement using fine-tuning. In this task,
behaviors of different fine-tuning experiments are
similar to the ranking task. Furthermore, updating
parameters of the supervision layer, with respect to
the true labels, i.e. CWS+JT model, does not perform
better than CWSJT , which again supports our choice
of updating just the representation learning layer
with respect to the signals from data with true labels.
In the sentiment classification task, the performance of NLI is acceptable compared to the ranking
task. This is first of all because generating new
classification labels is essentially simpler. Secondly,
in this task we need to learn to represent a simpler input and learn a simpler function to predict the labels, while we have a relatively bigger set of supervised data, which helps to generate new labels. However,
the performance of NLI is still lower than CWSJT .
We can argue that CWSJT is a more conservative
approach. It is in fact equipped with a soft filter
that decreases the effect of noisy training examples
from set U on parameter updates during training.
This is a smoother action as we just down-weight
the gradient, while NLI might change the direction
of the gradient by generating a completely new
label and consequently it is prone to more errors,
especially when there is not enough high-quality
training data to learn to generate better labels.
In the sentiment classification task, besides the
general baselines, we also report the best performing systems, which are also convolution-based
models (Rouvier and Favre (2016) on SemEval-14;
Deriu et al. (2016) on SemEval-15). Our proposed
model outperforms the best system on both datasets.
Table 4 also presents the results of different
training strategies for the sentiment classification
task. As shown, similar to the ranking task, CWSJT
and CWSCT perform better than other strategies.
Although CWSCT is slightly better (not statistically
significant) in terms of effectiveness compared
to CWSJT , it is not as efficient as CWSJT during
training.
Compared to the ranking task, for sentiment
classification, it is easier to estimate the confidence
score of instances with respect to the amount of
available supervised data. Therefore, CWSST
is able to improve the performance over WSO
significantly. Moreover, CWSPT fails compared
to the strategies where the representation learning
layer is shared between the target network and the
confidence network.
4.4 Faster Learning Pace
Controlling the effect of supervision to train neural networks not only improves the performance,
but also provides the network with more solid signals which speeds up the learning process. Figure 4
illustrates the training/validation loss for both networks, compared to the loss of training the target network with weak supervision, along with their performance on test sets, with respect to different amounts
of training data for the sentiment classification task3 .
3 We have observed a similar speed-up in the learning process of the ranking task; however, we skip its plots due to the space limit, since we have nested cross-validation for the ranking task and a set of plots for each fold.
Figure 4: Loss of the target network (Lt) and the confidence network (Lc) compared to the loss of WSO (LWSO) on the training/validation set, and performance of CWS, WSO, and WA on the test sets, with respect to different amounts of training data on sentiment classification.
As shown, in the training, the loss of the target network in our model, i.e. Lt, is higher than the loss of the network which is trained only on weakly
supervised data, i.e. LWSO . However, since these
losses are calculated with respect to the weak labels
(not true labels), having very low training loss can be
an indication of overfitting to the imperfection in the
weak labels. In other words, regardless of the general problem of lack of generalization due to overfitting, in the setup of learning from weak labels, predicting labels that are similar to the training labels (very low training loss) is not necessarily a desirable outcome.
In the validation set, however, Lt decreases faster
than LWSO , which supports the fact that LWSO
overfits to the imperfection of weak labels, while
our setup helps the target network to escape from
this imperfection and do a good job on the validation
set. In terms of the performance, compared to
WSO, the performance of CWS on both test sets
increases very quickly and CWS is able to pass the
performance of the weak annotator by seeing much
fewer instances annotated by the weak annotator.
5 Related Work
Learning from weak or noisy labels has been studied
in the literature (Frénay and Verleysen, 2014). We
briefly review research most relevant to our work.
Semi-supervised learning. There are semi-supervised learning algorithms (Zhu, 2005) developed to utilize weakly or even unlabeled
data. Self-training (Rosenberg et al., 2005) or
pseudo-labeling (Lee, 2013) try to predict the labels of additionally provided unlabeled data. In particular for neural networks,
methods use greedy layer-wise pre-training of
weights using unlabeled data alone followed by
supervised fine-tuning (Deriu et al., 2017; Severyn
and Moschitti, 2015b,a; Go et al., 2009). Other
methods learn unsupervised encodings at multiple
levels of the architecture jointly with a supervised
signal (Ororbia II et al., 2015; Weston et al., 2012).
Meta-learning. From the meta-learning perspective, our approach is similar to Andrychowicz et al.
(2016) where a separate recurrent neural network
called optimizer learns to predict an optimal update
rule for updating parameters of the target network.
The optimizer receives a gradient from the target
network and outputs the adjusted gradient matrix.
As the number of parameters in modern neural
networks is typically on the order of millions the
gradient matrix becomes too large to feed into the
optimizer, so the approach of Andrychowicz et al.
(2016) is applied to very small models. In contrast,
our approach leverages additional weakly labeled
data where we use the confidence network to predict
per-instance scores that calibrate gradient updates
for the target network.
Direct learning with weak/noisy labels. Many
studies tried to address learning in the condition of
imperfect labels. Some noise cleansing methods
were proposed to remove or correct mislabeled
instances (Brodley and Friedl, 1999). Other studies
showed that weak or noisy labels can be leveraged
by employing a particular architecture or defining a
proper loss function to avoid overfitting the training
data imperfection (Dehghani et al., 2017c; Patrini
et al., 2016; Beigman and Klebanov, 2009; Zeng
et al., 2015; Bunescu and Mooney, 2007).
Modeling imperfection. There is also research
trying to model the pattern of the noise or weakness
in the labels. Some methods leverage generative
models to denoise weak supervision sources that a
discriminative model can learn from (Ratner et al.,
2016; Rekatsinas et al., 2017; Varma et al., 2017).
Other methods aim to capture the pattern of the
noise by inserting an extra layer or a separated module (Sukhbaatar et al., 2014; Veit et al., 2017), infer
better labels from noisy labels and use them to supervise the training of the network. This is inspired by
the teacher-student paradigm (Hinton et al., 2015;
Romero et al., 2014; Xiao et al., 2015) in which
the teacher generates a new label given the training
instance with its corresponding weak or noisy label. However, as we show in our experiments, this
approach is not sufficient when the amount of supervised data is not enough to generate better labels.
6 Conclusion and Future Directions
Training neural networks using large amounts of
weakly annotated data is an attractive approach in
scenarios where an adequate amount of data with
true labels is not available. In this paper, we propose
a multi-task neural network architecture that unifies
learning to estimate the confidence score of weak
annotations and training neural networks to learn a
target task with controlled weak supervision, i.e. using weak labels to update the parameters, but taking their estimated confidence scores into account.
This helps to alleviate updates from instances with
unreliable labels that may harm the performance.
We applied the model to two tasks, document
ranking and sentiment classification, and empirically verified that the proposed model speeds up the
training process and obtains more accurate results.
As a promising future direction, we aim to understand to what extent using weak annotations has
the potential of training high-quality models with
neural networks and understand the exact conditions
under which our proposed method works.
References
Martı́n Abadi et al. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software
available from tensorflow.org. http://tensorflow.org/.
Marcin Andrychowicz, Misha Denil, Sergio Gomez,
Matthew W Hoffman, David Pfau, Tom Schaul, and
Nando de Freitas. 2016. Learning to learn by gradient
descent by gradient descent. In Advances in Neural
Information Processing Systems. pages 3981–3989.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical
resource for sentiment analysis and opinion mining.
In LREC. volume 10, pages 2200–2204.
Eyal Beigman and Beata Beigman Klebanov. 2009.
Learning with annotation noise. In Proceedings of
the Joint Conference of the 47th Annual Meeting of
the ACL and the 4th International Joint Conference
on Natural Language Processing of the AFNLP:
Volume 1-Volume 1. Association for Computational
Linguistics, pages 280–287.
Carla E Brodley and Mark A Friedl. 1999. Identifying
mislabeled training data.
Journal of artificial
intelligence research 11:131–167.
Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal
supervision. In ACL.
Gordon V. Cormack, Mark D. Smucker, and Charles L.
Clarke. 2011. Efficient and effective spam filtering
and re-ranking for large web datasets. Inf. Retr.
14(5):441–465.
Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca,
and Pascal Fleury. 2017a. Learning to attend, copy,
and generate for session-based query suggestion. In
Proceedings of The international Conference on Information and Knowledge Management (CIKM’17).
Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe,
and Jaap Kamps. 2017b. Learning to learn from
weak supervision by full supervision. arXiv preprint
arXiv:1711.11383 .
Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017c.
Neural ranking models with weak supervision.
In Proceedings of The 40th International ACM
SIGIR Conference on Research and Development in
Information Retrieval (SIGIR2017).
Jan Deriu, Maurice Gonzenbach, Fatih Uzdilli, Aurelien Lucchi, Valeria De Luca, and Martin Jaggi. 2016.
Swisscheese at semeval-2016 task 4: Sentiment classification using an ensemble of convolutional neural
networks with distant supervision. Proceedings of
SemEval pages 1124–1128.
Jan Deriu, Aurelien Lucchi, Valeria De Luca, Aliaksei
Severyn, Simon Müller, Mark Cieliebak, Thomas
Hofmann, and Martin Jaggi. 2017. Leveraging
large amounts of weakly supervised data for multilanguage sentiment classification. In Proceedings of
the 26th international International World Wide Web
Conference (WWW’17). pages 1045–1052.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter
sentiment classification using distant supervision.
CS224N Project Report, Stanford 1(12).
Hussam Hamdan, Frederic Béchet, and Patrice Bellot.
2013. Experiments with dbpedia, wordnet and
sentiwordnet as resources for sentiment analysis
in micro-blogging. In Second Joint Conference
on Lexical and Computational Semantics (* SEM).
volume 2, pages 455–459.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. arXiv
preprint arXiv:1503.02531 .
Thorsten Joachims. 2002. Optimizing search engines
using clickthrough data. In Proceedings of the
eighth ACM SIGKDD international conference on
Knowledge discovery and data mining. ACM, pages
133–142.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980 .
Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mohammad. 2014. Sentiment analysis of short informal
texts. Journal of Artificial Intelligence Research
50:723–762.
Dong-Hyun Lee. 2013. Pseudo-label: The simple and
efficient semi-supervised learning method for deep
neural networks. In Workshop on Challenges in
Representation Learning, ICML. volume 3, page 2.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S
Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their
Compositionality. In NIPS ’13. pages 3111–3119.
Vinod Nair and Geoffrey E Hinton. 2010. Rectified
linear units improve restricted boltzmann machines.
In Proceedings of the 27th international conference
on machine learning (ICML-10). pages 807–814.
Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. Semeval-2016
task 4: Sentiment analysis in twitter. Proceedings of
SemEval pages 1–18.
Alexander G Ororbia II, C Lee Giles, and David
Reitter. 2015. Learning a deep hybrid model for
semi-supervised text classification. In Proceedings
of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP).
Thomas Desautels, Andreas Krause, and Joel W Burdick. 2014. Parallelizing exploration-exploitation
tradeoffs in gaussian process bandit optimization.
Journal of Machine Learning Research 15(1):3873–
3923.
Greg Pass, Abdur Chowdhury, and Cayley Torgeson.
2006. A picture of search. In InfoScale ’06.
Benoı̂t Frénay and Michel Verleysen. 2014. Classification in the presence of label noise: a survey.
IEEE transactions on neural networks and learning
systems 25(5):845–869.
Giorgio Patrini, Alessandro Rozza, Aditya Menon,
Richard Nock, and Lizhen Qu. 2016. Making neural
networks robust to label noise: a loss correction
approach. arXiv preprint arXiv:1609.03683 .
Alexander J Ratner, Christopher M De Sa, Sen Wu,
Daniel Selsam, and Christopher Ré. 2016. Data
programming: Creating large training sets, quickly.
In Advances in Neural Information Processing
Systems. pages 3567–3575.
Theodoros Rekatsinas, Xu Chu, Ihab F Ilyas, and
Christopher Ré. 2017. Holoclean: Holistic data
repairs with probabilistic inference. arXiv preprint
arXiv:1702.00820 .
Stephen Robertson, Hugo Zaragoza, et al. 2009. The
probabilistic relevance framework: Bm25 and
beyond. Foundations and Trends® in Information
Retrieval 3(4):333–389.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi
Kahou, Antoine Chassang, Carlo Gatta, and Yoshua
Bengio. 2014. Fitnets: Hints for thin deep nets.
arXiv preprint arXiv:1412.6550 .
Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. 2005. Semi-supervised self-training of
object detection models. In Seventh IEEE Workshop
on Applications of Computer Vision.
Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko,
Saif M Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 task 10: Sentiment
analysis in twitter. In Proceedings of the 9th international workshop on semantic evaluation (SemEval
2015). pages 451–463.
Mickael Rouvier and Benoit Favre. 2016. Sensei-lif at
semeval-2016 task 4: Polarity embedding fusion for
robust sentiment analysis. Proceedings of SemEval
pages 202–208.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray
Kavukcuoglu, Razvan Pascanu, and Raia Hadsell.
2016. Progressive neural networks. arXiv preprint
arXiv:1606.04671 .
Aliaksei Severyn and Alessandro Moschitti. 2015a.
Twitter sentiment analysis with deep convolutional
neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and
Development in Information Retrieval. ACM, pages
959–962.
Aliaksei Severyn and Alessandro Moschitti. 2015b.
Unitn: Training deep convolutional neural network
for twitter sentiment classification. In Proceedings of
the 9th International Workshop on Semantic Evaluation (SemEval 2015), Association for Computational
Linguistics, Denver, Colorado. pages 464–469.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky,
Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: A simple way to prevent neural networks
from overfitting. J. Mach. Learn. Res. 15(1):1929–
1958.
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri,
Lubomir Bourdev, and Rob Fergus. 2014. Training
convolutional networks with noisy labels. arXiv
preprint arXiv:1406.2080 .
Yuan Tang. 2016. Tf.learn: Tensorflow’s high-level
module for distributed machine learning. arXiv
preprint arXiv:1612.04251 .
Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu,
Christopher De Sa, and Christopher Ré. 2017. Socratic learning: Correcting misspecified generative
models using discriminative models. arXiv preprint
arXiv:1610.08123 .
Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin,
Abhinav Gupta, and Serge Belongie. 2017. Learning
from noisy large-scale datasets with minimal supervision. In The Conference on Computer Vision and
Pattern Recognition.
Jason Weston, Frédéric Ratle, Hossein Mobahi, and
Ronan Collobert. 2012. Deep learning via semisupervised embedding. In Neural Networks: Tricks
of the Trade, Springer, pages 639–655.
Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and
Xiaogang Wang. 2015. Learning from massive noisy
labeled data for image classification. In Proceedings
of the IEEE Conference on Computer Vision and
Pattern Recognition. pages 2691–2699.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao.
2015. Distant supervision for relation extraction
via piecewise convolutional neural networks. In
EMNLP. pages 1753–1762.
Xiaojin Zhu. 2005. Semi-supervised learning literature
survey .
Machine Learning Application in the Life Time of Materials
Xiaojiao Yu
Abstract:
Materials design and development typically takes several decades from the initial discovery to
commercialization with the traditional trial and error development approach. With the accumulation of
data from both experimental and computational results, data based machine learning becomes an
emerging field in materials discovery, design and property prediction. This manuscript reviews the
history of materials science as a discipline, the most common machine learning methods used in materials science, and specifically how they are used in materials discovery, design, synthesis, and even failure detection and analysis after materials are deployed in real applications. Finally, the limitations of machine learning for application in materials science and the challenges in this emerging field are discussed.
Keywords: Machine learning, Materials discovery and design, Materials synthesis, Failure detection
1. Introduction
Materials science has a long history that dates back to the Bronze Age 1. However, it was not until the 16th century that the first book on metallurgy was published, marking the beginning of systematic studies in materials science 2. Research in materials science was purely empirical until theoretical models were developed. With the advent of computers in the last century, numerical methods to solve theoretical models became available, ranging from DFT (density functional theory) based quantum mechanical modeling of electronic structure for optoelectronic property calculations, to continuum-based finite element modeling of mechanical properties 3-4. Multiscale modeling that bridges various time and spatial scales has also been developed in materials science to better simulate real, complex systems 5. Even so, it takes several decades to go from materials discovery to development and commercialization 6-7. Even though physical modeling can reduce this time by guiding experimental work, its limitations are also obvious. DFT is mainly used to calculate optoelectronic properties of functional materials, and typically only for materials without defects 8; this assumption itself is far from reality. Newer concepts such as multiscale modeling are still far from large-scale industrial application. Traditional ways of materials development are impeding progress in this field and in the relevant technological industries.
With the large amount of complex data generated by experiments and especially by simulations, both published and archived, including materials property values, processing conditions, and microstructural images, analyzing it all is becoming increasingly challenging for researchers. Inspired by the Human Genome Project, the Obama administration launched the Materials Genome Initiative, hoping to cut current materials development time in half 9. With the increase in computing power and the development of machine learning algorithms, materials informatics has increasingly become another paradigm in the field.
Researchers are already using machine learning methods for materials property prediction and discovery. Machine learning forward models are used for materials property prediction after being trained on data from experiments and physical simulations. Bhadeshia et al. applied the neural network (NN) technique to model creep properties and phase structure in steel 10-11. Crystal structure prediction is another area of study for machine learning, thanks to the large amount of structural data in crystallographic databases. The K-nearest-neighbors method was used to identify a material's structure type based on its neighbors' structure types 12-13. Machine learning is also applied to materials discovery by searching compositional and structural space for desired properties, which is essentially a constrained optimization problem. Baerns et al. were able to find an effective multicomponent catalyst for low-temperature oxidation of low-concentration propane with a genetic algorithm and a neural network 14.
There are already a few reviews on machine learning applications in materials science. Dane Morgan and Gerbrand Ceder reviewed data mining methods in materials development 15. Tim Mueller, Aaron Gilad Kusne, and Rampi Ramprasad also reviewed the progress and application of machine learning in materials science, more specifically in phase diagram, crystal structure and property prediction 16. However, their reviews are mostly focused on applications in fundamental materials science. Here, we take a more practical approach and review machine learning applications in materials design, development and the stages after deployment. We first discuss data problems specific to materials science. Then, the machine learning concept and the most widely used methods are introduced. Up-to-date reviews of machine learning applications in materials discovery, design, development, deployment and recall are conducted. The relation between data-driven research and traditional experimental and physical modeling is discussed afterwards. Finally, challenges and future endeavors of machine learning based materials science research are pointed out for researchers in this niche area.
2.1 Data Problem in Materials Science
The successful application of informatics in biology, astronomy and business has inspired similar applications in materials science. However, materials science differs from other subjects due to its unique characteristics. Some researchers debate whether there is a big data problem in materials science at all, since the size of materials data is nothing comparable to biology data; the largest existing database based on experimental materials results has about 5×10^5 data records 17. However, rapid progress in computational science and microscopy techniques is producing enormous amounts of output data 18. Furthermore, materials science data tend to be complex and heterogeneous in terms of their sources and types, ranging from discrete numerical values to qualitative descriptions of materials behavior and imaging data 19. Data in materials science also exhibit the veracity characteristic of big data, by which we acknowledge the practical reality of missing data and uncertainties in the data 19. According to the 4V (volume, variety, velocity, veracity) characterization of big data, materials science does have a big data problem 19. With the emergence of this big data in materials science, how to extract hidden information from the complex data and interpret the resulting information is becoming increasingly important for materials design and development.
2.2 Machine Learning Methods
Machine learning, a branch of artificial intelligence, is about computers learning from existing data without being explicitly programmed, and making predictions on new data by building a model from input samples. Depending on the assigned task, machine learning can be classified into three categories: supervised learning, where algorithms are first trained on a set of input values with labeled output values and are then used to predict output values for unseen inputs; unsupervised learning, where there are no labeled output values for the training data and the algorithm is used to discover patterns in the input values; and reinforcement learning, where a program interacts with an environment dynamically to maximize accumulated rewards. Reinforcement learning is not used in the materials science field; hence it is not introduced in detail in this manuscript. Supervised learning can be either a classification problem or a regression problem, depending on whether the output value is discrete or continuous.
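To make the distinction concrete, the short sketch below (a minimal illustration using scikit-learn on synthetic data; the dataset and all variable names are hypothetical and not taken from any study reviewed here) trains a supervised classifier on labeled samples and, separately, clusters the same samples without using their labels.

```python
# Minimal sketch contrasting supervised and unsupervised learning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Synthetic "materials" samples: 200 rows of 5 numeric descriptors, 2 classes.
X, y = make_classification(n_samples=200, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised learning: labels are used during training, then predictions are scored.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: only the inputs are used, and clusters are discovered.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```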
2.3 Method Workflow
A machine learning workflow typically comprises several steps, including raw data collection, data preprocessing (filling in missing data, handling outliers, data transformation), feature engineering for feature selection and extraction (e.g., principal component analysis), model selection, training, validation and testing. A detailed workflow is presented in Fig 1. To select the best algorithm for a particular task, model evaluation is important, and different algorithms are evaluated with different metrics. For instance, a classifier's evaluation metrics include the confusion matrix, AUC (area under curve), precision and recall, the F measure, and the Kolmogorov–Smirnov (K-S) chart. The confusion matrix is a 2×2 matrix with four elements: true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) 20. Other accuracy measures are sensitivity (the true positive rate, $TP/(TP+FN)$) and specificity (the true negative rate, $TN/(TN+FP)$). AUC is the area under the ROC curve, which considers the relation between sensitivity and specificity; the greater the area under the curve, the more accurate the model. Precision is $TP/(TP+FP)$, and recall is the true positive rate defined above. The precision–recall curve shows the fraction of predictions that are false positives 21. The F measure is another measure of model accuracy, defined as the weighted harmonic mean of the precision and recall of the test; F balances precision and recall 22. K-S evaluates how well the model separates the positive and negative distributions; a higher K-S value means better separation 23.
For regression algorithms, evaluation metrics include the mean absolute error, the (root) mean squared error, $\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i-\hat{y}_i)^2}$, and the coefficient of determination ($R^2$). $R^2 = \frac{\text{regression variation}}{\text{total variation}} = \frac{\sum_{i=1}^{N}(\hat{Y}_i-\bar{Y})^2}{\sum_{i=1}^{N}(Y_i-\bar{Y})^2}$ measures the percent of total variability that is explained by the regression model 24.
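As a concrete illustration of these metrics, the sketch below (a minimal example with made-up predictions, not results from any cited study) computes the confusion-matrix-based classification metrics and the regression metrics defined above with plain NumPy.

```python
# Minimal sketch: classification and regression evaluation metrics defined above.
import numpy as np

# --- Classification metrics from a confusion matrix (toy labels/predictions) ---
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
TP = np.sum((y_true == 1) & (y_pred == 1))
TN = np.sum((y_true == 0) & (y_pred == 0))
FP = np.sum((y_true == 0) & (y_pred == 1))
FN = np.sum((y_true == 1) & (y_pred == 0))
sensitivity = TP / (TP + FN)      # true positive rate (recall)
specificity = TN / (TN + FP)      # true negative rate
precision = TP / (TP + FP)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"precision={precision:.2f} F1={f1:.2f}")

# --- Regression metrics (toy targets/predictions) ---
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.7])
mae = np.mean(np.abs(y - y_hat))
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
# R^2 computed in the common 1 - SS_res/SS_tot form, which coincides with the
# explained-variation ratio above for least-squares fits.
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"MAE={mae:.3f} RMSE={rmse:.3f} R2={r2:.3f}")
```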
Fig. 1. Flowchart of a typical machine learning method
2.4 Method Comparison
Some of the most common machine learning algorithms are SVM (support vector machine), ANN (artificial neural network), logistic regression, and decision trees. Support vector machine algorithms find the hyperplane that separates different classes with the highest margin 25. The advantage of SVM is that the solution is global and unique; the computational complexity of SVM does not depend on the dimension of the input space, and it is less prone to overfitting 26-27. However, SVM does not work well on unbalanced data 26. The artificial neural network is inspired by the biological brain, where artificial neurons are connected to mimic the connections of neurons in the brain 28. Multiple hidden layers and neurons add to the complexity of the neural network architecture. The strength of ANNs is that they are flexible and can represent any nonlinear or linear function. However, they need a large amount of training data, are prone to overfitting, and their hyperparameter tuning is tedious and troublesome. The decision tree is another commonly used basic classification algorithm, which comprises a root node, internal nodes, branches, leaf nodes, and depth 29. A decision tree progressively splits the tested data based on input feature values; the decision process follows the branches, which are the connections between internal nodes and their parent nodes, until it reaches a leaf node 30. Ensemble methods such as random forest and AdaBoost construct a large number of trees from bootstrap samples or iteratively build an ensemble of weak learners, in an attempt to generate a strong overall model. Ensemble methods usually perform better than basic machine learning algorithms in terms of reducing variance and bias 31.
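The following sketch (a generic scikit-learn comparison on synthetic data, not tied to any specific materials dataset) illustrates how the algorithms discussed above can be compared under cross-validation.

```python
# Minimal sketch: comparing the classifiers discussed above with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# Synthetic stand-in data: 400 samples, 10 numeric descriptors, binary target.
X, y = make_classification(n_samples=400, n_features=10, n_informative=5, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```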
3.1 Machine Learning Application in Materials Discovery and Design
An important concept in the materials science field is the structure–property–performance relationship 32. Developing materials that meet the required performance and properties comes down to controlling the processing conditions, structure and composition of the materials. Hence, understanding how processing conditions, structure and composition affect materials properties and performance is the first step towards materials design. Traditionally, controlled experiments are conducted to isolate the effect of one variable. However, variables are often correlated with each other, and it can be infeasible to isolate a given variable for experimental testing 33. Data mining can help reveal hidden relations among a large number of materials parameters and processing conditions and their relations with the dependent materials properties 33. Traditional ways of materials development can be disrupted and reshaped by making use of the available data.
3.1.1 Materials Property Prediction
Materials design first of all requires an understanding of how desired properties such as yield strength, toughness, ultimate tensile strength and fatigue life are affected by intrinsic microstructure, chemical composition and crystal structure, and by external processing, loading conditions and temperature. Machine learning algorithms can derive the quantitative relation between the independent and dependent variables, and hence make predictions given enough training data, when a physical model does not exist or is too complicated to apply 33.
The neural network algorithm has been used for toughness prediction of ferritic steel welds due to its ability to handle complex models 33. Toughness was studied as a function of chemical composition, microstructure, welding process and testing temperature; their influence on toughness is shown in Fig 2. The interaction between different variables can also be predicted with the neural network algorithm, as shown in Fig 3. The crossing of the two toughness curves as a function of temperature and manganese concentration indicates that at higher temperatures the influence of manganese on toughness is not only reduced but becomes negative.
Fig 2. Bar chart showing a measure of the model-perceived significance of each of the input variables in influencing toughness. 33
Fig 3. Variation in the normalized toughness as a function of the manganese concentration and the test
temperature. 33
ANN can also be used to predict constitutive relations. For instance, the constitutive flow behavior of 42CrMo steel was predicted with strain, log strain rate and temperature as inputs and flow stress as output. The predicted results show good correlation with the experimental values, indicating the excellent capability of the developed model for predicting flow stress (Fig 4) 34. The ultimate tensile strength, yield strength, tensile elongation, strain hardening exponent and strength coefficient of austenitic stainless steel grades 304L and 316L were also predicted by ANN as functions of temperature and strain rate. The optimum architecture is [2-6-5] for ASS 304L and [2-17-5] for ASS 316L using feed-forward back-propagation learning. Model accuracy was verified with the correlation coefficient, the average absolute error and its standard deviation 35.
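As an illustration of such a small feed-forward architecture, the sketch below builds a [2-6-5] multi-output regressor (2 inputs standing in for temperature and strain rate, one hidden layer of 6 neurons, 5 outputs standing in for the 5 properties) with scikit-learn. The data are randomly generated placeholders, not the experimental measurements used in the cited study, and the variable names are illustrative only.

```python
# Minimal sketch of a [2-6-5] feed-forward network: 2 inputs, 6 hidden neurons, 5 outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical inputs: temperature (K) and log strain rate.
X = np.column_stack([rng.uniform(300, 900, 200), rng.uniform(-4, 0, 200)])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
# Hypothetical 5 output properties generated from a smooth nonlinear map plus noise.
W = rng.normal(size=(2, 5))
Y = np.tanh(Xs @ W) + 0.05 * rng.normal(size=(200, 5))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
)
model.fit(X[:150], Y[:150])                      # train on the first 150 samples
print("R^2 on held-out samples:", round(model.score(X[150:], Y[150:]), 3))
```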
Fatigue properties have always been among the most difficult to predict, due to the high cost and long duration of fatigue testing and the prevalence of structural failure caused by fatigue 36-37. Existing physical models either lack generality or fail to give quantitative indications 38. Agrawal et al. predicted the fatigue strength of steel using data from the Japan National Institute for Materials Science (NIMS) MatNavi database 39-40. They used 12 regression-based predictive models; among them, neural network, decision tree and multivariate polynomial regression were able to achieve a high R² value of >0.97.
Fig 4. Comparison between experimental values and predicted flow stress of 42CrMo steel using BP ANN 34. (a) Predicted training data (b) Predicted testing data
3.1.2 Inverse Design of Materials
Understanding how mechanical properties are influenced by internal and external factors helps reduce the search space in the inverse materials design task. However, the inverse problem is more challenging because of the possibility of multiple solutions and the enormous structural dimensionality 41. Machine learning has shown promise in inverse materials discovery and design by reducing the search path and the search region. Ruoqian Liu et al. developed a machine learning method for the inverse design of Fe-Ge alloy microstructures with enhanced elastic, plastic and magnetostrictive properties 41. A systematic approach consisting of random data generation, feature selection and classification was developed. First, features that can quantitatively describe microstructures and properties were developed. Then, randomly generated structure–property pairs were simulated to form the most-desired and least-desired classes. Two crucial steps, search path refinement and search space reduction, are conducted prior to the actual search to find the most efficient order of features in the search and the most promising search regions of the features. This method was validated on five design problems, which involve identifying microstructures that satisfy both linear and nonlinear property constraints. The framework shows superiority over traditional optimization methods, reducing running time by as much as 80% and achieving optimality that would otherwise not be attained.
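A highly simplified sketch of this classify-then-screen idea is shown below: a classifier trained to separate "most desired" from "least desired" structure–property pairs is used to filter randomly generated candidates, shrinking the search space before any expensive optimization. All data here are synthetic placeholders, not the Fe-Ge microstructure descriptors of the cited work, and the probability cutoff is an arbitrary illustrative choice.

```python
# Minimal sketch: using a trained classifier to prune a random candidate pool
# (search-space reduction before detailed simulation/optimization).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_features = 8  # hypothetical microstructure/composition descriptors

# Labeled examples: 1 = "most desired", 0 = "least desired" (synthetic stand-in labels).
X_train = rng.uniform(0, 1, size=(300, n_features))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.9).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Randomly generated candidate designs; keep only those predicted "desired" with high confidence.
candidates = rng.uniform(0, 1, size=(10000, n_features))
proba = clf.predict_proba(candidates)[:, 1]
promising = candidates[proba > 0.8]
print(f"kept {len(promising)} of {len(candidates)} candidates for detailed evaluation")
```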
3.2 Machine learning application in materials processing and synthesis
The design of materials can be facilitated by the data-driven machine learning approach; however, the commercialization of materials is still impeded by the ability to synthesize them. To disrupt trial-and-error synthesis methods, the Olivetti group at MIT is working on creating a predictive synthesis system
for advanced materials processing. They are building a curated database of solid state materials and
their synthesis methods compiled from thousands of materials synthesis journal articles. The database
also contains algorithms developed through machine learning approaches, which are capable of
predicting synthesis routes for novel materials based on chemical formulae and other known physical
input data.
Even failed experiments can be used by machine learning algorithms for materials discovery and synthesis, which truly shows the power of data mining and machine learning. After all, only a small amount of the information generated in research work is published; most of the data are archived and never used to their full potential. Paul Raccuglia et al. trained a machine learning model on data from failed hydrothermal syntheses to predict reaction outcomes under different conditions such as temperature, concentration, reactant quantity and acidity 42. The model was validated and tested on previously untested data and showed better performance than human researchers with 10 years of experience. It was able to predict conditions for new organically templated inorganic product formation
with a success rate of 89%.
3.3 Machine learning application in microstructure recognition and failure analysis
Microstructure damage and failure pre-detection is another area where machine learning finds applications. Traditionally, materials scientists examine SEM and optical microscopy images of samples for failure analysis, much as medical doctors analyze X-ray images of patients. With the increasing penetration of machine learning methods into medical image analysis, the same kind of application in materials imaging is expected to happen as well 43.
In fact, there are already reports of machine learning and computer vision research on automatic recognition of materials microstructure. Aritra et al. applied computer vision methods to identify images that contain dendritic morphology and then to classify whether the dendrites, if present, lie along the longitudinal or the transverse direction. To extract features and reduce feature dimensions, they used visual bag of words, texture and shape statistics, and a pre-trained convolutional neural network. Classification was conducted using support vector machine, nearest neighbors and random forest models 44. It was shown that the pre-trained convolutional neural network performs best in terms of micrograph recognition and feature extraction, which is consistent with other reports 45-46. The classification methods were able to reach high accuracy for both tasks. Another example is the automatic measurement of ferrite volume fraction from ferrite–austenite binary phase structures based on the GPF (Graph Processing Framework) algorithm developed by Hafiz Muhammad Tanveer et al. 47.
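The sketch below illustrates the general micrograph-classification recipe (hand-crafted texture features followed by a standard classifier). It uses HOG features from scikit-image as a simple stand-in for the visual bag of words and pre-trained CNN features used in the cited studies, and it runs on synthetic images rather than real micrographs; the image generator and all names are hypothetical.

```python
# Minimal sketch: texture-feature extraction + SVM classification of (synthetic) micrographs.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synthetic_micrograph(oriented: bool) -> np.ndarray:
    """Stand-in 64x64 image: oriented stripe pattern vs. isotropic noise."""
    img = rng.normal(size=(64, 64))
    if oriented:
        img += 2.0 * np.sin(np.arange(64) / 3.0)[None, :]  # stripes along one direction
    return img

images = [synthetic_micrograph(i % 2 == 0) for i in range(100)]
labels = np.array([i % 2 for i in range(100)])

# HOG descriptors capture local gradient-orientation statistics (texture/shape cues).
features = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for im in images])

scores = cross_val_score(SVC(), features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```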
Machine learning algorithms can also be used in failure detection by examining microstructure images. Matthias Demant et al. introduced an enhanced machine learning algorithm for crack detection in photoluminescence (PL) images of as-cut wafers. The detection algorithm is based on classifying crack candidates by comparing their descriptions with previously trained crack data. Crack centers are identified by detecting features appearing as star- or line-like structures. Grain boundary information is extracted from additional images in the visible range to avoid false detections. A support vector machine is trained on labelled data to classify crack and non-crack structures 48. The algorithm achieves a high precision of 91.1% and a sensitivity of 80.4% for crack lengths greater than 3 mm.
Elaheh Rabiei et al. developed a dynamic Bayesian network (DBN) based on the variation of the modulus of elasticity to estimate damage with a prognostic approach while a crack is not yet observable. Various sources of information were taken into account to reduce uncertainties. The DBN was applied to relate the variables and their causal or correlational relationships. Degradation model parameters are learned with a joint particle filtering technique, and support vector regression models were applied to capture the unknown nonparametric and nonlinear correlations between the input variables. More precise damage estimation and crack initiation prediction in a metallic alloy under fatigue were confirmed by experimental observations 49. This method differs from traditional empirical damage models (such as Paris' law) since direct damage indicators such as cracks are not required to predict the damage stage; thus, underlying damage can be monitored at an earlier stage. It is easy to imagine manufacturing companies such as GE monitoring their jet engine data to predict whether an engine needs inspection or maintenance.
Fig. 5. Overview of the crack detection algorithm 48 .
4. Limitations of machine learning in materials science applications
Although machine learning has been widely used in many fields and is increasingly used in materials science, it is by no means a panacea. Applying it blindly to every possible area without understanding its limitations can lead to wrong predictions and a waste of time and effort. First of all, machine learning systems are opaque, making them very hard to debug. Machine learning predictions rely heavily on the training data. Models often have overfitting or underfitting problems that need to be kept in mind when taking their prediction results into consideration, and input data quality needs to be ensured. Interpolation and extrapolation can lead to problems when the training data are not sufficient in the interpolated or extrapolated regime or when the training data are noisy. Hence, error-bar prediction is needed for evaluating prediction accuracy.
Machine learning does not explain results from a physics point of view. Materials scientists are often interested in understanding the mechanism behind a phenomenon, and machine learning cannot elucidate the mechanism since it works by data-driven model training and prediction. Interpretation of machine learning results requires domain knowledge: without understanding the underlying physics, nonsensical predictions cannot be recognized. Even in the process of feature selection, a good understanding of the causal relationship between the variables and the dependent properties can be helpful for selecting the most effective features and building less complicated models.
Machine learning is also inseparable from experiment and physical simulation. It is typically used as a
supplemental tool for materials discovery, design and property prediction. Machine learning training
data are either from experimental results or physical simulation results 50 . Machine learning models also
rely on experiments or simulations for validation. To advance this field, people from different disciplines, both experimentalists and computational scientists, should collaborate on data collection, storage and
curation. Interdisciplinary researchers need to be trained to understand both materials science and
machine learning 51.
5. Literature
1. G. G. Gnesin, “On the origin of metallurgical technologies in the Bronze Age,” Powder Metall. Met.
Ceram., 52, No. 7–8, 477–488 (2013)
2. Karl Alfred von Zittel (1901). History of Geology and Palaeontology, p. 15. doi:10.5962/bhl.title.33301
3. Jürgen Hafner, Christopher Wolverton, and Gerbrand Ceder. MRS BULLETIN • VOLUME 31 •
SEPTEMBER 2006. 659-665.
4. Ashkan Vaziri, Arvind Gopinath and Vikram S. Deshpande, Journal of Mechanics of Materials and
Structures Vol. 2, No. 6, 2007.
5. Merryn Tawhai, Jeff Bischoff, Daniel Einstein, Ahmet Erdemir, Trent Guess, and Jeff Reinbolt IEEE Eng
Med Biol Mag. 2009 May–Jun; 28(3): 41–49.doi: 10.1109/MEMB.2009.932489
6. Lesar, Richard Alan and Bryden, K. M., "Multiscale Design of Materials" (2011). Ames Laboratory
Conference Papers, Posters, and Presentations. 78
7. Whittingham, M. S. (1976, June). Electrical Energy Storage and Intercalation Chemistry. Science,
192(4244), 1126-1127. doi:10.1126/science.192.4244.1126.
8. Jörg Neugebauer, Tilmann Hickel, Wiley Interdiscip Rev Comput Mol Sci. 2013 Sep; 3(5): 438–448
9. Holdren, J. P. et al. Material Genome Initiative Strategic Plan. Technical Report December
2014, https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/mgi_strategic_plan__dec_2014.pdf (National Science and Technology Council, 2014)
10. H.Bhadeshia, “design of ferritic creep-resistant steels,” ISIJ, Int., 41, 626-640, 2001.
11. T. Sourmail, H. Bhadeshia, and D.J.C. MacKay, “Neural network model of creep strength of austenitic
stainless steels,” Mater.Sci. Technol., 18, 655-663, 2002.
12. G. Bergerhoff, R. Hundt, R. Sievers and I.D. Brown. “the inorganic crystal-structure data-base.”
J.Chem. Compu. Sci., 23. 66-69 1983.
13. P.S. White, J. Rodgers, and Y. LePage, “Crystmet: a database of structures and powder patterns of
metals and intermetallics.” Acta Cryst. B, 58, 343-348, 2002.
14. U. Rodemerck, D. Wolf, O.V. Buyevskaya, P.Claus, S.Senkan, and M. Baerns, “High-throughput
synthesis and screening of catalytic materials-case study on the search for a low-temperature catalyst
for the, oxidation of low-concentration propane.” Chem. Eng. J., 82, 3-11, 2001.
15. Dane Morgan and Gerbrand Ceder, handbook of materials modeling, 395-421.
16. Tim Mueller, Aaron Gilad Kusne, Rampi Ramprasad, Abby L. Parrill, Kenny B. Lipkowitz. Reviews in
Computational Chemistry, Volume 29. DOI: 10.1002/9781119148739.ch4.
17. Villars P, Iwata S. PAULING FILE verifies / reveals 12 principles in materials science supporting four
cornerstones given by Nature. Chem. Met Alloys. 2013; 6:81–108.
18. Belianinov A, Vasudevan R, Strelcov E, et al. Big data and deep data in scanning and electron
microscopies: deriving functionality from multidimensional data sets. Adv. Str. Chem. Imaging. 2015;
1:6.
19. Krishna Rajan. Materialstoday, Volume 15, issue 11, Nov. 2012, pages 470
20. Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, Guangtong Zhou. 2008 Fourth International
Conference on Natural Computation, volume 7, 18-20 Oct. 2008.
21. Jesse Davis, Mark Goadrich. Proceeding ICML 06 Proceedings of the 23rd international conference on
Machine Learning Pages 233-240. Pittsburgh, Pennsylvania, USA — June 25 - 29, 2006
22. Matthew R. Boutell, Jiebo Luo, Xipeng Shen, Christopher M. Brown. Pattern Recognition. Volume 37,
Issue 9, September 2004, Pages 1757–1771
23. S. García, A. Fernández, J. Luengo, F. Herrera. Soft Comput (2009) 13: 959. doi:10.1007/s00500008-0392-y.
24. Kuan-Yu Chen, Cheng-Hua Wang. Tourism Management, Volume 28, Issue 1, February 2007, Pages
215–226
25. B. Boser, I. Guyon, and V. Vapnik. An algorithm for optimal margin classifiers. In Fifth Annual
Workshop on Computational Learning Theory, pages 144–152, Pittsburgh, 1992
26. O.Abuomar, S.Nouranian, R.king, T.M.Ricks, T.E.Lacy. Computational Materials Science. 99 (2015)
316-325.
27. S. Theodoridis, K. Koutroumbas, Pattern Recognition, fourth ed., Academic Press, Massachusetts,
2008.
28. A.K.Jain, Jianchang Mao, K.M. Mohiuddin, Computer Volume: 29, Issue: 3, Pages: 31-44, Mar 1996
29. Diertrich, D. Heller B, Yang, B. (2015). Data Science and Big Data Analytics. Indianapolis: Wiley
30. N. Suneetha et. al. (IJCSE) International Journal on Computer Science and Engineering Vol. 02, No.
06, 2010, 1959-1965
31. Abraham J. Wyner, Matthew Olson, Justin Bleich. arXiv:1504.07676v2 [stat.ML] 29 Apr 2017
32. Chih Hang TUNG, JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.3, NO. 3,
SEPTEMBER, 2003
33. H. K. D. H. Bhadeshia, D. J. C. MacKay and L.–E. Svensson: Materials Science and Technology, 11
(1995) 1046–1051
34. Y.C. Lin, Jun Zhang, Jue Zhong. Computational materials science 43(2008) 752-758.
35. Raghuram Karthik Desu, Hansoge Nitin Krishnamurthy, Aditya Balu, Amit Kumar Gupta, Swadesh
Kumar Singh. J MATER RES TECHNOL. 2016; 5(1):13-20
36. X.J.Yu, K.S. Kumar, Materials Science & Engineering A, v.540, 2012 April 1, p.187(11) (ISSN: 09215093)
37. X.J.Yu, K.S. Kumar, Materials Science and Engineering A 676 ·August 2016 DOI:
10.1016/j.msea.2016.08.114
38. J.M. Schooling: The modelling of fatigue in nickel base alloys, Ph.D. Thesis, University of Cambridge.
(1997)
39. A. Agrawal, P.D. Deshpande, A. Cecen, G.P.Basavarsu, A.N.Choudhary, and S.R. Kalidindi,
“Exploration of data science techniques to predict strength of steel from composition and processing
parameters,” Integr. Mater. Manuf. Innovation 3, 1-19(2014)
40. Ankit Agrawal and Alok Choudhary. APL Materials 4, 053208(2016)
41. Ruoqian Liu, Abhishek Kumar, Zhengzhang Chen, Ankit Agrawal, Veera Sundararaghavan & Alok
Choudhary. Scientific Reports | 5:11551 | DOI: 10.1038/srep11551
42. Paul Raccuglia, Katherine C. Elbert, Philip D. F. Adler, Casey Falk, Malia B. Wenny,Aurelio Mollo,
Matthias Zeller, Sorelle A. Friedler, Joshua Schrier & Alexander J. Norquist. Nature 533,73–76 (05 May
2016) doi:10.1038/nature17439.
43. Miles N. Wernick, Yongyi Yang, Jovan G. Brankov, Grigori Yourganov, and Stephen C. Strother. IEEE
Signal Process Mag. 2010 Jul; 27(4): 25–38
44. Aritra Chowdhury, Elizabeth Kautz, Bulent Yener, Daniel lewis. Computational materials science.
123(2016) 176-187
45. L. A. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis and the controlled generation
of natural stimuli using convolutional neural networks," (2015), arXiv:1505.07376 [cs.CV].
46. L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style," (2015),
arXiv:1508.06576 [cs.CV]
47. Hafiz Muhammad Tanveer, Hafiz Muhammad Tahir Mustafa, Waleed Asif, Munir Ahmad. (IJACSA)
International Journal of Advanced Computer Science and Applications, Vol. 6, No. 5, 2015
48. Matthias Demant, Marcus Oswald, Tim Welschehold, Sebastian Nold, Sebastian Bartsch, Stephan
Schoenfelder, and Stefan Rein. Presented at the 29th European PV Solar Energy Conference and
Exhibition, 22-26 September 2014, Amsterdam, The Netherlands
49. Elaheh Rabiei, Enrique Lopez Droguett, and Mohammad Modarres. Advances in Mechanical
Engineering 2016, Vol. 8(9) 1–19
50. Mohsen Ostad Shabani, Ali Mazahery. Metallurgical and materials transactions A. Volume 43A. June
2012, 2158-2162
51. Ashley A. White. MRS BULLETIN • VOLUME 38 • AUGUST 2013
| 5 |
SELF-LEARNING TO DETECT AND SEGMENT CYSTS IN LUNG CT IMAGES WITHOUT
MANUAL ANNOTATION
Ling Zhang1 , Vissagan Gopalakrishnan2 , Le Lu1 , Ronald M. Summers1 , Joel Moss2 , Jianhua Yao1
arXiv:1801.08486v1 [cs.CV] 25 Jan 2018
1 Radiology and Imaging Sciences Department, 2 Cardiovascular and Pulmonary Branch, NHLBI,
National Institutes of Health (NIH), Bethesda MD
ABSTRACT
Image segmentation is a fundamental problem in medical image analysis. In recent years, deep neural networks have achieved impressive performance on many medical image segmentation tasks by supervised learning on large manually annotated data. However, expert annotations on big medical datasets are tedious, expensive or sometimes unavailable. Weakly supervised learning could reduce the annotation effort but still requires a certain amount of expertise. Recently, deep learning has shown the potential to produce more accurate predictions than the original erroneous labels. Inspired by this, we introduce a very weakly supervised learning method for cystic lesion detection and segmentation in lung CT images, without any manual annotation. Our method works in a self-learning manner, where the segmentation generated in previous steps (first by unsupervised segmentation, then by neural networks) is used as ground truth for the next level of network learning. Experiments on a cystic lung lesion dataset show that deep learning can perform better than the initial unsupervised annotation and progressively improves itself through self-learning.
Index Terms— Convolutional neural networks, weakly
supervised learning, medical image segmentation, graph cuts
1. INTRODUCTION
Image segmentation is a fundamental problem in medical
image analysis. Classic segmentation algorithms [1] are usually formulated as optimization problems relying on cues
from low-level image features. In recent years, deep learning
has made much progress on image segmentation tasks (e.g.,
FCN [2], HED [3]) and has achieved dominant performance on many medical image segmentation benchmarks; e.g., UNet [4] is competitive enough for many applications. The success of deep learning based segmentation requires supervised
learning on large manually annotated data. However, expert
annotations on big medical datasets are expensive to obtain
This research was supported - in part - by the Intramural Research Program of the National Institutes of Health Clinical Center. The authors thank
Dr. Li Zhang from Beijing Institute of Big Data Research for his inspiring
discussion, and Nvidia for the TITAN X Pascal GPU donation.
Fig. 1. Examples of cystic lung lesions of different severity levels (mild, moderate, severe) in CT images and their manual annotation (red).
or even unavailable. For example, manual annotation of hundreds of cysts in CT volume dataset (examples shown in Fig.
1) is not feasible for a recent large-sized clinical study of
lymphangioleiomyomatosis (LAM) [5].
To alleviate the annotation burden, researchers exploit
weakly supervised methods for deep learning based segmentation. One direction is to reduce the effort (e.g., time, expertise) for annotation. By combining FCN and active learning,
only 50% of the training data is needed to train a model with performance comparable to training on all data [6]. Another direction applies image-level annotation by incorporating FCN into a multiple instance learning framework [7]. However, expertise from physicians is still needed, such as assigning image-level annotations and estimating the lesion size.
Recently, deep learning has shown a potential to beat the
teacher (i.e., perform better than the training data labels) [8, 9]
or even self-learn to be an expert without human knowledge
in AlphaGo Zero [10]. Specifically, for some classification
[8] and semantic segmentation [9] tasks, when provided with
data labels with certain amount of errors, deep learning could
produce lower errors than the original erroneous labels. In
addition, with assisting by domain-specific algorithm (e.g.,
Monte Carlo tree search in Go game [10]; GrabCut in image
segmentation [9]), training samples/labels can be generated
to iteratively or recursively update the neural network parameters to achieve better performance.
Fig. 2. Learning to segment medical images without manual annotation. Segmentation networks (level 1 – level n) are
recursively trained with the previous network segmentation as training labels.
In this paper, we propose a very weakly supervised approach for LAM cyst detection and segmentation. As shown
in Fig. 1, the detection and segmentation of cysts is a challenging task due to the large number of cysts, the great variation in cyst sizes, severe touching of cysts, inconsistent image quality, and image noise and motion artifacts. Moreover, it is infeasible to obtain manual segmentation for LAM studies. Our method, unlike weakly supervised methods, can automatically learn from medical images without any manual pixel- [6], sparse- [9], or image-level [7] annotation and without pre-training a segmentation network on other labeled datasets [9]. Starting from classic segmentation techniques, specifically unsupervised K-means clustering with spatial information followed by graph cuts [11] refinement, an initial annotation is generated and serves as labels for training a segmentation network (UNet [4] in this paper). New networks
are then recursively trained with the previous network predictions as training labels. An improved segmentation network could be trained under two hypotheses: 1) deep learning
might generate better predictions than the training data labels
[8], and 2) better training data labels produce better predictions [10]. Note that the value of K in K-means clustering is
the only value provided to the framework.
2. METHODS
Given a medical image dataset without manual annotation,
our method works in a self-learning manner (Fig. 2), where
the previously generated (first by unsupervised segmentation
then by segmentation networks) pixel-level annotations serve
as inputs for the next level of network learning.
2.1. Unsupervised Segmentation
K-means clustering is an unsupervised segmentation approach. By combining the pixel intensity with the average and median pixel intensities of a local window into a feature space, a spatial K-means [12] classifies the image by grouping similar pixels in the feature space into clusters. The number of clusters K needs to be manually set in different applications. For
the cyst segmentation in CT images, we set K = 3 to obtain
three clusters. c1 , c2 , and c3 are the cluster centers, indicating
cyst, lung tissue, and others, respectively.
With c1 , c2 , and c3 , we construct a three-terminal graph
with the energy function consisting of a data term and a pixel
continuity term as in [11]. The data term is assigned as the squared intensity differences between pixels and the cluster centers; the pixel continuity term is 0 when two neighboring pixel values are the same, and δ otherwise (δ = 0.003, set through empirical evaluation on our data). Then, the max-flow algorithm [11] is used to optimize the energy function, and the globally optimal pixel labels are obtained.
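A minimal sketch of the spatial K-means step is given below (NumPy, SciPy and scikit-learn on a synthetic image; the graph-cut refinement itself is not implemented here, only the unary data term that would feed it). The toy image and variable names are illustrative and not part of the original implementation.

```python
# Minimal sketch: spatial K-means (intensity + local mean + local median features), K = 3.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.normal(loc=0.5, scale=0.1, size=(128, 128))
image[40:60, 40:60] = rng.normal(loc=0.1, scale=0.05, size=(20, 20))  # dark "cyst-like" blob

# Per-pixel feature vector: intensity, local-window mean, local-window median.
features = np.stack([
    image,
    uniform_filter(image, size=5),
    median_filter(image, size=5),
], axis=-1).reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_.reshape(image.shape)

# Data term for a subsequent graph cut: squared distance of each pixel intensity to each
# cluster-center intensity (the continuity term and the max-flow step are omitted here).
centers = kmeans.cluster_centers_[:, 0]
data_term = (image[..., None] - centers[None, None, :]) ** 2
print("label counts:", np.bincount(labels.ravel()), "data term shape:", data_term.shape)
```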
2.2. Segmentation Network
After obtaining the initial annotation for all the images in the
dataset by using spatial K-means graph cuts, UNet is used as
the network architecture to learn a better segmentor because
of its efficiency and accuracy for medical image segmentation
[4]. UNet consists of four layers of contraction (pooling) and four layers of expansion (up-convolution). Skip connections from the contracting path to the expansive path strengthen context information in the higher-resolution layers.
During UNet training, the inputs are raw CT images at the original resolution, and the outputs are 1-channel annotations (a cross-entropy loss is utilized). The training focuses on distinguishing between cysts and lung tissue while ignoring background labels. One critical problem in training UNet for medical images is that the label/class distribution can be highly imbalanced, e.g., many more positive samples than negative or vice versa. In our experiments, we use the distribution of cysts and lung tissue in the image to balance the positive and negative classes in the loss function, as in [3]. We also avoid sampling empty CT slices (with no cyst in the slice) during training.
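One way to implement such class balancing (following the general idea of the class-balanced cross-entropy in [3]; the exact weighting used in this work may differ) is sketched below in plain NumPy: each class's contribution to the cross-entropy is weighted by the prevalence of the opposite class.

```python
# Minimal sketch: class-balanced binary cross-entropy for imbalanced cyst/lung-tissue labels.
import numpy as np

def balanced_bce(prob_cyst: np.ndarray, label: np.ndarray, eps: float = 1e-7) -> float:
    """prob_cyst: predicted cyst probabilities in [0,1]; label: 1 for cyst, 0 for lung tissue."""
    beta = 1.0 - label.mean()          # fraction of negative (lung-tissue) pixels
    prob_cyst = np.clip(prob_cyst, eps, 1.0 - eps)
    loss = -(beta * label * np.log(prob_cyst) +
             (1.0 - beta) * (1.0 - label) * np.log(1.0 - prob_cyst))
    return float(loss.mean())

# Toy example: ~1% cyst pixels, so cyst errors are up-weighted by beta close to 0.99.
rng = np.random.default_rng(0)
label = (rng.random((256, 256)) < 0.01).astype(float)
prob = np.clip(label * 0.8 + 0.05 + 0.05 * rng.random(label.shape), 0.0, 1.0)
print("balanced BCE:", balanced_bce(prob, label))
```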
2.3. Recursive Learning
The trained UNet then becomes its own teacher – it is applied to segment all the CT images in the training set to generate a new set of pixel-level cyst labels, which are used as the new ground truth to train the next-level UNet. The network parameters of the previous UNet are transferred to initialize the next network, and a lower learning rate is used to train it. The self-learning terminates when the similarity between successive segmentations is larger than a threshold.
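The control flow of this recursion can be illustrated with the runnable sketch below. A simple intensity threshold stands in for the UNet "student" purely so the loop can execute end to end; the actual method trains a UNet at each level, and the stopping threshold and image generator here are arbitrary illustrative choices.

```python
# Runnable sketch of the self-learning loop; an intensity threshold stands in for UNet.
import numpy as np

rng = np.random.default_rng(0)
images = [rng.normal(0.6, 0.1, (64, 64)) for _ in range(8)]
for img in images:                              # darker blobs play the role of cysts
    img[20:30, 20:30] = rng.normal(0.2, 0.05, (10, 10))

def initial_annotation(img):                    # stand-in for spatial K-means + graph cuts
    return (img < np.percentile(img, 5)).astype(np.uint8)

def fit_threshold(imgs, labels):                # stand-in for UNet training on current labels
    pos = np.concatenate([im[lb == 1] for im, lb in zip(imgs, labels)])
    neg = np.concatenate([im[lb == 0] for im, lb in zip(imgs, labels)])
    return (pos.mean() + neg.mean()) / 2.0

def mean_dice(a, b):
    dices = [2 * (x & y).sum() / max(x.sum() + y.sum(), 1) for x, y in zip(a, b)]
    return float(np.mean(dices))

labels = [initial_annotation(img) for img in images]
for level in range(1, 6):
    threshold = fit_threshold(images, labels)                       # "train" on previous labels
    new_labels = [(img < threshold).astype(np.uint8) for img in images]
    similarity = mean_dice(new_labels, labels)                      # agreement with previous level
    print(f"level {level}: threshold={threshold:.3f}, similarity={similarity:.3f}")
    labels = new_labels
    if similarity > 0.99:                                           # labels have stabilized
        break
```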
3. EXPERIMENTAL METHODS
In this study, we evaluated our method on a LAM dataset. A
total of 183 CT volumes from patients with LAM in a natural
history protocol were studied. High resolution CT scans of
the chest were obtained. The scans contained 9-13 slices and
the slice thickness ranged from 1 to 1.25 mm at 3-cm intervals. Each CT slice has 512×512 pixels.
The UNet is implemented using Caffe [13]. We train the
UNet model from scratch. Three UNet models are trained
progressively in the recursive framework, named UNet-level1, UNet-level2, and UNet-level3, respectively. The initial learning rate is 1×10−7 for UNet-level1 and decreases by a factor of 10 for each subsequent level thanks to transfer learning from the previous level. Each UNet level is trained for 50k iterations. A mini-batch of 1 image is used since it provides better performance (than 5, 10, etc.) in a preliminary experiment. The
proposed method is tested on a DELL TOWER 7910 workstation with 2.40 GHz Xeon E5-2620 v3 CPU, 32 GB RAM,
and a Nvidia TITAN X Pascal GPU of 12 GB of memory.
Our model is trained on 166 CT volumes. The remaining
17 volumes including 5 mild, 6 moderate, and 6 severe cases
are left out as unseen testing data. To evaluate the segmentation performance, a medical student manually detected and segmented one slice from each of the 17 testing volumes. The manual segmentation was so tedious that it took 4 working days.
Quantification metrics include Dice coefficient and absolute
difference of cyst scores (ADCS). Cyst score is defined as the
percentage of lung region occupied by cysts, which is a critical clinical factor in LAM assessment [5].
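For completeness, the sketch below shows how these two metrics can be computed from binary masks with NumPy (assuming a lung mask is available for the cyst score; the arrays here are random placeholders rather than real segmentations).

```python
# Minimal sketch: Dice coefficient and cyst-score difference (ADCS) from binary masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def cyst_score(cyst_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Percentage of the lung region occupied by cysts."""
    return 100.0 * np.logical_and(cyst_mask, lung_mask).sum() / lung_mask.sum()

rng = np.random.default_rng(0)
lung = np.ones((128, 128), dtype=bool)                 # placeholder lung mask
gt = rng.random((128, 128)) < 0.05                     # placeholder ground-truth cysts
pred = gt ^ (rng.random((128, 128)) < 0.01)            # predictions with a little noise

print("Dice:", round(dice(pred, gt), 3))
print("ADCS:", round(abs(cyst_score(pred, lung) - cyst_score(gt, lung)), 3))
```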
It is worth mentioning that, unlike the traditional concept of a training set, our model does not learn from any manual annotation of the 166 training volumes (which is not available); therefore, these data can also be seen as testing data for performance evaluation. Six images (from 6 CT volumes) with large ADCS between the unsupervised segmentation results and the UNet results are additionally selected from the 166-volume dataset.
Manual segmentation is then conducted on these slices for
evaluation of the progressive improvement of our framework.
In addition, we compare our method with the cyst segmentation method in [5] where semi-automated thresholding followed by some postprocessing techniques were used.
4. RESULTS
Table 1 shows the performance on unseen images. 15 out of
the 17 images have good image quality while 2 are noisy.
Table 1. Performance comparison on 17 unseen CT images.
SK-GC: spatial K-means graph cuts; ADCS: absolute difference of cyst scores. Bold indicates the best results.
Method                 Dice (%)   ADCS (%)
Semi-automated [5]     62.64      8.34
SK-GC (teacher)        74.67      3.71
UNet-LV.1 (student)    75.41      3.65
UNet-LV.2 (student)    75.87      3.38
UNet-LV.3 (student)    74.94      4.56
Table 2. Performance comparison on 6 CT images with large
ADCS between SK-GC and UNet from learning set. SK-GC:
spatial K-means graph cuts; ADCS: absolute difference of
cyst scores. Bold indicates the best results.
Method                 Dice (%)   ADCS (%)
Semi-automated [5]     79.05      5.19
SK-GC (teacher)        70.39      11.25
UNet-LV.1 (student)    82.25      2.89
UNet-LV.2 (student)    82.65      1.98
UNet-LV.3 (student)    81.93      3.42
Student (i.e., UNet) learning could achieve higher segmentation accuracy than its teacher (i.e., spatial K-means graph
cut, SK-GC), but the self-improvement seems to stop at level
3. The same trends could be observed in Table 2, where the
performance on images from the learning set is shown. In
these 6 CT images with large ADCS between SK-GC and
UNet, compared to manual annotation, UNet learning performs substantially better than SK-GC. The lower Dice of
UNet in Table 1 compared to that in Table 2 is mainly
caused by the lower Dice values from the 5 mild cases, where
both SK-GC and UNet have Dice values around 60%. Our
proposed self-learning method is also more accurate than the
semi-automated method [5].
Three examples in Fig. 3 show how the proposed strategy
recursively improves the segmentation performance itself.
Given inaccurate segmentation provided by SK-GC, one level
of UNet learning (UNet-LV.1) can already correct most oversegmentation and undersegmentation of cysts, thus achieving
both higher sensitivity and higher specificity. Higher levels of
UNet tend to obtain more accurate cyst boundaries especially
for the overlapping cysts. The whole training process takes
about 17 hours and testing is 0.13 sec./slice.
5. CONCLUSIONS
We report the first results of very weakly supervised learning
to detect and segment cysts in lung CT images without manual annotation. By first learning from classic unsupervised
segmentation, deep learning shows its potential to perform
even better after a few levels of self-learning. In future work,
we will extend this method to segment other medical images.
(Columns: CT slice, SK-GC, UNet-LV.1, UNet-LV.2, manual annotation.)
Fig. 3. Three examples (2 good image quality and 1 noisy) show segmentation results obtained by SK-GC, UNet-level1 and
UNet-level2, given manual annotation as reference. UNet-level3 is not shown due to space constraint.
6. REFERENCES
[1] M. Sonka, V. Hlavac, and R. Boyle, Image Processing,
Analysis, and Machine Vision, Cengage Learning, 2014.
[2] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR,
2015, pp. 3431–3440.
[3] S. Xie and Z. Tu, “Holistically-nested edge detection,”
in ICCV, 2015, pp. 1395–1403.
[4] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in MICCAI, 2015, pp. 234–241.
[5] J. Yao, A.M. Taveira-DaSilva, A.M. Jones, P. Julien-Williams, M. Stylianou, and J. Moss, “Sustained effects
of sirolimus on lung function and cystic lung lesions in
lymphangioleiomyomatosis,” Am. J. Respir. Crit. Care
Med., vol. 190, no. 11, pp. 1273–1282, 2014.
[6] L. Yang, Y. Zhang, J. Chen, S. Zhang, and D.Z. Chen,
“Suggestive annotation: A deep active learning framework for biomedical image segmentation,” in MICCAI,
2017.
[7] Z. Jia, X. Huang, E. I. Chang, and Y. Xu, “Constrained
deep weak supervision for histopathology image segmentation,” IEEE TMI, 2017.
[8] M.Y. Guan, V. Gulshan, A.M. Dai, and G.E. Hinton,
“Who said what: Modeling individual labelers improves
classification,” arXiv preprint arXiv:1703.08774, 2017.
[9] A. Khoreva, R. Benenson, J. Hosang, M. Hein, and
B. Schiele, “Simple does it: Weakly supervised instance
and semantic segmentation,” in CVPR, 2017.
[10] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou,
A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai,
A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. Van
Den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature,
vol. 550, pp. 354–359, 2017.
[11] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” TPAMI, vol. 26, no. 9, pp. 1124–
1137, 2004.
[12] K. Li, Z. Lu, W. Liu, and J. Yin, “Cytoplasm and nucleus segmentation in cervical smear images using radiating gvf snake,” Pattern Recognition, vol. 45, no. 4, pp.
1255–1264, 2012.
[13] Y. Jia, “Caffe: An open source convolutional architecture for fast feature embedding,” http://caffe.berkeleyvision.org/, 2013.
| 1 |
arXiv:1801.07743v1 [cs.IR] 23 Jan 2018
Entity Retrieval
and
Text Mining
for
Online Reputation Monitoring
Pedro dos Santos Saleiro da Cruz
Departamento de Engenharia Informática
Faculdade de Engenharia da Universidade do Porto
In partial fulfillment of requirements for the degree of
Doctor in Informatics Engineering
FEUP
2017
Supervisor: Dr. Carlos Soares
Co-Supervisor: Dr. Eduarda Mendes Rodrigues
Faculdade de Engenharia
Universidade do Porto
Rua Dr. Roberto Frias, s/n
4200-465 Porto, Portugal
Copyright © 2017 by Pedro Saleiro
Doctoral Committee:
Dr. Eugénio Oliveira, Full Professor at FEUP, University of Porto
Dr. Mark Carman, Senior Lecturer at Monash University
Dr. Bruno Martins, Assistant Professor at IST, University of Lisbon
Dr. Luís Torgo, Associate Professor at FCUP, University of Porto
Dr. Carlos Soares, Associate Professor at FEUP, University of Porto
This thesis is dedicated to my mother,
Maria de Lurdes,
for her constant love and dedication.
Acknowledgements
First, I would like to thank everybody that contributed somehow to this work, from
co-authors to reviewers, colleagues and faculty staff. I will probably forget to mention
someone in particular and I sincerely apologize for that. This work was funded by
SAPO Labs, FCT and Microsoft Research. Without their financial support, I would
not be able to conclude this thesis.
I am deeply grateful to my supervisor, Carlos Soares. Although I was not working
in meta-learning :) Carlos always showed a genuine enthusiasm about this work. Thank
you for giving me the freedom to grow independently as a researcher and to pursuit my
own ideas, even in the moments I was not delivering at the rhythm you expected. I
believe you made me more pragmatic after all. We had really interesting and thorough
discussions during the last 5 years. We have not tried all the cool ideas but I hope we
will do it someday. Last, I would like to thank you for all the support and protection,
as well as, for believing that I should expand my horizons to the US! I will send you a
postcard from Chicago!
I am also thankful to Eduarda Mendes Rodrigues, co-supervisor of this work. It all
started with you. Thank you for receiving me at FEUP back in October 2012. I still
remember the day we drew the first draft of our framework for ORM. You always have
encouraged me and your positive feedback was a source of inspiration and motivation.
Even at distance you have always been available when I needed. I must also mention
your decisive role in helping me pursuing a Summer internship in a top notch place
such as Microsoft Research.
Being a graduate student is an opportunity to collaborate with new and inspirational
people and that was what happened when I had the chance to start working with
Natasa Milic-Frayling at Microsoft Research. When we are around Natasa we believe
we can make things happen. Thank you for your patience, support and motivation. I
hope we can keep our collaboration for many years.
I show my gratitude to Prof. Eugénio Oliveira who helped me with a smooth
transition to LIACC and always had the door open to discuss my issues. Furthermore,
Prof. Eugénio was very enthusiastic about my work even not being my supervisor and I
iv
sincerely appreciate that. Thank you for your advices and I will miss our conversations
about the past and future of AI.
I wish to thank Luís Sarmento for introducing me to the world of Data Science,
and Text Mining in particular. I just regret we did not had the chance to collaborate
more often.
I must thank two special friends that are also graduate students, Jorge Teixeira and
Damião Rodrigues. Jorge, you were my “brother in arms” throughout this journey and
I will always be thankful. Damião, my friend and colleague for more than 10 years, is
the friend I searched for sharing the ups and downs of being graduate student. Thank
you for your support and motivation.
I would like to thank Cristina Ribeiro for the administrative support regarding my
funding throughout these years. I must also address Rosaldo Rossetti for believing in
my abilities and for starting our productive collaboration, and of course, for being a
really enjoyable and funny colleague. Arian Pasquali also deserves a personal mention
for all the support, as well as, Luís Gomes, Luís Rei, Sílvio Amir, Tiago Cunha and
Gustavo Laboreiro. I would like also to thank all the members of the POPSTAR
project, specially Pedro Magalhães.
At this point I cannot fail to extend my thanks to my friends and family. In particular I would like to mention the Rumo ao Penta group, where everyone gives me great moments of good humor without which it would be impossible to take my mind off the problems of everyday life. A big hug to André, João Miguel, Jorge and Márcio. I would also like to leave a word here for my cousin Xana for her lifelong friendship.
Naturally, I have a great deal to thank my mother for, to whom I dedicate this thesis. Thank you for your love, your dedication and the freedom you have always given me in all my choices. As could not be otherwise, you have always been an enthusiast of this challenge of mine that now comes to an end. Congratulations to you! I also send a kiss to my sister Guida, to dear Tomás and to baby Diogo, whom I have not yet met, but the life of an emigrant is like that.
And finally, I leave my heartfelt thanks to my girlfriend, Maria. You were crucial in this journey. Having you by my side gave me the strength to move forward a little more every day. I know this work greatly compromised our time together, but I thank you for your understanding and for the constant encouragement to see it through to the end. I promise to make up for it in the future :) Oh, and of course I have to thank Bobby for keeping me company during the writing and for managing to make me smile even when life is unkind.
Abstract
Online Reputation Monitoring (ORM) is concerned with the use of computational
tools to measure the reputation of entities online, such as politicians or companies. In
practice, current ORM methods are constrained to the generation of data analytics
reports, which aggregate statistics of popularity and sentiment on social media. We
argue that this format is too restrictive as end users often like to have the flexibility to
search for entity-centric information that is not available in predefined charts.
As such, we propose the inclusion of entity retrieval capabilities as a first step
towards the extension of current ORM capabilities. However, an entity’s reputation is
also influenced by the entity’s relationships with other entities. Therefore, we address
the problem of Entity-Relationship (E-R) retrieval in which the goal is to search for
multiple connected entities. This is a challenging problem which traditional entity
search systems cannot cope with.
Besides E-R retrieval, we also believe ORM would benefit from text-based entity-centric
prediction capabilities, such as predicting entity popularity on social media based on
news events or the outcome of political surveys. However, none of these tasks can
provide useful results if there is no effective entity disambiguation and sentiment
analysis tailored to the context of ORM.
Consequently, this thesis addresses two computational problems in Online Reputation
Monitoring: Entity Retrieval and Text Mining. We researched and developed methods
to extract, retrieve and predict entity-centric information spread across the Web.
We proposed a new probabilistic modeling of the problem of E-R retrieval together
with two fusion-based design patterns for creating representations of both entities and
relationships. Furthermore, we propose the Entity-Relationship Dependence Model, a
novel early-fusion supervised model based on the Markov Random Field framework
for Retrieval. Together with a new semi-automatic method to create test collections
for E-R retrieval, we released a new test collection for that purpose that will foster
research in this area. We performed experiments at scale with results showing that
it is possible to perform E-R retrieval without using fixed, pre-defined entity and
relationship types, enabling a wide range of queries to be addressed.
vi
We tackled Entity Filtering and Financial Sentiment Analysis using a supervised
learning approach and studied several possible features for that purpose. We participated in two well-known external competitions on both tasks, obtaining state-of-the-art performance. Moreover, we performed an analysis of the predictive power of a wide set of
signals extracted from online news to predict the popularity of entities on Twitter. We
also studied several sentiment aggregate functions on Twitter to assess the feasibility
of using entity-centric sentiment on social media to predict political opinion polls.
Finally, we created and released an adaptable Entity Retrieval and Text Mining
framework that puts together all the building blocks necessary to perform ORM and can
be reused in multiple application scenarios, from computational journalism to politics
and finance. This framework is able to collect texts from online media, identify entities
of interest, perform entity and E-R retrieval as well as classify sentiment polarity and
intensity. It supports multiple data aggregation methods together with visualization
and modeling techniques that can be used for both descriptive and predictive analytics.
Resumo
A Monitorização da Reputação Online (MRO) consiste na utilização de ferramentas
computacionais para medir a reputação de entidades online, como por exemplo, políticos
ou empresas. Na prática, os métodos actuais de MRO estão restringidos à produção
de relatórios constituídos por análises de dados, tais como estatísticas agregadas da
popularidade e do sentimento nos media sociais. Consideramos que esta prática é
demasiado restritiva uma vez que os utilizadores finais das plataformas MRO desejam
frequentemente ter a flexibilidade que lhes permita pesquisar por informação centrada
nas entidades que vai além da disponibilizada nos gráficos pré-definidos.
Por conseguinte, propomos a inclusão da capacidade de recuperação de entidades
como um primeiro passo no sentido de estender o estado atual das ferramentas de
MRO. No entanto, a reputação de uma dada entidade também é influenciada pelas
relações desta com outras entidades. Neste sentido, propomo-nos a tratar do problema
de recuperação de entidade-relações (E-R) onde o objectivo consiste na pesquisa por
múltiplas entidades relacionadas entre si. Trata-se de um desafio com o qual os sistemas
tradicionais de recuperação de entidades ainda não são capazes de lidar.
Para além da recuperação E-R, também acreditamos que a MRO iria beneficiar da
capacidade de efectuar previsões baseadas em texto e centradas nas entidades, como por
exemplo a previsão da popularidade de entidades nos media sociais utilizando eventos
retratados nas notícias ou o resultado de sondagens. No entanto, nenhuma destas
tarefas terá sucesso e utilidade se não houver a capacidade efetiva de desambiguar
entidades mencionadas nos textos, assim como uma análise de sentimento específica
para o contexto da MRO.
Consequentemente, esta tese trata dois problemas computacionais da Monitorização
da Reputação Online: Recuperação de Entidades e Prospeção de Texto. Investigámos
e desenvolvemos métodos para extrair, recuperar e prever informação centrada em
entidades e espalhada pela Internet.
Propomos um novo modelo probabilístico do problema de recuperação E-R conjuntamente com dois padrões de desenho baseados em fusão de texto para criar
representações de entidades e relações. Propomos também o Modelo de Dependência
Entidade-Relação (MDER), um novo modelo supervisionado de fusão antecipada
baseado no Campo Aleatório de Markov para a Recuperação de Informação. Conjuntamente com um novo método semi-automático de geração de coleções de teste para
recuperação E-R, lançamos uma nova coleção de teste com esse propósito que irá
fomentar a investigação nesta área. Efetuamos experiências de grande escala e os
resultados mostram que é possível realizar recuperação E-R sem utilizar tipos fixos e
pré-definidos de entidades e relações, o que permite atuar sobre o conjunto alargado de
pesquisas.
Tratamos também das tarefas de Filtragem de Entidades e Análise de Sentimento
Financeiro utilizando uma abordagem de aprendizagem supervisionada em que estudamos várias características para esse fim. Participámos em duas competições externas
em ambas as tarefas, atingindo resultados ao nível do estado da arte. Além disso,
realizámos uma análise do poder preditivo de um grande conjunto de sinais extraídos
das notícias online para prever a popularidade de entidades no Twitter. Realizámos também
um estudo de várias funções de agregação de sentimento do Twitter para estudar a
praticabilidade de utilizar informação de sentimento nos media sociais para prever
sondagens eleitorais.
Finalmente, criámos e disponibilizámos uma plataforma de recuperação de entidades
e prospeção de texto que conjuga todos os blocos necessários para a realização de
MRO. Pode ser reutilizada em diversos cenários de aplicação, desde o jornalismo
computacional à política e finança. Esta plataforma é capaz de recolher textos dos
media online, identificar entidades alvo, efectuar recuperação de entidades e relações,
assim como classificar sentimento e intensidade associada. Suporta vários métodos de
agregação de dados e juntamente com métodos de visualização e previsão pode ser
utilizada tanto para análises descritivas como preditivas.
Table of contents

List of figures    xiii
List of tables    xv

1 Introduction    1
  1.1 Thesis Statement    2
  1.2 Objectives    5
  1.3 Research Methodology    7
  1.4 Contributions and Applications    8
  1.5 Foundations    10
  1.6 Thesis Outline    12

2 Background and Related Work    13
  2.1 Online Reputation Monitoring    13
    2.1.1 Related Frameworks    14
  2.2 Entity Retrieval and Semantic Search    15
    2.2.1 Markov Random Field for IR    19
    2.2.2 Sequential Dependence Model    20
    2.2.3 MRF for Entity Retrieval    22
  2.3 Named Entity Disambiguation    23
  2.4 Sentiment Analysis    24
  2.5 Word Embeddings    26
  2.6 Predicting Collective Attention    28
  2.7 Political Data Science    29

3 Entity Retrieval for Online Reputation Monitoring    33
  3.1 Entity-Relationship Retrieval    34
    3.1.1 E-R Queries    35
    3.1.2 Modeling E-R Retrieval    36
  3.2 Design Patterns for Entity-Relationship Retrieval    40
    3.2.1 Early Fusion    41
    3.2.2 Association Weights    43
    3.2.3 Early Fusion Example    44
    3.2.4 Late Fusion    46
    3.2.5 Late Fusion Example    48
    3.2.6 Implementation    51
  3.3 Entity-Relationship Dependence Model    52
    3.3.1 Graph Structures    53
    3.3.2 Feature Functions    55
    3.3.3 Ranking    60
    3.3.4 Discussion    61
  3.4 Summary of the Contributions    62

4 Entity-Relationship Retrieval over a Web Corpus    63
  4.1 RELink Query Collection    64
    4.1.1 Tabular Data and Entity Relationships    65
    4.1.2 Selection of Tables    65
    4.1.3 Formulation of Queries    67
    4.1.4 Collection Statistics    67
  4.2 Experimental Setup    69
    4.2.1 Data and Indexing    69
    4.2.2 Retrieval Method and Parameter Tuning    70
    4.2.3 Test Collections    71
  4.3 Results and Analysis    72
  4.4 Summary of the Contributions    77

5 Entity Filtering and Financial Sentiment Analysis    79
  5.1 Entity Filtering    80
    5.1.1 Task Overview    81
    5.1.2 Pre-processing    81
    5.1.3 Features    81
    5.1.4 Experimental Setup    82
    5.1.5 Results    84
  5.2 Financial Sentiment Analysis    86
    5.2.1 Task Overview    87
    5.2.2 Financial Word Embeddings    87
    5.2.3 Approach    88
    5.2.4 Experimental Setup    90
    5.2.5 Results and Analysis    90
    5.2.6 Concluding Remarks    93
  5.3 Summary of the Contributions    93

6 Text-based Entity-centric Prediction    95
  6.1 Exploring Online News for Reputation Monitoring on Twitter    96
    6.1.1 Approach    97
    6.1.2 Experimental Setup    101
    6.1.3 Results and Discussion    103
  6.2 Predicting Political Polls using Twitter Sentiment    106
    6.2.1 Methodology    107
    6.2.2 Data    110
    6.2.3 Experimental Setup    112
    6.2.4 Results and Discussion    113
    6.2.5 Feature Importance    116
    6.2.6 Outlook    117
  6.3 Summary of the Contributions    118

7 A Framework for Online Reputation Monitoring    119
  7.1 Framework Overview    119
    7.1.1 RELink    120
    7.1.2 TexRep    122
  7.2 RELink Use Case    127
    7.2.1 News Processing Pipeline    128
    7.2.2 Demonstration    129
  7.3 TexRep Use Case    130
    7.3.1 Data Aggregation    132
    7.3.2 Visualization    134
  7.4 Learning Word Embeddings for ORM    134
    7.4.1 Neural Word Embedding Model    136
    7.4.2 Experimental Setup    137
    7.4.3 Results and Analysis    140
    7.4.4 Concluding Remarks    144
  7.5 Summary of the Contributions    145

8 Conclusions    147
  8.1 Summary and Main Contributions    147
  8.2 Limitations and Future Work    154

References    157
List of figures

1.1 Entity Retrieval and Text Mining as computational problems of ORM.    3
2.1 Markov Random Field document and term dependencies.    19
3.1 Bayesian networks for E-R Retrieval with queries of different lengths.    39
3.2 Markov Random Field dependencies for E-R retrieval, |Q| = 3.    53
3.3 Markov Random Field dependencies for E-R retrieval, |Q| = 5.    54
4.1 Example of Wikipedia table row.    67
4.2 Example of metadata provided to editors.    67
4.3 Illustration of E-R indexing from a web corpus.    69
4.4 Values of λ for ERDM: (a) all λ, (b) λE, (c) λR. (b) and (c) were obtained using sum normalization.    76
5.1 Results grouped by entity’s category using Run 2.    85
6.1 Daily popularity on Twitter of entities under study.    102
6.2 Training and testing sliding window - first 2 iterations.    102
6.3 Individual feature type F1 score for tp = 12 at k = 0.5.    105
6.4 Negatives share (berminghamsovn) of political leaders in Twitter.    110
6.5 Representation of the monthly poll results of each political candidate.    112
6.6 Error predictions for polls results.    114
6.7 Error predictions for polls results variation.    114
6.8 Mean absolute error buzz vs sentiment.    115
6.9 Aggregate functions importance in the Random Forests models.    117
7.1 High-level overview on the ORM framework.    120
7.2 RELink Framework architecture overview.    121
7.3 Architecture and data flows of the TexRep framework.    124
7.4 News processing pipeline.    128
7.5 Cristiano Ronaldo egocentric network.    129
7.6 Twitter buzz share of political leaders.    133
7.7 Continuous line represents loss in the training data while dashed line represents loss in the validation data. Left side: effect of increasing |V| using 100% of training data. Right side: effect of varying the amount of training data used with |V| = 32768.    142
List of tables

3.1 E-R retrieval definitions.    34
3.2 Illustrative example of the entity index in Early Fusion.    44
3.3 Illustrative example of the relationship index in Early Fusion.    46
3.4 Illustrative example of the document index in Late Fusion.    49
3.5 Clique sets and associated feature functions by type and input nodes.    56
4.1 Examples of query annotations.    68
4.2 RELink collection statistics.    68
4.3 ClueWeb09-B extractions statistics.    70
4.4 Description of query sets used for evaluation.    71
4.5 Early Fusion and ERDM comparison using LM and BM25.    73
4.6 Results of ERDM compared with three baselines.    74
5.1 RepLab 2013 Filtering Task dataset description.    83
5.2 Entity filtering versions description.    84
5.3 Official results for each version plus our validation set accuracy.    84
5.4 Training set examples for both sub-tasks.    87
5.5 Microblog results with all features on validation and test sets.    90
5.6 Features performance breakdown on test set using RF.    91
5.7 News Headlines results with all features on validation and test sets.    92
5.8 Features performance breakdown on test set using MLP.    92
6.1 Summary of the four type of features we consider.    100
6.2 F1 score of popularity high as function of tp and k equal to 0.5, 0.65 and 0.8 respectively.    104
6.3 Distribution of positive, negative and neutral mentions per political party.    110
7.1 Number of 5-grams available for training for different sizes of target vocabulary |V|.    138
7.2 Overall statistics for 12 combinations of models learned varying |V| and volume of training data. Results observed after 40 training epochs.    141
7.3 Evaluation of resulting embeddings using Class Membership, Class Distinction and Word Equivalence tests for different thresholds of cosine similarity.    143
Chapter 1
Introduction
Nowadays, people have pervasive access to connected devices, applications and services
that enable them to obtain and share information almost instantly, on a 24/7 basis.
With Social Media growing at an astonishing speed, user opinions about people, companies and products quickly spread over large communities. Consequently, companies
and personalities are under thorough scrutiny, with every event and every statement
potentially observed and evaluated by a global audience, which reflects one’s perceived
reputation.
Van Riel and Fombrun [1] define reputation as the “overall assessment of organizations by their stakeholders.” The authors use the term organization in the definition,
but it may as well apply to individuals (e.g. politicians) or products (e.g. mobile phone
brands). A stakeholder is someone who has some relationship with the organization,
such as employees, customers or shareholders. This definition and other similar ones
[2], focus on the perspective that reputation represents perceptions that others have
on the target entity.
However, the rise of Social Media and online news publishing has brought about
wider public awareness about the entities’ activities, influencing people’s perceptions
about their reputation. While traditional reputation analysis is mostly manual and
focused on particular entities, with online media it is possible to automate much of
the process of collecting, preparing and understanding large streams of content, to
identify facts and opinions about a much wider set of entities. Online Reputation
Monitoring (ORM) addresses this challenge: the use of computational tools to measure the reputation of entities from online media content. Early ORM started with
counting occurrences of a brand name in Social Media as a channel to estimate the
knowledge/reach of a brand.
There are several challenges to collect, process and mine online media data for these
purposes [3]. Social Media texts are short, informal, with many abbreviations, slang,
jargon and idioms. Often, the users do not care about the correct use of grammar and
therefore the text tends to have misspellings, incomplete and unstructured sentences.
Furthermore, the lack of context poses a very difficult problem for tasks relevant in the
context of Text Mining, such as Named Entity Disambiguation or Sentiment Analysis.
Once we classify the sentiment polarity of a given document (e.g. tweet or news title),
it is necessary to aggregate several document scores to create meaningful hourly/daily
indicators. These tasks are technically complex for most of the people interested in
tracking entities on the web. For this reason, most research has focused on investigating
parts of this problem leading to the development of tools that only address sub-tasks
of this endeavor.
Text data usually includes a large number of entities and relationships between them.
We broadly define an entity to be a thing or concept that exists in the world, such as a
person, a company, organization, an event or a film. Entities exist as mentions across
documents and in external knowledge resources. In recent years, entities have gained
increased importance as the basic unit of information to answer particular information
needs, instead of entire documents or text snippets [4, 5]. The volume of entity-centric
data is rapidly increasing on the Web, including RDF and Linked Data, Schema.org,
Facebook’s Open Graph, and Google’s Knowledge Graph, describing entities (e.g.,
footballers and coaches) and relationships between them (e.g., “manages”).
These developments have a great impact on Online Reputation Monitoring, as it is
mainly focused on entities. More specifically, the ORM process consists in searching
and tracking an entity of interest: the personality, the company, organization or
brand/product under analysis. On the other hand, news stories, topics and events
discussed in the news or Social Media usually contain mentions of entities or concepts
represented in a Knowledge Base. Thus, we can say that entities are the gravitational
force that drives the Online Reputation Monitoring process.
1.1 Thesis Statement
The ultimate goal of ORM is to track everything that is said on the Web about a
given target entity and consequently, to assess/predict the impact on its reputation.
From our perspective, this goal is very hard to achieve for two reasons. The first
reason has to do with the difficulty of computationally processing, interpreting and
accessing the huge amount of information published online every day. The second
reason is inherent to the definition of reputation as being intangible but having tangible
outcomes. More specifically, Fombrun and Van Riel [6] and later Stacks [7] found
a correlation between several indicators, such as reputation or trust, and financial
indicators, such as sales or profits. However, this finding does not imply causality, as
financial indicators can be influenced by many factors, besides stakeholders’ perceived
reputation. In conclusion, there is no consensus on how to measure reputation, neither
intrinsically nor extrinsically.
To the best of our knowledge, current ORM is still very limited and naive. The
most standard approach consists in counting mentions of entity names and applying
sentiment analysis to produce descriptive reports of aggregated entity popularity and
overall sentiment. We propose to make progress in ORM by tackling two computational
problems: Entity Retrieval and Text Mining (Figure 1.1).
Fig. 1.1 Entity Retrieval and Text Mining as computational problems of ORM.
We believe that an ORM platform, besides providing aggregated statistics and trends
about entity popularity and sentiment on the news and social media, would benefit
from providing entity retrieval capabilities. End users often like to have the flexibility
to search for specific information that is not available in predefined charts. However,
ORM has some specificities that traditional entity search systems cannot cope with.
More specifically, an entity’s reputation is also influenced by the entity’s relationships
with other entities.
For instance, the reputation of Apple Inc. was severely damaged by the so-called
“Apple Foxconn scandal”. Foxconn was one of the several contractor companies in
Apple’s supply chain that was accused of exploiting Chinese workers. Although the facts
did not directly concern Apple itself, its relationship with Foxconn triggered
bad public opinion about Apple. The same happened recently with the “Weinstein sex
scandal”, as accusations of sexual harassment aimed at Harvey Weinstein created a
wave of damage to companies and personalities associated with the disgraced Hollywood
producer.
Therefore, an ORM platform should provide entity-relationship search capabilities.
Entity-Relationship (E-R) Retrieval is a complex case of entity retrieval where the goal
is to search for multiple unknown entities and relationships connecting them. Contrary
to traditional entity queries, E-R queries expect tuples of connected entities as answers.
For instance, “US technology companies contracts Chinese electronics manufacturers”
can be answered by tuples <Apple, Foxconn>, while “Companies founded by disgraced
Hollywood producer” is expecting tuples <Miramax, Harvey Weinstein>. In essence,
an E-R query can be decomposed into a set of sub-queries that specify types of entities
and types of relationships between entities.
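As a rough illustration of that decomposition (the class names below are hypothetical and not part of any system described in this thesis), the example query above can be represented as a chain of entity sub-queries joined by relationship sub-queries:

from dataclasses import dataclass
from typing import List

@dataclass
class EntitySubQuery:
    # contextual terms describing an entity type, e.g. "US technology companies"
    terms: str

@dataclass
class RelationshipSubQuery:
    # contextual terms describing a relationship, e.g. "contracts"
    terms: str

@dataclass
class ERQuery:
    entities: List[EntitySubQuery]
    # one relationship sub-query between each pair of consecutive entity sub-queries
    relationships: List[RelationshipSubQuery]

# "US technology companies contracts Chinese electronics manufacturers"
# expects tuples such as <Apple, Foxconn> as answers.
query = ERQuery(
    entities=[EntitySubQuery("US technology companies"),
              EntitySubQuery("Chinese electronics manufacturers")],
    relationships=[RelationshipSubQuery("contracts")],
)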
On the other hand, ORM requires accurate and robust text processing and data
analysis methods. Text Mining plays an essential enabling role in developing better
ORM. There are several challenges with collecting and extracting relevant entity-centric
information from raw text data. It is necessary to filter noisy data, otherwise downstream
processing tasks, such as sentiment analysis, will be compromised. More specifically, it
is essential to develop named entity disambiguation approaches that can distinguish
relevant text passages from non-relevant ones. Named entities are often ambiguous, for
example, the word “bush” is a surface form for two former U.S. presidents, a music
band and a shrub. The ambiguity of named entities is particularly problematic in
social media texts, where users often mention entities using a single term.
ORM platforms would be even more useful if they would be able to predict if social
media users will talk a lot about the target entities or not. For instance, on April 4th
2016, the UK Prime-minister, David Cameron, was mentioned on the news regarding
the Panama Papers story. He did not acknowledge the story in detail on that day.
However, the news cycle kept mentioning him in connection with this topic in the following days
and his mentions on social media remained very high. He had to publicly address the issue
on April 9th, when his reputation had already been severely damaged, blaming himself
for not providing further details earlier. Thus we also want to study the feasibility of
using entity-centric knowledge extracted from Social Media and online news to predict
real world surveys results, such as political polls.
1.2 Objectives
The work reported in this dissertation aimed to understand, formalize and explore
the scientific challenges inherent to the problem of using unstructured text data from
different Web sources for Online Reputation Monitoring. We now describe the specific
research challenges we proposed to overcome.
Entity-Relationship Retrieval: Existing strategies for entity search can be divided
into IR-centric and Semantic-Web-based approaches. The former usually rely on statistical
language models to match and rank co-occurring terms in the proximity of the target
entity [8]. The latter consists in creating a SPARQL query and using it over a structured
knowledge base to retrieve relevant RDF triples [9]. Neither of these paradigms provides
good support for entity-relationship (E-R) retrieval.
Recent work in Semantic-Web search tackled E-R retrieval by extending SPARQL
to support joins of multiple query results and creating an extended knowledge graph
[10]. Extracted entities and relationships are typically stored in a knowledge graph.
However, it is not always convenient to rely on a structured knowledge graph with
predefined and constraining entity types.
In particular, ORM is interested in transient information sources, such as online
news or social media. General purpose knowledge graphs are usually fed with more
stable and reliable data sources (e.g. Wikipedia). Furthermore, predefining and
constraining entity and relationship types, such as in Semantic Web-based approaches,
reduces the range of queries that can be answered and therefore limits the usefulness
of entity search, particularly when one wants to leverage free-text.
To the best of our knowledge, E-R retrieval using IR-centric approaches is a new
and unexplored research problem within the Information Retrieval research community.
One of the objectives of our research is to explore to what degree we can leverage the
textual context of entities and relationships, i.e., co-occurring terminology, to relax the
notion of an entity or relationship type.
Instead of being characterized by a fixed type, e.g., person, country, place, the entity
would be characterized by any contextual term. The same applies to the relationships.
Traditional knowledge graphs have fixed schema of relationships, e.g. child of, created
by, works for while our approach relies on contextual terms in the text proximity of
every two co-occurring entities in a raw document. Relationship descriptions such
as “criticizes”, “hits back”, “meets” or “interested in” would be possible to search for.
This is expected to significantly reduce the limitations which structured approaches
suffer from, enabling a wider range of queries to be addressed.
Entity Filtering and Sentiment Analysis: Entity Filtering is a sub-problem of
Named Entity Disambiguation (NED) in which we have a named entity mention and
we want to classify it as related or not related to the given target entity. This is
a relatively easy problem in well formed texts such as news articles. However, social
media texts pose several problems to this task. We are particularly interested in Entity
Filtering of tweets and we aim to study a large set of features that can be generated to
describe the relationship between a given target entity and a tweet, as well as exploring
different learning algorithms to create supervised models for this task.
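A minimal sketch of such a supervised Entity Filtering setup is given below; the toy data, the single text representation and the classifier choice are illustrative assumptions, not the feature set or learning algorithms studied later in this thesis.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy (tweet, target entity) pairs labelled as related (1) or not related (0).
pairs = [("new apple iphone announced today", "Apple Inc."),
         ("apple pie recipe with cinnamon", "Apple Inc.")]
labels = [1, 0]

# Illustrative representation: concatenate the entity name with the tweet text
# so the model can exploit co-occurring terminology.
texts = [entity + " [SEP] " + tweet for tweet, entity in pairs]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Apple Inc. [SEP] apple stock rises after earnings report"]))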
Sentiment Analysis has been thoroughly studied in the last decade [11]. There have
been several PhD theses entirely dedicated to this subject. It is a broad problem with
several ramifications depending on the text source and specific application. Within the
context of ORM, we will focus on a particular domain: finance. Sentiment Analysis
on financial texts has received increased attention in recent years [12]. Nevertheless,
there are some challenges yet to overcome [13]. Financial texts, such as microblogs or
newswire, usually contain highly technical and specific vocabulary or jargon, making
the development of specific lexical and machine learning approaches necessary.
Text-based Entity-centric Prediction: We hypothesize that for entities that are
frequently mentioned on the news (e.g. politicians) it is possible to establish a predictive
link between online news and popularity on social media. We cast the problem as a
supervised learning classification approach: to decide whether popularity will be high
or low based on features extracted from the news cycle. We aim to assess if online
news are valuable as a source of information to effectively predict entity popularity on
Twitter. More specifically, we want to find if online news carry different predictive
power based on the nature of the entity under study and how predictive performance
varies with different times of prediction. We propose to explore different text-based
features and how particular ones affect the predictive power, both overall and for
specific entities.
On the other hand, we will study if it is possible to use knowledge extracted from
social media texts to predict the outcome of public opinion surveys. The automatic
content analysis of mass media in the social sciences has become necessary and possible
with the rise of social media and computational power. One particularly promising
avenue of research concerns the use of sentiment analysis in microblog streams. However,
one of the main challenges consists in aggregating sentiment polarity in a timely fashion
that can be fed to the prediction method.
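To make the notion of a sentiment aggregate function concrete, the sketch below computes one simple daily aggregate per entity, the share of negative mentions; it is only an illustrative assumption, not the set of aggregate functions evaluated later in this thesis.

from collections import defaultdict

def negative_share(mentions):
    """mentions: iterable of (day, entity, polarity) tuples, polarity in {-1, 0, 1}.
    Returns the share of negative mentions per (day, entity)."""
    totals, negatives = defaultdict(int), defaultdict(int)
    for day, entity, polarity in mentions:
        totals[(day, entity)] += 1
        if polarity < 0:
            negatives[(day, entity)] += 1
    return {key: negatives[key] / totals[key] for key in totals}

print(negative_share([("2016-04-04", "candidate_a", -1),
                      ("2016-04-04", "candidate_a", 1),
                      ("2016-04-04", "candidate_b", 0)]))
# {('2016-04-04', 'candidate_a'): 0.5, ('2016-04-04', 'candidate_b'): 0.0}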
A Framework for ORM: The majority of the work in ORM consists of ad-hoc
studies where researchers collect data from a given social network and produce their
specific analysis or predictions, often unreproducible. The availability of open source
platforms in this area is scarce. Researchers typically use specific APIs and software
modules to produce their studies. However, there has been some effort among the
research community to address these issues through open source research platforms.
We therefore aim to create an adaptable text mining framework specifically tailored
for ORM that can be reused in multiple application scenarios, from politics to finance.
This framework is able to collect texts from online media, such as Twitter, and identify
entities of interest and classify sentiment polarity and intensity. The framework
supports multiple data aggregation methods, as well as visualization and modeling
techniques that can be used for both descriptive analytics, such as analyze how political
polls evolve over time, and predictive analytics, such as predict elections.
1.3 Research Methodology
We adopted distinct research methodologies in the process of developing the research
work described in this thesis. The origin of this work was the POPSTAR project.
POPSTAR (Public Opinion and Sentiment Tracking, Analysis, and Research) was a
project that developed methods for the collection, measurement and aggregation of
political opinions voiced in microblogs (Twitter), in blogs and online news. A first
prototype of the framework for ORM was implemented and served as the backend
of the POPSTAR website (http://www.popstar.pt/). The ground work concerned
with the development of a framework for ORM was carried out in the scope of the project.
Therefore, the POPSTAR website served as use case for validating the effectiveness
and adaptability of the framework.
The Entity Filtering and Sentiment Analysis modules of the framework were evaluated using well-known external benchmarks, resulting in state-of-the-art performance.
We participated in RepLab 2013 Filtering Task and evaluated our Entity Filtering
method using the dataset created for the competition. One of our submissions obtained
the first place at the competition. We also participated in SemEval 2017 Task 5:
Fine-grained Sentiment Analysis on Financial Microblogs and News. We were ranked
4th on one of the metrics in sub-task 5.1 (Microblogs).
We performed two experiments regarding text-based entity-centric predictions.
For predicting entity popularity on Twitter based on the news cycle we collected tweets
and news articles from Portugal using the SocialBus twitter collector and online news
from 51 different news outlets collected by SAPO. We used the number of entity
mentions on Twitter as target variable and we extracted text-based features from the
news datasets. Both datasets were aligned in time. We used the same Twitter dataset
for studying different sentiment aggregate functions to serve as features for predicting
political polls of a private opinion studies company, Eurosondagem.
Improvements of Entity-Relationship (E-R) retrieval techniques have been hampered
by a lack of test collections, particularly for complex queries involving multiple entities
and relationships. We created a method for generating E-R test queries to support
comprehensive E-R search experiments. Queries and relevance judgments were created
from content that exists in a tabular form where columns represent entity types and
the table structure implies one or more relationships among the entities. Editorial
work involved creating natural language queries based on relationships represented
by the entries in the table. We have publicly released the RELink test collection
comprising 600 queries and relevance judgments obtained from a sample of Wikipedia
List-of-lists-of-lists tables.
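The sketch below illustrates the core idea behind that semi-automatic procedure (the table content and variable names are hypothetical): each row of a table whose columns hold entities of given types becomes a relevant entity tuple for a natural language E-R query written by an editor over that table.

# Columns name the entity types; the table structure implies the relationship.
header = ("US technology company", "Chinese electronics manufacturer")
rows = [("Apple", "Foxconn")]          # further rows omitted

# Editorial step (manual): a natural language E-R query answered by the rows.
query = "US technology companies contracts Chinese electronics manufacturers"

# Each row becomes one relevance judgment tuple for that query.
relevance_judgments = {query: set(rows)}
print(relevance_judgments)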
We evaluated the new methods proposed for E-R retrieval using the RELink query
collection together with two other smaller query collections created by research work
in Semantic Web-based E-R retrieval. We used a large web corpus, the ClueWeb-09B
containing 50 million web pages for creating E-R retrieval tailored indexes for running
our experiments. Moreover, we implemented a demo using a large news collection of
12 million Portuguese news articles, resulting in the best demo award at ECIR 2016.
1.4 Contributions and Applications
This work resulted in the following contributions:
1. A Text Mining framework that puts together all the building blocks required
to perform ORM. The framework is adaptable and can be reused in different
application scenarios, such as finance and politics. The framework provides entity-specific Text Mining functionalities that enable the collection, disambiguation,
sentiment analysis, aggregation, prediction and visualization of entity-centric
information from heterogeneous Web data sources. Furthermore, given that it is
built using a modular architecture providing abstraction layers and well-defined
interfaces, new functionalities can easily be integrated.
2. Generalization of the problem of entity-relationship search to cover entity types
and relationships represented by any attribute and predicate, respectively, rather
than a pre-defined set.
3. A general probabilistic model for E-R retrieval using Bayesian Networks.
4. Proposal of two design patterns that support retrieval approaches using the E-R
model.
5. Proposal of an Entity-Relationship Dependence Model that builds on the basic
Sequential Dependence Model (SDM) to provide extensible entity-relationship
representations and dependencies, suitable for complex, multi-relation queries.
6. An Entity-relationship indexing and retrieval approach including learning to
rank/data fusion methods that can handle entity and relationship ranking and
merging of results.
7. The proposal of a method and strategy for automatically obtaining relevance
judgments for entity-relationship queries.
8. We make publicly available queries and relevance judgments for the previous
task.
9. Entity Filtering and Financial Sentiment Analysis methods tailored for Twitter
that are able to cope with the constraints of short, informal texts.
10. Analysis of the predictive power of online news regarding entity-centric metrics
on Twitter, such as popularity or sentiment.
11. Analysis of how to combine entity-centric knowledge obtained from heterogeneous
sources for survey-like prediction tasks.
We believe this work can be useful in a wide range of applications, from which we
highlight six:
Reputation Management is concerned with influencing and controlling company or individual reputation; consequently, tracking what is said about
entities online is one of the main concerns of this area. For instance, knowing whether
a given news article will have a negative impact on an entity’s reputation would be
crucial for damage control.
Digital Libraries are special libraries comprising a collection of digital objects
(e.g. text or images) stored in an electronic media format. They are ubiquitous
nowadays, from academic repositories, to biomedical databases, law enforcement
repositories, etc. We believe the contributions we make to the Entity-Relationship
Retrieval research problem can be applied to any digital library enabling a new
wide range of search capabilities.
Fraud Detection and insider trading detection is an area where information
about entities (individuals and companies) and relationships between entities is
very useful to discover hidden relationships and contexts of entities that might
represent conflicts of interest or even fraud.
Journalism, or more specifically, computational journalism would benefit from a
powerful entity-relationship search tool in which journalists could investigate how
entities were previously mentioned on the Web, including online news through
time, as well as relationships among entities and their semantics.
Political Science has given a lot of attention to Social Media in recent years due
to the sheer amount of people reactions and opinions regarding politically relevant
events. Being able to analyze the interplay between online news and Social Media
from a political entity perspective can be very interesting for political scientists.
On the other hand, it is becoming increasingly difficult to obtain poll responses
via telephone and it is necessary to start testing alternative approaches.
Social Media Marketing focuses on communicating through social networks
with a company’s potential and existing customers. Evaluating the success of a
given campaign is a key aspect of this area. Therefore assessing the volume and
polarity of mentions of a given company before and after a campaign would be
very useful.
1.5 Foundations
Most of the material of this thesis was previously published in journal, conference and
workshop publications:
• P.Saleiro, E. M. Rodrigues, C. Soares, E. Oliveira, “TexRep: A Text Mining
Framework for Online Reputation Monitoring”, New Generation Computing,
Volume 35, Number 4 2017 [14]
• P. Saleiro, N. Milic-Frayling, E. M. Rodrigues, C. Soares, “RELink: A Research
Framework and Test Collection for Entity-Relationship Retrieval”, 40th International ACM SIGIR Conference on Research and Development in Information
Retrieval (SIGIR 2017) [15]
• P. Saleiro, N. Milic-Frayling, E. M. Rodrigues, C. Soares, “Early Fusion Strategy
for Entity-Relationship Retrieval”, The First Workshop on Knowledge Graphs
and Semantics for Text Retrieval and Analysis (KG4IR@SIGIR 2017) [16]
• P. Saleiro, L. Sarmento, E. M. Rodrigues, C. Soares, E. Oliveira, “Learning Word
Embeddings from the Portuguese Twitter Stream: A Study of some Practical
Aspects”, Progress in Artificial Intelligence (EPIA 2017) [17]
• P. Saleiro, E. M. Rodrigues, C. Soares, E. Oliveira, “FEUP at SemEval-2017 Task
5: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings”,
International Workshop on Semantic Evaluation (SemEval@ACL 2017) [18]
• P. Saleiro and C. Soares, “Learning from the News: Predicting Entity Popularity
on Twitter” in Advances in Intelligent Data Analysis XV (IDA 2016) [19]
• P. Saleiro, J. Teixeira, C. Soares, E. Oliveira, “TimeMachine: Entity-centric
Search and Visualization of News Archives” in Advances in Information Retrieval:
38th European Conference on IR Research (ECIR 2016) [20]
• P. Saleiro, L. Gomes, C. Soares, “Sentiment Aggregate Functions for Political
Opinion Polling using Microblog Streams” in International C* Conference on
Computer Science and Software Engineering (C3S2E 2016) [21]
• P. Saleiro, S. Amir, M. J. Silva, C. Soares , “POPmine: Tracking Political Opinion
on the Web” in IEEE International Conference on Computer and Information
Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (IUCC
2015) [22]
• P. Saleiro, L. Rei, A. Pasquali, C. Soares, et al., “POPSTAR at RepLab 2013:
Name ambiguity resolution on Twitter” in Fourth International Conference of
the CLEF initiative (CLEF 2013) [23]
1.6 Thesis Outline
In Chapter 2 we discuss related work to this thesis. In Chapter 3 we present a
formalization of the problem of E-R retrieval using an IR-centric approach. We provide
two design patterns for fusion-based E-R retrieval: Early Fusion and Late Fusion. We
end the chapter by introducing a new supervised early fusion-based Entity Relationship
Dependence Model (ERDM) that can be seen as an extension of the MRF framework
for retrieval adapted to E-R retrieval. In Chapter 4 we describe a set of experiments
on E-R retrieval over a Web corpus. First we introduce a new query collection, RELink
QC, specifically tailored to this problem. We developed a semi-automatic approach
to collect relevance judgments from tabular data and the editorial work consisted in
creating E-R queries answered by those relevance judgments. We ran experiments
using the ClueWeb09-B as dataset and provide evaluation results for the new proposed
methods for E-R retrieval.
Chapter 5 is dedicated to Entity Filtering and Financial Sentiment Analysis. We
evaluate our approaches using well known external benchmarks, namely, RepLab
2013 and SemEval 2017. In Chapter 6, we present two experiments of text-based
entity-centric predictions. In the first experiment, we try to predict the popularity of
entities on social media using solely features extracted from the news cycle. In the
second experiment, we try to assess which sentiment aggregate functions are useful in
predicting political polls results.
In Chapter 7, we present a unified framework for ORM. The framework is divided
into two major containers: RELink (Entity Retrieval) and TexRep (Text Mining). We
present the data flow within the framework and how it can be used as a reference
open source framework for researching in ORM. We also present some case studies of
using this framework. We end this thesis with Chapter 8 which is dedicated to the
conclusions.
Chapter 2
Background and Related Work
This chapter introduces an overview of the background concepts and previous research
work on the tasks addressed in this dissertation. We start by presenting a brief
description of the task of Online Reputation Monitoring (ORM), including related
frameworks for ORM. We then survey previous research work in Entity Retrieval and
Semantic Search, including a detailed explanation of the Markov Random Field model
for retrieval and its variations. We describe the tasks of Named Entity Disambiguation,
Sentiment Analysis and previous work on training word embeddings. We end this
chapter by providing an overview of related work on text-based predictions, including
predicting social media attention or the outcome of political elections.
2.1 Online Reputation Monitoring
The reputation of a company is important not only for the company itself but also for its
stakeholders. More specifically, stakeholders make decisions about the company and its
products faster if they are aware of the image of the company [24]. From the company
perspective, reputation is an asset as it attracts stakeholders and it can represent
economic profit in the end [25, 6].
In 2001, Newell and Goldsmith used questionnaire and survey methodologies to
introduce the first standardized and reliable measure of credibility of companies from a
consumer perspective [26]. There have been also studies that find a correlation between
company indicators such as reputation, trust and credibility, and financial indicators,
such as sales and profits [6, 7]. These studies found that although reputations are
intangible, they influence tangible assets. Following this reasoning, Fombrun created a
very successful measurement framework, named RepTrak [27].
A different methodology compared to questionnaires is media analysis (news, TV
and radio broadcasts). Typically, the analysis involves consuming and categorizing
media according to stakeholder and polarity (positive, negative) towards the company.
Recently, Social Media analysis is becoming an important proxy of people’s opinions,
originating the field of Online Reputation Monitoring [28]. While traditional reputation
monitoring is mostly manual, online media pose the opportunity to process, understand
and aggregate large streams of facts about a company or individual.
ORM requires some level of continuous monitoring [29]. It is crucial to detect early
the changes in the perception of a company or personality conveyed in Social Media.
Online buzz may be good or bad and consequently, companies must react and address
negative trends [30, 31]. It also creates an opportunity to monitor the reputation
of competitors. In this context, Text Mining plays a key, enabling role as it offers
methods for deriving high-quality information from textual content [32]. For instance,
Gonzalo [31] identifies 5 different Text Mining research areas relevant to ORM: entity
filtering, topic tracking, reputation priority detection, user profiling and automatic
reporting/summarization.
Social Media, as a new way of communication and collaboration, influences every
stakeholder in society, such as personalities, companies or individuals [33].
Social Media users share every aspect of their lives and that includes information about
events, news stories, politicians, brands or organizations. Companies have access to all
this sharing which opens new horizons for obtaining insights that can be valuable to
them and their online reputation. Companies also invest a big share of their public
relations on Social Media. Building a strong reputation can take long time and effort
but destroying it can take place overnight. Therefore, as the importance of Social
Media increased, so did the importance of having powerful tools that deal with this
enormous amount of data.
2.1.1 Related Frameworks
The great majority of work in ORM consists of ad-hoc studies, and platforms for ORM
are usually developed by private companies that do not share internal information.
However, there are some open source research projects that can be considered as related
frameworks to this work.
Trendminer [34] is one such platform that enables real-time analysis of Twitter
data, but has a very simple sentiment analysis using word counts and lacks flexibility
in order to support entity-centric data processing. A framework for ORM should be
entity-centric, i.e., collect, process and aggregate texts and information extracted from
those texts in relation to the entities being monitored.
conTEXT [35] addresses adaptability and reusability by providing a modular interface and allowing plugin components to extend the framework, especially from the
perspective of the data sources and text analysis modules. For instance, it does not
support a Sentiment Analysis module by default, but one could be plugged in. Nevertheless,
conTEXT does not support the plugin of aggregation and prediction modules which
makes it not suitable for ORM. The FORA framework [30] is specifically tailored for
ORM. It creates an ontology based on fuzzy clustering of texts but it is only concerned
with extracting relevant linguistic units regarding the target entities; it does not
include automatic sentiment analysis and does not allow the plugin of new modules.
POPmine [36] was the first version of our Text Mining framework for ORM and
it was developed specifically in the context of a project in political data science. It
comprises a richer set of modules, including cross media data collection (Twitter, blog
posts and online news) and real-time trend analysis based on entity filtering and
sentiment analysis modules. In fact, our current version of TexRep, our Text Mining
framework for ORM, can be seen as an extension of the POPmine architecture by
creating a more general purpose framework for ORM which is not restricted to political
analysis. While it would be possible to adapt POPmine’s entity disambiguation and
sentiment analysis modules, its aggregations are specific to the political scenarios. On
the other hand, TexRep allows users to define and plug in custom aggregate
functions. Moreover, POPmine has limited user configurations (e.g. lacks support for
pre-trained word embeddings) and does not include predictive capabilities.
2.2 Entity Retrieval and Semantic Search
Information Retrieval deals with the “search for information”. It is defined as the
activity of finding relevant information resources (usually documents) that meet an
information need (usually a query), from within a large collection of resources of an
unstructured nature (usually text) [37].
In early Boolean retrieval systems, documents were retrieved if the exact query term
was present and they were represented as a list of terms [37]. With the introduction
of the Vector Space Model, each term represents a dimension in a multi-dimensional
space, and consequently, each document and query are represented as vectors [38].
Values of each dimension of the document vector correspond to the term frequency
(TF) of the term in the document. Therefore, the ranked list of documents is produced
based on their spatial distance to the query vector.
The concept of inverse document frequency (IDF) was later introduced to limit the
effect of common terms in a collection [39]. A term that occurs in many documents
of the collection has a lower IDF than terms that occur less often. The combination
of TF-IDF and variants, such as BM25 [40], became commonly used weighting statistics
for the Vector Space Model.
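For reference, one standard formulation of these weighting statistics (a common textbook variant, not notation taken from the cited works) is
\[
  w_{t,d} = \mathrm{tf}_{t,d} \cdot \log \frac{N}{\mathrm{df}_t},
\]
where $\mathrm{tf}_{t,d}$ is the frequency of term $t$ in document $d$, $N$ is the number of documents in the collection and $\mathrm{df}_t$ is the number of documents containing $t$; documents are then ranked by the cosine similarity between the query and document vectors,
\[
  \mathrm{score}(q,d) = \frac{\vec{q} \cdot \vec{d}}{\lVert \vec{q} \rVert \, \lVert \vec{d} \rVert}.
\]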
Recently, it has been observed that when people have focused information needs,
entities better satisfy those queries than a list of documents or large text snippets
[5]. This type of retrieval is called Entity Retrieval or Entity-oriented retrieval and
includes extra Information Extraction tasks for processing documents, such as Named
Entity Recognition (NER) and Named Entity Disambiguation (NED). Entity Retrieval
is closely connected with Question answering (QA) though, QA systems focus on
understanding the semantic intent of a natural language query and deciding which
sentences represent the answer to the user.
Considering the query “British politicians in Panama papers”, the expected result
would be a list of names rather than documents related to British politics and the
“Panama Papers” news story. There are two search patterns related to Entity Retrieval
[4]. First, the user knows the existence of a certain entity and aims to find related
information about it. For example, a user searching for product related information.
Second, the user defines a predicate that constrains the search to a certain type of
entities, e.g. searching for movies of a certain genre.
Online Reputation Monitoring systems usually focus on reporting statistical insights
based on information extracted from Social Media and online news mentioning the
target entity. However, this kind of interaction limits users’ ability to explore
all the knowledge extracted about the target entity. We believe Entity Retrieval could
enhance Online Reputation Monitoring by allowing free text search over all mentions of
the target entity and, consequently, allow users to discover information that descriptive
statistical insights might not be able to identify.
Entity Retrieval differs from traditional document retrieval in the retrieval unit.
While document retrieval considers a document as the atomic response to a query, in
Entity Retrieval document boundaries are not so important and entities need to be
identified based on occurrence in documents [41]. The focus level is more granular as
the objective is to search and rank entities among documents. However, traditional
Entity Retrieval systems do not exploit semantic relationships between terms in the
query and in the collection of documents, i.e. if there is no match between query terms
and terms describing the entity, relevant entities tend to be missed.
Entity Retrieval has been an active research topic in the last decade, including
various specialized tracks, such as Expert finding track [42], INEX entity ranking track
[43], TREC entity track [44] and SIGIR EOS workshop [45]. Previous research faced
two major challenges: entity representation and entity ranking. Entities are complex
objects composed by a different number of properties and are mentioned in a variety
of contexts through time. Consequently, there is no single definition of the atomic unit
(entity) to be retrieved. Additionally, it is a challenge to devise entity rankings that
use various entity representations approaches and tackle different information needs.
There are two main approaches for tackling Entity Retrieval: “profile based approach”
and “voting approach” [46]. The “profile based approach” starts by applying NER
and NED in the collection in order to extract all entity occurrences. Then, for each
entity identified, a meta-document is created by concatenating every passage in which
the entity occurs. An index of entity meta-documents is created and a standard
document ranking method (e.g. BM25) is applied to rank meta-documents with
respect to a given query [47, 48]. One of the main challenges of this approach is the
transformation of original text documents to an entity-centric meta-document index,
including pre-processing the collection in order to extract all entities and their context.
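A minimal sketch of that meta-document construction step, assuming entity occurrences have already been extracted with NER and NED, could look as follows (names and data are illustrative):

from collections import defaultdict

def build_meta_documents(passages):
    """passages: iterable of (entity_id, passage_text) pairs produced by NER/NED.
    Returns one meta-document per entity, built by concatenating every passage in
    which the entity occurs; the meta-documents are then indexed and ranked with a
    standard method such as BM25."""
    meta = defaultdict(list)
    for entity_id, passage in passages:
        meta[entity_id].append(passage)
    return {entity_id: " ".join(parts) for entity_id, parts in meta.items()}

meta_docs = build_meta_documents([
    ("Apple_Inc", "Apple unveiled a new iPhone ..."),
    ("Apple_Inc", "Apple and Foxconn were criticized ..."),
    ("Foxconn", "Foxconn assembles electronics in China ..."),
])
print(sorted(meta_docs))   # ['Apple_Inc', 'Foxconn']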
In the “voting approach”, the query is processed as typical document retrieval to
obtain an initial list of documents [46, 49]. Entities are extracted from these documents
using NER and NED techniques. Then, score functions are calculated to estimate the
relation of entities captured and the initial query. For instance, counting the frequency
of occurrence of the entity in the top documents combined with each document score
(relevance to the query) [46]. Another approach consists in taking into account the
distance between the entity mention and the query terms in the documents [50].
Recently, there has been increasing research interest in Entity Search over Linked Data,
also referred to as Semantic Search, due to the availability of structured information
about entities and relations in the form of Knowledge Bases [51–53]. Semantic Search
exploits rich structured entity-related data in machine-readable RDF format, expressed as a
triple (entity, predicate, object). There are two types of search: keyword-based and
natural language based search [54, 55]. Regardless of the search type, the objective is
to interpret the semantic structure of queries and translate it to the underlying schema
of the target Knowledge Base. Most of the research focus is on interpreting the query
intent [54, 55] while others focus on how to devise a ranking framework that deals with
similarities between different attributes of the entity entry in the KB and the query
terms [53].
Relationship Queries: Li et al. [56] were the first to study relationship queries
for structured querying entities over Wikipedia text with multiple predicates. This
work used a query language with typed variables, for both entities and entity pairs, that
integrates text conditions. First it computes individual predicates and then aggregates
multiple predicate scores into a result score. The proposed method to score predicates
relies on redundant co-occurrence contexts.
Yahya et al. [10] defined relationship queries as SPARQL-like subject-predicate-object (SPO) queries joined by one or more relationships. The authors cast this problem into a structured query language (SPARQL) and extended it to support textual phrases for each of the SPO arguments. Therefore, it allows combining structured SPARQL-like triples and text simultaneously. It extended the YAGO knowledge base
with triples extracted from ClueWeb using an Open Information Extraction approach
[57].
In the scope of relational databases, keyword-based graph search has been widely studied, including ranking [58]. However, these approaches do not consider the full documents of graph nodes and are limited to structured data. While searching over structured data is precise, it can be limited in various respects. In order to increase recall when no results are returned and to enable prioritization of results when there are too many, Elbassuoni et al. [59] propose a language-model-based approach for ranking results. Similarly,
models such as EntityRank by Cheng et al. [60] and Shallow Semantic Queries by Li et al. [56] relax the predicate definitions in the structured queries and, instead,
implement proximity operators to bind the instances across entity types. Yahya et al.
[10] propose algorithms for application of a set of relaxation rules that yield higher
recall.
Entity Retrieval and proximity: Web documents contain term information that can be used to apply pattern heuristics and statistical analysis, which are often used to infer entities, as investigated by Conrad and Utt [61], Petkova and Croft [50], and Rennie and Jaakkola [62]. In fact, early work by Conrad and Utt [61] demonstrates a method that
retrieves entities located in the proximity of a given keyword. They show that using a
fixed-size window around proper-names can be effective for supporting search for people
and finding relationship among entities. Similar considerations of the co-occurrence
statistics have been used to identify salient terminology, i.e. keywords to include in the
document index [50].
2.2.1 Markov Random Field for IR
In this section we detail the generic Markov Random Field (MRF) model for retrieval
and its variation, the Sequential Dependence Model (SDM). As we later show, this
model is the basis for our entity-relationship retrieval model.
The Markov Random Field (MRF) model for retrieval was first proposed by Metzler
and Croft [63] to model query term and document dependencies. In the context of
retrieval, the objective is to rank documents by computing the posterior P (D|Q), given
a document D and a query Q:
P(D|Q) = \frac{P(Q, D)}{P(Q)} \qquad (2.1)
For that purpose, a MRF is constructed from a graph G, which follows the local
Markov property: every random variable in G is independent of its non-neighbors
given observed values for its neighbors. Therefore, different edge configurations imply
different independence assumptions.
Fig. 2.1 Markov Random Field document and term dependencies.
Metzler and Croft [63] defined that G consists of query term nodes qi and a document
node D, as depicted in Figure 2.1. The joint probability mass function over the random
variables in G is defined by:
P_{G,\Lambda}(Q, D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c; \Lambda) \qquad (2.2)
where Q = q_1, ..., q_n are the query term nodes, D is the document node, C(G) is the set of maximal cliques in G, and ψ(c; Λ) is a non-negative potential function over clique configurations. The parameter $Z_\Lambda = \sum_{Q,D} \prod_{c \in C(G)} \psi(c; \Lambda)$ is the partition function
that normalizes the distribution. It is generally unfeasible to compute ZΛ , due to
the exponential number of terms in the summation, and it is ignored as it does not
influence ranking.
Background and Related Work
20
The potential functions are defined as compatibility functions between nodes in a
clique. For instance, a tf-idf score can be measured to reflect the “aboutness” between
a query term qi and a document D. Metzler and Croft [63] propose to associate one or more real-valued feature functions with each clique in the graph. The non-negative potential functions are defined using an exponential form ψ(c; Λ) = exp[λc f(c)], where λc is a feature weight, a free parameter in the model, associated with the feature function f(c). The model allows sharing parameters and feature functions across cliques of the same configuration, i.e. same size and type of nodes (e.g. 2-cliques of one query
term node and one document node).
For each query Q, we construct a graph representing the query term dependencies,
define a set of non-negative potential functions over the cliques of this graph and rank
documents in descending order of PΛ (D|Q):
P_\Lambda(D|Q) \stackrel{rank}{=} \log P_\Lambda(D|Q)
\stackrel{rank}{=} \log P_\Lambda(Q, D) - \log P_\Lambda(Q)
\stackrel{rank}{=} \sum_{c \in C(G)} \log \psi(c; \Lambda) \qquad (2.3)
\stackrel{rank}{=} \sum_{c \in C(G)} \log \exp[\lambda_c f(c)]
\stackrel{rank}{=} \sum_{c \in C(G)} \lambda_c f(c) \qquad (2.4)
Metzler and Croft concluded that given its general form, the MRF can emulate
most of the retrieval and dependence models, such as language models [64].
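To make the ranking function concrete, the following minimal Python sketch ranks documents by the weighted sum of clique feature functions on the right-hand side of Equation 2.4. It assumes a single toy log-tf feature over (query term, document) cliques and one shared weight; the function names and the feature itself are illustrative assumptions, not part of the framework as defined by Metzler and Croft.

```python
import math
from collections import Counter

def log_tf_feature(query_term, doc_tokens):
    # Toy feature f(c) for a 2-clique formed by a query term and the document.
    return math.log(1 + Counter(doc_tokens)[query_term])

def mrf_rank(query_terms, docs, lam=1.0):
    # Rank documents by sum over cliques of lambda_c * f(c) (Eq. 2.4), using a
    # single shared weight `lam` for all cliques of this configuration.
    scored = []
    for doc_id, tokens in docs.items():
        score = sum(lam * log_tf_feature(q, tokens) for q in query_terms)
        scored.append((score, doc_id))
    return sorted(scored, reverse=True)

docs = {"d1": ["entity", "retrieval", "entity", "model"],
        "d2": ["sentiment", "analysis", "of", "tweets"]}
print(mrf_rank(["entity", "retrieval"], docs))  # d1 should rank first
```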
2.2.2 Sequential Dependence Model
The Sequential Dependence Model (SDM) is the most popular variant of the MRF
retrieval model [63]. It defines two clique configurations represented in the following
potential functions ψ(qi , D; Λ) and ψ(qi , qi+1 , D; Λ). Basically, it considers sequential
dependency between adjacent query terms and the document node.
The potential function of the 2-cliques containing a query term node and a
document node is represented as ψ(qi , D; Λ) = exp[λT fT (qi , D)]. The clique configuration containing contiguous query terms and a document node is represented
by two real valued functions. The first considers exact ordered matches of the
2.2 Entity Retrieval and Semantic Search
21
two query terms in the document, while the second aims to capture unordered
matches within N fixed window sizes. Consequently, the second potential function is
ψ(qi , qi+1 , D; Λ) = exp[λO fO (qi , qi+1 , D) + λU fU (qi , qi+1 , D)].
Replacing ψ(c; Λ) by these potential functions in Equation 2.3 and factoring out
the parameters λ, the SDM can be represented as a mixture model computed over
term, phrase and proximity feature classes:
P(D|Q) \stackrel{rank}{=} \lambda_T \sum_{q_i \in Q} f_T(q_i, D) + \lambda_O \sum_{q_i, q_{i+1} \in Q} f_O(q_i, q_{i+1}, D) + \lambda_U \sum_{q_i, q_{i+1} \in Q} f_U(q_i, q_{i+1}, D)
where the free parameters λ must follow the constraint λT +λO +λU = 1. Coordinate
Ascent was chosen to learn the optimal λ values that maximize mean average precision
using training data [65]. Considering tf as the frequency of the term(s) in the document D and cf as the frequency of the term(s) in the entire collection C, the feature functions in SDM are set as:

f_T(q_i, D) = \log \frac{tf_{q_i,D} + \mu \frac{cf_{q_i}}{|C|}}{|D| + \mu} \qquad (2.5)

f_O(q_i, q_{i+1}, D) = \log \frac{tf_{\#1(q_i,q_{i+1}),D} + \mu \frac{cf_{\#1(q_i,q_{i+1})}}{|C|}}{|D| + \mu} \qquad (2.6)

f_U(q_i, q_{i+1}, D) = \log \frac{tf_{\#uwN(q_i,q_{i+1}),D} + \mu \frac{cf_{\#uwN(q_i,q_{i+1})}}{|C|}}{|D| + \mu} \qquad (2.7)
where µ is the Dirichlet prior for smoothing, #1(qi , qi+1 ) is a function that searches
for exact matches of the phrase “qi qi+1 ” and #uwN (qi , qi+1 ) is a function that searches
for co-occurrences of qi and qi+1 within a window of fixed-N terms (usually 8 terms)
across document D. SDM has shown state-of-the-art performance in ad-hoc document retrieval when compared with several bigram dependence models and standard bag-of-words retrieval models, across short and long queries [66].
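The following Python sketch illustrates how the SDM mixture could be computed over tokenized documents. It rests on assumptions not stated above: the Dirichlet prior µ defaults to 2500, the λ mixture weights are arbitrary rather than learned by Coordinate Ascent, and the unordered-window counter is a simplified stand-in for the #uwN operator.

```python
import math
from collections import Counter

def dirichlet_log(tf, cf, doc_len, coll_len, mu=2500):
    # Shared Dirichlet-smoothed log feature used by f_T, f_O and f_U (Eqs. 2.5-2.7).
    num = tf + mu * cf / coll_len
    if num == 0:
        return float("-inf")  # term/phrase unseen everywhere in this toy setup
    return math.log(num / (doc_len + mu))

def ordered_count(tokens, a, b):
    # #1(a, b): exact ordered matches of the phrase "a b".
    return sum(1 for x, y in zip(tokens, tokens[1:]) if (x, y) == (a, b))

def unordered_count(tokens, a, b, window=8):
    # #uwN(a, b): co-occurrences of a and b within a window of N terms (simplified).
    pos_a = [i for i, t in enumerate(tokens) if t == a]
    pos_b = [i for i, t in enumerate(tokens) if t == b]
    return sum(1 for i in pos_a for j in pos_b if i != j and abs(i - j) < window)

def sdm_score(query, doc, collection, lambdas=(0.85, 0.10, 0.05)):
    # Mixture of term, ordered-phrase and unordered-window feature classes.
    lam_t, lam_o, lam_u = lambdas
    coll_tokens = [t for d in collection for t in d]
    coll_len, doc_len = len(coll_tokens), len(doc)
    coll_tf, doc_tf = Counter(coll_tokens), Counter(doc)
    score = sum(lam_t * dirichlet_log(doc_tf[q], coll_tf[q], doc_len, coll_len)
                for q in query)
    for a, b in zip(query, query[1:]):
        cf_o = sum(ordered_count(d, a, b) for d in collection)
        cf_u = sum(unordered_count(d, a, b) for d in collection)
        score += lam_o * dirichlet_log(ordered_count(doc, a, b), cf_o, doc_len, coll_len)
        score += lam_u * dirichlet_log(unordered_count(doc, a, b), cf_u, doc_len, coll_len)
    return score
```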
2.2.3 MRF for Entity Retrieval
The current state-of-the-art methods in ad-hoc entity retrieval from knowledge graphs
are based on MRF [53, 67]. The Fielded Sequential Dependence Model (FSDM) [53]
extends SDM for structured document retrieval and it is applied to entity retrieval
from knowledge graphs. In this context, entity documents are composed of fields representing metadata about the entity. Each entity document has five fields: names,
attributes, categories, similar entity names and related entity names. FSDM builds
individual language models for each field in the knowledge base. This corresponds to
replacing SDM feature functions with those of the Mixture of Language Models [68].
The feature functions of FSDM are defined as:
\tilde{f}_T(q_i, D) = \log \sum_{j}^{F} w_j^{T} \frac{tf_{q_i,D_j} + \mu_j \frac{cf_{q_i,j}}{|C_j|}}{|D_j| + \mu_j} \qquad (2.8)

\tilde{f}_O(q_i, q_{i+1}, D) = \log \sum_{j}^{F} w_j^{O} \frac{tf_{\#1(q_i,q_{i+1}),D_j} + \mu_j \frac{cf_{\#1(q_i,q_{i+1}),j}}{|C_j|}}{|D_j| + \mu_j} \qquad (2.9)

\tilde{f}_U(q_i, q_{i+1}, D) = \log \sum_{j}^{F} w_j^{U} \frac{tf_{\#uwN(q_i,q_{i+1}),D_j} + \mu_j \frac{cf_{\#uwN(q_i,q_{i+1}),j}}{|C_j|}}{|D_j| + \mu_j} \qquad (2.10)
where µ_j are the Dirichlet priors for each field and w_j are the weights for each field, which must be non-negative and satisfy the constraint $\sum_{j}^{F} w_j = 1$. Coordinate Ascent was used in two stages to learn the w_j and λ values [53].
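As an illustration, a fielded unigram feature in the spirit of Equation 2.8 can be sketched as below; the data layout (dictionaries of per-field counts and statistics) and all names are assumptions made here for the example, not the data structures used in the cited work.

```python
import math

def fsdm_f_t(term, entity_doc, field_weights, field_stats, mu):
    # f~_T(q_i, D): log of a weighted mixture of per-field Dirichlet-smoothed
    # language models (Eq. 2.8). entity_doc[j] = (term_counts, doc_len_j),
    # field_stats[j] = (coll_counts, coll_len_j), field_weights[j] = w_j^T.
    mixture = 0.0
    for j, w_j in field_weights.items():
        tf_j, doc_len_j = entity_doc[j]
        cf_j, coll_len_j = field_stats[j]
        p_j = (tf_j.get(term, 0) + mu[j] * cf_j.get(term, 0) / coll_len_j) / (doc_len_j + mu[j])
        mixture += w_j * p_j
    return math.log(mixture)

# Toy example with two fields ("names" and "attributes"); weights sum to 1.
entity_doc = {"names": ({"obama": 2}, 4), "attributes": ({"president": 3}, 40)}
field_stats = {"names": ({"obama": 10}, 1_000), "attributes": ({"obama": 5}, 10_000)}
print(fsdm_f_t("obama", entity_doc, {"names": 0.8, "attributes": 0.2},
               field_stats, mu={"names": 10, "attributes": 100}))
```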
The Parameterized Fielded Sequential Dependence Model (PFSDM) [67] extends the FSDM by dynamically calculating the field weights w_j for different query terms. Part-of-speech features are applied to capture the relevance of query terms to specific fields of entity documents. For instance, the NNP feature is positive if query terms are proper nouns, indicating that they should be mapped to the names field. The field weight contributions of a given query term q_i and a query bigram q_i, q_{i+1} in a field j are linear weighted combinations of features:
w_{q_i,j} = \sum_{k} \alpha^{U}_{j,k} \varphi_k(q_i, j) \qquad (2.11)

w_{q_i,q_{i+1},j} = \sum_{k} \alpha^{B}_{j,k} \varphi_k(q_i, q_{i+1}, j) \qquad (2.12)
where ϕ_k(q_i, j) is the k-th feature function of a query unigram for the field j and α^U_{j,k} is its respective weight. For bigrams, ϕ_k(q_i, q_{i+1}, j) is the k-th feature function of a query bigram for the field j and α^B_{j,k} is its respective weight. Consequently, PFSDM has
F ∗ U + F ∗ B + 3 total parameters, where F is the number of fields, U is the number
of field mapping features for unigrams, B is the number of field mapping features for
bigrams, plus the three λ parameters. Their estimation is performed in a two stage
optimization. First α parameters are learned separately for unigrams and then bigrams.
This is achieved by setting to zero the corresponding λ parameters. In the second
stage, the λ parameters are learned. Coordinate Ascent is used in both stages.
The ELR model exploits entity mentions in queries by defining a dependency
between entity documents and entity links in the query [69].
2.3 Named Entity Disambiguation
Given a mention in a document, Named Entity Disambiguation (NED) or Entity
Linking aims to predict the entity in a reference knowledge base that the string
refers to, or NIL if no such entity is available. Usually the reference knowledge base
(KB) includes a set of documents, where each document describes one specific entity.
Wikipedia is by far the most popular reference KB [70].
Previous research typically performs three steps to link an entity mention to a KB:
1) representation of the mention, i.e. extend the entity mention with relevant knowledge
from the background document, 2) candidate generation, i.e. find all possible KB
entries that the mention might refer to and their representation, and 3) disambiguation, by
computing the similarity between the represented mention and the candidate entities.
Entity Filtering, or targeted entity disambiguation, is a special case of NED in
which there is only one candidate entity, i.e. the entity that is being monitored. There
is an increasing interest in developing Entity Filtering methods for Social Media texts,
considering its specificities and limitations [71, 72]. These approaches focus on finding
relevant keywords for positive and negative cases using co-occurrence, web and collection
based features. Another line of work creates topic-centric entity extraction systems
where entities belong to a certain topic and are used as evidence to disambiguate
the short message given its topic [73]. Similarly, Hangya et al. [74] create features
representing topic distributions over tweets using Latent Dirichlet Allocation (LDA).
The majority of research work in NED is applied to disambiguate entities in reasonably long texts such as news or blog posts. In recent years, there has been an increasing interest in developing NED methods for Social Media texts and their specificities and
limitations [75–78]. A survey and evaluation of state-of-the-art NER and NED for
Tweets concluded that current approaches do not perform robustly on “ill-formed, terse,
and linguistically compressed” microblog texts [79]. Some Twitter-specific methods
reach F1 measures of over 80%, but are still behind the state-of-the-art results obtained
on well-formed news texts.
Social Media texts are too short to provide sufficient information to calculate
context similarity accurately [76, 80, 78, 77, 81]. In addition, most state-of-the-art approaches leverage neighboring entities in the documents but, once again, tweets are short and do not have more than one or two entities mentioned. Most of
them [82, 77, 81] extract information obtained from other tweets, and disambiguate
entity mentions in these tweets collectively. The assumption is that Twitter users are
content generators and tend to scatter their interests over many different messages
they broadcast, which is not necessarily true [83].
Entity Filtering has also been studied in the context of real-time classification.
Davis et al. [81] propose a pipeline containing three stages. Clearly positive examples
are exploited to create filtering rules comprising collocations, users and hashtags.
The remaining examples are classified using an Expectation-Maximization (EM) model trained on the clearly positive examples. Recently, Habib et al. [84] proposed a hybrid approach where the authors first query Google to retrieve a set of possible candidate homepages and then enrich the candidate list with text from Wikipedia. They
extract a set of features for each candidate, namely, a language model and overlapping
terms between tweet and document, as well as URL length and mention-URL string
similarity. In addition, a prior probability of the mention corresponding to a certain
entity on the YAGO [85] knowledge base is also used.
Recent work in NED or Entity Linking includes graph-based algorithms for collective entity disambiguation, such as TagMe [86], Babelfy [87] and WAT [88]. Word and entity embeddings have also been used for entity disambiguation [89–91]. More specifically,
Fang [90] and Moreno [91] propose to learn an embedding space for both entities and
words and then compute similarity features based on the combined representations.
2.4 Sentiment Analysis
In the last decade, the automatic processing of subjective and emotive text, commonly
known as Sentiment Analysis, has triggered huge interest from the Text Mining research
community [92]. A typical task in Sentiment Analysis is text polarity classification and
in the context of this work can be formalized as follows: given a text span that mentions
a target entity, decide whether it conveys positive, negative or neutral sentiment towards
the target.
With the rise of Social Media, research on Sentiment Analysis shifted towards
Twitter. New challenges have arisen, including slang, misspellings, emoticons and poor grammatical structure [92]. A number of competitions were organized, such as SemEval [93],
leading to the creation of resources for research [94].
There are two main approaches to sentiment polarity classification: lexicon-based, using a dictionary of terms and phrases with annotated polarity, or supervised learning, building a model of the differences in language associated with each polarity based on training examples. In the supervised learning approach, a classifier is specifically
trained for a particular type of text (e.g. tweets about politics). Consequently, it is
possible to capture peculiarities of the language used in that context. As expected,
this reduces the generality of the model, as it is biased towards a specific domain.
Supervised learning approaches require training data. In Twitter, most previous
work obtained training data by assuming that emoticons represent the tweet polarity
(positive, negative, neutral) [95], or by using third party software, such as the Stanford
Sentiment Analyzer [96].
Lexicon-based approaches have been shown to work effectively on conventional text [97] but tend to be ill-suited for Twitter data. With the purpose of overcoming this limitation, an algorithm that uses a human-coded lexicon specifically tailored to Social Media text was introduced [98]. SentiStrength has become a reference in recent years due to its relatively good and consistent performance on polarity
classification of Social Media texts. Nevertheless, it is confined to a fixed set of words
and it is context independent.
The recent interest in deep learning led to approaches that use deep learned
word embeddings as features in a variety of Text Mining tasks [99, 100]. In Sentiment
Analysis, recent work integrated polarity information of text into the word embedding by extending the probabilistic document model obtained from Latent Dirichlet Allocation [101], while others learned task-specific embeddings from an existing embedding and sentences with annotated polarity [102]. Other approaches learn polarity-specific word embeddings from tweets collected using emoticons [103] and directly incorporate the supervision from sentiment polarity in the loss functions of neural networks [104].
2.5 Word Embeddings
The most popular and simple way to model and represent text data is the Vector Space
Model [105]. A vector of features in a multi-dimensional feature space represents each
lexical item (e.g. a word) in a document and each item is independent of other items
in the document. This allows computing geometric operations over vectors of lexical items using well-established algebraic methods. However, the Vector Space Model faces some limitations. For instance, the same word can express different meanings in different contexts (the polysemy problem), or different words may be used to describe the same meaning (the synonymy problem). Since 2000, a variety of different methods
(e.g. LDA [106]) and resources (e.g. DBpedia [107]) have been developed to try to
assign semantics, or meaning, to concepts and parts of text.
Word embedding methods aim to represent words as real-valued continuous vectors
in a much lower dimensional space when compared to traditional bag-of-words models.
Moreover, this low dimensional space is able to capture lexical and semantic properties
of words. Co-occurrence statistics are the fundamental information that allows creating
such representations. Two approaches exist for building word embeddings. One creates
a low rank approximation of the word co-occurrence matrix, such as in the case of
Latent Semantic Analysis [108] and GloVe [109]. The other approach consists in
extracting internal representations from neural network models of text [110, 111, 100].
Levy and Goldberg [112] showed that the two approaches are closely related.
Although word embedding research goes back several decades, it was the recent
developments of Deep Learning and the word2vec framework [100] that captured
the attention of the NLP community. Moreover, Mikolov et al. [113] showed that
embeddings trained using word2vec models (CBOW and Skip-gram) exhibit linear
structure, allowing analogy questions of the form “man:woman::king:??.” and can boost
performance of several text classification tasks.
In this context, the objective is to maximize the likelihood that words are predicted
given their context. word2vec has two models for learning word embeddings, the
skip-gram model (SG) and the continuous-bag-of-word model (CBOW). Here we focus
on CBOW. More formally, every word is mapped to a unique vector represented
by a column in a projection matrix W ∈ Rd×V with d as embedding dimension
and V as the total number of words in the vocabulary. Given a sequence of words
w_1, w_2, ..., w_T, the objective is to maximize the average log probability:

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log P(w_t \mid w_{t+j}) \qquad (2.13)
where c is the size of the context window and wt+j is a word in the context window
of the center word wt . The context vector is obtained by averaging the embeddings of
each word w−c≤j≤c,j̸=0 and the prediction of the center word wt is performed using a
softmax multiclass classifier over all vocabulary V :
P(w_t \mid w_{t+j}) = \frac{e^{y_{w_t}}}{\sum_{i} e^{y_{w_i}}} \qquad (2.14)
Each y_i is the un-normalized log-probability for output word i. After training, a low-dimensionality embedding matrix E encapsulating information about each word in the vocabulary and its surrounding contexts is learned, transforming a one-hot sparse representation of words into a compact real-valued embedding vector of size d × 1.
This matrix can then be used as input to other learning algorithms tailored for specific
tasks to further enhance performance.
For large vocabularies it is infeasible to compute the partition function (normalizer) of the softmax; therefore, Mikolov et al. [100] propose to use the hierarchical softmax objective function or to approximate the partition function using a technique called negative sampling. Stochastic gradient descent is usually applied for training the softmax, where
the gradient is obtained via backpropagation.
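A compact numpy sketch of the full-softmax CBOW step described above is shown below; the toy dimensions, the random initialization and the absence of a training loop are assumptions made for illustration. For real vocabularies, the partition function computed here is exactly what hierarchical softmax or negative sampling approximates.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 8                                  # toy vocabulary size and embedding dimension

W_in = rng.normal(scale=0.1, size=(d, V))     # projection matrix W: one column per word
W_out = rng.normal(scale=0.1, size=(V, d))    # output weights producing the scores y_i

def cbow_log_prob(center, context):
    # Average the context embeddings, score every word in the vocabulary and
    # return log P(w_t | context) with a full softmax (Eq. 2.14).
    h = W_in[:, context].mean(axis=1)          # averaged context vector
    y = W_out @ h                              # un-normalized log-probabilities y_i
    log_z = np.log(np.exp(y).sum())            # partition function (normalizer)
    return y[center] - log_z

print(cbow_log_prob(center=3, context=[1, 2, 4, 5]))
```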
There are several approaches to generating word embeddings. One can build
models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe
[100, 109], or one can extract such embeddings as by-products of more general models,
which implicitly compute such word embeddings in the process of solving other language
tasks.
One of the issues of recent work in training word embeddings is the variability of
experimental setups reported. For instance, in the paper describing GloVe [109] the
authors trained their model on five corpora of different sizes and built a vocabulary
of the 400K most frequent words. Mikolov et al. [113] trained with an 82K vocabulary while Mikolov et al. [100] trained with a 3M vocabulary. Recently, Arora et al. [114] proposed a generative model for learning embeddings that provides some theoretical justification for nonlinear models (e.g. word2vec and GloVe) and some hyper-parameter choices. The authors evaluated their model using a 68K vocabulary.
The SemEval 2016 Task 4: Sentiment Analysis in Twitter organizers report that participants either used general-purpose pre-trained word embeddings, trained them on the Tweet 2016 dataset, or trained them “from some sort of dataset” [115]. However, participants neither report the size of the vocabulary used nor the possible effect it might have on the task-specific results.
Recently, Rodrigues et al. [116] created and distributed the first general-purpose embeddings for Portuguese. The word2vec gensim implementation was used and the authors report results with different values for the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets to the Portuguese language, which they also made publicly available; we use some of those in this work.
2.6 Predicting Collective Attention
Online Reputation Monitoring systems would be even more useful if they were able to know in advance whether social media users will talk a lot about the target entities or not. In recent years, a number of research works have studied the relationship and
predictive behavior of user response to the publication of online media items, such
as, commenting news articles, playing Youtube videos, sharing URLs or retweeting
patterns [117–120]. The first attempt to predict the volume of user comments for
online news articles used both metadata from the news articles and linguistic features
[119]. The prediction was divided into two binary classification problems: whether an article would get any comments and whether it would receive a high or low number of comments. Similarly,
other studies found that shallow linguistic features (e.g. TF-IDF or sentiment) and
named entities have good predictive power [121, 122].
Research work more in line with ours tries to predict the popularity of news article shares (URL sharing) on Twitter based on content features [117]. The authors considered
the news source, the article’s category, the article’s author, the subjectivity of the
language in the article, and number of named entities in the article as features. Recently,
there was a large study of the life cycle of news articles in terms of distribution of
visits, tweets and shares over time across different sections of the publisher [123]. Their work was able to improve, for some content types, the prediction of web visits using data from social media ten to twenty minutes after publication.
Other lines of work focused on temporal patterns of user activities and have consistently identified broad classes of temporal patterns based on the presence of a clear peak of activity [124–126, 118]. Classes are differentiated by the specific amount and duration of activity before and after the peak. Crane and Sornette [124] define the endogenous or exogenous origin of events based on whether they are triggered by internal aspects of the social network or by external ones, respectively. They find that hashtag popularity is mostly influenced by exogenous factors instead of epidemic spreading. Other work [125] extends these classes by creating distinct clusters of activity based on the distributions
in different periods (before, during and after the peak) that can be interpreted based
on semantics of hashtags. Consequently, the authors applied text mining techniques to
semantically describe hashtag classes. Yang and Leskovec [118] propose a new measure
of time series similarity and clustering. The authors obtain six classes of temporal
shapes of popularity of a given phrase (meme) associated with a recent event, as well
as the ordering of media sources contribution to its popularity.
Recently, Tsytsarau et al. [127] studied the time series of news events and their
relation to changes of sentiment time series expressed on related topics on social media.
The authors proposed a novel framework using time series convolution between the
importance of events and media response function, specific to media and event type.
Their framework is able to predict time and duration of events as well as shape through
time.
2.7 Political Data Science
Content analysis of mass media has an established tradition in the social sciences,
particularly in the study of effects of media messages, encompassing topics as diverse as
those addressed in seminal studies of newspaper editorials [128], media agenda-setting
[129], or the uses of political rhetoric [130], among many others. By 1997, Riffe and
Freitag [131] reported an increase in the use of content analysis in communication
research and suggested that digital text and computerized means for its extraction
and analysis would reinforce such a trend. Their expectation has been fulfilled: the
use of automated content analysis has by now surpassed the use of hand coding [132].
The increase in the digital sources of text, on the one hand, and current advances
in computation power and design, on the other, are making this development both
necessary and possible, while also raising awareness about the inferential pitfalls
involved [133, 134].
One avenue of research that has been explored in recent years concerns the use of
social media to predict present and future political events, namely electoral results [135–143], although there is no consensus about methods and their consistency [144, 145]. Gayo-Avello [146] summarizes the differences between studies conducted so far by
stating that they vary about period and method of data collection, data cleansing
and pre-processing techniques, prediction approach and performance evaluation. One
particular challenge when using sentiment is how to aggregate opinions in a timely
fashion that can be fed to the prediction method. Two main strategies have been
used to predict elections: buzz, i.e., number of tweets mentioning a given candidate or
party and the use of sentiment polarity. Different computational approaches have been
explored to process sentiment in text, namely machine learning and linguistic-based
methods [147–149]. In practice, algorithms often combine both strategies.
Johnson et al. [150] concluded that more than predicting elections, social media
can be used to gauge sentiment about specific events, such as political news or speeches.
Defending the same idea, Diakopoulos et al. [151] studied the global sentiment variation
based on Twitter messages of an Obama vs McCain political TV debate while it was
still happening. Tumasjan et al. [140] used Twitter data to predict the 2009 Federal
Election in Germany. They stated that “the mere number of party mentions accurately
reflects the election result”. Bermingham et al. [135] correctly predicted the 2011 Irish
General Elections also using Twitter data. Gayo-Avello et al. [145] also tested the
share of volume as predictor in the 2010 US Senate special election in Massachusetts.
On the other hand, several other studies use sentiment as a polls result indicator.
Connor et al. [142] used a sentiment aggregate function to study the relationship between the sentiment extracted from Twitter messages and poll results. They defined the sentiment aggregate function as the ratio between the positive and negative messages referring to a specific political target. They used the sentiment aggregate function as a predictive feature in a regression model, achieving a correlation of 0.80 between the
results and the poll results, capturing the important large-scale trends. Bermingham
et al. [135] also included in their regression model sentiment features. Bermingham
et al. introduced two novel sentiment aggregate functions. For inter-party sentiment,
they modified the share of volume function to represent the share of positive and
negative volume. For intra-party sentiment, they used a log ratio between the number
of positive and negative mentions of a given party. Moreover, they concluded that the
inclusion of sentiment features augmented the effectiveness of their model. Gayo-Avello
et al. [145] introduced a different aggregate function. In a two-party race, all negative
messages on party c2 are interpreted as positive on party c1, and vice-versa.
In summary, suggestions for potentially independent, i.e. predictive, metrics appear in a wide variety of forms: the mention share that a party received
within all party mentions during a given time-span [135, 152–155, 140], the mention
share of political candidates [156–159, 153], the share of positive mentions a party
received [135, 160], the positive mention share of candidates [142, 161, 158], the share of
users commenting on a candidate or party [155], the share of mentions for a candidate
followed by a word indicative of electoral success or failure [162], the relative increase
of positive mentions of a candidate [163] or simply a collection of various potentially
politically relevant words identified by their statistical relationship with polls or political
actors in the past [164–167].
Suggestions for the dependent variable, metrics of political success, show a similar
variety. They include the vote share that a party received on election day [135, 163, 152–
154], the vote share of a party adjusted to include votes only for parties included in
the analysis [140], the vote share of candidates on election day [157–159, 162, 153],
campaign tracking polls [164, 165, 158, 166, 142, 161, 160], politicians’ job approval
ratings [167, 142], and the number of seats in parliament that a party received after
the election [155].
Chapter 3
Entity Retrieval for Online Reputation Monitoring
We start by presenting a formal definition of E-R queries and how we can model the E-R retrieval problem from a probabilistic perspective. We assume that a E-R query can be formulated as a sequence of individual sub-queries, each targeting a specific entity or relationship. If we create specific representations for entities (e.g. context terms) as well as for pairs of entities, i.e. relationships, then we can create a graph of probabilistic dependencies between sub-queries and entity/relationship representations.
We show that these dependencies can be depicted in a probabilistic graphical model,
i.e. a Bayesian network. Therefore, answering an E-R query can be reduced to a
computation of factorized conditional probabilities over a graph of sub-queries and
entity/relationship documents.
However, it is not possible to compute these conditional probabilities directly from
raw documents in a collection. As with traditional entity retrieval, documents serve as proxies for entity (and relationship) representations. It is necessary to fuse
information spread across multiple documents. We propose two design patterns inspired
from Model 1 and Model 2 of Balog et al. [46] to create entity/relationship centric and
document centric representations.
The first design pattern - Early Fusion - consists in aggregating context terms
of entity and relationship occurrences to create two dedicated indexes, the entity
index and the relationship index. Then it is possible to use any retrieval method
to compute the relevance score of entity and relationship documents given the E-R
sub-queries. The second design pattern - Late Fusion - can be applied on top of a
standard document index alongside a set of entity occurrences in each document. First
we compute the relevance score of documents given a E-R sub-query, then based on
the entity occurrences of the top k results we compute individual entity or relationship
scores. Once again any retrieval method can be used to score documents.
When combined with traditional retrieval methods (e.g. Language Models or BM25)
these design patterns can be used to create unsupervised baselines for E-R retrieval.
Finally, we follow a recent research line in entity retrieval [53, 69, 67] which exploits
term dependencies using the Markov Random Field (MRF) framework for retrieval [63].
We introduce the Entity-Relationship Dependence Model (ERDM), a novel supervised
Early Fusion-based model for E-R retrieval that creates a MRF to compute term
dependencies of E-R queries and entity/relationship documents.
3.1 Entity-Relationship Retrieval
E-R retrieval is a complex case of entity retrieval. E-R queries expect tuples of related
entities as results instead of a single ranked list of entities as it happens with general
entity queries. For instance, the E-R query “Ethnic groups by country" is expecting
a ranked list of tuples <ethnic group, country> as results. The goal is to search for
multiple unknown entities and relationships connecting them.
Table 3.1 E-R retrieval definitions.

Q: E-R query (e.g. “congresswoman hits back at US president”).
QEi: Entity sub-query in Q (e.g. “congresswoman”).
QRi−1,i: Relationship sub-query in Q (e.g. “hits back at”).
DEi: Term-based representation of an entity (e.g. <Frederica Wilson> = {representative, congresswoman}). We use the terminology representation and document interchangeably.
DRi−1,i: Term-based representation of a relationship (e.g. <Frederica Wilson, Donald Trump> = {hits, back}). We use the terminology representation and document interchangeably.
QE: The set of entity sub-queries in a E-R query (e.g. {“congresswoman”, “US president”}).
QR: The set of relationship sub-queries in a E-R query.
DE: The set of entity documents to be retrieved by a E-R query.
DR: The set of relationship documents to be retrieved by a E-R query.
|Q|: E-R query length, corresponding to the number of entity and relationship sub-queries.
TE: The entity tuple to be retrieved (e.g. <Frederica Wilson, Donald Trump>).
In this section, we present a definition of E-R queries and a probabilistic formulation
of the E-R retrieval problem from an Information Retrieval perspective. Table 3.1
presents several definitions that will be used throughout this chapter.
3.1.1 E-R Queries
E-R queries aim to obtain an ordered list of entity tuples TE = <E1 , E2 , ..., En > as a
result. Contrary to entity search queries where the expected result is a ranked list of
single entities, results of E-R queries should contain two or more entities. For instance,
the complex information need “Silicon Valley companies founded by Harvard graduates”
expects entity-pairs (2-tuples) <company, founder> as results. In turn, “European
football clubs in which a Brazilian player won a trophy" expects triples (3-tuples) <club,
player, trophy> as results.
Each pair of entities Ei−1 , Ei in an entity tuple is connected with a relationship
R(Ei−1 , Ei ). A complex information need can be expressed in a relational format,
which is decomposed into a set of sub-queries that specify types of entities E and types
of relationships R(Ei−1 , Ei ) between entities.
For each relationship sub-query there must be two sub-queries, one for each of the
entities involved in the relationship. Thus, a E-R query Q that expects 2-tuples is mapped into a triple of sub-queries Q = {QE1 , QR1,2 , QE2 }, where QE1 and QE2 are the entity attributes queried for E1 and E2 respectively, and QR1,2 is a relationship attribute describing R(E1 , E2 ).
If we consider a E-R query as a chain of entity and relationship sub-queries
Q = {QE1 , QR1,2 , QE2 , ..., QEn−1 ,QRn−1,n , QEn } and we define the length of a E-R
query |Q| as the number of sub-queries, then the number of entity sub-queries must
be (|Q|+1)/2 and the number of relationship sub-queries equal to (|Q|−1)/2. Consequently, the size of each entity tuple TE to be retrieved must be equal to the number of entity
sub-queries. For instance, the E-R query “soccer players who dated a top model” with
answers such as <Cristiano Ronaldo, Irina Shayk> is represented as three sub-queries
QE1 = {soccer players}, QR1,2 = {dated}, QE2 = {top model}.
Automatic mapping of terms from a E-R query Q to sub-queries QEi or QRi−1,i is
out of the scope of this work and can be seen as a problem of query understanding
[168, 54, 169]. We assume that the information needs are decomposed into constituent
entity and relationship sub-queries using Natural Language Processing techniques or
by user input through an interface that enforces the structure Q = {QE1 , QR1,2 , QE2 ,
..., QEn−1 ,QRn−1,n , QEn }.
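A minimal Python sketch of the enforced structure is shown below, assuming the decomposition is already available; the dataclass and helper names are illustrative, not part of the thesis implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SubQuery:
    kind: str   # "entity" or "relationship"
    terms: str

def check_structure(q: List[SubQuery]) -> int:
    # Enforce Q = {Q^E1, Q^R1,2, Q^E2, ..., Q^En-1, Q^Rn-1,n, Q^En}: sub-queries
    # alternate entity/relationship, start and end with an entity, and |Q| is odd.
    assert len(q) % 2 == 1, "|Q| must be odd"
    for i, sq in enumerate(q):
        assert sq.kind == ("entity" if i % 2 == 0 else "relationship")
    return len(q)   # the E-R query length |Q|

# "soccer players who dated a top model" decomposes into three sub-queries.
q = [SubQuery("entity", "soccer players"),
     SubQuery("relationship", "dated"),
     SubQuery("entity", "top model")]
print(check_structure(q))   # |Q| = 3
```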
3.1.2 Modeling E-R Retrieval
Our approach to E-R retrieval assumes that we have a raw document collection (e.g.
news articles) and each document Dj is associated with one or more entities Ei . In other
words, documents contain mentions of one or more entities that can be related to each other. Since our goal is to retrieve tuples of related entities given a E-R query that
expresses entity attributes and relationship attributes, we need to create term-based
representations for both entities and relationships. We denote a representation of an
entity Ei as DEi .
In E-R retrieval we are interested in retrieving tuples of entities TE = <E1 , E2 , ..., En >
as a result. The number of entities in each tuple can be two, three or more depending
on the structure of the particular E-R query. When a E-R query aims to get tuples
of more than two entities, we assume it is possible to combine tuples of length two.
For instance, we can associate two tuples of length two that share the same entity to
retrieve a tuple of length three. Therefore we create representations of relationships as
pairs of entities. We denote a representation of a relationship R(Ei−1 , Ei ) as DRi−1,i .
Considering the example query “Which spiritual leader won the same award as a
US vice president?” it can be formulated in the relational format as QE1 = {spiritual
leader}, QR1,2 = {won}, QE2 = {award}, QR2,3 = {won}, QE3 = {US vice president}.
Associating the tuples of length two <Dalai Lama, Nobel Peace Prize> and <Nobel
Peace Prize, Al Gore> would result in the expected 3-tuple <Dalai Lama, Nobel Peace
Prize, Al Gore>.
For the sake of clarity we now consider an example E-R query with three sub-queries
(|Q| = 3). This query aims to retrieve a tuple of length two, i.e. a pair of entities
connected by a relationship. Based on the definition of a E-R query, each entity in the
resulting tuple must be relevant to the corresponding entity sub-queries QE . Moreover,
the relationship between the two entities must also be relevant to the relationship
sub-queries QR . Instead of calculating a simple posterior P (D|Q) as with traditional
information retrieval, in E-R retrieval the objective is to rank tuples based on a joint
posterior of multiple entity and relationship representations given a E-R query, such as
P (DE2 , DE1 , DR1,2 |Q) when |Q| = 3.
E-R queries can be seen as chains of interleaved entity and relationship sub-queries. We take advantage of the chain rule to formulate the joint probability
P (DE2 , DE1 , DR1,2 , Q) as a product of conditional probabilities. Formally, we want
to rank entity and relationship candidates in descending order of the joint posterior
P (DE2 , DE1 , DR1,2 |Q) as:
P(D^{E_2}, D^{E_1}, D^{R_{1,2}} \mid Q) \stackrel{rank}{=} \frac{P(D^{E_2}, D^{E_1}, D^{R_{1,2}}, Q)}{P(Q)}
\stackrel{rank}{=} \frac{P(D^{E_2} \mid D^{E_1}, D^{R_{1,2}}, Q)\, P(D^{E_1} \mid D^{R_{1,2}}, Q)\, P(D^{R_{1,2}} \mid Q)\, P(Q)}{P(Q)}
\stackrel{rank}{=} P(D^{E_2} \mid D^{R_{1,2}}, Q)\, P(D^{E_1} \mid D^{R_{1,2}}, Q)\, P(D^{R_{1,2}} \mid Q) \qquad (3.1)
\stackrel{rank}{\propto} P(D^{E_2} \mid D^{R_{1,2}}, Q^{E_2})\, P(D^{E_1} \mid D^{R_{1,2}}, Q^{E_1})\, P(D^{R_{1,2}} \mid Q^{R_{1,2}}) \qquad (3.2)
We consider conditional independence between entity representations within the
joint posterior, i.e., the probability of a given entity representation DEi being relevant
given a E-R query is independent of knowing that entity DEi+1 is relevant as well. As an
example, consider the query “action movies starring a British actor”. Retrieving entity
representations for “action movies” is independent of knowing that <Tom Hardy> is
relevant to the sub-query “British actor”. However, it is not independent of knowing
the set of relevant relationships for sub-query “starring”. If a given action movie is not
in the set of relevant entity-pairs for “starring” it does not make sense to consider it as
relevant. Consequently, P (DE2 |DE1 , DR1,2 , Q) = P (DE2 |DR1,2 , Q).
Since E-R queries can be decomposed into constituent entity and relationship sub-queries, ranking candidate tuples using the joint posterior P(DE2 , DE1 , DR1,2 |Q) is rank-proportional to the product of conditional probabilities on the corresponding
entity and relationship sub-queries QE2 , QE1 and QR1,2 .
We now consider a longer E-R query aiming to retrieve a triple of connected entities.
This query has three entity sub-queries and two relationship sub-queries, thus |Q| = 5.
As we previously explained, when there is more than one relationship sub-query we need to join entity-pairs relevant to each relationship sub-query that have one entity in common. From a probabilistic point of view this can be seen as conditional dependence on the entity-pairs retrieved by the previous relationship sub-query,
i.e. P (DR2,3 |DR1,2 , Q) ̸= P (DR2,3 |Q). To rank entity and relationship candidates we
need to calculate the following joint posterior:
P(D^{E_3}, D^{E_2}, D^{E_1}, D^{R_{2,3}}, D^{R_{1,2}} \mid Q) \stackrel{rank}{=} P(D^{E_3} \mid D^{E_2}, D^{E_1}, D^{R_{2,3}}, D^{R_{1,2}}, Q)\, P(D^{E_2} \mid D^{E_1}, D^{R_{2,3}}, D^{R_{1,2}}, Q)\, P(D^{E_1} \mid D^{R_{2,3}}, D^{R_{1,2}}, Q)\, P(D^{R_{2,3}} \mid D^{R_{1,2}}, Q)\, P(D^{R_{1,2}} \mid Q)
\stackrel{rank}{=} P(D^{E_3} \mid D^{R_{2,3}}, Q)\, P(D^{E_2} \mid D^{R_{2,3}}, D^{R_{1,2}}, Q)\, P(D^{E_1} \mid D^{R_{1,2}}, Q)\, P(D^{R_{2,3}} \mid D^{R_{1,2}}, Q)\, P(D^{R_{1,2}} \mid Q) \qquad (3.3)
\stackrel{rank}{\propto} P(D^{E_3} \mid D^{R_{2,3}}, Q^{E_3})\, P(D^{E_2} \mid D^{R_{2,3}}, D^{R_{1,2}}, Q^{E_2})\, P(D^{E_1} \mid D^{R_{1,2}}, Q^{E_1})\, P(D^{R_{2,3}} \mid D^{R_{1,2}}, Q^{R_{2,3}})\, P(D^{R_{1,2}} \mid Q^{R_{1,2}}) \qquad (3.4)
When compared to the previous example, the joint posterior for |Q| = 5 shows that entity candidates for DE2 are conditionally dependent on both DR2,3 and DR1,2. In other words, entity candidates for DE2 must belong to the entity-pair candidates for both relationship representations that are connected with E2, i.e. DR2,3 and DR1,2.
We are now able to make a generalization of E-R retrieval as a factorization of
conditional probabilities of a joint probability of entity representations DEi , relationship representations DRi−1,i , entity sub-queries QEi and relationship sub-queries
QRi−1,i. This set of random variables and their conditional dependencies can be easily represented in a probabilistic directed acyclic graph, i.e. a Bayesian network [170]. In Bayesian networks, nodes represent random variables while edges represent conditional dependencies. Nodes that point to a given node are considered its parents. Bayesian networks define the joint probability of a set of random variables as a factorization of the conditional probability of each random variable conditioned on its parents. Formally, $P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid pa_i)$, where $pa_i$ represents all parent nodes of $X_i$.
Figure 3.1 depicts the representation of E-R retrieval for different query lengths |Q|
using Bayesian networks. We easily conclude that the graphical representation helps establish a few guidelines for modeling E-R retrieval. First, each sub-query points to the respective document node. Second, relationship document nodes always point to the contiguous entity representations. Last, when there is more than one relationship sub-query, relationship documents also point to the subsequent relationship document.
Once we draw the graph structure for the number of sub-queries in Q we are able to
compute a product of conditional probabilities of each node given its parents. Adapting
Fig. 3.1 Bayesian networks for E-R Retrieval with queries of different lengths: (a) |Q| = 3, (b) |Q| = 5, (c) |Q| = 7.
the general joint probability formulation of Bayesian networks to E-R retrieval we come
up with the following generalization:
P(D^{E}, D^{R} \mid Q) \stackrel{rank}{=} \prod_{i=1}^{\frac{|Q|+1}{2}} P(D^{E_i} \mid D^{R_{i-1,i}}, D^{R_{i,i+1}}, Q^{E_i}) \prod_{i=1}^{\frac{|Q|-1}{2}} P(D^{R_{i,i+1}} \mid D^{R_{i-1,i}}, Q^{R_{i,i+1}}) \qquad (3.5)
We denote D^R as the set of all candidate relationship documents in the graph and D^E as the set of all candidate entity documents in the graph. In Information Retrieval it is often convenient to work in log-space, as it does not affect ranking and transforms the product of conditional probabilities into a summation, as follows:

P(D^{E}, D^{R} \mid Q) \stackrel{rank}{=} \log P(D^{E}, D^{R} \mid Q) \qquad (3.6)
\stackrel{rank}{=} \sum_{i=1}^{\frac{|Q|+1}{2}} \log P(D^{E_i} \mid D^{R_{i-1,i}}, D^{R_{i,i+1}}, Q^{E_i}) + \sum_{i=1}^{\frac{|Q|-1}{2}} \log P(D^{R_{i,i+1}} \mid D^{R_{i-1,i}}, Q^{R_{i,i+1}}) \qquad (3.7)
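In code, once the individual log conditional probabilities are available, the generalization in Equations 3.6 and 3.7 reduces to a structured sum. The small sketch below only checks the sub-query counts implied by |Q| and adds the scores; the function name and input layout are assumptions made for illustration.

```python
def er_log_score(entity_log_probs, rel_log_probs):
    # entity_log_probs[i] stands for log P(D^Ei | D^Ri-1,i, D^Ri,i+1, Q^Ei) and
    # rel_log_probs[i] for log P(D^Ri,i+1 | D^Ri-1,i, Q^Ri,i+1), per Eq. 3.7.
    q_len = len(entity_log_probs) + len(rel_log_probs)
    assert len(entity_log_probs) == (q_len + 1) // 2   # (|Q|+1)/2 entity sub-queries
    assert len(rel_log_probs) == (q_len - 1) // 2      # (|Q|-1)/2 relationship sub-queries
    return sum(entity_log_probs) + sum(rel_log_probs)

print(er_log_score([-1.7, -1.6], [-0.4]))   # a candidate tuple for |Q| = 3
```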
We now present two design patterns to compute each conditional probability for every entity and relationship candidate document.
3.2 Design Patterns for Entity-Relationship Retrieval
Traditional ad-hoc document retrieval approaches create direct term-based representations of raw documents. A retrieval model (e.g. Language Models) is then used
to match the information need, expressed as a keyword query, against those representations. However, E-R retrieval requires collecting evidence for both entities and
relationships that can be spread across multiple documents. It is not possible to create
direct term-based representations. Raw documents serve as proxies to connect queries with entities and relationships.
Abstractly speaking, entity retrieval can be seen as a problem of object retrieval in
which the search process is about fusing information about a given object, such as in
the case of verticals (e.g. Google Finance). Recently, Zhang and Balog [171] presented
two design patterns for fusion-based object retrieval.
The first design pattern – Early Fusion – is an object-centric approach where a term-based representation of objects is created earlier in the retrieval process. First, it creates
meta-documents by aggregating term counts across the documents associated with
the objects. Later, it matches queries against these meta-documents using standard
retrieval methods.
The second design pattern - Late Fusion - is a document-centric approach where
relevant documents to the query are retrieved first and then later in the retrieval process,
it ranks objects associated with top documents. These design patterns represent a
generalization of Balog’s Model 1 and Model 2 for expertise retrieval [46].
In essence, E-R retrieval is an extension, or a more complex case, of object-retrieval
where besides ranking objects we need to rank tuples of objects that satisfy the
relationship expressed in the E-R query. This requires creating representations of both
entities and relationships by fusing information spread across multiple raw documents.
We propose novel fusion-based design patterns for E-R retrieval that are inspired from
the design patterns presented by Zhang and Balog [171] for single object-retrieval.
We extend those design patterns to accommodate the specificities of E-R retrieval.
We hypothesize that it should be possible to generalize the term dependence models
to represent entity-relationships and achieve effective E-R retrieval without entity or
relationship type restrictions (e.g. categories) as it happens with the Semantic Web
based approaches.
3.2.1 Early Fusion
The Early Fusion strategy presented by Zhang and Balog [171] consists in creating
a term-based representation for each object under retrieval, i.e., a meta-document
containing all terms in the proximity of every object mention across a document
collection. As described in the previous section, E-R queries can be formulated as a sequence of multiple entity queries QE and relationship queries QR. In an Early Fusion approach, each of these queries should be matched against a previously created term-based representation. Since there are two types of queries, we propose to create two types of term-based representations, one for entities and another for relationships.
Our Early Fusion design pattern is similar to Model 1 of Balog et al. [46]. It can be thought of as creating two types of meta-documents, DE and DR. A meta-document DEi is created by aggregating the context terms of the occurrences of Ei across the raw document collection. On the other hand, for each pair of entities Ei−1 and Ei that co-occur close together across the raw document collection, we aggregate the context terms that describe the relationship to create a meta-document DRi−1,i.
In our approach we focus on sentence-level information about entities and relationships, although the design pattern can be applied to more complex segmentations of
text (e.g. dependency parsing). We rely on Entity Linking methods for disambiguating
and assigning unique identifiers to entity mentions on raw documents D. We collect
entity contexts across the raw document collection and index them in the entity index.
The same is done by collecting and indexing entity pair contexts in the relationship
index.
We define the (pseudo) frequency of a term t for an entity meta-document DEi as
follows:
f(t, D^{E_i}) = \sum_{j=1}^{n} f(t, E_i, D_j)\, w(E_i, D_j) \qquad (3.8)
where n is the total number of raw documents in the collection, f (t, Ei , Dj ) is the
term frequency in the context of the entity Ei in a raw document Dj . w(Ei , Dj ) is the
entity-document association weight that corresponds to the weight of the document
Dj in the mentions of the entity Ei across the raw document collection. Similarly,
the term (pseudo) frequency of a term t for a relationship meta-document DRi−1,i is
defined as follows:
f(t, D^{R_{i-1,i}}) = \sum_{j=1}^{n} f(t, R_{i-1,i}, D_j)\, w(R_{i-1,i}, D_j) \qquad (3.9)
where f(t, Ri−1,i , Dj) is the term frequency in the context of the pair of entity mentions corresponding to the relationship Ri−1,i in a raw document Dj and w(Ri−1,i , Dj) is the relationship-document association weight. In this work we use binary association weights indicating the presence/absence of an entity mention in a raw document, as
well as for a relationship. However, other weight methods can be used.
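The sketch below shows one possible way to build the two indexes with binary association weights, using sentences that have already been processed by an Entity Linking step; the input layout and helper names are illustrative assumptions, not the thesis implementation.

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_indexes(sentences):
    # Each item of `sentences` is (context_terms, linked_entities). Aggregating
    # term counts per entity and per entity pair realizes Eqs. 3.8 and 3.9 with
    # binary entity-document and relationship-document association weights.
    entity_index = defaultdict(Counter)        # f(t, D^Ei)
    relationship_index = defaultdict(Counter)  # f(t, D^Ri-1,i)
    for terms, entities in sentences:
        counts = Counter(terms)
        for e in set(entities):                         # w(Ei, Dj) = 1
            entity_index[e].update(counts)
        for pair in combinations(sorted(set(entities)), 2):
            relationship_index[pair].update(counts)     # w(Ri-1,i, Dj) = 1
    return entity_index, relationship_index

sentences = [(["soccer", "player", "dated", "top", "model"],
              ["Cristiano Ronaldo", "Irina Shayk"])]
entity_index, relationship_index = build_indexes(sentences)
print(entity_index["Cristiano Ronaldo"]["soccer"])                        # 1
print(relationship_index[("Cristiano Ronaldo", "Irina Shayk")]["dated"])  # 1
```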
The relevance score for an entity tuple TE can then be calculated using the posterior
P(DE , DR |Q) defined in the previous section (Equation 3.6). We calculate the individual
conditional probabilities as a product of a retrieval score with an association weight.
Formally we consider:
\log P(D^{E_i} \mid D^{R_{i-1,i}}, D^{R_{i,i+1}}, Q^{E_i}) = score(D^{E_i}, Q^{E_i})\, w(E_i, R_{i-1,i}, R_{i,i+1}) \qquad (3.10)
\log P(D^{R_{i,i+1}} \mid D^{R_{i-1,i}}, Q^{R_{i,i+1}}) = score(D^{R_{i,i+1}}, Q^{R_{i,i+1}})\, w(R_{i,i+1}, R_{i-1,i}) \qquad (3.11)
where score(DRi,i+1 , QRi,i+1 ) represents the retrieval score resulting of the match of
the query terms of a relationship sub-query QRi,i+1 and a relationship meta-document
DRi,i+1. The same applies to the retrieval score score(DEi , QEi ), which corresponds to the result of the match of an entity sub-query QEi with an entity meta-document DEi .
For computing both score(DRi,i+1 , QRi,i+1 ) and score(DEi , QEi ) any retrieval model
can be used. Different scoring functions will be introduced below.
We use a binary association weight for w(Ei , Ri−1,i , Ri,i+1 ) which represents the
presence of a relevant entity Ei to a sub-query QEi in its contiguous relationships in
the Bayesian network, i.e. Ri−1,i and Ri,i+1 which must be relevant to the sub-queries
QRi−1,i and QRi,i+1 . This entity-relationship association weight is the building block
that guarantees that two entities relevant to sub-queries QE that are also part of a
relationship relevant to a sub-query QR will be ranked higher than tuples where just
one or none of the entities are relevant to the entity sub-queries QE . On the other hand,
the entity-relationship association weight w(Ri,i+1 , Ri−1,i ) guarantees that consecutive
relationships share one entity between them in order to create triples or 4-tuples of
entities for longer E-R queries (|Q| > 3).
The relevance score of an entity tuple TE given a query Q is calculated by summing
individual relationship and entity relevance scores for each QRi−1,i and QEi in Q. We
define the score for a tuple TE given a query Q as follows:
P(D^{E}, D^{R} \mid Q) \stackrel{rank}{=} \sum_{i=1}^{\frac{|Q|+1}{2}} score(D^{E_i}, Q^{E_i})\, w(E_i, R_{i-1,i}, R_{i,i+1}) \qquad (3.12)
+ \sum_{i=1}^{\frac{|Q|-1}{2}} score(D^{R_{i,i+1}}, Q^{R_{i,i+1}})\, w(R_{i,i+1}, R_{i-1,i}) \qquad (3.13)
Considering Dirichlet smoothing unigram Language Models (LM) the constituent
retrieval scores can be computed as follows:
score_{LM}(D^{R_{i,i+1}}, Q^{R_{i,i+1}}) = \sum_{t \in D^{R_{i,i+1}} \cap Q^{R_{i,i+1}}} \log \frac{f(t, D^{R_{i,i+1}}) + \frac{f(t, C^{R})}{|C^{R}|}\mu^{R}}{|D^{R_{i,i+1}}| + \mu^{R}} \qquad (3.14)

score_{LM}(D^{E_i}, Q^{E_i}) = \sum_{t \in D^{E_i} \cap Q^{E_i}} \log \frac{f(t, D^{E_i}) + \frac{f(t, C^{E})}{|C^{E}|}\mu^{E}}{|D^{E_i}| + \mu^{E}} \qquad (3.15)
where t is a term of a sub-query QEi or QRi,i+1 , f (t, DEi ) and f (t, DRi,i+1 ) are
the (pseudo) frequencies defined in equations 3.8 and 3.9. The collection frequencies
f (t, C E ), f (t, C R ) represent the frequency of the term t in either the entity index C E
or in the relationship index C R. |DEi | and |DRi,i+1 | represent the total number of terms in a meta-document, while |C R | and |C E | represent the total number of terms in a collection of meta-documents. Finally, µE and µR are the Dirichlet priors for smoothing, which generally correspond to the average document length in a collection.
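A direct Python transcription of Equations 3.14 and 3.15 could look as follows; it takes the pseudo frequencies and collection statistics as plain dictionaries and numbers, which is an assumption made here for illustration rather than the thesis implementation.

```python
import math

def score_lm(query_terms, meta_doc_tf, meta_doc_len, coll_tf, coll_len, mu):
    # Dirichlet-smoothed unigram LM score of an entity or relationship
    # meta-document for a sub-query (Eqs. 3.14 and 3.15). Only terms occurring
    # in both the sub-query and the meta-document contribute to the sum.
    score = 0.0
    for t in query_terms:
        if t not in meta_doc_tf:
            continue
        num = meta_doc_tf[t] + (coll_tf.get(t, 0) / coll_len) * mu
        score += math.log(num / (meta_doc_len + mu))
    return score
```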
3.2.2 Association Weights
Both Early Fusion and Late Fusion share three components: w(Ri,i+1 , Dj ), w(Ei , Dj )
and w(Ei , Ri,i+1 ). The first two represent document associations which determine
the weight a given raw document contributes to the relevance score of a particular
entity tuple TE . The last one is the entity-relationship association which indicates the
strength of the connection of a given entity Ei within a relationship Ri,i+1 .
In our work we only consider binary association weights but other methods could
be used. According to the binary method we define the weights as follows:
w(Ri,i+1 , Dj ) = 1 if R(Ei , Ei+1 ) ∈ Dj , 0 otherwise \qquad (3.16)
w(Ei , Dj ) = 1 if Ei ∈ Dj , 0 otherwise \qquad (3.17)
w(Ei , Ri−1,i , Ri,i+1 ) = 1 if Ei ∈ DRi−1,i and Ei ∈ DRi,i+1 , 0 otherwise \qquad (3.18)
w(Ri,i+1 , Ri−1,i ) = 1 if Ei ∈ DRi−1,i and Ei ∈ DRi,i+1 , 0 otherwise \qquad (3.19)
Under this approach the weight of a given association is independent of the number
of times an entity or a relationship occurs in a document. A more general approach
would be to assign real numbers to the association weights depending on the strength
of the association [8]. For instance, uniform weighting would be proportional to the
inverse of the number of documents where a given entity or relationship occurs. Another
option would be a TF-IDF approach.
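A minimal sketch of the binary weights in equations 3.16 to 3.19 is given below; a uniform or TF-IDF variant would replace the 0/1 return values with, for example, inverse document-frequency counts (names are illustrative).

```python
def w_rel_doc(rel, doc_rels):
    """Binary association between a relationship R(Ei, Ei+1) and a raw
    document Dj (eq. 3.16); doc_rels is the set of relationships found in Dj."""
    return 1.0 if rel in doc_rels else 0.0

def w_ent_doc(entity, doc_entities):
    """Binary association between an entity Ei and a raw document Dj (eq. 3.17)."""
    return 1.0 if entity in doc_entities else 0.0

def w_ent_rels(entity, rel_doc_prev, rel_doc_next):
    """Binary entity-relationship association (eqs. 3.18 and 3.19): the entity
    must occur in both contiguous relationship meta-documents."""
    return 1.0 if entity in rel_doc_prev and entity in rel_doc_next else 0.0
```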
3.2.3 Early Fusion Example
Let us consider an illustrative example of the Early Fusion design pattern for E-R
retrieval using unigram Language Models and the E-R query Q = {soccer players who
dated a top model}. This query can be decomposed into three sub-queries, QEi = {soccer
players}, QEi+1 = {top model} and QRi,i+1 = {dated}. The first two sub-queries
target the entity index and the last targets the relationship index. Table 3.2 presents
a toy entity index with three example entities for each of the two entity sub-queries,
including the term frequency f(t, D^{Ei}) of each sub-query term.
Table 3.2 Illustrative example of the entity index in Early Fusion.

Ei                     f(t, D^{Ei})              |D^{Ei}|
<Tom Brady>            soccer:0, player:600      3000
<Cristiano Ronaldo>    soccer:800, player:800    5000
<Lionel Messi>         soccer:700, player:700    4000
<Luís Figo>            soccer:200, player:200    800
<Gisele Bundchen>      top:400, model:400        3000
<Irina Shayik>         top:300, model:300        2000
<Helen Svedin>         top:150, model:150        600
...                    ...                       ...
Considering the remaining variables required to calculate the scoreLM (DEi , QEi ):
|C E | = 100000
µE = 1500
f (soccer, C E ) = 3000
f (player, C E ) = 8000
f (top, C E ) = 8000
f (model, C E ) = 4000
We calculate the scoreLM (DEi , QEi ) for the respective entities and sub-queries. For
the first entity query – “soccer players” – the ranked list of relevant entities and the
respective LM score would be the following:
1. <Lionel Messi>: -1.6947
2. <Cristiano Ronaldo>: -1.7351
3. <Luís Figo>: -1.8291
4. <Tom Brady>: -2.7958
For the second entity query – “top models”:
1. <Gisele Bundchen>: -1.6295
2. <Irina Shayik>: -1.7093
3. <Helen Svedin>: -1.9698
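For reference, the scores for the first sub-query can be reproduced from Table 3.2 with a few lines of Python, assuming base-10 logarithms in the LM formula (a small illustrative check):

```python
import math

mu, C_len = 1500, 100000
cf = {"soccer": 3000, "player": 8000}            # collection frequencies f(t, C^E)
entities = {                                     # f(t, D^Ei) and |D^Ei| from Table 3.2
    "<Tom Brady>":         ({"soccer": 0,   "player": 600}, 3000),
    "<Cristiano Ronaldo>": ({"soccer": 800, "player": 800}, 5000),
    "<Lionel Messi>":      ({"soccer": 700, "player": 700}, 4000),
    "<Luís Figo>":         ({"soccer": 200, "player": 200}, 800),
}
for name, (tf, dlen) in entities.items():
    # zero-frequency terms are still smoothed by the collection model here
    s = sum(math.log10((tf[t] + cf[t] / C_len * mu) / (dlen + mu)) for t in cf)
    print(name, round(s, 4))   # e.g. <Lionel Messi> -1.6947
```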
Table 3.3 shows 3 relationships, i.e. entity pairs, relevant to the sub-query “dated”
and the respective term frequency f (t, DRi,i+1 ).
Considering the remaining variables required to calculate the scoreLM (DRi,i+1 , QRi,i+1 ):
|C R | = 20000
µR = 500
f (dated, C R ) = 5000
Table 3.3 Illustrative example of the relationship index in Early Fusion.

Ri,i+1                               f(t, D^{Ri,i+1})    |D^{Ri,i+1}|
<Gisele Bundchen, Tom Brady>         dated:500           800
<Irina Shayik, Cristiano Ronaldo>    dated:300           600
<Helen Svedin, Luís Figo>            dated:100           200
...                                  ...                 ...
We calculate the scoreLM(D^{Ri,i+1}, Q^{Ri,i+1}) for each relationship and the
sub-query and obtain the following ranked list:
1. <Gisele Bundchen, Tom Brady>: -0.3180
2. <Irina Shayik, Cristiano Ronaldo>: -0.4130
3. <Helen Svedin, Luís Figo>: -0.4929
We can now sum up the individual scores for each sub-query and calculate the final
score of the Early Fusion design pattern, score(TE, Q), using equation 3.12. The
final ranked list of tuples is the following:
1. <Irina Shayik, Cristiano Ronaldo>: -3.8575
2. <Helen Svedin, Luís Figo>: -4.2919
3. <Gisele Bundchen, Tom Brady>: -4.6977
The entity tuple <Irina Shayik, Cristiano Ronaldo> is the most relevant to the
query “soccer players who dated a top model”. Although <Gisele Bundchen, Tom
Brady> has higher individual scores in two sub-queries (“top model” and “dated”) it
ranks last due to the poor relevance of Tom Brady to the sub-query “soccer player”.
The entity <Lionel Messi> is the most relevant entity to the sub-query “soccer player”
but it is not relevant to the relationship sub-query, therefore it is excluded from the
final ranked list of entity tuples.
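A minimal sketch of how the sub-query scores of a 2-tuple could be combined (equation 3.12), with the binary entity-relationship weights folded in as a multiplier, is shown below; names and values are illustrative.

```python
def early_fusion_score(tuple_entities, rel_score, entity_scores, ent_rel_weight=1.0):
    """Combine sub-query scores into a tuple score (a sketch of eq. 3.12 for
    2-tuples); rel_score is score(D^R, Q^R) and entity_scores maps each entity
    of the tuple to its score(D^E, Q^E)."""
    return rel_score + sum(entity_scores[e] * ent_rel_weight for e in tuple_entities)

# Toy usage for the tuple <Irina Shayik, Cristiano Ronaldo>:
# early_fusion_score(("<Irina Shayik>", "<Cristiano Ronaldo>"), -0.4130,
#                    {"<Irina Shayik>": -1.7093, "<Cristiano Ronaldo>": -1.7351})
# gives approximately -3.857
```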
3.2.4 Late Fusion
The Late Fusion design pattern presented by Zhang and Balog [171] is a document-centric
strategy, i.e. we first query raw individual documents and then aggregate
the associated objects with the relevant documents. Instead of creating term-based
representations of entities and relationships (pairs of entities), in late fusion we use the
raw documents as hidden variables, separating the E-R query from the relevant entity
tuples to be retrieved.
Our vision of ORM implies processing raw documents to detect entity occurrences
and extract sentence-level information that will be used in downstream Entity Retrieval
and Text Mining tasks. Therefore, we are not interested in applying a Late Fusion
strategy in this work. However, we believe it makes sense to present a theoretical
formulation of a Late Fusion design pattern for E-R retrieval. We leave practical
experiments with Late Fusion for future work in the context of generic E-R retrieval.
The process of retrieving entity tuples using our Late Fusion strategy consists of
processing each sub-query independently, as in the Early Fusion strategy, but in this
case we use a single index comprising a term-based representation of the collection
of raw documents. A retrieval model is used to calculate a relevance score between each
individual raw document and a given sub-query. Once we have the relevant documents,
we use entity linking to extract the entities that are mentioned in each relevant raw
document. Following this strategy, we calculate aggregated counts of entity occurrences
weighted by the relevance scores of the individual raw documents. At the
the entity tuples.
Formally, we define the relevance score of an entity tuple TE given a query Q as
follows:
P(D^E, D^R | Q) \stackrel{rank}{=} \sum_{i=1}^{(|Q|+1)/2} \sum_{j=1}^{n} score(D_j, Q^{E_i}) \, w(E_i, D_j) \, w(E_i, R_{i-1,i}, R_{i,i+1})   (3.20)

                 + \sum_{i=1}^{(|Q|-1)/2} \sum_{j=1}^{n} score(D_j, Q^{R_{i,i+1}}) \, w(R_{i,i+1}, D_j) \, w(R_{i,i+1}, R_{i-1,i})   (3.21)
where score(Dj, QRi,i+1) represents the retrieval score resulting from matching the
query terms of a relationship sub-query QRi,i+1 against a raw document Dj. The same
applies to the retrieval score score(Dj, QEi), which corresponds to the result of
matching an entity sub-query QEi with a raw document Dj. The weights w(Ri,i+1, Dj)
and w(Ei, Dj) represent association weights between relationships and raw documents,
and entities and raw documents, respectively. We use binary association weights in
this work but other weights can be used. We also use a binary association weight
for w(Ei, Ri−1,i, Ri,i+1) and w(Ri,i+1, Ri−1,i), which represent the entity-relationship
association weights, similarly to the Early Fusion case.
For computing both score(Dj , QRi,i+1 ) and score(Dj , QEi ) any retrieval model can
be used. Considering BM25 the scores can be computed as follows:
score_{BM25}(D_j, Q^{R_{i,i+1}}) = \sum_{t \in D_j \cap Q^{R_{i,i+1}}} \log \frac{N - n(t) + 0.5}{n(t) + 0.5} \cdot \frac{f(t, D_j)(K_1 + 1)}{f(t, D_j) + K_1 \left(1 - b + b \frac{|D_j|}{avg(|D|)}\right)}   (3.22)

score_{BM25}(D_j, Q^{E_i}) = \sum_{t \in D_j \cap Q^{E_i}} \log \frac{N - n(t) + 0.5}{n(t) + 0.5} \cdot \frac{f(t, D_j)(K_1 + 1)}{f(t, D_j) + K_1 \left(1 - b + b \frac{|D_j|}{avg(|D|)}\right)}   (3.23)
where t is a term of a sub-query Q^{Ei} or Q^{Ri,i+1} and f(t, Dj) is the query term
frequency in a raw document Dj. The inverse document frequency, IDF(t), is computed
as log((N − n(t) + 0.5)/(n(t) + 0.5)), with N the number of documents in the collection
and n(t) the number of documents where the term occurs. |Dj| is the total number of
terms in a raw document Dj and avg(|D|) is the average document length. K1 and b are
free parameters, usually chosen as 1.2 and 0.75 in the absence of specific optimization.
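A corresponding sketch of the BM25 scoring could look as follows (names are illustrative; the separation into an IDF factor and a term-frequency normalization follows the standard BM25 form used above):

```python
import math

def score_bm25(query_terms, doc_tf, doc_len, df, n_docs, avg_len, k1=1.2, b=0.75):
    """BM25 score of a raw document for a sub-query (a sketch of eqs. 3.22/3.23);
    df maps terms to their document frequencies n(t)."""
    score = 0.0
    for t in query_terms:
        f = doc_tf.get(t, 0)
        if f == 0:
            continue
        idf = math.log((n_docs - df.get(t, 0) + 0.5) / (df.get(t, 0) + 0.5))
        norm = f * (k1 + 1) / (f + k1 * (1 - b + b * doc_len / avg_len))
        score += idf * norm
    return score
```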
3.2.5 Late Fusion Example
Considering the same toy example query introduced in the previous sub-section, we
now have a single index, the document index, as illustrated in Table 3.4. The remaining
parameters required for calculating scoreBM25(Dj, QEi) and scoreBM25(Dj, QRi,i+1)
are the following:
• N=2000
• n(soccer)=100
• n(player)=130
• n(dated)=60
• n(top)=250
• n(model)=80
• avg(|D|)=120
Table 3.4 Illustrative example of the document index in Late Fusion.

Dj        f(t, Dj)                                        |Dj|   Ei
docid-1   soccer:10, player:10                            200    <Cristiano Ronaldo>, <Lionel Messi>
docid-2   soccer:5, player:5                              150    <Cristiano Ronaldo>
docid-3   soccer:5, player:5                              100    <Luís Figo>
docid-4   top:4, model:4                                  150    <Gisele Bundchen>
docid-5   dated:5                                         80     <Gisele Bundchen>, <Tom Brady>
docid-6   top:6, model:6                                  100    <Irina Shayik>
docid-7   model:4, dated:2, player:2                      100    <Gisele Bundchen>, <Adriana Lima>, <Tom Brady>
docid-8   dated:3                                         120    <Irina Shayik>, <Cristiano Ronaldo>
docid-9   top:2, model:2, dated:2, soccer:2, player:2     150    <Luís Figo>, <Helen Svedin>
...       ...                                             ...    ...
For the first entity sub-query, “soccer players”, the relevant documents ranked by
scoreBM25(Dj, QEi) are the following:
1. docid-1 (<Cristiano Ronaldo>, <Lionel Messi>): 4.7606
2. docid-3 (<Luís Figo>): 4.6426
3. docid-2 (<Cristiano Ronaldo>): 4.3716
4. docid-9 (<Luís Figo>, <Helen Svedin>): 3.2803
5. docid-7 (<Gisele Bundchen>, <Adriana Lima>, <Tom Brady>): 1.8418
For the second entity sub-query, “top model”:
1. docid-6 (<Irina Shayik>): 3.1618
2. docid-4 (<Gisele Bundchen>): 2.7393
3. docid-9 (<Luís Figo>, <Helen Svedin>): 2.1694
4. docid-7 (<Gisele Bundchen>, <Adriana Lima>, <Tom Brady>): 1.4714
For the relationship sub-query, “dated”:
1. docid-5 (<Gisele Bundchen>, <Tom Brady>): 2.8081
2. docid-8 (<Irina Shayik>, <Cristiano Ronaldo>): 2.3668
3. docid-7 (<Gisele Bundchen>, <Adriana Lima>, <Tom Brady>): 2.1728
4. docid-9 (<Luís Figo>, <Helen Svedin>): 1.9349
Since in Late Fusion there are no relationship meta-documents that could be used
directly as entity tuples, we need to extract the candidate tuples from the raw documents
retrieved using the relationship sub-query. When there are more than two entity
associations in a relevant document, we combine entities to create tuples. For instance,
docid-7 has three entity associations, therefore we extract three candidate tuples:
<Gisele Bundchen, Tom Brady>, <Gisele Bundchen, Adriana Lima> and <Adriana
Lima, Tom Brady>.
For each candidate tuple we sum up scoreBM25(Dj, QRi,i+1) w(Ri,i+1, Dj) over
every relevant document Dj for the relationship sub-query that is associated with the
entity tuple. The same applies to the individual entities of the candidate tuples that
are associated with relevant documents for each entity sub-query. For instance, for the
entity sub-query “soccer players” we sum score(Dj, QEi) w(Ei, Dj) w(Ei, Ri,i+1) over
the relevant documents that mention an entity belonging to a candidate tuple.
When both entities of the candidate tuple are mentioned in relevant documents for
both entity sub-queries, e.g. <Helen Svedin, Luís Figo>, we assign each entity to the
sub-query that maximizes the final score score(TE , Q), i.e., we use the scores of the
entity sub-query “soccer player” for <Luís Figo> and the entity sub-query “top model”
for <Helen Svedin>. The final ranked list of entity tuples is the following:
1. <Irina Shayik, Cristiano Ronaldo>: 14.0443
2. <Helen Svedin, Luís Figo>: 12.5970
3. <Gisele Bundchen, Tom Brady>: 10.9784
4. <Gisele Bundchen, Adriana Lima>: 9.3459
5. <Adriana Lima, Tom Brady>: 6.9245
Once again <Lionel Messi> is excluded from the final ranked list of entity tuples
because he is not associated with any document relevant to the relationship sub-query
“dated”. On the other hand, <Adriana Lima> is included in the final ranking although
it is not true that she has dated either <Tom Brady> or <Gisele Bundchen>. In
this example, the top three entity tuples are ranked in the same order as in the Early
Fusion strategy example.
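A simplified sketch of this Late Fusion aggregation for 2-tuples is shown below, assuming binary associations and two entity sub-queries; the assignment of each entity of a pair to the sub-query that maximizes the tuple score is handled by comparing the two possible assignments (all names are illustrative).

```python
from collections import defaultdict
from itertools import combinations

def aggregate(entity, results, doc_entities):
    """Sum score(Dj, Q) * w(E, Dj) over the documents relevant to one sub-query."""
    return sum(s for d, s in results.items() if entity in doc_entities[d])

def late_fusion_2tuples(rel_results, ent_results_1, ent_results_2, doc_entities):
    """Sketch of Late Fusion for 2-tuples (eqs. 3.20/3.21 with binary weights):
    candidate pairs come from relationship-relevant documents; each entity of a
    pair is assigned to the entity sub-query that maximizes the tuple score."""
    scores = defaultdict(float)
    for doc, s in rel_results.items():
        for e1, e2 in combinations(sorted(doc_entities[doc]), 2):
            scores[(e1, e2)] += s
    for (e1, e2) in list(scores):
        a1 = aggregate(e1, ent_results_1, doc_entities) + aggregate(e2, ent_results_2, doc_entities)
        a2 = aggregate(e1, ent_results_2, doc_entities) + aggregate(e2, ent_results_1, doc_entities)
        scores[(e1, e2)] += max(a1, a2)
    return dict(scores)
```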
3.2.6 Implementation
In this section we proposed two design patterns for E-R retrieval: Early Fusion (EF)
and Late Fusion (LF). Both can be seen as a flexible framework for ranking tuples of
entities given an E-R query expressed as a sequence of entity and relationship sub-queries.
This framework is flexible enough to allow using any retrieval method to compute
individual retrieval scores between document and query nodes in an E-R graph structure.
When using Language Models (LM) or BM25 as scoring functions, these design patterns
can be used to create unsupervised baseline methods for E-R retrieval (e.g. EF-LM,
EF-BM25, LF-LM, LF-BM25, etc.).
In the case of Early Fusion there is some overhead over traditional document search,
since we need to create two dedicated E-R indexes that store entity and relationship
meta-documents. The entity index is created by harvesting the context terms in the
proximity of every occurrence of a given entity across the raw document collection.
This process must be carried out for every entity in the raw document collection. A similar
process is applied to create the relationship index: for every two entities occurring
close together in a raw document we extract the text between both occurrences as
a term-based representation of the relationship between the two. Once again, this
process must be carried out for every pair of entities co-occurring in sentences across the
raw document collection.
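A simplified sketch of this index-building step could look as follows; it collects whole sentences as entity contexts and separating strings as relationship contexts, which is one possible instantiation of the harvesting described above (input format and names are assumptions for the example).

```python
from collections import defaultdict

def build_meta_documents(annotated_sentences):
    """Build Early Fusion meta-documents from sentences annotated with entity
    links. Each item is (sentence_text, [(entity, start, end), ...]) with
    mentions sorted by position. Entity meta-documents collect the sentences
    mentioning an entity; relationship meta-documents collect the text between
    two co-occurring entities."""
    entity_docs = defaultdict(list)
    relationship_docs = defaultdict(list)
    for text, mentions in annotated_sentences:
        for entity, _, _ in mentions:
            entity_docs[entity].append(text)
        for (e1, _, end1), (e2, start2, _) in zip(mentions, mentions[1:]):
            relationship_docs[(e1, e2)].append(text[end1:start2])
    return entity_docs, relationship_docs
```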
Late Fusion requires less overhead and can be implemented on top of a web search
engine with reduced effort. We only need a list of entity occurrences alongside each
document, so there is no need to create separate indexes. On the other hand, it requires
more processing at query time, since we need to first rank raw documents for each
sub-query and then aggregate entity occurrences over the top k documents retrieved.
Moreover, it does not contain any proximity-based information
on the entity occurrences, so two entities occurring far apart in the text might be
considered relationship candidates, making this approach prone to a higher false positive rate.
One advantage of Early Fusion lies in its flexibility: since we create two separate
indexes for E-R retrieval, it is possible to combine data from multiple sources in a
seamless way. For instance, one could use a well-established knowledge base (e.g. DBpedia)
as the entity index and use a specific collection, such as a news collection or a social media
stream, for harvesting relationships of a more transient nature.
Common to both design patterns is a challenge inherent to the problem of E-R
retrieval: the size of the search space. Although the E-R problem is formulated as a
sequence of independent sub-queries, the results of those sub-queries must be joined
together. Consequently, we have a multi-dimensional search space in which we need to
join results based on shared entities.
This problem becomes particularly hard when sub-queries are short and contain
very popular terms. Let us consider “actor” as QEi: there will be many results for
this sub-query, probably thousands. There is a high probability that we will need to
process thousands of sub-results before finding one entity that is also relevant to the
relationship sub-query QRi−1,i. If at the same time we have computational power
constraints, we will probably apply a strategy of considering only the top k results for
each sub-query, which can lead to reduced recall in the case of short sub-queries with
popular terms.
3.3 Entity-Relationship Dependence Model
In this section we present the Entity-Relationship Dependence Model (ERDM), a novel
supervised Early Fusion-based model for E-R retrieval. Recent approaches to entity
retrieval [53, 67, 69] have demonstrated that using models based on the Markov Random
Field (MRF) framework for retrieval [63] to incorporate term dependencies can improve
entity search performance. This suggests that MRFs could also be used to model E-R query
term dependencies among entity and relationship documents.
One of the advantages of the MRF framework for retrieval is its flexibility: we
only need to construct a graph G representing the dependencies to model, define a set of
non-negative potential functions ψ over the cliques of G, and learn the parameter
vector Λ to score each document D by its unique and unnormalized joint probability
with Q under the MRF [63].
The non-negative potential functions are defined using an exponential form ψ(c; Λ) =
exp[λc f (c)], where λc is a feature weight, which is a free parameter in the model,
associated with the feature function f(c). Learning to rank is then used to learn the feature
weights that minimize the loss function. The model allows parameter and feature
function sharing across cliques of the same configuration, i.e. the same size and type of
nodes (e.g. 2-cliques of one query term node and one document node).
3.3.1 Graph Structures
The Entity-Relationship Dependence Model (ERDM) creates an MRF for modeling
implicit dependencies between sub-query terms, entities and relationships. Each entity
and each relationship are modeled as document nodes within the graph, and edges reflect
term dependencies. Contrary to traditional ad-hoc retrieval using MRFs (e.g. SDM),
where the objective is to compute the posterior of a single document given a query,
ERDM allows the computation of a joint posterior of multiple documents (entities and
relationships) given an E-R query, which itself consists of multiple sub-queries.
Fig. 3.2 Markov Random Field dependencies for E-R retrieval, |Q| = 3.
The graph structures of the ERDM for two E-R queries, one with |Q| = 3 and
the other with |Q| = 5, are depicted in Figure 3.2 and Figure 3.3, respectively. Both
graph structures contain two different types of query nodes and document nodes:
entity query and relationship query nodes, QE and QR, plus entity and relationship
document nodes, DE and DR. Within the MRF framework, DE and DR are considered
“documents”, but they are not actual real documents; rather, they are objects representing
an entity or a relationship between two entities. Unlike real documents, these objects
do not have direct and explicit term-based representations. Usually, it is necessary to
gather evidence across multiple real documents that mention the given object in order
to be able to match it against keyword queries. Therefore, ERDM can be seen as
an Early Fusion-based retrieval model. The existence of two different types of documents
implies two different indexes: the entity index and the relationship index.
Fig. 3.3 Markov Random Field dependencies for E-R retrieval, |Q| = 5.
The relationship-specific dependencies of ERDM are found in the 2-cliques formed
by one entity document and one relationship document: DEi−1 - DRi−1,i , DEi - DRi−1,i
and for |Q| = 5, DEi - DRi,i+1 and DRi−1,i - DRi,i+1 . The graph structure does not
need to assume any explicit dependence between entity documents given a relationship
document. They have an implicit connection through the dependencies with the
relationship document. The likelihood of observing an entity document DEi given a
relationship document DRi−1,i is not affected by the observation of any other entity
document.
Explicit dependence between the two entity documents could be used to represent
the direction of the relationship between the two entities. To support this dependence,
relationship documents would need to account for the following constraint: R(Ei−1, Ei) ≠
R(Ei, Ei−1), ∀ DRi−1,i ∈ C^R, with C^R representing the relationship index. Then, we
would compute an ordered feature function between entities in a relationship, similar to
the ordered bigram feature function in SDM. In this work, we do not explicitly model
asymmetric relationships. For instance, if a user searches for the relationship entity
A “criticized” entity B but it was in fact entity B who criticized entity A, we assume
that the entity tuple <entity A, entity B> is still relevant for the information need
expressed in the E-R query.
ERDM follows the SDM [63] dependencies between query terms and documents
due to their proven effectiveness in multiple contexts. Therefore, ERDM assumes a
dependence between neighboring sub-query terms:
P(q_j^{E_i} | D^{E_i}, q_{l \neq j}^{E_i}) = P(q_j^{E_i} | D^{E_i}, q_{j-1}^{E_i}, q_{j+1}^{E_i})   (3.24)

P(q_j^{R_{i-1,i}} | D^{R_{i-1,i}}, q_{l \neq j}^{R_{i-1,i}}, D^{E_i}) = P(q_j^{R_{i-1,i}} | D^{R_{i-1,i}}, q_{j-1}^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}})   (3.25)
MRF for retrieval requires the definition of the sets of cliques (maximal or non-maximal)
within the graph to which one or more feature functions are applied. The sets of cliques
in ERDM containing at least one document are the following:
• T E - set of 2-cliques containing an entity document node and exactly one term
in an entity sub-query.
• OE - set of 3-cliques containing an entity document node and two ordered terms
in an entity sub-query.
• T R - set of 2-cliques containing a relationship document node and exactly one
term in a relationship sub-query.
• OR - set of 3-cliques containing a relationship document node and two ordered
terms in a relationship sub-query.
• S ER - set of 2-cliques containing one entity document node and one relationship
document node.
• S RER - set of 3-cliques containing one entity document node and two consecutive
relationship document nodes.
The joint probability mass function of the MRF is computed using the set of
potential functions over the configurations of the maximal cliques in the graph [63].
Non-negative potential functions are constructed from one or more real valued feature
functions associated with the respective feature weights using an exponential form.
3.3.2 Feature Functions
ERDM has two types of feature functions: textual and non-textual. Textual feature
functions measure the textual similarity between one or more sub-query terms and a
document node. Non-textual feature functions measure compatibility between entity
and relationship documents, i.e., if they share a given entity.
Table 3.5 presents an overview of the feature functions associated with each clique
set and the type of input nodes. Although we could define a wide set of different
feature functions, we decided to adapt the SDM textual feature functions to the ERDM
clique configurations. Therefore we define unigram-based feature functions f_T^E and f_T^R for
2-cliques containing a single sub-query term and an entity or relationship document node.

Table 3.5 Clique sets and associated feature functions by type and input nodes.

Clique Set   Feature Functions    Type          Input Nodes
T^E          f_T^E                Textual       {q_j^{E_i}, D^{E_i}}
O^E          f_O^E and f_U^E      Textual       {q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}}
T^R          f_T^R                Textual       {q_j^{R_{i-1,i}}, D^{R_{i-1,i}}}
O^R          f_O^R and f_U^R      Textual       {q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}}
S^{ER}       f_S^{ER}             Non-textual   {D^{E_i}, D^{R_{i-1,i}}}
S^{RER}      f_S^{RER}            Non-textual   {D^{E_i}, D^{R_{i-1,i}}, D^{R_{i,i+1}}}
For 3-cliques containing consecutive sub-query terms and a document node, we
define two feature functions. The first considers consecutive sub-query terms and matches
ordered bigrams with entity or relationship documents; it is denoted f_O^E or f_O^R,
depending on whether the clique is O^E or O^R. The second matches bigrams with documents
using an unordered window of 8 terms (uw8), i.e., it matches a bigram with a document
if the two terms of the bigram occur with a maximum of 6 other terms between them;
it is denoted f_U^E or f_U^R, again depending on whether the clique is O^E or O^R.
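The ordered (#1) and unordered-window (#uw8) statistics used by these features can be computed, for example, as in the following sketch over a tokenized meta-document (a simple illustration, not tied to any particular index implementation):

```python
def count_unordered_window(tokens, term_a, term_b, window=8):
    """Count unordered co-occurrences of two terms within a window of `window`
    tokens (the #uw8 statistic used by the unordered bigram features)."""
    positions_a = [i for i, t in enumerate(tokens) if t == term_a]
    positions_b = [i for i, t in enumerate(tokens) if t == term_b]
    count = 0
    for i in positions_a:
        for j in positions_b:
            # both terms fall inside a span of `window` consecutive tokens
            if i != j and abs(i - j) < window:
                count += 1
    return count

def count_ordered_bigram(tokens, term_a, term_b):
    """Count exact ordered occurrences of `term_a term_b` (the #1 statistic)."""
    return sum(1 for x, y in zip(tokens, tokens[1:]) if x == term_a and y == term_b)
```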
For each textual feature function we use two variants: Dirichlet-smoothed
Language Models (LM) and BM25. We now present a summary of the textual feature
functions used in this work.
LM-T-E
f_{T,LM}^{E}(q_j^{E_i}, D^{E_i}) = \log \frac{f(q_j^{E_i}, D^{E_i}) + \frac{f(q_j^{E_i}, C^E)}{|C^E|}\mu^E}{|D^{E_i}| + \mu^E}

LM-O-E
f_{O,LM}^{E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \frac{f_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) + \frac{f_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}, C^E)}{|C^E|}\mu^E}{|D^{E_i}| + \mu^E}

LM-U-E
f_{U,LM}^{E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \frac{f_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) + \frac{f_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}, C^E)}{|C^E|}\mu^E}{|D^{E_i}| + \mu^E}

LM-T-R
f_{T,LM}^{R}(q_j^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{f(q_j^{R_{i-1,i}}, D^{R_{i-1,i}}) + \frac{f(q_j^{R_{i-1,i}}, C^R)}{|C^R|}\mu^R}{|D^{R_{i-1,i}}| + \mu^R}

LM-O-R
f_{O,LM}^{R}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{f_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) + \frac{f_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, C^R)}{|C^R|}\mu^R}{|D^{R_{i-1,i}}| + \mu^R}

LM-U-R
f_{U,LM}^{R}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{f_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) + \frac{f_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, C^R)}{|C^R|}\mu^R}{|D^{R_{i-1,i}}| + \mu^R}
Here, f(q_j^{E_i}, D^{E_i}) and f(q_j^{R_{i-1,i}}, D^{R_{i-1,i}}) represent the sub-query term frequencies in an
entity document and a relationship document, respectively. The collection frequencies
f(q_j^{E_i}, C^E) and f(q_j^{R_{i-1,i}}, C^R) represent the frequency of the sub-query term in either the entity
index C^E or the relationship index C^R. The variants f_{#1} and f_{#uw8} of these functions
represent ordered and unordered bigram matching frequencies. |D^{E_i}| and |D^{R_{i-1,i}}|
represent the total number of terms in a meta-document, while |C^R| and |C^E| represent
the total number of terms in a collection of meta-documents. Finally, µ^E and µ^R are
the Dirichlet priors for smoothing, which generally correspond to the average document
length in a collection.
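Since all six LM feature functions share the same Dirichlet-smoothed form and differ only in which count they plug in, a sketch could factor them through a single helper (names are illustrative):

```python
import math

def lm_feature(count_in_doc, count_in_coll, doc_len, coll_len, mu):
    """Generic Dirichlet-smoothed LM feature used by f_T, f_O and f_U: the counts
    are unigram, ordered-bigram (#1) or unordered-window (#uw8) frequencies."""
    return math.log((count_in_doc + (count_in_coll / coll_len) * mu) / (doc_len + mu))

# For example, the unigram entity feature f_T^E for a sub-query term t would be
# lm_feature(f_t_doc, f_t_coll, doc_len, coll_len, mu_E), while f_O^E and f_U^E
# would pass the #1 and #uw8 counts instead.
```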
BM25-T-E
f_{T,BM25}^{E}(q_j^{E_i}, D^{E_i}) = \log \frac{N^E - n(q_j^{E_i}) + 0.5}{n(q_j^{E_i}) + 0.5} \cdot \frac{f(q_j^{E_i}, D^{E_i})(K_1 + 1)}{f(q_j^{E_i}, D^{E_i}) + K_1 \left(1 - b + b \frac{|D^{E_i}|}{avg(|D^E|)}\right)}

BM25-O-E
f_{O,BM25}^{E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \frac{N^E - n_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}) + 0.5}{n_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}) + 0.5} \cdot \frac{f_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i})(K_1 + 1)}{f_{\#1}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) + K_1 \left(1 - b + b \frac{|D^{E_i}|}{avg(|D^E|)}\right)}   (3.26, 3.27)

BM25-U-E
f_{U,BM25}^{E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \frac{N^E - n_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}) + 0.5}{n_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}) + 0.5} \cdot \frac{f_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i})(K_1 + 1)}{f_{\#uw8}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) + K_1 \left(1 - b + b \frac{|D^{E_i}|}{avg(|D^E|)}\right)}   (3.28, 3.29)

BM25-T-R
f_{T,BM25}^{R}(q_j^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{N^R - n(q_j^{R_{i-1,i}}) + 0.5}{n(q_j^{R_{i-1,i}}) + 0.5} \cdot \frac{f(q_j^{R_{i-1,i}}, D^{R_{i-1,i}})(K_1 + 1)}{f(q_j^{R_{i-1,i}}, D^{R_{i-1,i}}) + K_1 \left(1 - b + b \frac{|D^{R_{i-1,i}}|}{avg(|D^R|)}\right)}   (3.30, 3.31)

BM25-O-R
f_{O,BM25}^{R}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{N^R - n_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}) + 0.5}{n_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}) + 0.5} \cdot \frac{f_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}})(K_1 + 1)}{f_{\#1}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) + K_1 \left(1 - b + b \frac{|D^{R_{i-1,i}}|}{avg(|D^R|)}\right)}   (3.32, 3.33)

BM25-U-R
f_{U,BM25}^{R}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) = \log \frac{N^R - n_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}) + 0.5}{n_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}) + 0.5} \cdot \frac{f_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}})(K_1 + 1)}{f_{\#uw8}(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}) + K_1 \left(1 - b + b \frac{|D^{R_{i-1,i}}|}{avg(|D^R|)}\right)}   (3.34, 3.35)
Here, N^E and N^R represent the total number of documents in the entity index and
the relationship index, respectively. The document frequencies of unigrams and bigrams
are represented using n(), n_{#1}() and n_{#uw8}(). |D^{E_i}| and |D^{R_{i-1,i}}| are the total number
of terms in an entity or relationship document, while avg(|D^E|) and avg(|D^R|) are the
average entity and relationship document lengths. K1 and b are free parameters, usually
chosen as 1.2 and 0.75 in the absence of specific optimization.
We define two non-textual features in ERDM. The first one, f_S^{ER}, is assigned to
2-cliques composed of one entity document and one relationship document and is
inspired by the feature function f_E of Hasibi and Balog’s ELR model [69]. It is defined
as follows:
f_S^{ER}(D^{E_i}, D^{R_{i-1,i}}) = (1 - \alpha)\, f(D^{E_i}, D^{R_{i-1,i}}) + \alpha\, \frac{n(E_i)}{N^R}   (3.36)
where the linear interpolation implements the Jelinek-Mercer smoothing method
with α ∈ [0, 1], and f(D^{E_i}, D^{R_{i-1,i}}) ∈ {0, 1} measures whether the entity Ei represented
in D^{E_i} belongs to the relationship R(Ei−1, Ei) represented in D^{R_{i-1,i}}. The background
model employs the notion of entity popularity within the collection of relationship
documents: n(Ei) represents the number of relationship documents D^R that contain
the entity Ei, and N^R represents the total number of relationship documents in the
relationship index.
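A minimal sketch of this compatibility feature (equation 3.36) could look as follows; the argument names and the default value of alpha are illustrative assumptions.

```python
def f_s_er(entity, rel_doc_entities, entity_rel_doc_count, n_rel_docs, alpha=0.5):
    """Non-textual compatibility feature of eq. 3.36: a binary membership test
    interpolated (Jelinek-Mercer style) with the entity's popularity in the
    relationship index."""
    membership = 1.0 if entity in rel_doc_entities else 0.0
    popularity = entity_rel_doc_count.get(entity, 0) / n_rel_docs
    return (1 - alpha) * membership + alpha * popularity
```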
For E-R queries with more than one relationship sub-query, we draw an edge
between consecutive relationship documents within the ERDM graph. This edge
creates a 3-clique containing two relationship documents and one entity document.
The feature function fSRER measures if a given entity Ei is shared between consecutive
relationship documents within the graph. We opted to define a simple binary function:
f_S^{RER}(D^{E_i}, D^{R_{i-1,i}}, D^{R_{i,i+1}}) = 1 if E_i ∈ D^{E_i} ∩ D^{R_{i-1,i}} ∩ D^{R_{i,i+1}}, 0 otherwise   (3.37)
In summary, we described the set of feature functions associated with each clique
configuration within the ERDM graph. We leave for future work the possibility of
exploring other types of features to describe textual similarity and compatibility between
different nodes in the ERDM graph, such as neural language models.
3.3.3 Ranking
We have defined the set of clique configurations and the real-valued feature functions
that constitute the non-negative potential functions over the cliques in the ERDM graph.
We can now formulate the calculation of the posterior P(D^E, D^R|Q) using the
probability mass function of the MRF, as follows:
P_\Lambda(D^E, D^R | Q) \stackrel{rank}{=} \sum_{c \in C(G)} \lambda_c f(c)

\stackrel{rank}{=} \lambda_T^E \sum_{E} \sum_{Q^{E_i}} f_T^E(q_j^{E_i}, D^{E_i})
 + \lambda_O^E \sum_{E} \sum_{Q^{E_i}} f_O^E(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i})
 + \lambda_U^E \sum_{E} \sum_{Q^{E_i}} f_U^E(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i})
 + \lambda_T^R \sum_{R} \sum_{Q^{R_{i-1,i}}} f_T^R(q_j^{R_{i-1,i}}, D^{R_{i-1,i}})   (3.38)
 + \lambda_O^R \sum_{R} \sum_{Q^{R_{i-1,i}}} f_O^R(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}})
 + \lambda_U^R \sum_{R} \sum_{Q^{R_{i-1,i}}} f_U^R(q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}})
 + \lambda_S^{ER} \sum_{E} \sum_{R} f_S^{ER}(D^{E_i}, D^{R_{i-1,i}})
 + \lambda_S^{RER} \sum_{E} \sum_{R} f_S^{RER}(D^{E_i}, D^{R_{i-1,i}}, D^{R_{i,i+1}})   (3.39)
In essence, E-R retrieval using the ERDM corresponds to ranking candidate entity
tuples using a linear weighted sum of the feature functions over the cliques in the graph.
Therefore, we can apply any linear learning-to-rank algorithm to optimize the ranking
with respect to the vector of feature weights Λ. Given a training set T composed
of relevance judgments, a ranking of entity tuples RΛ and an evaluation function
E(RΛ; T) that produces a real-valued output, our objective is to find the values of the
vector Λ that maximize E. As explained in [65], we require E to only consider the
ranking produced and not individual scores. This is standard among
information retrieval evaluation metrics (e.g. MAP or NDCG).
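Once the clique feature values are aggregated per candidate tuple, the ranking score is simply a weighted sum, as the sketch below illustrates (clique-type names and the aggregation into a dictionary are assumptions of the example):

```python
def erdm_score(features, weights):
    """Sketch of the ERDM ranking score (eqs. 3.38/3.39): a linear weighted sum
    of aggregated clique feature values. `features` maps a clique type (e.g.
    'T_E', 'O_E', 'U_E', 'T_R', 'O_R', 'U_R', 'S_ER', 'S_RER') to the sum of its
    feature function over all cliques of that type for one candidate tuple;
    `weights` holds the learned lambda for each clique type."""
    return sum(weights[c] * features.get(c, 0.0) for c in weights)

# Candidate tuples are then sorted by erdm_score, with the lambdas learned by a
# linear learning-to-rank method such as coordinate ascent.
```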
3.3.4 Discussion
In this section we introduced the Entity-Relationship Dependence Model (ERDM), a
novel supervised Early Fusion-based model for E-R retrieval. Inspired by recent work
in entity retrieval we believe that modeling term dependencies between sub-queries
and entity/relationship documents can increase search performance.
ERDM can be seen as an extension of the SDM model [63] for ad-hoc document
retrieval, in the sense that, besides modeling query term dependencies, we create graph
structures that depict dependencies between entity and relationship documents. Consequently,
instead of computing a single posterior P(D|Q), we propose to use the MRF framework for
retrieval to compute a joint posterior of multiple entity and relationship documents
given an E-R query, P(D^E, D^R|Q).
Moreover, since ERDM is a supervised model, we believe that tuning the weights of
the feature functions, besides optimizing search performance, can also help to explain the
inter-dependencies between sub-query terms and the respective documents, as well as
how entity documents and relationship documents contribute to the overall relevance
of entity tuples given an E-R query.
3.4 Summary of the Contributions
In this chapter we present several contributions to the problem of entity-relationship
retrieval from an IR perspective:
• Generalization of the problem of entity-relationship search to cover entity types
and relationships represented by any attribute and predicate, respectively, rather
than a predefined set.
• A general probabilistic model for E-R retrieval using Bayesian Networks.
• Proposal of two design patterns that support retrieval approaches using the E-R
model.
• Proposal of an Entity-Relationship Dependence Model that builds on the basic
Sequential Dependence Model (SDM) to provide extensible entity-relationship
representations and dependencies, suitable for complex, multi-relationship queries.
Chapter 4
Entity-Relationship Retrieval over a Web Corpus
We start this chapter by presenting a new semi-automatic method for generating E-R
test collections, together with a new E-R test collection, the RELink Query Collection,
comprising 600 E-R queries. We leverage web tabular data containing entities and the
relationships among them implied by sharing the same row in a table. We exploit the
Wikipedia Lists-of-lists-of-lists tree of articles containing lists of entities in the form of
tables. We developed a table parser that extracts tuples of entities from these tables
together with associated metadata. This information is then provided to editors who
create E-R queries fulfilled by the extracted tuples.
We then report a set of evaluations of the ERDM model using four different query
sets. In order to leverage information about entities and relationships in a corpus, it is
necessary to create a representation of entity-related information that is amenable
to E-R search. In our approach we focus on sentence-level information about entities,
although the method can be applied to more complex segmentations of text. Our
experiments are based on the ClueWeb-09-B data set with FACC1 text annotations
that refer to entities found in the text, including the variants of their surface forms.
Each entity is designated by its unique ID, and for each unique entity instance we
created “entity documents” comprising a collection of sentences that contain the entity.
These context documents are indexed, comprising the entity index. The same is done
by creating entity-pair documents and the entity-pair index. These two indexes enable
us to execute E-R queries using different retrieval models, including the ERDM, which
models the dependence between entities.
4.1 RELink Query Collection¹
Improvements of entity-relationship (E-R) search techniques have been hampered by a
lack of test collections, particularly for complex queries involving multiple entities and
relationships. In this section we describe a method for generating E-R test queries to
support comprehensive E-R search experiments. Queries and relevance judgments are
created from content that exists in a tabular form where columns represent entity types
and the table structure implies one or more relationships among the entities. Editorial
work involves creating natural language queries based on relationships represented by
the entries in the table.
We have publicly released the RELink test collection comprising 600 queries and
relevance judgments obtained from a sample of Wikipedia List-of-lists-of-lists tables.
The latter comprise tuples of entities that are extracted from columns and labelled by
corresponding entity types and relationships they represent.
Improvement of methods for both extraction and search is hampered by a lack of
query sets and relevance judgments, i.e., gold standards that could be used to compare
effectiveness of different methods. In this section we introduce:
1. A low-effort semi-automatic method for acquiring instances of entities and entity
relationships from tabular data.
2. The RELink Query Collection (QC) of 600 E-R queries with corresponding relevance
judgments.
Essential to our approach is the observation that tabular data typically includes
entity types as columns and entity instances as rows. The table structure implies
a relationship among table columns and enables us to create E-R queries that are
answered by the entity tuples across columns. Following this approach, we prepared
and released the RELink QC comprising 600 E-R queries and relevance judgments
based on a sample of Wikipedia List-of-lists-of-lists tables.
The query collection and the research framework are publicly available², enabling
the community to expand the RELink Framework with additional document collections
and alternative indexing and search methods. It is important to maintain and enhance
the RELink QC by providing updates to the existing entity types and creating new
queries and relevant instances from additional tabular data.
¹ The material contained in this section was published in P. Saleiro, N. Milic-Frayling, E. M.
Rodrigues, C. Soares, “RELink: A Research Framework and Test Collection for Entity-Relationship
Retrieval” [15].
² https://sigirelink.github.io/RELink/
4.1.1 Tabular Data and Entity Relationships
Information that satisfies complex E-R queries is likely to involve instances of entities
and their relationships dispersed across Web documents. Sometimes such information
is collected and published within a single document, such as a Wikipedia page. In such
cases, traditional search engines can provide excellent search results without applying
special E-R techniques or considering entity and relationship types. Indeed, the data
collection, aggregation, and tabularization has been done by a Wikipedia editor.
That also means that tabular Wikipedia content, comprising various entities, can
be considered as representing a specific information need, i.e., the need that motivated
editors to create the page in the first place. Such content can, in fact, satisfy many
different information needs. We focus on exploiting tabular data for exhaustive search
for pre-specified E-R types. In order to specify E-R queries, we can use column headings
as entity types. All the column entries are then relevance judgments for the entity
query. Similarly, for a given pair of columns that correspond to distinct entities, we
formulate the implied relationship. For example the pair <car, manufacturing plant>
could refer to “is made in” or “is manufactured in” relationships. The instances of
entity pairs in the table then serve as evidence for the specific relationship. This can
be generalized to more complex information needs that involve multiple entity types
and relationships.
Automated creation of E-R queries from tabular content is an interesting research
problem. For now we asked human editors to provide natural language and structured
E-R queries for specific entity types. Once we collect sufficient amounts of data from
human editors we will be able to automate the query creation process with machine
learning techniques. For the RELink QC we compiled a set of 600 queries with E-R
relevance judgments from Wikipedia lists about 9 topic areas.
4.1.2 Selection of Tables
Wikipedia contains a dynamic index “The Lists of lists of lists”³, which represents
the root of a tree that spans curated lists of entities in various domains. We used a
Wikipedia snapshot from October 2016 to traverse “The Lists of lists of lists” tree
starting from the root page and following every hyperlink of type “List of ” and their
children. This resulted in a collection of 95,569 list pages. While most of the pages
contain tabular data, only 18,903 include tables with consistent column and row
structure. As in [172], we restrict content extraction to the wikitable HTML class that
³ http://en.wikipedia.org/wiki/List_of_lists_of_lists
typically denotes data tables in Wikipedia. We ignore other types of tables such as
infoboxes.
In this first instance, we focus on relational tables, i.e., the tables that have a key
column, referring to the main entity in the table [173]. For instance, the ”List of books
about skepticism” contains a table “Books” with columns “Author”, “Category” and
“Title”, among others. In this case, the key column is “Title” which contains titles of
books about skepticism. We require that any relationship specified for the entity types
in the table must contain the “Title” type, i.e., involve the “Title” column.
In order to detect key columns we created a Table Parser that uses the set of
heuristics adopted by Lehmberg et al. [173], e.g., the ratio of unique cells in the
column or text length. Once the key column is identified, the parser creates entity
pairs consisting of the key column and one other column in the table. The content
of the column cells then constitutes the set of relevant judgments for the relationship
specified by the pair of entities.
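A key-column heuristic in the spirit of this parser could be sketched as follows; the thresholds and the preference for the leftmost qualifying column are illustrative assumptions, not the exact rules of [173].

```python
def detect_key_column(table, min_unique_ratio=0.8, max_avg_text_len=60):
    """Heuristic key-column detection: prefer the leftmost column whose cells
    are mostly unique and reasonably short. `table` is a list of columns, each
    a list of cell strings."""
    for idx, column in enumerate(table):
        cells = [c for c in column if c.strip()]
        if not cells:
            continue
        unique_ratio = len(set(cells)) / len(cells)
        avg_len = sum(len(c) for c in cells) / len(cells)
        if unique_ratio >= min_unique_ratio and avg_len <= max_avg_text_len:
            return idx
    return None
```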
For the sake of simplicity we consider only those Wikipedia lists that contain a
single relational table. Furthermore, our goal is to create queries that have verifiable
entity and entity pair instances. Therefore, we selected only those relational tables
for which the key column and at least one more column have cell content linked to
Wikipedia articles.
With these requirements, we collected 1795 tables. In the final step, we selected 600
tables by performing stratified sampling across the semantic domains covered by Wikipedia
lists. For each new table, we calculated the Jaccard similarity between the title
of the corresponding Wikipedia page and the titles of pages associated with tables
already in the pool. By setting the maximum similarity threshold to 0.7 we obtained a
set of 600 tables.
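The sampling filter can be sketched as below, assuming a token-level Jaccard similarity over lower-cased title words (the exact tokenization used is not specified above, so this is an illustrative choice):

```python
def jaccard_title_similarity(title_a, title_b):
    """Jaccard similarity between two page titles, treated as sets of
    lower-cased word tokens."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def accept_table(candidate_title, pool_titles, threshold=0.7):
    """Accept a table only if its page title is not too similar to any title
    already in the pool."""
    return all(jaccard_title_similarity(candidate_title, t) <= threshold
               for t in pool_titles)
```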
The process of creating RELink queries involves two steps: (1) automatic selection
of tables and columns within tables and (2) manual specification of information needs.
For example, in the table “Grammy Award for Album of the Year” the columns
“winner” and “work” were automatically selected to serve as entity types in the E-R query
(Figure 4.1). The relationship among these entities is suggested by the title, and we let
a human annotator formulate the query.
The RELink query set was created by 6 annotators. We provided the annotators
with access to the full table, metadata (e.g., table title or the first paragraph of the
page) and entity pairs or triples to be used to specify the query (Figure 4.2). For each
entity pair or triple the annotators created a natural language information need and an
E-R query in the relational format Q = {QEi−1 , QRi−1,i , QEi }, as shown in Table 4.1.
Fig. 4.1 Example of Wikipedia table row.
Fig. 4.2 Example of metadata provided to editors.
4.1.3 Formulation of Queries
The relational query format is introduced to support a variety of experiments with
E-R queries. In essence, a complex information need is decomposed into a set of
sub-queries that specify types of entities E and types of relationships R(Ei−1, Ei) between
entities. For each relationship query there is one query for each entity involved in the
relationship. Thus a query Q that expects a pair of entities for a given relationship
is mapped into three sub-queries (QEi−1, QRi−1,i, QEi), where QEi−1 and QEi are the
entity types for Ei−1 and Ei respectively, and QRi−1,i is a relationship type describing
R(Ei−1, Ei).
4.1.4 Collection Statistics
RELink QC covers 9 thematic areas from the Lists-of-Lists-of-Lists in Wikipedia:
Mathematics and Logic, Religion and Belief Systems, Technology and Applied Sciences,
Miscellaneous, People, Geography and Places, Natural and Physical Sciences, General
Reference and Culture and the Arts. The most common thematic areas are Culture
and the Arts with 70 queries and Geography and Places with 67 queries.
In Table 4.2 we show the characteristics of the natural language and relational
queries. Among 600 E-R queries, 381 refer to entity pairs and 219 to entity triples. As
Table 4.1 Examples of query annotations.

ID             NL Query                                                                                 Relational Format
RELink_P_164   What are the regiments held by the Indian Army?                                          {regiment, held by, Indian Army}
RELink_T_071   In which seasons NHL players scored more than 50 goals and the team they represented?    {NHL season, scored more than 50 goals in, NHL player, played for, NHL team}
Table 4.2 RELink collection statistics.

                                   2-entity   3-entity   All
Total queries                      381        219        600
Avg. queries length                56.5       83.8       66.5
Avg. QE length                     20.9       20.9       20.9
Avg. QR length                     11.8       12.6       12.3
# uniq. entity attributes (QE)     679        592        1251
# uniq. relationships (QR)         145        205        317
Avg. # relevant judgments          67.9       41.8       58.5
expected, natural language descriptions of 3-entity queries are longer (on average 83.8
characters) compared to 2-entity queries (56.5 characters).
We further analyze the structure of relational queries and their components, i.e.,
entity queries QE that specify the entity type and relationship queries QR that specify
the relationship type. Across 600 queries, there are 1251 unique entity types QE (out
of total 1419 occurrences). They are rather unique across queries: only 65 entity
types occur in more than one E-R query and 44 occur in exactly 2 queries. The most
commonly shared entity type is “country”, present in 9 E-R queries.
In the case of relationships, there are 317 unique relationship types QR (out of 817
occurrences), with a dominant type “located in” that occurs in 140 queries. This is not
surprising, since in many domains the key entity is tied to a location that is included in
one of the columns. Nevertheless, there are only 44 relationship types QR occurring
more than once, implying that RELink QC is a diverse set of queries, including 273
relationship types occurring only once.
4.2 Experimental Setup
In this section we detail how we conducted our experiments in E-R retrieval. Since
we only have access to test collections comprising general-purpose E-R queries, we
decided to use a Web corpus as dataset, more precisely ClueWeb-09-B⁴. The ClueWeb09
dataset was created to support research on information retrieval and related human
language technologies and contains 1 billion web pages. Part B is a subset of
the 50 million most popular English web pages, including Wikipedia, and was
created as a resource for research groups without the processing power to process the
full ClueWeb09 collection. We used the ClueWeb-09-B Web collection with FACC1 text
span annotations linked to Wikipedia entities to show how RELink can be used for
E-R retrieval over Web content. We developed our prototype using Apache Lucene for
indexing and search, through the PyLucene Python library, which allowed a
customized implementation tailored for E-R retrieval.
4.2.1 Data and Indexing
Fig. 4.3 Illustration of E-R indexing from a web corpus.
As a text corpus, we use ClueWeb-09-B combined with FACC1 text span annotations
with links to Wikipedia entities (via Freebase). The entity linking precision and recall in
FACC1 is estimated to be 80-85% and 70-85%, respectively [174]. For our experiments
we created two main indexes: one for entity extractions and one for entity pairs
⁴ https://lemurproject.org/clueweb09/
(relationships) extractions. We extract entity and entity-pair occurrences over the
annotated ClueWeb-09-B corpus, in the spirit of Open Information Extraction methods
such as OLLIE [57], as follows. For each entity annotation, we extract the sentence where
it occurred as an entity context. For pairs of entities, we look for co-occurring entities in
the same sentence and extract the separating string, i.e., the context of the relationship
connecting them. Figure 4.3 illustrates the indexing process adopted in this work.
We obtained 476 million entity extractions and 418 million entity-pair extractions,
as described in Table 4.3. In order to compute |D^{Ei}| and |D^{Ri−1,i}| we incrementally
updated two auxiliary indexes containing the number of terms per entity and per entity
pair, respectively. We ran our experiments using Apache Lucene and made use of
GroupingSearch for grouping extractions by entity and entity pair at query time. To
get the statistics for ordered and unordered bigrams we made use of SpanNearQuery.
Table 4.3 ClueWeb09-B extraction statistics.

               Total         Unique       Avg. doc. len.
Entities       476,985,936   1,712,010    9977
Entity pairs   418,079,378   71,660,094   138

4.2.2 Retrieval Method and Parameter Tuning
For the experiments using ERDM we adopted a three-stage retrieval method. First,
the queries QEi−1 and QEi are submitted against the entity index and QRi−1,i is submitted
against the entity-pair index. Initial sets of the top 20000 results, grouped by entity or
entity pair respectively, are retrieved using Lucene’s default search settings. Second, the
feature functions of the specific retrieval model are calculated for each set, using an in-house
implementation. This process is easily parallelized. The final ranking score for each
entity pair is then computed using the learned λ weights. Evaluation scores are reported
on the top 100 entity-pair results.
Parameter tuning for ERDM and the baselines directly optimizes Mean Average
Precision (MAP). We make use of RankLib’s implementation of the coordinate ascent
algorithm under sum normalization and non-negativity constraints, with 3 random
restarts. Coordinate ascent is a commonly used optimization technique [65] that
iteratively optimizes a single parameter while holding all other parameters fixed.
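The core loop of such an optimizer can be sketched as follows; this is an illustration of the general technique under the stated constraints, not RankLib's actual implementation, and the step size, number of sweeps and restarts are arbitrary example values.

```python
import random

def coordinate_ascent(evaluate, n_params, step=0.05, sweeps=20, restarts=3, seed=0):
    """Coordinate ascent for tuning feature weights: optimize one weight at a
    time while holding the others fixed, under non-negativity and sum-to-one
    constraints, with random restarts. `evaluate` maps a weight vector to a
    retrieval metric such as MAP."""
    def normalize(w):
        w = [max(0.0, x) for x in w]          # non-negativity
        total = sum(w) or 1.0
        return [x / total for x in w]         # sum normalization

    rng = random.Random(seed)
    best_w, best_score = None, float("-inf")
    for _ in range(restarts):
        w = normalize([rng.random() for _ in range(n_params)])
        score = evaluate(w)
        for _ in range(sweeps):
            for i in range(n_params):
                for delta in (step, -step):
                    cand = normalize([x + delta if k == i else x for k, x in enumerate(w)])
                    cand_score = evaluate(cand)
                    if cand_score > score:
                        w, score = cand, cand_score
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
```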
Parameters are estimated using 5-fold cross validation for each of the 4 query sets
separately. To be able to use the same train and test folds throughout all experiments,
we first randomly create fixed train and test folds from the initial result set, for each
query set. All reported evaluation metrics were macro-averaged over 5 folds.
We do not optimize the Dirichlet priors µ^E and µ^R in the language models and set
them equal to the traditional average document length, i.e., the average entity and
entity-pair extraction lengths, respectively. The unordered window size N for f_U^E and
f_U^R is set to 8, as suggested in [63].
4.2.3 Test Collections
We ran experiments with a total of 548 E-R queries. We decided to perform
experiments only with queries aiming at 2-tuples of entities; we leave the evaluation of
queries aiming at triples for future work. Besides RELink QC, we used three other
relationship-centric query sets, with pairs of Wikipedia entities as answers, i.e., relevance
judgments. The query sets cover a wide range of domains, as described in Table 4.4.
Query sets for entity-relationship retrieval are scarce, and entity retrieval query sets are
generally not relationship-centric [10].
Table 4.4 Description of query sets used for evaluation.

Query Set   Count   Domains
QALD-2      79      Geography and places, Politics and society, Culture and the Arts, Technology and science
ERQ         28      Award, City, Club, Company, Film, Novel, Person, Player, Song, University
COMPLEX     60      Cinema, Music, Books, Sports, Computing, Military conflicts
RELink      381     General Reference, Culture and the Arts, Geography and places, Mathematics and logic, Natural and physical Sciences, People, Religion and belief systems, Society and social sciences, Technology and applied science
Total       548
One exception is the QALD-2 query set used in the DBpedia-entity collection [175].
It contains a subset of relational queries, e.g. “Who designed the Brooklyn Bridge?”.
Most relational queries in QALD-2 have a fixed relevant entity, e.g., “Brooklyn
Bridge”, and can easily be transformed from single-entity relevance judgments into pairs.
From the 79 relational queries in QALD-2, we identified 6 with no fixed relevant entity
in the query (e.g. “Give me the capitals of all countries in Africa.”). In these cases,
for each provided single-entity relevance judgment we manually annotated the missing
entity to create a pair. For instance, given a capital city in Africa we identified the
corresponding African country.
In addition, we used two benchmarks created in previous work using Semantic-Web-based
approaches: ERQ [56] and COMPLEX [10]. Neither ERQ nor COMPLEX
provide complete relevance judgments and consequently we manually evaluated each
answer in our experiments. ERQ consists of 28 queries that were adapted from INEX17
and OWN28 [56]. However, 22 of the queries have a given fixed entity in the query
(e.g. “Find Eagles songs”). Only 6 queries ask for pairs of unknown entities,
such as “Find films starring Robert De Niro and please tell directors of these films.”.
COMPLEX queries were created with a semi-automatic approach [10]. It contains
70 queries, from which we removed 10 that expect 3-tuples of entities. This query
set consists of pure relationship-centric queries for unknown pairs of entities, such as
“Currency of the country whose president is James Mancham”, “Kings of the city which
led the Peloponnesian League.” and “Who starred in a movie directed by Hal Ashby?”.
We used four different retrieval metrics: Mean Average Precision at 100 results
(MAP), precision at 10 (P@10), mean reciprocal rank (MRR) and normalized discounted
cumulative gain at 20 (NDCG@20).
4.3 Results and Analysis
We start by performing a simple experiment comparing Early Fusion and ERDM
using both Language Models (LM) and BM25 as retrieval functions. Since we are only
interested in comparing relative performance, we opted to scale down our experimental
setup: instead of computing the term frequency over every extraction for a given entity
or relationship, we cap at 200 the number of extractions used for each group of documents
retrieved in the first pass. We tried several different values; below 200 extractions the
performance dropped significantly, while at 200 the reduction is not dramatic. This
setup reduces the experimental runtime, which proved useful given our limited resources.
Table 4.5 depicts the results of this comparative evaluation. We decided to use only
the three test collections specifically tailored for relationship retrieval. As we
can see, the results are very similar between EF and ERDM for both the LM and BM25
variants. In the three test collections ERDM presents slightly better performance
than the corresponding EF variant (e.g. BM25). However, when performing statistical
significance tests we obtained p-values above 0.05 when comparing EF and ERDM.
This is very interesting, as it shows that for general-purpose E-R evaluation the overhead
of computing sequential dependencies does not bring significant improvements.
Table 4.5 Early Fusion and ERDM comparison using LM and BM25.

ERQ
              MAP      P@10     MRR      NDCG@20
EF-LM         0.251    0.15     0.3408   0.3508
EF-BM25       0.1939   0.1423   0.1783   0.2861
ERDM-LM       0.2611   0.1615   0.3151   0.3589
ERDM-BM25     0.2106   0.1462   0.2839   0.3257

COMPLEX
              MAP      P@10     MRR      NDCG@20
EF-LM         0.1703   0.0596   0.1839   0.2141
EF-BM25       0.1855   0.0719   0.1907   0.2454
ERDM-LM       0.1719   0.0789   0.2466   0.2492
ERDM-BM25     0.1955   0.0772   0.2257   0.248

RELink (381 queries)
              MAP      P@10     MRR      NDCG@20
EF-LM         0.0186   0.0063   0.0192   0.0249
EF-BM25       0.0203   0.0071   0.0227   0.0259
ERDM-LM       0.0213   0.0058   0.0273   0.0255
ERDM-BM25     0.0213   0.0061   0.0265   0.0275
On the other hand, we detect sensitivity to the retrieval function used. In ERQ, both
ERDM-LM and EF-LM outperform BM25 but the opposite happens for COMPLEX
and RELink. This sensitivity means that we cannot generalize the assumption that
one of the retrieval functions is more adequate for E-R retrieval.
Another important observation has to do with the overall lower results on the
RELink test collection in comparison with ERQ and COMPLEX. Contrary to our
expectations ClueWeb-09B has very low coverage of entity tuples relevant to the
RELink test collection.
We now present the results of comparing ERDM with three baselines using sequential
dependence, to evaluate the impact of modeling dependencies between query terms.
The first baseline method, BaseEE, consists of submitting two queries against the entity
index: QEi−1 + QRi−1,i and QRi−1,i + QEi. Entity pairs are created by the cross product of
the two entity result sets retrieved by each query. For each method we compute the
Sequential Dependence Model (SDM) [63] scores.
The second baseline method, BaseE, consists of submitting a single query Q against
the entity index used in ERDM. Entity pairs are created by the cross product of the
entity result set with itself. The third baseline method, BaseR, consists of
submitting a single query Q against an entity-pair index. This index is created using
the full sentence of each entity-pair co-occurrence in ClueWeb-09-B, instead of just
the separating string as in ERDM. This approach aims to capture any entity context
that might be present in a sentence; ERDM relies on the entity index for that purpose.
In this evaluation we decided not to cap the number of extractions used to compute
term frequencies inside each group of results returned from the first pass with
Lucene GroupingSearch. Due to the low coverage of ClueWeb for the entire RELink
collection, we decided to perform the evaluation using only the 100 queries with the
highest number of relevance judgments in our indexes. We also include results for the
adapted QALD-2 test collection.
Table 4.6 Results of ERDM compared with three baselines.

QALD-2
          MAP      P@10       MRR      NDCG@20
BaseEE    0.0087   0.0027     0.0093   0.0055
BaseE     0.0306   0.004684   0.0324   0.0363
BaseR     0.0872   0.01678    0.0922   0.0904
ERDM      0.1520   0.0405     0.1780   0.1661

ERQ
          MAP      P@10       MRR       NDCG@20
BaseEE    0.0085   0.004      0.00730   0.0030
BaseE     0.0469   0.01086    0.0489    0.038
BaseR     0.1041   0.05086    0.1089    0.1104
ERDM      0.3107   0.1903     0.37613   0.3175

COMPLEX
          MAP      P@10       MRR       NDCG@20
BaseEE    0.0035   0          0.00430   0
BaseE     0.0264   0.005      0.03182   0.1223
BaseR     0.0585   0.01836    0.0748    0.0778
ERDM      0.2879   0.1417     0.32959   0.3323

RELink (100 queries)
          MAP      P@10       MRR      NDCG@20
BaseEE    0.03     0.01       0.0407   0.02946
BaseE     0.0395   0.019      0.0679   0.03948
BaseR     0.0451   0.021      0.0663   0.07258
ERDM      0.1249   0.048      0.1726   0.1426
Table 4.6 presents the results of our experiments on each query set. We start by comparing the three baselines with each other. As follows from Table 4.6, the BaseR baseline outperforms BaseEE and BaseE on all query sets, while BaseEE is the worst performing baseline. BaseR is the only relationship-centric approach among the three baselines, as its document collection comprises entity-pairs that co-occurred in the ClueWeb-09-B corpus. BaseEE and BaseE retrieve entity pairs that are created in a post-processing step, which reduces the probability of retrieving relevant results. These results show the need for a relationship-centric document collection when aiming to answer entity-relationship queries.
ERDM significantly outperforms all baselines on all query sets. We performed statistical significance testing of MAP using ERDM against each baseline, obtaining p-values below 0.05 on all the query sets. These results show that our Early Fusion approach using two indexes (one for entities and another for relationships) is adequate and promising. We believe this approach can become a reference for future research in E-R retrieval from an IR-centric perspective.
Nevertheless, based on the absolute results obtained on each evaluation metric and for each query set, we can conclude that E-R retrieval is still very far from being a solved problem. There is room to explore new feature functions and retrieval approaches. This is a very difficult problem and the methods we proposed are still far from optimal performance. Queries such as “Find World War II flying aces and their services” or “Which mountain is the highest after Annapurna?” are examples of queries for which no relevant judgments were returned.
On the other hand, ERDM exhibits interesting performance in some queries with
high complexity, such as “Computer scientists who are professors at the university
where Frederick Terman was a professor.” We speculate about some aspects that might
influence performance.
One aspect has to do with the lack of query relaxation in our experimental setup. The relevant entity tuples might be in our indexes, but if the query terms used to search for entity tuples do not match the terms harvested from ClueWeb-09B, it is not possible to retrieve those relevant judgments. Query relaxation approaches should be tried in future work. More specifically, with the recent advances in word embeddings it is possible to expand queries with alternative query terms that exist in the indexes.
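As an illustration of this direction, the sketch below expands a relationship query with the nearest neighbours of each term in an embedding space trained on the retrieval corpus. It is only a minimal example of the idea: the embedding model, the loading path and the helper name expand_query are hypothetical and not part of our system.

from gensim.models import KeyedVectors

def expand_query(terms, vectors, topn=3):
    """Return the original terms plus their nearest neighbours in embedding space."""
    expanded = list(terms)
    for term in terms:
        if term in vectors.key_to_index:   # skip out-of-vocabulary terms
            for neighbour, _score in vectors.most_similar(term, topn=topn):
                if neighbour not in expanded:
                    expanded.append(neighbour)
    return expanded

# vectors = KeyedVectors.load("corpus_embeddings.kv")   # hypothetical path
# expand_query(["flying", "aces"], vectors)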
On the other hand, we adopted a very simple approach for extracting entities and relationships. The use of dependency parsing and more complex relation extraction methods would allow filtering out noisy terms. We also leave this for future work. Moreover, to further assess the influence of the extraction method, we propose to use
selective text passages containing the target entity pairs together with the associated query terms. Different extraction methods could then be tried, allowing a straightforward evaluation of their impact.
Fig. 4.4 Values of λ for ERDM: (a) all λ, (b) λ′E, (c) λ′R. (b) and (c) were obtained using sum normalization.
To understand how much importance is attributed to the different types of clique sets, we plot the values of the lambda parameters: the λE parameters represent the importance, in the overall ranking score for entity-pairs, of the feature functions targeting the dependence between entity query terms and the entity documents; the λR parameters represent the importance of the feature functions of the relationship type queries; and finally, λER is assigned to the feature function that evaluates whether each entity retrieved from the two entity type queries belongs to the entity-pair retrieved from the relationship type query.
We plot the feature weights learned on each query set, as depicted in Figure 4.4. We see that λER and λE_T (the weight of the unigram language model in the entity type queries) dominate the ranking function. We further evaluated the relative weights of each of the three SDM-like functions using a sum normalization of the three weights, for both entity documents and entity-pair documents. We observe that λE_T dominates on every query set; however, the same does not happen with λR_T. For relationship type queries the bigram features have higher values for COMPLEX and RELink.
4.4 Summary of the Contributions
In this chapter we presented the following contributions to the E-R retrieval research
area:
1. An indexing method that supports the generalization of entity types and entity-relationships to any attribute and predicate, respectively.
2. A semi-automatic method for generating E-R test collections, which resulted in
the RELink Query Collection comprising 600 E-R queries.
3. Results of experiments at scale, with a comprehensive set of queries and corpora.
Chapter 5
Entity Filtering and Financial Sentiment Analysis
In this chapter we present the work developed to tackle two fundamental Text Mining
problems in ORM: Entity Filtering and Sentiment Analysis. We start by describing
our participation at the Filtering task of RepLab 2013 [32]. We developed a supervised
method to classify tweets as relevant or non-relevant to a given target entity. This
method obtained the first place at the competition. Entity Filtering can be seen as
target based Named Entity Disambiguation (NED). Given a target entity under study,
we need to develop a binary classifier to filter out tweets that are not talking about the
target entity. This task is fundamental in ORM as downstream tasks such as Sentiment
Analysis or entity-centric predictions would produce misleading results if noisy signals
were used.
Sentiment Analysis has been widely studied over the last decade. It is a research
area with several ramifications as it is dependent on the type of texts and the objective
of the analysis. We decided to focus our efforts on a less explored sub-area of Sentiment Analysis. SemEval 2017 Task 5 focused on fine-grained sentiment analysis
of financial news and microblogs. As one of the use cases of ORM is to track the
online reputation of companies and try to assess its impact on the stock market we
decided it was a specific task within Sentiment Analysis in which we could make a
contribution. We obtained the fourth place in the Microblogs sub-task using one of
the evaluation metrics. The task consisted of predicting a real continuous variable
from -1.0 to +1.0 representing the polarity and intensity of sentiment concerning
companies/stocks mentioned in short texts. We modeled it as a regression analysis
problem.
5.1 Entity Filtering¹
The relationship between people and public entities has changed with the rise of social
media. Online users of social networks, blogs and micro-blogs are able to directly
express and spread opinions about public entities, such as politicians, artists, companies
or products. Online Reputation Monitoring (ORM) aims to automatically process
online information about public entities. Some of the common tasks within ORM consist of collecting, processing and aggregating social network messages to extract opinion trends about such entities.
Twitter, one of the most used online social networks, provides a search system that
allows users to query for tweets containing a set of keywords. ORM systems often use
Twitter as a source of information when monitoring a given entity. However, search
results are not necessarily relevant to that entity because keywords can be ambiguous.
For instance, a tweet containing the word “columbia” can be related with several
entities, such as a federal state, a city or a university. Furthermore, tweets are short, which results in reduced context for entity disambiguation. When monitoring the reputation of a given entity on Twitter, it is first necessary to guarantee that all tweets are relevant to that entity. Consequently, other processing tasks, such as sentiment analysis, will benefit from filtering out noise in the data stream.
In this work, we tackle the aforementioned problem by applying a supervised learning approach. Given a set of entities E = {e1, e2, ..., ei, ...} and a stream of texts S = {s1, s2, ..., si, ...} (e.g. tweets), we are interested in monitoring the mentions of an entity ei on the stream S, i.e. the discrete function fm(ei, S). We cast the prediction of fm as a supervised learning classification problem, in which we want to infer the target variable f̂m(ei, S) ∈ {0, 1}.
We implemented a large set of features that can be generated to describe the relationship between an entity representation and a text mention. We use metadata (e.g. entity names, category) provided in the user configurations, text represented with TF-IDF, similarity between texts and Wikipedia, Freebase entity disambiguation, feature selection of terms based on frequency and feature matrix transformation using SVD. The learning algorithms from the scikit-learn Python library that were tested for Entity Filtering include Naive Bayes, SVM, Random Forests, Logistic Regression and MultiLayer Perceptron.
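To make this setup concrete, the following is a minimal sketch of how such a pipeline could be assembled with scikit-learn; it is an illustration rather than the exact TexRep implementation, and the data loader is a hypothetical placeholder.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier

filtering_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3), min_df=2)),   # unigram to trigram TF-IDF
    ("svd", TruncatedSVD(n_components=100)),                    # feature matrix transformation
    ("clf", RandomForestClassifier(n_estimators=500)),          # one of the tested learners
])

# tweets, labels = load_replab_training_data()   # hypothetical loader; 1 = "Related", 0 = "Unrelated"
# filtering_clf.fit(tweets, labels)
# predictions = filtering_clf.predict(test_tweets)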
¹ Most of the material contained in this section was published in P. Saleiro, E. M. Rodrigues, C. Soares, E. Oliveira, “TexRep: A Text Mining Framework for Online Reputation Monitoring” [14].
5.1.1 Task Overview
RepLab 2013 [32] focused on monitoring the online reputation of entities on Twitter. The Filtering task consisted of determining which tweets are relevant to each entity. The corpus consists of a collection of tweets obtained by querying the Twitter Search API with 61 entity names during the period from June 2012 until December 2012. The corpus contains tweets both in English and Spanish. The balance between both languages varies for each entity. Tweets were manually annotated as “Related” or “Unrelated” to the respective target entity.
The data provided to participants consists of tweets and a list of 61 entities. For each tweet in the corpus we have the target entity id, the language of the tweet, the timestamp and the tweet id. The content of each URL in the tweets is also provided. Due to Twitter’s terms of service, the participants were responsible for downloading the tweets using the respective ids. The data related to each entity contains the query used to collect the tweets (e.g. “BMW”), the official name of the entity (e.g. “Bayerische Motoren Werke AG”), the category of the entity (e.g. “automotive”), the content of its homepage and both Wikipedia articles in English and Spanish.
5.1.2 Pre-processing
The Entity Filtering module includes methods to normalize texts by removing all punctuation, converting text to lower case, removing accents and converting non-ASCII characters to their ASCII equivalents. Lists of stop words for several languages are also available and are used to filter out non-relevant words. We rely on the Natural Language Toolkit (NLTK) to provide those lists.
Contrary to other types of online text (e.g. news or blog posts), tweets contain informal and non-standard language, including emoticons, spelling errors, wrong letter casing, unusual punctuation and abbreviations. Therefore, when dealing with tweets, the Entity Filtering module uses a tokenizer [176] optimized for segmenting words in tweets. After tokenization we extract user mentions, URLs and the textual content of hashtags.
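A minimal sketch of this normalization step is shown below, assuming the NLTK stop word lists are available locally; the exact rules used by the module may differ.

import string
import unicodedata
from nltk.corpus import stopwords   # requires a prior nltk.download("stopwords")

def normalize(text, language="english"):
    # map accented and other non-ASCII characters to their closest ASCII equivalent
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    # lowercase and strip punctuation
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    # drop stop words for the requested language
    stops = set(stopwords.words(language))
    return [token for token in text.split() if token not in stops]

# normalize("Olá, a BMW é ótima!", language="portuguese")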
5.1.3 Features
Many different types of features can be used to optimize relevance classification, including language models, keyword similarities between tweets and entities, as well as projections of external resources. We implemented a large number of those. We assume
that future users of our framework for ORM will provide entity-specific data (e.g.
homepage/Wikipedia content) prior to training and configuring the Entity Filtering
module.
Language Model: text is encapsulated in a single feature to avoid high dimensionality issues when adding other features. A TF-IDF representation of unigrams, bigrams and trigrams is used to train a text classifier which estimates the probability of a text being related to the target entity. The output probabilities of the classifier are used as a feature.
Keyword similarity: similarity scores between entity metadata and the texts, obtained by calculating the ratio of the number of common terms between the text and the terms of the query and entity name. Similarities at character level are also available in order to account for possible spelling errors in the text.
Web similarity: similarity between the text and the normalized content of the entity’s homepage and normalized Wikipedia articles is also available. The similarity value is the number of common terms multiplied by the logarithm of the number of terms in the tweet.
Freebase: For each keyword of the entity’s query that exists in the text, two bigrams are created, containing the keyword and the previous/subsequent word. These bigrams are submitted to the Freebase Search API and the list of retrieved entities is compared with the id of the target entity on Freebase. A Freebase score is computed using the inverse position of the target entity in the list of retrieved results: if the target entity is the first result the score is 1, if it is the second the score is 0.5, and so on. If the target entity is not in the results list, the score is zero. The feature corresponds to the maximum score over the extracted bigrams of each text (see the sketch after this list).
Category classifier: a sentence category classifier is created using the Wikipedia
articles of each entity. Each sentence of the Wikipedia articles is annotated with
the category of the corresponding entity. TF-IDF for unigrams, bigrams and
trigrams are calculated and a multi-class classifier (SVM) is trained to classify
each text. The feature is the probability of the text being relevant to its target
class.
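For illustration, the sketch below reproduces the reciprocal-rank scoring described for the Freebase feature. It is only a sketch: the Freebase Search API has since been shut down, so search_entities is a hypothetical callable returning a ranked list of entity ids.

def reciprocal_rank(results, target_id):
    # score 1 for the first position, 1/2 for the second, ... and 0 if the target is absent
    for position, entity_id in enumerate(results, start=1):
        if entity_id == target_id:
            return 1.0 / position
    return 0.0

def freebase_feature(tokens, query_keywords, target_id, search_entities):
    # search_entities(text) is a hypothetical stand-in for the (now discontinued) Freebase Search API
    scores = [0.0]
    for i, token in enumerate(tokens):
        if token in query_keywords:
            bigrams = []
            if i > 0:
                bigrams.append((tokens[i - 1], token))
            if i + 1 < len(tokens):
                bigrams.append((token, tokens[i + 1]))
            for bigram in bigrams:
                scores.append(reciprocal_rank(search_entities(" ".join(bigram)), target_id))
    return max(scores)   # feature value: best score over all extracted bigrams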
5.1.4 Experimental Setup
The dataset used for the competition consists of a collection of tweets both in English
and Spanish, possibly relevant to 61 entities from four domains: automotive, banking,
universities and music. The tweets were obtained by querying the Twitter Search API with the 61 entity names during the period from June 2012 until December 2012. The balance between both languages varies for each entity.

Table 5.1 RepLab 2013 Filtering Task dataset description.

Dataset        Related   Unrelated   Total
Training       33,193    10,389      43,582
Development    26,534     8,307      34,841
Validation      6,659     2,082       8,741
Test           75,470    21,378      96,848
The complementary data about each target entity is the following:
• query used to collect the tweets (e.g. “BMW”)
• official name of the entity (e.g. “Bayerische Motoren Werke AG”)
• category of the entity (e.g. “automotive”)
• content of entity homepage
• Wikipedia article both in English and Spanish
Tweets were manually annotated as “Related” or “Unrelated” to the respective
target entity. The dataset is divided into training, test and development sets (Table 5.1). The training set consists of a total of 45,671 tweets, of which we were able to download 43,582. Approximately 75% of the tweets in the training set are labeled as “Related”. We split the training dataset into a development set and a validation set, containing 80% and 20% of the original, respectively. We adopted a randomly stratified split approach per entity, i.e., we group the tweets of each target entity and randomly split them preserving the balance of “Related”/“Unrelated” tweets. The test dataset consists of 90,356 tweets, of which we were able to download 88,934.
We used the development set for trying new features and testing algorithms. We divided the development set into 10 folds generated with the randomly stratified approach. We used the validation set to validate the results obtained in the development set. The purpose of this validation step is to evaluate how well the Entity Filtering classifier generalizes from its training data to the validation data and thus estimate how well it will generalize to the test set. It allows us to spot overfitting. After validation, we trained the classifier using all of the data in the training dataset and evaluated it on the test set.
5.1.5 Results
We created different classifier runs using different learners and features, and we also created entity-specific models, as explained in Table 5.2 [177]. We applied feature selection based on frequency and transformation of the content representation using SVD. The learners tested include Naive Bayes (NB), SVM, Random Forests (RF), Logistic Regression (LR) and MultiLayer Perceptron (MLP). The evaluation measures used are accuracy and the official metric of the competition, F-measure, which is the harmonic mean of Reliability and Sensitivity [178]. We present results for the top 4 models regarding F-measure. We replicated the best system at RepLab 2013 in run 1.
Table 5.2 Entity filtering versions description.

Run   Learner   Features   No. of models
1     SVM       All        global
2     RF        All        global
3     RF        All        per entity
Table 5.3 shows the results of the top performing runs and the official baseline of the competition. This baseline classifies each tweet with the label of the most similar tweet of the target entity in the training set, using the Jaccard similarity coefficient. The baseline results were obtained using 99.5% of the test set.
Table 5.3 Official results for each version plus our validation set accuracy.

Run                 Acc. (Val. Set)   Acc.     R        S        F-measure
1                   0.944             0.906    0.759    0.428    0.470
2                   0.945             0.908    0.729    0.451    0.488
3                   0.948             0.902    0.589    0.444    0.448
Official Baseline   -                 0.8714   0.4902   0.3199   0.3255
Best RepLab         -                 0.908    0.729    0.451    0.488
Based on the results achieved we are able to conclude that the models of our
classifier are able to generalize successfully. Results obtained in the validation set
are similar to those obtained in the test set. During development, solutions based
on one model per entity were consistently outperformed by solutions based on global
models. We also noticed during development that language specific models (English
and Spanish) did not exhibit improvements in global accuracy, therefore we opted to
use language as a feature. Results show that the best model uses the Random Forests
classifier with 500 estimators to train a global model. Note that the Language Model feature encapsulates the text using a specific model trained only with TF-IDF of tweet n-grams.
We performed a “break down” analysis for each one of the four categories of RepLab 2013 using the Run 2 model, as depicted in Figure 5.1. We observe that the University, Banking and Automotive categories exhibit similar average F-measure results, all above 0.50. In contrast, the results for Music show it is a rather difficult category of entities to disambiguate (achieving an F-measure of 0.39). In fact, some of the entity names of this category contain very ambiguous tokens, such as “Alicia Keys”, “U2”, “The Wanted” or “The Script”.
Fig. 5.1 Results grouped by entity’s category using Run 2.
The main goal of this task was to classify tweets as relevant or not to a given target entity. We have explored several types of features, namely keyword similarities and language models, and we have also explored external resources such as Freebase and Wikipedia. Results show that it is possible to achieve an accuracy over 0.90 and an F-measure of 0.48 on a test set containing more than 90,000 tweets of 61 entities. In future work, we expect to include the possibility of using entity-specific embeddings to learn a joint embedding space of entities and words, similar to [91].
5.2 Financial Sentiment Analysis²
Sentiment Analysis on financial texts has received increased attention in recent years [12].
Nevertheless, there are some challenges yet to overcome [13]. Financial texts, such as
microblogs or newswire, usually contain highly technical and specific vocabulary or
jargon, making the development of specific lexical and machine learning approaches
necessary. Most of the research in Sentiment Analysis in the financial domain has focused on analyzing subjective text, labeled with explicitly expressed sentiment. However, it is also common to express financial sentiment in an implicit way. Business news stories often refer to events that might indicate a positive or negative impact, such as in the news title “company X will cut 1000 jobs”. Economic indicators, such as unemployment, and changes over time, such as a drop or increase, can also provide clues about the implicit sentiment [179]. Contrary to explicit expressions (subjective
utterances), factual text types often contain objective statements that convey a desirable
or undesirable fact [92].
Recent work proposes to consider all types of implicit sentiment expressions [180]. The authors created a fine-grained sentiment annotation procedure to identify polar expressions (implicit and explicit expressions of positive and negative sentiment). A target (the company of interest) is identified in each polar expression in order to determine which sentiment expressions are relevant. The annotation procedure also collected information about the polarity and the intensity of the sentiment expressed towards the target. However, there is still no automatic approach, either lexical-based or machine learning based, that tries to model this annotation scheme.
In this work, we propose to tackle the aforementioned problem by taking advantage
of unsupervised learning of word embeddings in financial tweets and financial news
headlines to construct a domain-specific syntactic and semantic representation of words.
We combine bag-of-embeddings with traditional approaches, such as pre-processing
techniques, bag-of-words and financial lexical-based features to train a regressor for
sentiment polarity and intensity. We study how different regression algorithms perform
using all features in two different sub-tasks at SemEval-2017 Task 5: microblogs and
news headlines mentioning companies/stocks. Moreover, we compare how different
combinations of features perform in both sub-tasks. The system source code and word
embeddings developed for the competition are publicly available.³
² The material contained in this section was published in P. Saleiro, E. M. Rodrigues, C. Soares, E. Oliveira, “FEUP at SemEval-2017 Task 5: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings” [18].
³ https://github.com/saleiro/Financial-Sentiment-Analysis
5.2.1 Task Overview
Task 5 of SemEval 2017 [181] consisted of fine-grained sentiment analysis of short financial texts and was divided into two sub-tasks based on the type of text. Sub-task 5.1 – Microblogs – consisted of stocktwits and tweets focusing on stock market events and assessments from investors and traders. Companies/stocks were identified using stock symbols, the so-called cashtags, e.g. “$AMZN” for the company Amazon.com, Inc. Sub-task 5.2 – News Headlines – consisted of sentences extracted from Yahoo Finance and other financial news sources on the Internet. In this case, companies/stocks were identified using their canonical names and were previously annotated by the task organizers.
Table 5.4 Training set examples for both sub-tasks.

Sub-task           Company    Text Span                                      Sentiment Score
5.1 - Microblogs   JPMorgan   “its time to sell banks”                       -0.763
5.2 - Headlines    Glencore   “Glencore’s annual results beat forecasts”     +0.900
The goal of both sub-tasks was the following: predict the sentiment polarity and
intensity for each of the companies/stocks mentioned in a short text instance (microblog
message or news sentence). The sentiment score is a real continuous variable in the range
of -1.0 (very negative/bearish) to +1.0 (very positive/bullish), with 0.0 designating
neutral sentiment. Table 5.4 presents two examples from the training set. Task
organizers provided 1700 microblog messages for training and 800 messages for testing
in sub-task 5.1, while in sub-task 5.2, 1142 news sentences were provided for training
and 491 for testing. Submissions were evaluated using the cosine similarity [181].
5.2.2 Financial Word Embeddings
Mikolov et al. [182] created word2vec, a computationally efficient method to learn
distributed representation of words, where each word is represented by a distribution of
weights (embeddings) across a fixed set of dimensions. Furthermore, Mikolov et al. [100]
showed that this representation is able to encode syntactic and semantic similarities in
the embedding space.
The training objective of the skip-gram model, defined by Mikolov et al. [100], is to learn the target word representation (embeddings) that maximizes the prediction of its surrounding words in a context window. Given the word w_t in the vocabulary, the objective is to maximize the average log probability:
\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \leq j \leq c,\, j \neq 0} \log P(w_{t+j} \mid w_t) \qquad (5.1)
where c is the size of the context window, T is the total number of words in the vocabulary and w_{t+j} is a word in the context window of w_t. After training, a low-dimensional embedding matrix E encapsulates information about each word in the vocabulary and its use (surrounding contexts).
We used word2vec to learn word embeddings in the context of financial texts using unlabeled tweets and news headlines mentioning companies/stocks from the S&P 500. Tweets were collected using the Twitter streaming API, with the cashtags of stock titles serving as request parameters. The Yahoo Finance API was used for requesting financial news feeds by querying the canonical names of companies/stocks. The datasets comprise a total of 1.7M tweets and 626K news titles.
We learned separate word embeddings for tweets and news headlines using the skip-gram model. We tried several configurations of word2vec hyperparameters. The setup resulting in the best performance in both sub-tasks was skip-gram with 50 dimensions, removing words occurring fewer than 5 times, using a context window of 5 words and 25 negative samples per positive example.
Even though the text collections for training the embeddings were relatively small, the resulting embedding space exhibited the ability to capture semantic word similarities in the financial context. We performed simple algebraic operations to capture semantic relations between words, as described in Mikolov et al. [113]. For instance, the skip-gram model trained on tweets shows that vector(“bearish”) − vector(“loss”) + vector(“gain”) results in vector(“bullish”) as the most similar word representation.
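As a minimal sketch of this training setup, the snippet below uses gensim with the hyperparameters reported above and checks the analogy just mentioned; the tweet loader is a hypothetical placeholder and the exact implementation used for the competition may differ.

from gensim.models import Word2Vec

tokenized_tweets = load_tokenized_tweets()   # hypothetical loader over the 1.7M collected tweets

model = Word2Vec(
    sentences=tokenized_tweets,
    sg=1,             # skip-gram
    vector_size=50,   # 50 dimensions
    window=5,         # context window of 5 words
    min_count=5,      # remove words occurring fewer than 5 times
    negative=25,      # 25 negative samples per positive example
)

# Semantic algebra in the financial embedding space, as in the example above:
# vector("bearish") - vector("loss") + vector("gain") ~ vector("bullish")
print(model.wv.most_similar(positive=["bearish", "gain"], negative=["loss"], topn=1))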
5.2.3 Approach
In this section we describe the implementation details of the proposed approach.
Pre-Processing
A set of pre-processing operations are applied to every microblog message and news
sentence in the training/test sets of sub-tasks 5.1 and 5.2, as well as in the external
collections for training word embeddings:
• Character encoding and stopwords: every message and headline was encoded in UTF-8. Standard English stopword removal is also applied.
• Company/stock and cash obfuscation: both cashtags and canonical company name strings were replaced by the string _company_. Dollar or Euro signs followed by numbers were replaced by the string _cash_amount_.
• Mapping numbers and signs: numbers were mapped to strings using bins (0-10, 10-20, 20-50, 50-100, >100). Minus and plus signs were converted to minus and plus, and “B” and “M” to billions and millions, respectively. The % symbol was converted to percent. Question and exclamation marks were also converted to strings.
• Tokenization, punctuation, lowercasing: tokenization was performed using Twokenizer [183], the remaining punctuation was removed and all characters were converted to lowercase.
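A rough sketch of the number and sign mapping is given below; the bin edges follow the description above, but the replacement strings for question and exclamation marks are assumptions, and the exact rules of the original system may differ.

import re

SIGN_MAP = {"%": " percent ", "+": " plus ", "-": " minus ",
            "?": " question_mark ", "!": " exclamation_mark "}   # the ?/! strings are assumptions

def bin_number(match):
    value = float(match.group())
    for low, high in [(0, 10), (10, 20), (20, 50), (50, 100)]:
        if low <= value < high:
            return "{}-{}".format(low, high)
    return ">100"

def map_numbers_and_signs(text):
    for sign, replacement in SIGN_MAP.items():
        text = text.replace(sign, replacement)
    text = re.sub(r"(?<=\d)B\b", " billions", text)   # e.g. "3B" -> "3 billions"
    text = re.sub(r"(?<=\d)M\b", " millions", text)   # e.g. "120M" -> "120 millions"
    return re.sub(r"\d+(\.\d+)?", bin_number, text)   # map each number to its bin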
Features
We combined three different groups of features: bag-of-words, lexical-based features and bag-of-embeddings.
• Bag-of-words: we apply standard bag-of-words as features. We tried unigrams,
bi-grams and tri-grams with unigrams proving to obtain higher cosine similarity
in both sub-tasks.
• Sentiment lexicon features: we incorporate knowledge from manually curated
sentiment lexicons for generic Sentiment Analysis as well as lexicons tailored
for the financial domain. The Loughran-McDonald financial sentiment dictionary [184] has several types of word classes: positive, negative, constraining, litigious, uncertain and modal. For each word class we create a binary feature for the match with a word in a microblog/headline, and a polarity score feature (positives minus negatives, normalized by the text span length). As a general-purpose sentiment lexicon we use MPQA [185] and created binary features for positive, negative and neutral words, as well as the polarity score feature.
• Bag-of-Embeddings: we create bag-of-embeddings by taking the average of
word vectors for each word in a text span. We used the corresponding embedding
matrix trained on external Twitter and Yahoo Finance collections for sub-task
5.1 and sub-task 5.2, respectively.
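The bag-of-embeddings feature can be sketched as a simple average of word vectors over the tokens of a text span, for example as below, using the financial embeddings trained earlier; this is an illustration rather than the exact competition code.

import numpy as np

def bag_of_embeddings(tokens, keyed_vectors):
    # average the vectors of the in-vocabulary tokens of a text span
    vectors = [keyed_vectors[token] for token in tokens if token in keyed_vectors]
    if not vectors:                                    # no in-vocabulary token at all
        return np.zeros(keyed_vectors.vector_size)
    return np.mean(vectors, axis=0)

# bag_of_embeddings(["glencore", "beat", "forecasts"], model.wv)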
5.2.4 Experimental Setup
In order to avoid overfitting we created a validation set from the original training datasets provided by the organizers. We used an 80%-20% split and sampled the validation set using the same distribution as the original training set: we sorted the examples in the training set by the target variable values and skipped every 5 examples. Results are evaluated using cosine similarity [181] and Mean Absolute Error (MAE). The former gives more importance to differences in the polarity of the predicted sentiment, while the latter is concerned with how well the system predicts the intensity of the sentiment.
We opted to model both sub-tasks as single regression problems. Three different regressors were applied: Random Forests (RF), Support Vector Machines (SVM) and MultiLayer Perceptron (MLP). Parameter tuning was carried out using 10-fold cross-validation on the training sets.
5.2.5 Results and Analysis
In this section we present the experimental results obtained in both sub-tasks. We provide a comparison of different learning algorithms using all features, as well as a comparison of different subsets of features, to understand the information contained in each of them and also how they complement each other.

Task 5.1 - Microblogs

Table 5.5 presents the results obtained using all features on both the validation and test sets. Results in the test set are worse than in the validation set, with the exception of MLP. The official score obtained in sub-task 5.1 was 0.6948 using Random Forests (RF), which is the regressor that achieves the highest cosine similarity and lowest MAE in both the training and validation sets.
Table 5.5 Microblog results with all features on validation and test sets.

Regressor   Set    Cosine   MAE
RF          Val    0.7960   0.1483
RF          Test   0.6948   0.1886
SVR         Val    0.7147   0.1944
SVR         Test   0.6227   0.2526
MLP         Val    0.6720   0.2370
MLP         Test   0.6789   0.2132
We compared the results obtained with different subsets of features using the
best regressor, RF, as depicted in Table 5.6. Interestingly, bag-of-words (BoW) and
bag-of-embeddings (BoE) complement each other, obtaining better cosine similarity
than the system using all features. Financial word embeddings (BoE) capture relevant
information regarding the target variables. As a single group of features it achieves
a cosine similarity of 0.6118 and an MAE of 0.2322. It is also able to boost the overall performance of BoW, with gains of more than 0.06 in cosine similarity and a reduction of more than 0.03 in MAE.
The individual group of features with the best performance is bag-of-words, while the worst is the system trained using only Lex (lexical-based features). While Lex alone exhibits poor, though not negligible, performance, when combined with another group of features it improves the results of the latter, as in the case of BoE + Lex and BoW + Lex.
Table 5.6 Features performance breakdown on test set using RF.

Features    Cosine   MAE
Lex         0.3156   0.3712
BoE         0.6118   0.2322
BoW         0.6386   0.2175
BoE + Lex   0.6454   0.2210
BoW + Lex   0.6618   0.2019
BoW + BoE   0.7023   0.1902
All         0.6948   0.1886
Task 5.2 - News Headlines
Results obtained on news headlines are very different from those of the previous sub-task, showing that predicting sentiment polarity and intensity in news headlines is a completely different problem compared to microblogs. Table 5.7 shows that MLP obtains the best results in the test set using both metrics, while SVR obtains the best performance in the validation set. The best regressor of sub-task 5.1, RF, is outperformed by both SVR and MLP. The official result obtained at sub-task 5.2 was a cosine similarity of 0.68 using MLP.
Table 5.8 shows the results of the different groups of features in sub-task 5.2 for the MLP regressor. The most evident observation is that word embeddings are not effective in this scenario. On the other hand, lexical-based features have significantly better performance on news headlines than on microblogs. Despite this, the best results are obtained using all features.
Table 5.7 News Headlines results with all features on validation and test sets.

Regressor   Set    Cosine   MAE
RF          Val    0.5316   0.2539
RF          Test   0.6562   0.2258
SVR         Val    0.6397   0.2422
SVR         Test   0.6621   0.2424
MLP         Val    0.6176   0.2398
MLP         Test   0.6800   0.2271

Table 5.8 Features performance breakdown on test set using MLP.

Features    Cosine   MAE
BoE         0.0383   0.3537
Lex         0.5538   0.2788
BoW         0.6420   0.2364
BoE + Lex   0.5495   0.2830
BoW + Lex   0.6733   0.2269
BoW + BoE   0.6417   0.2389
All         0.6800   0.2271
Analysis
Financial word embeddings were able to encapsulate valuable information in sub-task
5.1 - Microblogs but not so much in the case of sub-task 5.2 - News Headlines. We
hypothesize that as we had access to a much smaller dataset (∼ 600K) for training
financial word embeddings for news headlines, this resulted in reduced ability to capture
semantic similarities in the financial domain. Other related works in Sentiment Analysis
usually take advantage of a much larger dataset for training word embeddings [186].
On the other hand, lexical features showed poor performance on microblog texts but seem to be very useful for news headlines. Since microblogs contain poor grammar, slang and informal language, financial lexicons created from well-written and formal financial reports work better on news headlines than on microblog texts.
After inspecting microblog texts and headlines in which our models showed poor
performance we believe it would be important to also encapsulate syntactic and semantic
dependencies in our models. For instance, our model predicted a sentiment score of
-0.467 for the microblog message “was right to reject the offer” while the true value is
0.076. Similar examples include “Glencore shares in record crash as profit fears grow”
and “I would rather be a buyer at these levels then trying to sell”, in which our models
have absolute errors around 0.5. Other types of errors have to do with the intensity of the sentiment, where our model correctly predicts the polarity but still has a large error.
5.2.6 Concluding Remarks
The work reported here is concerned with the problem of predicting the sentiment polarity and intensity of short financial texts. Previous work showed that sentiment is often expressed in an implicit way in this domain. We created financial-specific continuous word representations in order to obtain domain-specific syntactic and semantic relations between words. We combined traditional bag-of-words and lexical-based features with bag-of-embeddings to train a regressor for both sentiment polarity and intensity. Results show that different combinations of features attained different performance on each sub-task. Future work will consist of collecting larger external datasets for training financial word embeddings for both microblogs and news headlines. We also plan to perform the regression analysis using Deep Neural Networks.
5.3 Summary of the Contributions
In this chapter we present some contributions to two fundamental Text Mining problems
in ORM.
• A supervised learning approach for Entity Filtering on tweets, achieving state-of-the-art performance using a relatively small training set.
• Word embeddings trained from financial texts, created and made publicly available.
• A supervised learning approach for fine-grained sentiment analysis of financial
texts.
Chapter 6
Text-based Entity-centric Prediction
In this chapter we explore the predictive power of entity-centric information in online
news and social media in the context of ORM. We address two different predictive tasks.
The first is concerned with predicting entity popularity on Twitter based on signals
extracted from the news cycle. We aim to study different sets of signals extracted from online news mentioning specific entities that could influence, or at least be correlated with, the future popularity of those entities on Twitter. We know that entity popularity on social media can be influenced by several factors, but we are only interested in exploring the interplay between online news and social media for entities that are frequently mentioned in the news cycle, such as politicians or footballers. This could be particularly interesting for anticipating public relations damage control once a controversial news article is published, or even for editorial purposes, to maximize buzz on social media.
The second predictive task consists of using entity-centric sentiment polarity extracted from tweets to predict political polls. There have been several research works trying to assess the predictive power of social media to predict the outcome of political opinion surveys or elections. However, each study proposes its own method of aggregating polarity scores over time, and there is no consensus on which sentiment aggregate function is the most adequate for this problem. We propose to
use and contrast several sentiment aggregate functions reported in the literature, by
assessing their predictive power on a specific case comprising data collected during the
Portuguese bailout (2011-2013).
6.1 Exploring Online News for Reputation Monitoring on Twitter¹
Online publication of news articles has become standard behavior for news outlets, while the public has joined the movement using either desktop or mobile terminals. The resulting setup consists of a cooperative dialog between news outlets and the public at large. The latest events are covered and commented on by both parties on a continuous basis through social media, such as Twitter. When sharing or commenting on news on social media, users tend to mention the most predominant entities mentioned in the news story. Therefore, entities, such as public figures, organizations, companies or geographic locations, can act as latent connections between online news and social media.
Online Reputation Monitoring (ORM) focuses on continuously tracking what is
being said about entities on social media and online news. Automatic collection and
processing of comments and opinions on social media is now crucial to understand
the reputation of individuals and organizations and therefore to manage their public
relations. However, ORM systems would be even more useful if they were able to know in advance whether social media users will talk a lot about the target entities or not.
We hypothesize that for entities that are frequently mentioned in the news (e.g. politicians) it is possible to establish a predictive link between online news and popularity on social media. We cast the problem as a supervised learning classification approach: deciding whether popularity will be high or low based on features extracted from the news cycle. We define four sets of features: signal, textual, sentiment and semantic. We aim to answer the following research questions:
• Is online news a valuable source of information to effectively predict entity
popularity on Twitter?
• Do online news carry different predictive power based on the nature of the entity
under study?
• How do different thresholds for defining high and low popularity affect the
effectiveness of our approach?
• Does the performance remain stable for different prediction times?
• What is the most important feature set for predicting entity popularity on Twitter
based on the news cycle?
¹ The material contained in this section was published in P. Saleiro and C. Soares, “Learning from the News: Predicting Entity Popularity on Twitter” [19].
• Do individual sets of features exhibit different importance for different entities?
6.1.1 Approach
The starting point of our hypothesis is that for entities that are frequently mentioned
on the news (e.g. politicians) it is possible to predict popularity on social media using
signals extracted from the news cycle. The first step towards a solution requires the
definition of entity popularity on social media.
Entity Popularity
There are different ways of expressing the notion of popularity on social media. For example, the classical way of defining it is through the number of followers of a Twitter account or the number of likes on a Facebook page. Another notion of popularity, associated with entities, consists of the number of retweets or replies on Twitter and post likes and comments on Facebook. We define entity popularity based on named entity mentions in social media messages. Mentions consist of specific surface forms of an entity name. For example, “Cristiano Ronaldo” might also be mentioned using just “Ronaldo” or “#CR7”.
Given a set of entities E = {e1 , e2 , ..., ei , ...}, a daily stream of social media messages
S = {s1 , s2 , ..., si , ...} and a daily stream of online news articles N = {n1 , n2 , ..., ni , ...}
we are interested in monitoring the mentions of an entity ei on the social media stream
St , i.e. the discrete function fm (ei , St ). Let T be a daily time frame T = [tp , tp+h ],
where the time tp is the time of prediction and tp+h is the prediction horizon time. We
want to learn a target popularity function fp on social media stream S as a function of
the given entity ei , the online news stream N and the time frame T :
f_p(e_i, N, T) = \sum_{t=t_p}^{t_{p+h}} f_m(e_i, S_t)
which corresponds to integrating fm (ei , S) over T .
Given a day di , a time of prediction tp , we extract features from the news stream
N until tp and predict fp until the prediction horizon tp + h. We measure popularity
on a daily basis, and consequently we adopted t_{p+h} as 23:59:59 every day. For example, if tp equals 8 a.m., we extract features from N until 07:59:59 and predict fp in the interval 08:00 - 23:59:59 of day di. In the case of tp equal to midnight, we extract
features from N over the 24 hours of the previous day d_{i−1} to predict fp for the 24 hours of di.
We cast the prediction of fp (ei , N, T ) as a supervised learning classification problem,
in which we want to infer the target variable fˆp (ei , N, T ) ∈ {0, 1} defined as:
\hat{f}_p = \begin{cases} 0 \ (\text{low}), & \text{if } P(f_p(e_i, N, T) \leq \delta) = k \\ 1 \ (\text{high}), & \text{if } P(f_p(e_i, N, T) > \delta) = 1 - k \end{cases}
where δ is the inverse of the cumulative distribution function of fp(ei, N, T) at k, as measured in the training set, an approach similar to Tsagkias et al. [119]. For instance, k = 0.5 corresponds to the median of fp(ei, N, T) in the training set, and higher values of k mean that fp(ei, N, T) has to be higher than a fraction k of the examples in the training set in order to consider f̂p = 1, resulting in a reduced number of training examples of the positive class (high).
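A minimal sketch of this labeling rule is shown below: δ is the k-quantile of the popularity values observed in the training set, and a day is labeled high only if its popularity exceeds δ; the function name is illustrative only.

import numpy as np

def label_popularity(fp_train, fp_value, k=0.5):
    # delta is the inverse CDF (k-quantile) of f_p measured on the training set
    delta = np.quantile(fp_train, k)
    return 1 if fp_value > delta else 0   # 1 = "high" popularity, 0 = "low"

# label_popularity(fp_train=[120, 80, 300, 45, 90], fp_value=250, k=0.5)  # -> 1 (250 > median of 90)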
News Features
Previous work has focused on the influence of characteristics of the social media stream
S in the adoption and popularity of memes and hashtags [126]. In contrast, the main
goal of this work is to investigate the predictive power of the online news stream N .
Therefore we extract four types of features from N which we label: (i) signal, (ii)
textual, (iii) sentiment and (iv) semantic, as depicted in Table 6.1. One important issue is how to filter the news items relevant to ei. There is no consensus on how to link a news stream N with a social media stream S. Some works use URLs from N, shared on S, to simultaneously filter relevant news articles and social media messages [117]. As our work is entity-oriented, we select news articles with mentions of ei as our relevant subset of N.
Signal Features - This type of feature depicts the “signal” of the news cycle mentioning ei. We include a set of counting variables as features, focusing on the total number of news mentioning ei in specific time intervals, mentions in news titles, the average length of news articles and the number of different news outlets that published news mentioning ei, as well as features specific to the day of the week to capture any seasonal trend in popularity. The idea is to capture the dynamics of news events: for instance, if ei has a sudden peak of mentions on N, a relevant event might have happened, which may influence fp.
Textual features - To collect textual features we build a daily profile of the news cycle
by aggregating all titles of online news articles mentioning ei for the daily time frame
[0, tp ] in di . We select the top 10,000 most frequent terms (unigrams and bi-grams) in
the training set and create a document-term matrix R. Two distinct methods were applied to capture textual features.
The first method is to apply TF-IDF weighting to R. We employ Singular Value Decomposition (SVD) to capture similarity between terms and reduce dimensionality. It computes a low-dimensional linear approximation σ. The final set of features for training and testing is the TF-IDF weighted term-document matrix R combined with σR, which produces 10 real-valued latent features. When testing, the system uses the same 10,000 terms from the training data and calculates TF-IDF using the IDF from the training data, as well as σ for applying SVD to the test data.
The second method consists of applying Latent Dirichlet Allocation (LDA) to generate a topic model with 10 topics (features). The system learns a topic-document distribution θ and a word distribution over topics φ using the training data for a given entity ei. When testing, the system extracts the word distribution of the news title vector r on a test day d′i. Then, using the φ learned on the training data, it calculates the probability of r belonging to each of the 10 topics learned before. The objective of extracting this set of features is to create a characterization of the news stream that mentions ei, namely which are the most salient terms and phrases on each day di, as well as the latent topics associated with ei. By learning our classifier we hope to capture correlations between certain terms and topics and fp.
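For illustration, both groups of textual features could be extracted with scikit-learn roughly as follows; load_daily_title_profiles and the entity variable are hypothetical placeholders, and the original system may rely on different implementations.

from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

daily_title_profiles = load_daily_title_profiles(entity)   # hypothetical: one aggregated title string per day

# TF-IDF over the 10,000 most frequent unigrams and bigrams, reduced to 10 latent features via SVD
tfidf = TfidfVectorizer(ngram_range=(1, 2), max_features=10000)
R = tfidf.fit_transform(daily_title_profiles)
svd_features = TruncatedSVD(n_components=10).fit_transform(R)

# LDA topic model with 10 topics over raw term counts of the same vocabulary size
counts = CountVectorizer(ngram_range=(1, 2), max_features=10000).fit_transform(daily_title_profiles)
lda_features = LatentDirichletAllocation(n_components=10).fit_transform(counts)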
Sentiment features - We include several types of word-level sentiment features. The assumption here is that subjective words in the news will result in more reactions on social media, as discussed in [187]. Once again we extract features from the titles of news mentioning ei for the daily time frame [0, tp]. We use a sentiment lexicon (SentiWordNet) to extract subjective terms from the daily profile of titles and label them with positive, neutral or negative polarity. We compute count features for the number of positive, negative and neutral terms, as well as the difference and ratio of positive and negative terms. Similarly to the textual features, we create a TF-IDF weighted term-document matrix R using the subjective terms from the titles and apply SVD to compute 10 real-valued sentiment latent features.
Semantic features - We use the number of different named entities recognized in N on day di until tp, as well as the number of distinct news category tags extracted from the news feed metadata. These tags, common in news articles, consist of author-annotated terms and phrases that describe a sort of semantic hierarchy of news categories, topics and news stories (e.g. “european debt crisis”). We create TF-IDF weighted entity-document and tag-document matrices and apply SVD to each of them to reduce the dimensionality to 10. The idea is to capture interesting entity co-occurrences, as well as news stories that are less transient in time and might be able to trigger popularity on Twitter.

Table 6.1 Summary of the four types of features we consider.

Number   Feature           Description
Signal
1        news              number of news mentions of ei in [0, tp] in di
2        news di−1         number of news mentions of ei in [0, tp] in di−1
3        news total di−1   number of news mentions of ei in [0, 24[ in di−1
4        news titles       number of title mentions in news of ei in [0, tp] in di
5        avg content       average content length of news of ei in [0, tp] in di
6        sources           number of different news sources of ei in [0, tp] in di
7        weekday           day of week
8        is weekend        true if weekend, false otherwise
Textual
9-18     tfidf titles      TF-IDF of news titles [0, tp] in di
19-28    LDA titles        LDA-10 of news titles [0, tp] in di
Sentiment
29       pos               number of positive words in news titles [0, tp] in di
30       neg               number of negative words in news titles [0, tp] in di
31       neu               number of neutral words in news titles [0, tp] in di
32       ratio             positive/negative
33       diff              positive − negative
34       subjectivity      (positive + negative + neutral) / Σ words
35-44    tfidf subj        TF-IDF of subjective words (pos, neg and neu)
Semantic
45       entities          number of entities in news [0, tp] in di
46       tags              number of tags in news [0, tp] in di
47-56    tfidf entities    TF-IDF of entities in news [0, tp] in di
57-66    tfidf tags        TF-IDF of news tags [0, tp] in di
Learning Framework
Let x be the feature vector extracted from the online news stream N on day di until
tp. We want to learn the probability P(f̂p = 1 | X = x). This can be done using the inner product between x and a weighting parameter vector w, i.e. w⊤x.
Using logistic regression for binary classification, one can unify the definition of p(f̂p = 1 | x) and p(f̂p = 0 | x) with

p(\hat{f}_p \mid x) = \frac{1}{1 + e^{-\hat{f}_p w^{\top} x}}

Given a set of z instance-label pairs (x_i, \hat{f}_{p,i}), with i = 1, ..., z and \hat{f}_{p,i} \in \{0, 1\}, we solve the binary-class L2-penalized logistic regression optimization problem, where C > 0:

\min_{w} \ \frac{1}{2} w^{\top} w + C \sum_{i=1}^{n} \log\left(1 + e^{-\hat{f}_{p,i} w^{\top} x_i}\right)
We apply this approach on an entity-specific basis, i.e. we train an individual model for each entity. Given a set of entities E to which we want to apply our approach and a training set of example days D = {d1, d2, ..., di, ...}, we extract a feature vector xi for each entity ei on each training day di. Therefore, we are able to learn a model of w for each ei. The assumption is that popularity on social media fp is dependent on the entity ei, and consequently we extract entity-specific features from the news stream N. For instance, the top 10,000 words of the news titles mentioning ei are not the same as for ej.
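A minimal sketch of this per-entity learning step with scikit-learn is given below; build_features and the surrounding data structures are hypothetical placeholders.

from sklearn.linear_model import LogisticRegression

models = {}
for entity in entities:                                          # the six entities under study
    X, y = build_features(entity, news_stream, training_days)    # hypothetical feature extraction
    # L2-penalized logistic regression, one model of w per entity
    models[entity] = LogisticRegression(penalty="l2", C=1.0).fit(X, y)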
6.1.2 Experimental Setup
This work uses Portuguese news feeds and tweets collected from January 1, 2013 to January 1, 2016, consisting of over 150 million tweets and 5 million online news articles². To collect and process raw Twitter data, we use a crawler, which recognizes and disambiguates named entities on Twitter [188]. News data is provided by a Portuguese online news aggregator³. This service handles online news from over 60
Portuguese news outlets and is able to recognize entities mentioned in the news. We choose the two most common news categories, politics and football, and select the 3 entities with the highest number of mentions in the news for each category. The politicians are two former Prime Ministers, José Sócrates and Pedro Passos Coelho, and the incumbent, António Costa. The football entities are two coaches, Jorge Jesus and José Mourinho, and the most famous Portuguese football player, Cristiano Ronaldo.
Figure 6.1 depicts the daily popularity of the six entities on the selected community stream of Twitter users for each day from July 2014 until July 2015. As expected, it is easily observable that on some days the popularity on Twitter exhibits bursty patterns, for instance when José Sócrates was arrested on November 21st 2014 or when Cristiano Ronaldo won the FIFA Ballon d’Or on January 12th 2015.
² Dataset is available for research purposes. Access requests via e-mail.
³ http://www.sapo.pt
Fig. 6.1 Daily popularity on Twitter of entities under study.

Fig. 6.2 Training and testing sliding window - first 2 iterations.
We defined the years of 2013 and 2014 as training set and the whole year of 2015 as
test set. We applied a monthly sliding window setting in which we start by predicting
entity popularity for every day of January 2015 (i.e. the test set) using a model trained
on the previous 24 months, 730 days (i.e. the training set). Then, we use February
2015 as the test set, using a new model trained on the previous 24 months. Then
March and so on, as depicted in Figure 6.2. We perform this evaluation process, rolling
the training and test set until December 2015, resulting in 365 days under evaluation.
The process is applied for each one of the six entities, for different times of prediction tp and for different values of the decision boundary k. We test tp = 0, 4, 8, 12, 16, 20 and k = 0.5, 0.65, 0.8. Therefore, we report results in Section 6.1.3 for 18 different experimental settings for each one of the six entities. The goal is to understand how useful the news cycle is for predicting entity popularity on Twitter for different entities, at different hours of the 24-hour cycle and with different thresholds for considering popularity as high or low.
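The monthly sliding-window protocol can be sketched as below with pandas; daily_examples is a hypothetical DataFrame of per-day feature vectors and labels indexed by date, and the fitting/scoring calls are placeholders.

import pandas as pd

test_months = pd.date_range("2015-01-01", "2015-12-01", freq="MS")   # first day of each test month
for month_start in test_months:
    train_start = month_start - pd.DateOffset(months=24)             # previous 24 months (~730 days)
    train = daily_examples.loc[train_start: month_start - pd.Timedelta(days=1)]
    test = daily_examples.loc[month_start: month_start + pd.offsets.MonthEnd()]
    # fit the entity-specific logistic regression on train and evaluate F1 on test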
6.1.3 Results and Discussion
Results are depicted in Table 6.2. We report F1 on the positive class since, in online reputation monitoring, it is more valuable to be able to predict high popularity than low popularity. Nevertheless, we also calculated overall accuracy results, which were better than the F1 scores reported here; consequently, our system is fairly capable of predicting low popularity. We organize this section based on the research questions presented at the beginning of this section.
Is online news a valuable source of information to effectively predict entity popularity on Twitter?
Do online news carry different predictive power based on the nature of the entity
under study?
Results show that performance varies with each target entity ei. In general, results are better when predicting the popularity of politicians. In the case of football public figures, Jorge Jesus exhibits results similar to the three politicians, but José Mourinho and especially Cristiano Ronaldo represent the worst results in our setting. For instance, when Cristiano Ronaldo scores three goals in a match, the burst in popularity is almost immediate and not possible to predict in advance.
Further analysis showed that online news failed to be informative of popularity in the case of live events covered by other media, such as TV. Interviews and debates on one hand, and live football games on the other, are events with unpredictable effects on popularity. Cristiano Ronaldo can be considered a special case in our experiments. He is by far the most famous entity in our experiments and, in addition, he is also an active Twitter user with more than 40M followers. This work focuses on assessing the predictive power of online news and its limitations. We assume that for Cristiano Ronaldo, endogenous features from Twitter itself would be necessary to obtain better results.
Table 6.2 F1 score of popularity high as a function of tp, for k equal to 0.5, 0.65 and 0.8 respectively.

Entity \ tp (hour)      0     4     8     12    16    20
k = 0.50
António Costa           0.76  0.67  0.74  0.77  0.75  0.72
José Sócrates           0.77  0.66  0.73  0.75  0.75  0.75
Pedro Passos Coelho     0.72  0.63  0.70  0.70  0.74  0.71
Cristiano Ronaldo       0.35  0.41  0.45  0.37  0.35  0.32
Jorge Jesus             0.73  0.68  0.69  0.68  0.69  0.70
José Mourinho           0.62  0.46  0.51  0.56  0.55  0.45
k = 0.65
António Costa           0.61  0.60  0.66  0.64  0.60  0.60
José Sócrates           0.63  0.57  0.62  0.66  0.64  0.62
Pedro Passos Coelho     0.58  0.57  0.65  0.67  0.67  0.65
Cristiano Ronaldo       0.29  0.35  0.42  0.41  0.36  0.30
Jorge Jesus             0.63  0.61  0.63  0.59  0.62  0.64
José Mourinho           0.56  0.39  0.48  0.56  0.47  0.38
k = 0.80
António Costa           0.48  0.51  0.55  0.53  0.44  0.49
José Sócrates           0.48  0.42  0.47  0.53  0.47  0.35
Pedro Passos Coelho     0.47  0.46  0.56  0.56  0.52  0.54
Cristiano Ronaldo       0.14  0.29  0.31  0.26  0.20  0.21
Jorge Jesus             0.50  0.48  0.51  0.48  0.57  0.56
José Mourinho           0.32  0.32  0.36  0.41  0.41  0.36
How do different thresholds for defining high and low popularity affect the
effectiveness of our approach?
Our system exhibits top performance with k = 0.5, which corresponds to balanced
training sets, with the same number of high and low popularity examples on each
training set. Political entities exhibit F1 scores above 0.70 with k = 0.5. On the other
hand, as we increase k, performance deteriorates. We observe that for k = 0.8, the
system predicts a very high number of false positives. It is very difficult to predict
extreme values of popularity on social media before they happen. We plan to tackle
this problem in the future by also including features about the target variable in the
current and previous hours, i.e., time-series auto-regressive components.
Does performance remain stable for different times of prediction?
Results show that the time of prediction affects the performance of the system, especially
for the political entities. In their case, F1 is higher when the time of prediction is noon
and 4 p.m., which is evidence that in politics, most of the news events that trigger
popularity on social media are broadcast by news outlets in the morning. It is very
interesting to compare results for midnight and 4 a.m./8 a.m. Predictions at midnight use the
news articles from the previous day, as explained in Section 6.1.1, while predictions at
4 a.m./8 a.m. use news articles from the first 4/8 hours of the day under prediction. In some examples,
Twitter popularity was triggered by events depicted in the news from the previous day
and not from the current day.

Fig. 6.3 Individual feature type F1 score for tp = 12 at k = 0.5.
What is the most important feature set for predicting entity popularity on Twitter
based on the news cycle?
Do individual sets of features exhibit different importance for different entities?
Figure 6.3 tries to answer these two questions. The first observation is that the
combination of all groups of features does not lead to substantial improvements.
Semantic features alone achieve almost the same F1 score as the combination of all
features. However, in the case of Mourinho and Ronaldo, the combination of all features
leads to worse F1 results than the semantic set alone.
Sentiment features are the second most important for all entities except José
Mourinho. Signal and Textual features are less important, which was somewhat a
surprise. Signal features represent the surface behavior of news articles, such as the
volume of news mentions of ei before tp, and we were expecting a higher importance.
Regarding Textual features, we believe that news articles often refer to terms and
phrases that explain past events in order to contextualize a news article.
In future work, we will consider alternative approaches for predicting the future popularity
of entities that do not occur every day in the news but do have public social media
accounts, such as musicians or actors. In contrast, entities that often occur in the
news, such as economics ministers and the like, but are not often mentioned on social
media, also pose a different problem.
6.2 Predicting Political Polls using Twitter Sentiment4
Surveys and polls using the telephone are widely used to provide information about what
people think about parties or political entities [150]. Surveys randomly select the
electorate sample, avoiding selection bias, and are designed to collect the perception
of a population regarding some subject, such as in politics or marketing. However,
this method is expensive and time consuming [150]. Furthermore, over the years it is
becoming more difficult to contact people and persuade them to participate in these
surveys [189].
On the other hand, the rise of social media, namely Twitter and Facebook, has
changed the way people interact with news. People are now able to react to and
comment on any news in real time [135]. One challenge that several research works have
been trying to solve is to understand how opinions expressed on social media, and
their sentiment, can be a leading indicator of public opinion. However, positive,
negative and neutral opinions regarding the same subject may exist simultaneously.
Thus, we need to obtain a value that reflects the general image of
each political target in social media, for a given time period. To that end, we use
sentiment aggregate functions. In summary, a sentiment aggregate function calculates
a global value based on the number of positive, negative, and neutral mentions of each
political target, in a given period. We conducted an exhaustive study and collected and
implemented several sentiment aggregate functions from the state of the art [135–143].
4 The material contained in this section was published in P. Saleiro, L. Gomes, C. Soares, “Sentiment Aggregate Functions for Political Opinion Polling using Microblog Streams” [21]
Thus, the main objective of our work is to study and define a methodology capable
of successfully estimating the poll results, based on opinions expressed on social media,
represented by sentiment aggregators. We applied our approach to the Portuguese
bailout case study, using tweets from a sample of the Portuguese Tweetosphere and
Portuguese polls as gold standard. Given the monthly periodicity of polls, we needed to
aggregate the data by month. This approach allows each aggregate value to represent
the monthly sentiment for each political party. Due to the absence of a general
sentiment aggregate function suitable for different case studies, we decided to include
all aggregate functions as features of the regression model. Therefore the learning
algorithm is able to adapt to the most informative aggregate functions through time.
6.2.1 Methodology
To collect and process raw Twitter data, we use an online reputation monitoring
platform [36] which can be extended by researchers interested in tracking political
opinion on the web. It collects tweets from a predefined sample of users, applies named
entity disambiguation [177] and generates indicators of both frequency of mention
and polarity (positivity/negativity) [190] of mentions of entities over time. In our
case, tweets are collected from the stream of 100 thousand different users, representing
a sample of the Portuguese community on Twitter. This sample was obtained by
expanding a manually annotated seed set of 1000 users using heuristics such as the
language of posts, the language of followers' posts or geo-location [188].
The platform automatically classifies each tweet according to its sentiment polarity.
If a message expresses a positive, negative or neutral opinion regarding an entity (e.g.
politicians), it is classified as a positive, negative or neutral mention, respectively. The
sentiment classifier uses a corpus of 1500 annotated tweets as training set and has
achieved an accuracy of over 80% using 10-fold cross validation. These 1500 tweets were
manually annotated by 3 political science students.
Mentions of entities and respective polarity are aggregated by counting positive,
negative, neutral and total mentions for each entity in a given period. Sentiment
aggregate functions use these cumulative numbers as input to generate a new value
for each specific time period. Since we want to use sentiment aggregate functions
as features of a regression model to produce an estimate of the political opinion, we
decided to use traditional poll results as gold standard.
Sentiment Aggregate Functions
Let $M_{e_i}$ be a mention on Twitter of an entity $e_i$; then $M^{+}_{e_i}$, $M^{*}_{e_i}$ and $M^{-}_{e_i}$ are positive,
neutral and negative classified mentions of entity $e_i$ on Twitter. Therefore, given a
time frame T (e.g. a month), the sentiment aggregate functions applied to the aggregated
data between polls are the following:

• entitybuzz: $\sum_{T} M_{e_i}$, the sum of the number of mentions (buzz) of a given entity in the time frame T.

• entitypositives: $\sum_{T} M^{+}_{e_i}$, the sum of the positively classified mentions of a given entity in the time frame T.

• entityneutrals: $\sum_{T} M^{*}_{e_i}$, the sum of the neutral classified mentions of a given entity in a time frame T.

• entitynegatives: $\sum_{T} M^{-}_{e_i}$, the sum of the negatively classified mentions of a given entity in a time frame T.

• entitysubjectivity: $\frac{\sum_{T} (M^{+}_{e_i} + M^{-}_{e_i})}{\sum_{T} M_{e_i}}$, the ratio of positive and negative classified mentions of entity $e_i$ over its buzz in a time frame T.

• entitypolarity: $\frac{\sum_{T} M^{+}_{e_i}}{\sum_{T} M^{-}_{e_i}}$, the ratio of positive over negative classified mentions in a time frame T.

• berminghamsovn: $\frac{\sum_{T} M^{-}_{e_i}}{\sum_{E} \sum_{T} M^{-}_{e}}$, the ratio of the negative classified mentions of entity $e_i$ over the total number of negative mentions of all entities in time frame T.

- bermingham [135]: $\log_{10} \frac{\sum_{T} M^{+}_{e_i} + 1}{\sum_{T} M^{-}_{e_i} + 1}$

- berminghamsovp [135]: $\frac{\sum_{T} M^{+}_{e_i}}{\sum_{E} \sum_{T} M^{+}_{e}}$

- connor [191]: $\frac{\sum_{T} M^{+}_{e_i}}{\sum_{T} M^{-}_{e_i}}$

- gayo [141]: $\frac{\sum_{T} M^{+}_{e_i} + \sum_{E, j \neq i} \sum_{T} M^{-}_{e_j}}{\sum_{E} \sum_{T} (M^{+}_{e} + M^{-}_{e})}$

- polarity: $\sum_{T} M^{+}_{e_i} - \sum_{T} M^{-}_{e_i}$

- polarityONeutral: $\frac{\sum_{T} M^{+}_{e_i} - \sum_{T} M^{-}_{e_i}}{\sum_{T} M^{*}_{e_i}}$

- polarityOTotal: $\frac{\sum_{T} M^{+}_{e_i} - \sum_{T} M^{-}_{e_i}}{\sum_{T} M_{e_i}}$

- subjOTotal: $\frac{\sum_{T} M^{+}_{e_i} + \sum_{T} M^{-}_{e_i}}{\sum_{T} M_{e_i}}$

- subjNeuv: $\frac{\sum_{T} M^{+}_{e_i} + \sum_{T} M^{-}_{e_i}}{\sum_{T} M^{*}_{e_i}}$

- subjSoV: $\frac{\sum_{T} (M^{+}_{e_i} + M^{-}_{e_i})}{\sum_{E} \sum_{T} (M^{+}_{e} + M^{-}_{e})}$

- subjVol: $\sum_{T} (M^{+}_{e_i} + M^{-}_{e_i})$

- share [135]: $\frac{\sum_{T} M_{e_i}}{\sum_{E} \sum_{T} M_{e}}$

- shareOfNegDistribution: $\frac{\sum_{T} M^{-}_{e_i} / \sum_{T} M_{e_i}}{\frac{1}{n} \sum_{E} \left( \sum_{T} M^{-}_{e} / \sum_{T} M_{e} \right)}$, where n is the number of political entities in the poll

- normalized_positive: $\frac{\sum_{T} M^{+}_{e_i}}{\sum_{T} M_{e_i}}$

- normalized_negative: $\frac{\sum_{T} M^{-}_{e_i}}{\sum_{T} M_{e_i}}$

- normalized_neutral: $\frac{\sum_{T} M^{*}_{e_i}}{\sum_{T} M_{e_i}}$

- normalized_bermingham: $\log_{10} \frac{normalized\_positives + 1}{normalized\_negatives + 1}$

- normalized_connor: $\frac{normalized\_positives}{normalized\_negatives}$

- normalized_gayo: $\frac{normalized\_positives + normalized\_others\_negatives}{normalized\_total\_positives + normalized\_total\_negatives}$

- normalized_polarity: $normalized\_positives - normalized\_negatives$
The sentiment aggregate functions are used as features in the regression models.
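As an illustration of how such features can be derived from raw counts, the sketch below computes a handful of the aggregate functions listed above. It is a simplified stand-in, not the actual implementation: the `counts` structure (per-entity totals of positive, negative and neutral mentions over the period T) is an assumption made for the example; the sample values are taken from Table 6.3.

```python
import math

def aggregates(counts, entity):
    """`counts` maps each entity to its totals of 'pos', 'neg' and 'neu' mentions over T."""
    pos, neg, neu = (counts[entity][key] for key in ("pos", "neg", "neu"))
    buzz = pos + neg + neu
    total_pos = sum(c["pos"] for c in counts.values())
    total_neg = sum(c["neg"] for c in counts.values())
    return {
        "entitybuzz": buzz,
        "entitysubjectivity": (pos + neg) / buzz if buzz else 0.0,
        "entitypolarity": pos / neg if neg else float("inf"),
        "bermingham": math.log10((pos + 1) / (neg + 1)),
        "berminghamsovn": neg / total_neg if total_neg else 0.0,
        "berminghamsovp": pos / total_pos if total_pos else 0.0,
        "normalized_polarity": (pos - neg) / buzz if buzz else 0.0,
    }

counts = {"PSD": {"pos": 121, "neg": 69723, "neu": 37133},
          "PS": {"pos": 225, "neg": 28660, "neu": 15326}}
print(aggregates(counts, "PSD"))
```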
Fig. 6.4 Negatives share (berminghamsovn) of political leaders in Twitter.
6.2.2 Data
The data used in this work consists of tweets mentioning Portuguese political party
leaders and polls from August 2011 to December 2013. This period corresponds to the
Portuguese bailout when several austerity measures were adopted by the incumbent
right wing governmental coalition of the PSD and CDS parties.
Twitter
Table 6.3 Distribution of positive, negative and neutral mentions per political party

                 PSD      PS      CDS     CDU    BE
Negative         69 723   28 660  41 935  2 445  9 603
Positive         121      225     51      79     306
Neutral          37 133   15 326  17 554  5 604  4 214
Total Mentions   106 977  44 211  59 540  8 128  14 123
The Twitter data set contains 232,979 classified messages, collected from a network of
100 thousand different users classified as Portuguese. Table 6.3 presents the distribution
of positive, negative, and neutral mentions of the political leaders of the 5 most voted
political parties in Portugal (PSD, PS, CDS, PCP and BE). The negative mentions
represent the majority of the total mentions, except for CDU where the number of
negative mentions is smaller than the neutral ones. The positive mentions represent
less than 1% of the total mentions of each party, except for BE where they represent
2% of the total mentions. The most mentioned parties are PS, PSD and CDS. The
total mentions of these three parties represent 90% of the data sample total mentions.
Figure 6.4 depicts the time series of the berminghamsovn (negatives share) sentiment
aggregate function. The higher the value of the function, the higher the percentage of
negative tweets mentioning a given political entity in comparison with the other entities.
As expected, Pedro Passos Coelho (PSD), as prime-minister, is the leader with the
highest score throughout the whole time period under study. Paulo Portas (CDS), leader
of the other party of the coalition and also a member of the government, is the second
most negatively mentioned in the period, while António José Seguro (PS) is in some
periods the second highest. PSD and CDS are the incumbent parties while PS is the
main opposition party in the time frame under study. PSD and CDS, as government
parties, were raising taxes and cutting salaries. PS was the incumbent government
during the years that led to the bailout, and a fraction of the population considered it
responsible for the financial crisis. The bailout and the consequent austerity measures
could explain the overwhelming percentage of negative mentions, although we verified
that in other time periods the high percentage of negative mentions remains. We can
say that Twitter users in this sample, when mentioning political leaders in their tweets,
tend to criticize them.
Political Opinion Polls
The polling was performed by Eurosondagem, a Portuguese private company which
collects public opinion. This data set contains the monthly poll results of the five
main Portuguese parties, from June 2011 to December 2013. Figure 6.5 represents the
evolution of the Portuguese poll results. We can see two main party groups: the first
group, where both PSD and PS are included, has a higher value of vote intention (above
23%). PSD, despite starting as the preferred party in vote intention, has a downtrend
over time, losing the leadership to PS in September 2012. On the other hand, PS
has in general an uptrend. The second group, composed of CDS, PCP and BE, has a
vote intention range from 5% to 15%. While CDS has a downtrend in public opinion,
PCP has an upward one. Despite these constant tendencies (up and down trends),
we noticed that the maximum variation observed between two consecutive months is
3%. In June 2013 there was a political crisis in the government, when CDS threatened to
leave the government coalition due to the austerity measures being implemented; this
corresponds to the moment when PS takes the lead in the polls.
Fig. 6.5 Representation of the monthly poll results of each political candidate
6.2.3 Experimental Setup
We defined the period from 2011 to December 2012 as the training set and the whole year
of 2013 as the test set. We applied a sliding window setting in which we predict the poll
results of a given month using the previous 16 months as the training set:
• Training set – containing the monthly values of the aggregators (both sentiment
and buzz aggregators) for the 16 months prior to the month intended to be predicted.
• Test set – containing the values of the aggregators (both sentiment and buzz
aggregators) of the month intended to be predicted.
We start by predicting the poll results of January 2013 using the previous 16 months
as training set:
1. We select the values of the aggregators of the 16 months prior to January 2013
(September 2011 to December 2012).
2. We use that data to train our regression model.
3. Then we input the aggregators’ values of January 2013 - the first record of the
test set - into the trained model, to obtain the poll results prediction.
4. We select the next month of the test set and repeat the process until all months
are predicted.
The models are created using two regression algorithms: a linear regression algorithm
(Ordinary Least Squares - OLS) and a non-linear regression algorithm (Random Forests
- RF). We also run an experiment using the derivative of the polls time series as gold
standard, i.e., poll results variations from poll to poll. Thus, we also calculate the
variations of the aggregate functions from month to month as features. Furthermore,
we repeat each experiment including and excluding the lagged self of the polls, i.e.,
the last result of the poll for a given candidate (yt−1 ) or the last polls result variation
(∆yt−1 ) when predicting polls variations. We use Mean Absolute Error (MAE) as
the evaluation measure, to determine the absolute error of each prediction. Then, we
calculate the average of the twelve MAEs to obtain the global prediction error
of our model.
$$MAE = \frac{\sum_{i=1}^{n} |f_i - y_i|}{n} \qquad (6.1)$$

where $n$ is the number of forecasts, $f_i$ is the model's forecast and $y_i$ the real outcome.
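The sketch below ties the 16-month sliding window, the two regression algorithms and the MAE of Eq. 6.1 together. It is illustrative only: the DataFrame `data`, assumed to be indexed by month and to hold the aggregator columns plus the poll value in a column named `y`, is a placeholder for the real feature table.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def rolling_poll_prediction(data: pd.DataFrame, model, window=16):
    """`data` is assumed to be indexed by month (e.g. a monthly PeriodIndex) and to
    contain the aggregator columns plus the poll value in a column named 'y'."""
    feature_cols = [c for c in data.columns if c != "y"]
    errors = []
    for month in data.loc["2013-01":"2013-12"].index:      # the 12 test months
        pos = data.index.get_loc(month)
        train = data.iloc[pos - window:pos]                 # the 16 months before `month`
        model.fit(train[feature_cols], train["y"])
        pred = model.predict(data.loc[[month], feature_cols])[0]
        errors.append(abs(pred - data.loc[month, "y"]))
    return np.mean(errors)                                  # MAE over the 12 forecasts (Eq. 6.1)

# mae_ols = rolling_poll_prediction(data, LinearRegression())
# mae_rf = rolling_poll_prediction(data, RandomForestRegressor(n_estimators=500, random_state=0))
```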
6.2.4 Results and Discussion
In this section we explain in detail the experiments and their results. We perform two
different experiments: (1) using absolute values and (2) using monthly variations.
Predicting Polls Results
In this experiment, the sentiment aggregators take absolute values in order to predict
the absolute values of polls results. Mathematically speaking, this experiment can be
seen as: y ← {yt−1 , buzzAggregators, sentimentAggregators}. In Figure 6.6 we see
the global errors we obtained.
The results show that we obtain a MAE for the 5 parties' poll results over 12 months
of 6.55% using Ordinary Least Squares and 3.1% using Random Forests. The lagged
self of the polls, i.e., assuming the last known poll result as the prediction, results in a
MAE of 0.61, which was expected since the polls exhibit slight changes from month
to month. This experiment shows that the inclusion of the lagged self (yt−1) produces
average errors similar to the lagged-self baseline.
Fig. 6.6 Error predictions for polls results.
Fig. 6.7 Error predictions for polls results variation.
Predicting Polls Results Variation
According to our exploratory data analysis, the poll results have a small variation
between two consecutive months. Thus, instead of predicting the absolute value of the
poll results, we tried to predict the variation, ∆y ← {∆(yt−1 ), ∆buzzAggregators,
∆sentimentAggregators}.
In this particular experiment, the inclusion of ∆yt−1 as a feature in the regression
model does not play a determinant role (Figure 6.7). Including that feature we could not
obtain a lower MAE than excluding it. This means that the real monthly poll variation
is not constant over the year. In general, using a non-linear regression algorithm we
obtain a lower MAE. The results show that when dealing with poll results that exhibit slight
changes from poll to poll, it makes sense to transform the dataset by taking differences
between consecutive time-steps.

Fig. 6.8 Mean absolute error buzz vs sentiment.
Buzz and Sentiment
Several studies state that the buzz has predictive power and correctly reflects public
opinion on social media. Following that premise, we trained our models with buzz
and sentiment aggregators separately to predict poll variations:
• ∆y ← {∆(yt−1 ), ∆buzzAggregators}
• ∆y ← {∆(yt−1 ), ∆sentimentAggregators}
This experiment allowed us to compare the behavior of buzz and sentiment aggregators.
According to Figure 6.8, buzz and sentiment aggregators have similar results.
Although the OLS algorithm combined only with buzz aggregators has a slightly lower
error than the other models, it is not a significant improvement. These results also
show that the Random Forests algorithm performs best when combined only with
sentiment aggregators.
Feature Selection
One of the main goals of our work is to understand which aggregator (or group of
aggregators) better suits our case study. According to the previous experiments, we
can achieve lower prediction errors when training our model with buzz and sentiment
aggregators separately. However, when training our model with these two kinds of
aggregators separately, we are implicitly performing feature selection. We only have
two buzz features (share and total_mentions). Due to that small amount of features,
it was not necessary to perform any feature selection technique within buzz features.
Thus, we decided to apply a feature selection technique to the sentiment aggregators, in
order to select the most informative ones to predict the monthly polls results variation.
We use univariate feature selection, selecting 10% of the sentiment features (total of 3
features). Using this technique, the Random Forests' global error rose from 0.65 to
0.73. However, OLS presents an MAE drop from 0.72 to 0.67. Another important fact
to notice is that if we perform univariate feature selection on all aggregators (buzz and
sentiment), we achieve the same MAE value as when applied only to sentiment
aggregators. This means that the buzz aggregators are discarded by the feature selection
technique.
We tried a different approach and performed a recursive feature elimination technique.
In this technique, features are eliminated recursively according to an initial score given
by the external estimator. This method allows us to determine the number of features
to select. Thus, also selecting 3 features, the OLS’ MAE drops to 0.63. Once again,
none of the buzz features were selected. Furthermore, both feature selection techniques
select different features for each monthly prediction.
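A minimal sketch of the two strategies with scikit-learn is shown below; the matrices `X_sent` and `y` are random stand-ins for the monthly sentiment-aggregator variations and poll variations, and the exact estimators and dimensions are assumptions made for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, RFE, f_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_sent = rng.normal(size=(16, 30))   # stand-in: 16 months x 30 sentiment aggregator variations
y = rng.normal(size=16)              # stand-in: monthly poll variations

# (1) univariate selection: keep the top 10% of the sentiment features
univariate = SelectPercentile(score_func=f_regression, percentile=10)
X_univ = univariate.fit_transform(X_sent, y)

# (2) recursive feature elimination with an external estimator, down to 3 features
rfe = RFE(estimator=LinearRegression(), n_features_to_select=3)
X_rfe = rfe.fit_transform(X_sent, y)

# the selected columns typically differ from one monthly model to the next
print(univariate.get_support(indices=True), rfe.get_support(indices=True))
```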
6.2.5 Feature Importance
We select the Random Forest model of monthly variations to study the feature
importance, as depicted in Figure 6.9. The higher the score, the more important the
feature is. The importance of a feature is computed as the (normalized) total reduction
of the criterion brought by that feature. It is also known as the Gini importance. Values
correspond to the average of the Gini importance over the different models trained
in the experiments. The single most important feature is the bermingham aggregate
function, followed by neutrals. It is important to notice that when combining all the
aggregate functions as features in a single regression model, the buzz does not comprise
a high Gini importance, even though when used as a single feature it produces similar
results to the sentiment aggregate functions. In general, the standard deviation of the
Gini importance is relatively high. This has to do with our experimental setup, as the
values depicted in the bar chart correspond to the average of the Gini importance over
12 different models (12 months of testing set). Therefore, feature importances vary over
time while the MAE tends to remain unchanged. We can say that different features
have different informative value over time and consequently it is useful to combine all
the sentiment aggregation functions as features of the regression models over time.
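The averaging of the Gini importances over the monthly models can be sketched as follows; `monthly_models` is an assumed list of the 12 fitted Random Forests and `feature_names` the corresponding aggregate function names.

```python
import numpy as np

def average_gini_importance(monthly_models, feature_names):
    """Average and spread of feature_importances_ over the 12 monthly models."""
    imp = np.array([m.feature_importances_ for m in monthly_models])  # shape (12, n_features)
    mean, std = imp.mean(axis=0), imp.std(axis=0)
    ranking = sorted(zip(feature_names, mean, std), key=lambda t: t[1], reverse=True)
    for name, m, s in ranking:
        print(f"{name:<25s} {m:.3f} +/- {s:.3f}")
    return ranking
```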
Fig. 6.9 Aggregate functions importance in the Random Forests models.
6.2.6 Outlook
We studied a large set of sentiment aggregate functions to use as features in a regression
model to predict political opinion poll results. The results show that we can estimate
the poll results with low prediction error, using sentiment and buzz aggregators
based on the opinions expressed on social media. We introduced a strong baseline
for comparison, the lagged self of the polls. In our study, we built a model that
achieves the lowest MAE using the linear algorithm (OLS), combined only with buzz
aggregators, using monthly variations. The model has an MAE of 0.63%. We performed
two feature selection techniques: (1) univariate feature selection and (2) recursive
feature elimination. Applying the recursive technique to the sentiment features, we
can achieve an MAE of 0.63, matching our best model. Furthermore, the chosen
features are not the same in every prediction. Regarding the feature importance analysis,
our experiments showed that the bermingham aggregate function has the highest
Gini importance in the Random Forests model.
6.3 Summary of the Contributions
In this chapter we presented research work about entity-centric text-based prediction
for ORM, making the following contributions:
• Analysis of the predictive power of online news regarding entity popularity on
Twitter for entities that are frequently mentioned on the news.
• Analysis of how to combine different sentiment aggregate functions to serve as
features for predicting political polls.
Chapter 7
A Framework for Online Reputation Monitoring
In this chapter, we present a framework that puts together all the building blocks
required to perform ORM. The framework is divided into two distinct components, one
dedicated to Entity Retrieval and the other to Text Mining. In practice these two
components can act as two separate frameworks. Both are adaptable and can be reused
in different application scenarios, from computational journalism to finance or politics.
We start with a framework overview and then we focus specifically on
each of the two components. The first component is RELink, a research framework
for E-R retrieval. We carried out the experiments on E-R retrieval described in Chapter
4 using RELink. Furthermore, since we did not have access to training data based
on news articles, we describe a case study of using RELink for entity retrieval from a
large news collection. We then describe the TexRep framework, which is responsible
for the Text Mining related tasks of ORM, such as Entity Filtering, Sentiment Analysis
or Predictive tasks. The experiments described in both Chapter 5 and Chapter 6
were carried out using TexRep. We also provide further detail on how TexRep was used
as the backend of the POPSTAR project. Finally, we perform an independent study of
practical aspects of general purpose word embeddings from the Twitter stream to serve
as a resource for future users of TexRep.
7.1 Framework Overview
The framework provides Entity Retrieval and Text Mining functionalities that enable
the collection, disambiguation, retrieval of entities and relationships, sentiment analysis, data aggregation, prediction and visualization of entity-centric information from
heterogeneous Web data sources. Furthermore, given that both components are built
using modular architectures providing abstraction layers and well defined interfaces,
new functionalities or methods can be easily integrated.
The framework is divided into two components: RELink and TexRep. Both can
work as independent, dedicated frameworks using specific data sources or can be put
together in a unifying setup for ORM. As depicted in Figure 7.1, when working together,
RELink and TexRep are connected through the Entity Occurrences Warehouse. This
is the central module of our framework for ORM. The Entity Occurrences Warehouse
contains extractions from occurrences of the entities of interest across the Web data
sources.
Fig. 7.1 High-level overview on the ORM framework.
The data flow starts with TexRep collecting data from Web text data sources,
extracting text passages containing entity mentions and performing disambiguation.
Entity-centric text passages are then stored in the Entity Occurrences Warehouse. This data
can then be used for E-R retrieval indexing using RELink or for downstream Text
Mining tasks (e.g. Sentiment Analysis) using other modules of TexRep. We now
describe the RELink and TexRep architectures and internal data flow.
7.1.1 RELink
The RELink framework is designed to facilitate experiments with E-R Retrieval query
collections. The formulation of E-R queries in natural language and relational format
$(Q_{E_{i-1}}, Q_{R_{i-1,i}}, Q_{E_i})$ provides opportunities to define and explore a range of query
formulations and search algorithms. Although RELink provides support for Late
Fusion design patterns, it is mostly tailored for Early Fusion approaches where it is
necessary to create entity and relationship representations at indexing time.
A typical Early Fusion E-R retrieval experimental setup would involve search over
a free-text collection to extract relevant instances of entity tuples and then verify their
correctness against the relevance judgments. The key enabling components therefore
are: (1) test collections of documents with annotated entity instances that could be
extracted during E-R search, (2) an indexing facility, and (3) a retrieval module to
process queries and rank results.
Fig. 7.2 RELink Framework architecture overview.
Figure 7.2 depicts the architecture of RELink used in the experiments described
in Chapter 4. We include the modules responsible for deriving relevance judgments
from Wikipedia. The Table Parser module is described in Section 4.1.2 in Chapter 4.
Currently, the RELink Framework includes the ClueWeb-09-B1 collection combined
with FACC1 [174] text span annotations with links to Wikipedia entities (via Freebase).
The entity linking precision and recall in FACC1 are estimated at 85% and 70-85%,
respectively [174]. The RELink Extractor, part of E-R Indexer, applies an Open
Information Extraction method [57] over the annotated ClueWeb-09-B corpus. The
two additional components are Corpus E-R Index and E-R Retrieval, both depicted
in Figure 7.2. The implementation of all modules in E-R Retrieval and the Indexer
module in the Corpus E-R Index are based on Apache Lucene, and the Letor module serves
as a wrapper for RankLib2.

1 http://www.lemurproject.org/clueweb09/
Indexing and Retrieval
Based on the ClueWeb-09-B collection we create two essential resources: entity index
and entity pair relationship index for the entities that occur in the corpus. For a given
entity instance, the ER Indexer identifies co-occurring terms within the same sentence
and considers them as entity types for the observed entity instance. Similarly, for a
given pair of entities, the ER Indexer verifies whether they occur in the same sentence
and extracts the separating string. That string is considered a context term for the
entity pair that describes their relationship type. We obtain 476M entity and 418M
entity pair extractions with corresponding sentences that are processed by the Indexer.
Once the inverted index (ER Index) is created, any instance of an entity or entity pair
can be retrieved in response to the contextual terms, i.e., entity types and relationship
types, specified by the users.
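The following toy sketch illustrates the kind of sentence-level extraction described above (co-occurring terms as entity types and the separating string as a relationship type). It is a simplification for illustration: the annotation format and the example sentence are assumptions, not the actual FACC1 processing code.

```python
def extract_contexts(sentence_tokens, annotations):
    """`annotations` is an assumed list of (start, end, entity_id) token spans."""
    entity_terms, pair_terms = {}, {}
    for start, end, entity in annotations:
        # terms co-occurring with the entity in the sentence act as entity "types"
        context = sentence_tokens[:start] + sentence_tokens[end:]
        entity_terms.setdefault(entity, []).extend(context)
    for i, (s1, e1, ent1) in enumerate(annotations):
        for s2, e2, ent2 in annotations[i + 1:]:
            # the string separating two entities describes their relationship "type"
            left, right = min(e1, e2), max(s1, s2)
            pair_terms.setdefault((ent1, ent2), []).extend(sentence_tokens[left:right])
    return entity_terms, pair_terms

tokens = "Barack Obama visited the headquarters of Microsoft in Redmond".split()
ann = [(0, 2, "Barack_Obama"), (6, 7, "Microsoft")]
print(extract_contexts(tokens, ann))
```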
Search Process
The E-R retrieval process is managed by the RELinker module (Figure 7.2). The
Query Analyzer module processes information requests and passes queries in the
structured format to the Retriever. Query search is performed in stages to allow for
experimentation with different methods and parameter settings. First, the Retriever
provides an initial set of results using Lucene’s default search settings and groups them
by entity or entity pairs at query time using Lucene's GroupingSearch. The Scorer
then generates and applies feature functions of specific retrieval models with required
statistics. Currently, the Scorer has implementations for Early Fusion variants EF-LM,
EF-BM25 and ERDM. The RELinker is responsible for re-ranking and providing final
results based on the scores provided by the Scorer and the parameter weights learned
by Letor.
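Conceptually, this final stage can be seen as a weighted combination of feature-function scores over grouped candidates, as in the toy sketch below; the feature functions and weights shown are placeholders, not the actual EF-LM/EF-BM25/ERDM features or the weights learned by Letor.

```python
def rerank(candidates, feature_functions, weights):
    """Score each grouped candidate with weighted feature functions and sort."""
    scored = []
    for tuple_id, grouped_scores in candidates.items():
        features = [f(grouped_scores) for f in feature_functions]
        score = sum(w * x for w, x in zip(weights, features))
        scored.append((score, tuple_id))
    return [t for _, t in sorted(scored, reverse=True)]

# Toy example with two placeholder feature functions over grouped raw scores.
candidates = {"(Obama, USA)": [3.2, 1.1], "(Merkel, Germany)": [2.7, 2.5]}
feats = [lambda docs: max(docs), lambda docs: sum(docs) / len(docs)]
print(rerank(candidates, feats, weights=[0.7, 0.3]))
```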
7.1.2 TexRep
TexRep is a research framework that implements Text Mining techniques to perform
Online Reputation Monitoring (ORM) in various application domains, such as computational social sciences, political data science, computational journalism, computational
finance or online marketing.
2 http://www.lemurproject.org/ranklib.php
TexRep was designed with two main challenges in mind: 1) it should be able to cope
with the Text Mining problems underlying ORM and 2) it should be flexible, adaptable
and reusable in order to support the specificities of different application scenarios. We
define that a Text Mining based system for Online Reputation Monitoring must follow
a set of technical and operational requirements:
• Batch and real-time operation: such a system must naturally be able to
operate in real-time, i.e. collecting data as it is generated, processing it and
updating indicators. However, it is also important to be able to operate in batch
mode, in which it collects specific data from a period indicated by the user, if
available, and then processes it. The system should use a distributed approach to
deal with great volumes of data, (e.g. Hadoop). It should also be able to operate
autonomously for long periods of time, measured in months.
• Adaptability: the system should be able to adapt its models (e.g. polarity
classification) through time as well as across different applications. Updating
models often requires manually annotated data (e.g. NED). Therefore the system
should provide a flexible annotation interface.
• Modularity: researchers should be able to plug in specific modules, such as a
new data source and respective crawler or a different visualization. The system
interfaces should use REST APIs and JSON data format, which allow users
to add new modules that interact with other data sources (e.g. Wikipedia or
Facebook).
• Reusability: the system should enable repeatability of all experiments to allow
the research community to obtain equal results. We will make the software package
of a prototype publicly available as well as the data sources and configuration
parameters used in experiments.
• Language independence: each component of the system should apply statistical language modeling that is completely agnostic to the language of the texts.
We decompose the use of Text Mining for ORM into four distinct but interconnected
tasks: Data Collection, Entity Filtering, Sentiment Analysis and Analytics. Each task
is accomplished by one or more software modules. For instance, Analytics tasks usually
involve the use of the Aggregation, Prediction and Visualization modules. Figure 7.3
presents the TexRep architecture, including the data flow between modules.
Fig. 7.3 Architecture and data flows of the TexRep framework.
Entity Filtering and Sentiment Analysis represent the most challenging Text Mining
problems tackled in the TexRep framework. When tracking what is being said online
about the target entities it is necessary to disambiguate mentions. When this is
done incorrectly, the knowledge obtained by the other modules is negatively affected.
Consequently, other Text Mining tasks, such as Sentiment Analysis, will benefit from
filtering non-relevant texts.
The current implementation of the Entity Filtering module uses the scikit-learn
Python library as the Machine Learning interface, giving TexRep users access
to the most suitable learning algorithm and parameter tuning for their specific
needs. We studied a large set of features that describe the relationship between the
target entity representation and a given text and we tried several different supervised
learning algorithms that are available through the framework, such as Support Vector
Machines (SVM) and Random Forests (RF).
The Sentiment Analysis module also uses scikit-learn implementation of supervised
learning algorithms in order to predict sentiment polarity and intensity in short texts
using regression analysis. We use unsupervised learning of word embeddings [182]
in short texts to construct syntactic and semantic representations of words. The
Sentiment Analysis module combines word embeddings with traditional approaches,
such as pre-processing techniques, bag-of-words and lexical-based features to train a
classifier for sentiment polarity and a regressor for sentiment intensity.
Analytics modules include Aggregation, Visualization and Prediction. These modules are application specific and depend on user configurations. For instance, in the
political domain it is common to create aggregate functions that represent relative
popularity indicators between political parties or candidates. These indicators are then
used to predict elections. On the other hand, if we consider the financial domain, due
to its high volatility, aggregation is usually performed at a finer granularity (minutes
instead of days) and target prediction variables are individual stock prices or variations.
TexRep implements various aggregation functions and allows custom plug-in of tailored
prediction models based on each application.
Therefore, TexRep is able to adapt itself to the specificities of different application
scenarios by implementing a modular and flexible design through user configurations and
abstraction layers. Data Collection depends on the specified data sources, thus TexRep
decouples client-side implementations from the data collection process management
using a REST API. If a user needs a different Data Collection Client from the ones
provided by default, she is able to implement a specific client that is easily integrated
into the framework. The same applies to the Analytics modules which are extensible
by loading user-implemented methods through an abstraction layer. Furthermore, if
users wish to extend TexRep with Topic Modeling, they only need to plug-in the new
module and write topic assignments through the Entity Occurrences Warehouse. New
aggregation functions could be implemented that use the topic of each mention as
input in order to create entity-centric topic trends visualizations.
The framework can be fully configured using configuration files that are processed
in the Pipeline Manager, which is the module responsible for forwarding specific
parameterization to the other modules. It is possible to specify the entities of interest,
data sources, aggregate functions and prediction time windows. Module specific
configurations are also specified in this module, such as which training data should be
used by the modules that rely on machine learning.
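For illustration, a configuration passed to the Pipeline Manager could look like the following hypothetical example; the field names and values are assumptions made for this sketch and do not reflect the framework's actual schema.

```python
import json

# Hypothetical TexRep configuration, parsed here from a JSON string for illustration.
config = json.loads("""
{
  "entities": [
    {"canonical": "Pedro Passos Coelho", "aliases": ["Passos Coelho", "Passos"]},
    {"canonical": "António Costa", "aliases": ["Costa"]}
  ],
  "data_sources": ["twitter", "rss_news", "blogs"],
  "aggregations": ["buzz_share", "logsentiment", "negatives_share"],
  "prediction": {"target": "poll_result", "window_months": 16},
  "training_data": {"entity_filtering": "ef_train.json", "sentiment": "sent_train.json"}
}
""")

for entity in config["entities"]:
    print(entity["canonical"], "->", ", ".join(entity["aliases"]))
```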
As explained, TexRep addresses the two aforementioned challenges of developing a
Text Mining framework for ORM. The current version of the framework is implemented
in Python, uses MongoDB as NoSQL database and implements the MapReduce
paradigm for aggregations. The external and pluggable resources used are the scikit-learn
library and matplotlib for visualization, though users can replace these
two resources by others of their preference. We provide implementations of each
module that we believe are as generic as possible within the context of ORM.
Nevertheless, users are also able to extend each module with the methods they see
fit, such as new features or data pre-processing steps. We now describe in detail how
the different modules interact with each other, as well as the current implementation
of the Entity Filtering, Sentiment Analysis and Analytics modules.
Data Flow
TexRep collects data continuously and performs mini-batch processing and analytics
tasks. The standard data flow is organized as follows. First the user defines the
entities of interest in the configuration files, including canonical and alternative names.
These configurations are processed by the Pipeline Manager and forwarded to the Data
Collection clients to search for texts (e.g. news articles and tweets) using entity names
as queries on each data source-specific API. The Data Collection Clients implement
source-specific API clients, such as the case of Twitter and Yahoo Finance, for instance.
If the user is interested in collecting RSS feeds of news outlets, then the Data Collection
Client can be adapted to subscribe to those feeds and process them accordingly.
Once collected, texts are stored in the Entity Occurrences Warehouse. Entity
Filtering classifies each text as relevant or not for each target entity using a supervised
learning approach. A knowledge base (e.g. Freebase) is used to extract target entity
representations and to compute similarity features with extracted mentions contexts.
Once the non-relevant texts are filtered, Sentiment Analysis takes place. The framework
implements both polarity classification and sentiment regression for sentiment intensity
detection. Then, Analytics modules are able to aggregate and create visualizations of
trends in data or predictions of application specific dependent variables.
Data Collection
The Data Collection Server communicates with each Data collection Client using a
REST API and therefore it allows modularity and a plugin approach for adapting
to specific data sources. The task of data collection is based on user-defined entity
configurations containing the list of entities under study. Each data source has specific
web interfaces (e.g. RSS feeds, Yahoo Finance API or Twitter API). The Data
Collection Server manages the Data Collection Clients through specific interfaces
(plugins) that are adequate for the corresponding source. For instance, collecting data
from Twitter poses some challenges, namely due to the limits on the amount of data
collected. We opted to create by default a Data Collection Client for SocialBus [192],
a distributed Twitter client that enables researchers to continuously collect data from
particular user communities or topics, while respecting the established limits.
Some data sources allow query by topics (e.g. entity names) while others do not (e.g.
RSS feeds). Moreover, in the case of Twitter, we might be interested in continuously
monitoring a fixed group of Twitter users (e.g., the accounts of the entities of interest).
In such cases, when we cannot search directly by entity name in the specific data
source, we use the list of entity names to process collected texts that might be relevant.
The Data Collection Server applies a sequential classification approach using a prefix
tree to detect mentions. This method can be seen as a first step of filtering but it is
still prone to noisy mentions. For instance, a tweet with the word “Cameron” can
refer to several entities, such as a former UK prime minister, a filmmaker or a
company. Consequently, this problem is later tackled by the Entity Filtering module.
Collected texts (e.g. news or tweets) are stored in a centralized document-oriented
NoSQL database (e.g. MongoDB), the Entity Occurrences Warehouse. This setup
provides modularity and flexibility, allowing the possibility of developing specific data
collection components tailored to specific data sources and is completely agnostic to
the data format retrieved from each data source. The Data Collection Server annotates
each text with the target entity which will be used by the Entity Filtering module to
validate that annotation.
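A minimal sketch of prefix-tree (trie) mention detection over tokenized text is shown below; it is a simplified illustration of the idea, not the Data Collection Server implementation, and the surface forms used are examples only.

```python
def build_trie(names):
    root = {}
    for name in names:
        node = root
        for token in name.lower().split():
            node = node.setdefault(token, {})
        node["$"] = name          # "$" marks the end of a known surface form
    return root

def detect_mentions(tokens, trie):
    mentions, i = [], 0
    while i < len(tokens):
        node, j, match = trie, i, None
        while j < len(tokens) and tokens[j].lower() in node:
            node = node[tokens[j].lower()]
            j += 1
            if "$" in node:
                match = (i, j, node["$"])   # keep the longest match so far
        if match:
            mentions.append(match)
            i = match[1]
        else:
            i += 1
    return mentions

trie = build_trie(["Pedro Passos Coelho", "Passos Coelho", "Cameron"])
print(detect_mentions("PM Passos Coelho meets Cameron today".split(), trie))
```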
7.2 RELink Use Case
In this section we present a use case of the RELink framework in the context of
ORM applied to computational journalism. Never before has computation been so
tightly connected with the practice of journalism. In recent years, the computer
science community has researched [193–196, 36, 197–199] and developed3 new ways
of processing and exploring news archives to help journalists perceive news content
from an enhanced perspective.
We created a demo, the TimeMachine, which brings together a set of Natural Language
Processing, Text Mining and Information Retrieval technologies to automatically extract
and index entity-related knowledge from news articles [177, 200, 36, 201, 197–199].
It allows users to issue queries containing keywords and phrases about news stories or
events, and retrieves the most relevant entities mentioned in the news articles through
time. TimeMachine provides readable and user-friendly insights and a temporal
perspective of news stories and mentioned entities. It visually represents relationships
among public figures co-mentioned in news articles as a social network graph, using a
force atlas algorithm layout [202] for the interactive and real-time clustering of entities.

3 NewsExplorer (IBM Watson): http://ibm.co/1OsBO1a
7.2.1 News Processing Pipeline
The news processing pipeline, depicted in Figure 7.4, starts with a news cleaning
module which performs the boilerplate removal from the raw news files (HTML/XML).
Once the news content is processed we apply the NERD module which recognizes
entity mentions and disambiguates each mention to an entity using a set of heuristics
tailored for news, such as job descriptors (e.g. “Barack Obama, president of USA”)
and linguistic patterns well defined for the journalistic text style. We use a bootstrap
approach to train the NER system [201]. Our method starts by annotating entity
names on a dataset of 50,000 news items. This is performed using a simple dictionary-based approach. Using such a training set we build a classification model based on
Conditional Random Fields (CRF). We then use the inferred classification model to
perform additional annotations of the initial seed corpus, which is then used for training
a new classification model. This cycle is repeated until the NER model stabilizes.

Fig. 7.4 News processing pipeline.

The entity snippet extraction consists of collecting sentences containing mentions to a given
entity. All snippets are concatenated generating an entity document, which is then
indexed in the entity index. The entity index represents the frequency of co-occurrence
of each entity with each term that it occurs with in the news. Therefore, by relying on
the redundancy of news terms and phrases associated with an entity we are able to
retrieve the most relevant entity to a given input keyword or phrase query. As we also
index the snippet datetime it is possible to filter query results based on a time span.
For instance, the keyword “corruption” might retrieve a different entity list results in
different time periods. Quotations are typically short and very informative sentences,
which may directly or indirectly quote a given entity. Quotations are automatically
7.2 RELink Use Case
129
extracted (refer to "Quotations Extraction" module) using linguistic patterns, thus
enriching the information extracted for each entity. Finally, once we have all mentioned
entities in a given news articles we extract entity tuples representing co-occurrences
of entities in a given news article and update the entity graph by incrementing the
number of occurrences of a node (entity) and creating/incrementing the number of
occurrences of the edge (relation) between any two mentions.
7.2.2 Demonstration
The setup for demonstration uses a news archive of Portuguese news. It comprises two
different datasets: a repository from the main Portuguese news agency (1990-2010),
and a stream of online articles provided by the main web portal in Portugal (SAPO)
which aggregates news articles from 50 online newspapers. The total number of news
articles used in this demonstration comprises over 12 million news articles. The system
is working on a daily basis, processing articles as they are collected from the news
stream. TimeMachine allows users to explore its news archive through an entity search
box or by selecting a specific date. Both options are available on the website homepage
and in the top bar on every page. There is a set of “stories” recommendations on
the homepage suited for first-time visitors. The entity search box is designed to be
the main entry point to the website as it is connected to the entity retrieval module of
TimeMachine.
Fig. 7.5 Cristiano Ronaldo egocentric network.
Users may search for surface names of entities (e.g. “Cristiano Ronaldo”) if they
know which entities they are interested in exploring in the news, although the most
powerful queries are the ones containing keywords or phrases describing topics or news
stories, such as “Eurozone crisis” or “Ballon d’Or nominees”. When selecting an entity
from the ranked list of results, users access the entity profile page which contains a
set of automatically extracted entity specific data: name, profession, a set of news
articles, quotations from the entity and related entities. An entity timeline is also
provided to allow users to navigate entity specific data through time. By selecting a
specific period, different news articles, quotations and related entities are retrieved.
Furthermore, users have the option of “view network”, which consists of an interactive
network depicting connections among entities co-mentioned in news articles for the
selected time span. An example of such a visualization is depicted in Figure 7.5, and
it is implemented using the graph drawing library Sigma JS, together with the "Force
Atlas" algorithm for the clustered layout of entities. Nodes consist of entities and edges
represent a co-occurrence of mentioned entities in the same news articles. The size
of the nodes and the width of the edges are proportional to the number of mentions and
co-occurrences, respectively. Different node colors represent specific news topics where
entities were mentioned. By selecting a date interval on the homepage, instead of
issuing a query, users get a global interactive network of mentions and co-occurrences
of the most frequent entities mentioned in the news articles for the selected period of
time.
7.3 TexRep Use Case
This section describes the design and implementation of the POPmine system, a use
case of the proposed framework, developed in the scope of the POPSTAR project. It is
an open source platform which can be used and extended by researchers interested in
tracking reputation of political entities on the Web. POPmine operates either in batch
or online mode and is able: to collect texts from web-based conventional media (news
items in mainstream media sites) and social media (blogs and Twitter); to process
those texts, recognizing topics and political entities; to analyze relevant linguistic units;
to generate indicators of both frequency of mention and polarity (positivity/negativity)
of mentions to political entities across sources, types of sources, and across time. As a
proof of concept we present these indicators in a web application tailored for tracking
political opinion in Portugal, the POPSTAR website. The system is available as an
open source software package that can be used by other researchers from social sciences
but also from any other area that is interested in tracking public opinion on the web.
We opted to use data from news articles, tweets and blog posts and each of these
data sources requires its specific crawler. News articles and blog posts are collected
using RSS feeds which eases the implementations of a specific crawler. Collecting data
from Twitter poses some challenges. The need for large amounts of data, coupled with
Twitter's imposed limits, demands a distributed system. We opted to use SocialBus4
which enables researchers to continuously collect data from particular user communities,
while respecting Twitter’s imposed limits.
The data collection components crawl data from specific data sources which implement specific web interfaces (e.g. RSS feeds, Twitter API). Each data source must
have its own data collection module which in turn connects to the POPmine system
using REST services. POPmine stores data collected in a document oriented NoSQL
database (MongoDB). This configuration allows modularity and flexibility, allowing
the possibility of developing specific data collection components tailored to specific
data sources.
The default setting of data collection modules comprise the following components:
• News: Data from online news are provided by the service Verbetes e Notícias
from Labs Sapo. This service handles online news from over 60 Portuguese news
sources and is able to recognize entities mentioned in the news.
• Blogs: Blog posts are provided by the blogs’ monitoring system from Labs
Sapo, which includes all blogs with domain sapo.pt, blogspot.pt (Blogger) and
Wordpress (blogs written in Portuguese).
• Twitter: Tweets are collected using the platform SocialBus, responsible for
the compilation of messages from 100,000 Portuguese users of Twitter. Tweets
are collected in real time and submitted to language classification. In our
experiments we opted to collect the tweets written in Portuguese.
The information extraction component comprises a knowledge base containing
metadata about entities, e.g., names or jobs. Using a knowledge base is crucial to
filter relevant data mentioning politicians, such as news, tweets and blog posts. In
our application scenario, we opted to use Verbetes, a knowledge base which comprises
names, alternative names, and professions of Portuguese people mentioned often in
news articles.
The Information Extraction components address two tasks: Named Entity Recognition and Named Entity Disambiguation. We envision an application scenario where we
4 http://reaction.fe.up.pt/socialbus/
need to track political entities. Usually these entities are well known, therefore we
opted to use a knowledge base to provide metadata about the target entities, namely
the most common surface forms of their names. Once we had the list of surface forms
to search for we applied a sequential classification approach using a prefix tree to detect
mentions. This method is very effective on news articles and blog posts but can result
in noisy mentions when applied to Twitter. For instance, a tweet containing the word
“Cameron” can be related with several entities, such as the former UK prime minister,
a filmmaker or a company. Furthermore, tweets are short which results in a reduced
context for entity disambiguation. We then apply the Entity Filtering approach of
TexRep.
The opinions warehouse contains the messages filtered by the information extraction
component and applies polarity classification to those messages using an external
resource - the Opinionizer classifier [203]. One of the requirements of the Opinionizer is
to use manually labeled data to train the classifier. We developed an online annotation
tool for that purpose.
We create opinion and poll indicators using the aggregator, which is responsible for
applying aggregation functions and smoothing techniques. Once we obtain the aggregated
data, we make available a set of web services that can be consumed by different
applications, such as the POPSTAR website, or by other research experiments, such as poll
predictions using social media opinions.
7.3.1 Data Aggregation
Buzz is the daily frequency with which political leaders are mentioned by Twitter users,
bloggers and online media news. We use two types of indicators. The first type is the
relative frequency with which party leaders are mentioned by each medium (Twitter,
Blogs and News), on each day. This indicator is expressed, for each leader of each
party, as a percentage relative to the total number of mentions to all party leaders.
The second indicator is the absolute frequency of mentions, a simple count of citations
for each political leader.
To estimate trends in Buzz, we use the Kalman Filter. We allow users to choose the
smoothing degree for each estimated trend. Users can choose between three alternatives:
a fairly reactive one, where the trend is highly volatile, allowing close monitoring of day-by-day variations; a very smooth one, ideal to capture long-term trends; and an
intermediate option, displayed by default.
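As an illustration, a one-dimensional local-level Kalman filter can produce such trends, with the process variance acting as the smoothing knob; the code below is a sketch under that assumption, and the variance values and buzz series are illustrative, not the ones used in POPmine.

```python
import numpy as np

def kalman_trend(series, q=0.01, r=1.0):
    """1D local-level Kalman filter; q (process variance) controls the smoothing degree."""
    x, p = series[0], 1.0            # initial state estimate and its variance
    trend = []
    for z in series:
        p = p + q                    # predict: the latent trend is a random walk
        gain = p / (p + r)           # Kalman gain
        x = x + gain * (z - x)       # update with the day's observed buzz z
        p = (1 - gain) * p
        trend.append(x)
    return np.array(trend)

daily_buzz = np.array([120, 135, 90, 300, 280, 150, 140, 600, 580, 200], dtype=float)
reactive = kalman_trend(daily_buzz, q=1.0)      # closely follows day-by-day variations
smooth = kalman_trend(daily_buzz, q=0.001)      # long-term trend
```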
After identifying the polarity in each of the tweets, there are several ways to quantify
the overall sentiment regarding political leaders. We can, for instance, look at each
target independently or in relative terms, compare positive with negative references
or simply look at one side of the polarity, or look at daily, weekly or monthly data
records.
In this first prototype we opted to present two separate indicators and their evolution
across time, using in both cases the day as the reference period. The first indicator is the
logarithm of the ratio of positive and negative tweets by political leader (party leaders
and the president). In other words, a positive sign means that the political leader under
consideration received more positive than negative tweets that day, while a negative
result means that he received more negative than positive tweets. In mathematical
notation:
logsentiment_i = log((positives_i + 1) / (negatives_i + 1))
The second approach is to simply look at the negative tweets (the vast majority of
tweets in our base classifier) and calculate their relative frequency for each leader. In
this way it is possible to follow each day which party leaders were, in relative terms,
more or less subject to tweets with negative polarity. In mathematical notation:
negativesshare_{i,d} = negatives_{i,d} / Σ_j negatives_{j,d}
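A minimal sketch of how these two daily indicators could be computed from per-leader counts is shown below; the counts, leader names and column names are made up for illustration.

```python
import numpy as np
import pandas as pd

# One day of hypothetical per-leader tweet counts.
day = pd.DataFrame({
    "leader":    ["A", "B", "C"],
    "positives": [40, 10, 5],
    "negatives": [20, 30, 5],
})

# Log sentiment: log((positives + 1) / (negatives + 1)) per leader.
day["log_sentiment"] = np.log((day["positives"] + 1) / (day["negatives"] + 1))

# Negative share: each leader's negatives over the day's total negatives.
day["negatives_share"] = day["negatives"] / day["negatives"].sum()

print(day)
```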
Fig. 7.6 Twitter buzz share of political leaders.
7.3.2 Visualization
We created a website (http://www.popstar.pt) to allow interactive visualization of the data collected and processed in real time by the POPmine platform. The site was developed within the scope of the POPSTAR project (Public Opinion and Sentiment Tracking, Analysis, and Research) and presents the following data: a) mentions of Portuguese party leaders on Twitter, in the blogosphere and in online news; b) sentiment conveyed through tweets regarding party leaders; c) voting intentions for the main political parties, measured by traditional polls; and d) evaluation of the performance of said party leaders, measured by polls. An example chart is depicted in Figure 7.6.
Besides providing our indicators in the form of charts, the website also has a
dashboard offering a more compact view of trends across indicators for all politicians.
7.4 Learning Word Embeddings for ORM
The material contained in this section was published in P. Saleiro, L. Sarmento, E. M. Rodrigues, C. Soares, E. Oliveira, "Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects" [17].
Word embeddings have great practical importance since they can be used as pre-computed high-density features for ML models, significantly reducing the amount of training data required in a variety of Text Mining tasks. We aim to provide general-purpose pre-trained word embeddings for the Text Mining tasks in ORM. We are particularly interested in learning word embeddings from the Twitter stream due to the specificities of user-generated content. It is relatively easy to get access to word embeddings trained from well-formed texts such as Wikipedia or online news. However, to the best of our knowledge there are no publicly available word embeddings learned from the Portuguese Twitter stream.
There are several inter-related challenges with computing and consistently distributing word embeddings, concerning the:
• intrinsic properties of the embeddings. How many dimensions do we actually need to store all the "useful" semantic information? How big should the embedded vocabulary be to have practical value? How do these two factors interplay?
• type of model used for generating the embeddings. There are multiple possible models and it is not obvious which one is the "best", either in general or in the context of a specific type of application.
• the size and properties of the training data: What is the minimum amount of training data needed? Should we include out-of-vocabulary words in the training?
• the optimization techniques to be used, model hyperparameters and training parameters.
Not only is the space of possibilities for each of these aspects large, but there are also challenges in performing a consistent large-scale evaluation of the resulting embeddings [204]. This makes systematic experimentation with alternative word-embedding configurations extremely difficult.
In this work, we make progress in trying to find good combinations of some of the previous parameters. We focus specifically on the task of computing word embeddings for processing the Portuguese Twitter stream. User-generated content (such as Twitter messages) tends to be populated by words that are specific to the medium, and that are constantly being added by users. These dynamics pose challenges to NLP systems, which have difficulties in dealing with out-of-vocabulary words. Therefore, learning a semantic representation for those words directly from the user-generated stream, and as the words arise, would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words.
Starting from our own implementation of a neural word embedding model, which
should be seen as a flexible baseline model for further experimentation, our research
tries to answer the following practical questions:
• how large is the vocabulary that one can realistically embed given the level of resources that most organizations can afford to buy and to manage (as opposed to large clusters of GPUs only available to a few organizations)?
• how much data, as a function of the size of the vocabulary we wish to embed, is enough for training meaningful embeddings?
• how can we evaluate embeddings in an automatic and consistent way so that a reasonably detailed systematic exploration of the previously described space of possibilities can be performed?
By answering these questions based on a reasonably small sample of Twitter data
(5M), we hope to find the best way to proceed and train embeddings for Twitter
vocabulary using the much larger amount of Twitter data available (300M), but for
which parameter experimentation would be unfeasible. This work can thus be seen as
a preparatory study for a subsequent attempt to produce and distribute a large-scale
database of embeddings for processing Portuguese Twitter data.
7.4.1 Neural Word Embedding Model
The neural word embedding model we use is the Continuous Bag-of-Words (CBOW) [182]. Given a sequence of 5 words, w_{i-2} w_{i-1} w_i w_{i+1} w_{i+2}, the task the model tries to perform is that of predicting the middle word, w_i, based on the two words on the left, w_{i-2} w_{i-1}, and the two words on the right, w_{i+1} w_{i+2}: P(w_i | w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}). This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in "close" regions of the embedding hyper-space.
The neural model is composed of the following layers:
• an Input Word Embedding Layer, which maps each of the 4 input words, represented by 1-hot vectors with |V| dimensions (e.g. 32k), into a low-dimensional space (64 dimensions). The projection matrix, W_input, is shared across the 4 inputs. This is not the embedding matrix that we wish to produce.
• a Merge Layer that concatenates the 4 previous embeddings into a single vector holding all the context information. The concatenation operation ensures that the rest of the model has explicit information about the relative position of the input words. Using an additive merge operation instead would preserve information only about the presence of the words, not their sequence.
• an Intermediate Context Embedding Dense Layer that maps the preceding representation of 4 words into a lower-dimensional space, still representing the entire context. We have fixed this context representation to 64 dimensions. This ultimately determines the dimension of the resulting embeddings. This intermediate layer is important from the point of view of performance because it isolates the still relatively high-dimensional input space (4 x 64 input word embeddings) from the very high-dimensional output space.
• a final Output Dense Layer that takes the previous 64-dimensional representation of the entire input context and produces a vector with the dimensionality of the word output space (|V| dimensions). This matrix, W_output, is the one that stores the word embeddings we are interested in.
• a Softmax Activation Layer that produces the final prediction over the word space, that is, the P(w_i | w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}) distribution.
All neural activations in the model are sigmoid functions. The model was implemented using the Syntagma library (https://github.com/sarmento/syntagma), which relies on Keras [205] for model development, and we train the model using the built-in ADAM [206] optimizer with the default parameters.
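A sketch of this architecture in Keras (here via tensorflow.keras, which is an assumption; the thesis implementation relied on the Syntagma library) could look as follows. The loss choice and the use of integer word indices instead of explicit 1-hot vectors are also assumptions of the sketch.

```python
# Approximate sketch of the CBOW-style model described above, not the Syntagma code.
from tensorflow.keras import layers, models, optimizers

V, EMB_DIM, CTX_DIM = 32768, 64, 64

context = layers.Input(shape=(4,), dtype="int32")              # w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}
shared_emb = layers.Embedding(V, EMB_DIM, name="W_input")       # shared across the 4 inputs
merged = layers.Flatten()(shared_emb(context))                  # concatenation of the 4 embeddings
hidden = layers.Dense(CTX_DIM, activation="sigmoid")(merged)    # intermediate context embedding
output = layers.Dense(V, activation="softmax", name="W_output")(hidden)  # P(w_i | context)

model = models.Model(inputs=context, outputs=output)
model.compile(optimizer=optimizers.Adam(),                      # ADAM with default parameters
              loss="sparse_categorical_crossentropy")           # cross entropy over the vocabulary
model.summary()
# model.fit(context_word_ids, center_word_ids, epochs=40, validation_split=0.1)
```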
7.4.2 Experimental Setup
We are interested in assessing two aspects of the word embedding process. On one
hand, we wish to evaluate the semantic quality of the produced embeddings. On the
other, we want to quantify how much computational power and training data are
required to train the embedding model as a function of the size of the vocabulary |V |
we try to embed. These aspects have fundamental practical importance for deciding
how we should attempt to produce the large-scale database of embeddings we will
provide in the future. All resources developed in this work are publicly available at https://github.com/saleiro/embedpt.
Apart from the size of the vocabulary to be processed (|V|), the hyperparameters of the model that we could potentially explore are i) the dimensionality of the input word embeddings and ii) the dimensionality of the output word embeddings. As mentioned before, we set both to 64 dimensions after performing some quick manual experimentation. Full hyperparameter exploration is left for future work.
Our experimental testbed comprises a desktop with an NVIDIA TITAN X (Pascal) GPU, an Intel Core i7 3770K quad-core CPU at 3.5 GHz, 32 GB of DDR3 RAM and a 180 GB SSD drive.
Training Data
We randomly sampled 5M tweets from a corpus of 300M tweets collected from the
Portuguese Twitter community [192]. The 5M tweets comprise a total of 61.4M words (approx. 12 words per tweet on average). From those 5M tweets we generated a database containing 18.9M distinct 5-grams, along with their frequency counts. In this process, all text was down-cased. To help anonymize the n-gram information, we substituted all Twitter handles by an artificial token "T_HANDLE". We also substituted all HTTP links by the token "LINK". We prepended two special tokens to complete the 5-grams generated from the first two words of the tweet, and we correspondingly appended two other special tokens to complete 5-grams centered around the two last tokens of the tweet.
Tokenization was performed by trivially separating tokens on blank spaces. No linguistic pre-processing, such as separating punctuation from words, was done. We
Table 7.1 Number of 5-grams available for training for different sizes of target vocabulary |V|

|V|       # 5-grams
2048      2,496,830
8192      6,114,640
32768     10,899,570
opted for not doing any pre-processing so as not to introduce any linguistic bias from another tool (tokenization of user generated content is not a trivial problem). The most direct consequence of not performing any linguistic pre-processing is that of increasing the vocabulary size and diluting token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings for both actual words (e.g. "ronaldo") and words that have punctuation attached (e.g. "ronaldo!"). In practice, we believe that this can actually be an advantage for the downstream consumers of the embeddings, since they can also relax the requirements of their own tokenization stage. Overall, the dictionary thus produced contains approximately 1.3M distinct entries. Our dictionary was sorted by frequency, so the words with the lowest indices correspond to the most common words in the corpus.
We used the information from the 5-gram database to generate all training data
used in the experiments. For a fixed size |V | of the target vocabulary to be embedded
(e.g. |V | = 2048), we scanned the database to obtain all possible 5-grams for which all
tokens were among the top |V | words of the dictionary (i.e. the top |V | most frequent
words in the corpus). Depending on |V |, different numbers of valid training 5-grams
were found in the database: the larger |V | the more valid 5-grams would pass the filter.
The number of examples collected for each of the values of |V | is shown in Table 7.1.
Since one of the goals of our experiments is to understand the impact of using
different amounts of training data, for each size of vocabulary to be embedded |V | we
will run experiments training the models using 25%, 50%, 75% and 100% of the data
available.
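The sketch below illustrates the kind of normalization, 5-gram generation and vocabulary filtering described in this subsection; the padding token, the regular expressions and the toy tweets are assumptions rather than the exact pipeline that was used.

```python
import re
from collections import Counter

PAD = "_PAD_"    # hypothetical padding token for tweet boundaries

def normalize(tweet):
    tweet = tweet.lower()
    tweet = re.sub(r"https?://\S+", "LINK", tweet)      # replace HTTP links
    tweet = re.sub(r"@\w+", "T_HANDLE", tweet)           # replace Twitter handles
    return tweet.split()                                  # trivial whitespace tokenization

def five_grams(tokens):
    padded = [PAD, PAD] + tokens + [PAD, PAD]
    return [tuple(padded[i:i + 5]) for i in range(len(padded) - 4)]

tweets = ["@user adoro o benfica http://t.co/x", "bom dia lisboa"]   # made-up examples
grams = Counter(g for t in tweets for g in five_grams(normalize(t)))

# Keep only 5-grams whose tokens are all among the top-|V| most frequent words.
word_freq = Counter(w for t in tweets for w in normalize(t))
V = 4                                                     # tiny vocabulary for illustration
top_v = {w for w, _ in word_freq.most_common(V)} | {PAD}
training = {g: c for g, c in grams.items() if all(w in top_v for w in g)}
print(len(grams), len(training))
```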
Metrics Related with the Learning Process
We tracked metrics related to the learning process itself, as a function of the vocabulary
size to be embedded |V | and of the fraction of training data used (25%, 50%, 75%
and 100%). For all possible configurations, we recorded the values of the training and
validation loss (cross entropy) after each epoch. Tracking these metrics serves as a
minimalistic sanity check: if the model is not able to solve the word prediction task
with some degree of success (e.g. if we observe no substantial decay in the losses) then
one should not expect the embeddings to capture any of the distributional information
they are supposed to capture.
Tests and Gold-Standard Data for Intrinsic Evaluation
Using the gold standard data (described below), we performed three types of tests:
• Class Membership Tests: embeddings corresponding to members of the same semantic class (e.g. "Months of the Year", "Portuguese Cities", "Smileys") should be close, since they are supposed to be found in mostly the same contexts.
• Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes are expected to be found in significantly different contexts.
• Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. "porque" abbreviated by "pq") and partial references (e.g. "slb" and "benfica") should be almost equal, since both alternatives are supposed to be used interchangeably in all contexts (either maintaining or inverting the meaning).
Therefore, in our tests, two words are considered:
• distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).
• to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).
• equivalent if the cosine of the embeddings is higher than 0.85 (or 0.95).
We report results using different thresholds of cosine similarity as we noticed that cosine
similarity is skewed to higher values in the embedding space, as observed in related
work [207, 208]. We used the following sources of data for testing Class Membership:
• AP+Battig data. This data was collected from the evaluation data provided by
[116]. These correspond to 29 semantic classes.
• Twitter-Class - collected manually by the authors by checking the most frequent words in the dictionary and then expanding the classes. These include the following 6 sets (number of elements in brackets): smileys (13), months (12), countries (6), names (19), surnames (14), Portuguese cities (9).
For the Class Distinction test, we pair each element of each of the gold standard classes with all the other elements from other classes (removing duplicate pairs since ordering does not matter), generating pairs of words which are supposed to belong to different classes. For the Word Equivalence test, we manually collected equivalent pairs, focusing on abbreviations that are popular on Twitter (e.g. "qt" ≃ "quanto" or "lx" ≃ "lisboa") and on frequent acronyms (e.g. "slb" ≃ "benfica"). In total, we compiled 48 equivalence pairs.
For all these tests we also computed a coverage metric, since our embeddings do not necessarily contain information for all the words used in each test: it measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classify as i) being distinct (cosine < 0.7 or 0.8), ii) belonging to the same class (cosine > 0.7 or 0.8), and iii) being equivalent (cosine > 0.85 or 0.95).
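A minimal sketch of how such threshold-based tests and their coverage could be computed is shown below; the helper functions, word pairs and random vectors are made up and only illustrate the scoring logic described above.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_pairs(pairs, emb, threshold, same_side=True):
    """Return (coverage, success rate) for word pairs under a cosine threshold.
    same_side=True checks cosine > threshold (membership/equivalence);
    same_side=False checks cosine < threshold (class distinction)."""
    covered = [(a, b) for a, b in pairs if a in emb and b in emb]
    coverage = len(covered) / len(pairs) if pairs else 0.0
    if not covered:
        return coverage, 0.0
    hits = sum((cosine(emb[a], emb[b]) > threshold) == same_side for a, b in covered)
    return coverage, hits / len(covered)

emb = {w: np.random.rand(64) for w in ["janeiro", "fevereiro", "lisboa"]}
membership_pairs = [("janeiro", "fevereiro"), ("janeiro", "marco")]   # second pair is not covered
print(evaluate_pairs(membership_pairs, emb, threshold=0.70))
```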
It is worth making a final comment about the gold standard data. Although we do
not expect this gold standard data to be sufficient for a wide-spectrum evaluation of
the resulting embeddings, it should be enough for providing us clues regarding areas
where the embedding process is capturing enough semantics, and where it is not. These
should still provide valuable indications for planning how to produce the much larger
database of word embeddings.
7.4.3 Results and Analysis
We ran the training process and performed the corresponding evaluation for 12 combinations of the size of the vocabulary to be embedded and the volume of training data used. Table 7.2 presents some overall statistics after training for 40 epochs.
The average time per epoch increases first with the size of the vocabulary to embed
|V | (because the model will have more parameters), and then, for each |V |, with the
volume of training data. Using our testbed (Section 7.4.2), the total time of learning
in our experiments varied from a minimum of 160 seconds, with |V | = 2048 and 25%
of data, to a maximum of 22.5 hours, with |V| = 32768 and using 100% of the training data available (extracted from 5M tweets). These numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete Twitter corpus we have, consisting of 300M tweets.

Table 7.2 Overall statistics for 12 combinations of models learned varying |V| and volume of training data. Results observed after 40 training epochs.

Embeddings     # Training Data Tuples      Avg secs/epoch   Training loss   Validation loss
|V| = 2048     561,786 (25% data)          4                3.2564          3.5932
|V| = 2048     1,123,573 (50% data)        9                3.2234          3.4474
|V| = 2048     1,685,359 (75% data)        13               3.2138          3.3657
|V| = 2048     2,496,830 (100% data)       18               3.2075          3.3074
|V| = 8192     1,375,794 (25% data)        63               3.6329          4.286
|V| = 8192     2,751,588 (50% data)        151              3.6917          4.0664
|V| = 8192     4,127,382 (75% data)        187              3.7019          3.9323
|V| = 8192     6,114,640 (100% data)       276              3.7072          3.8565
|V| = 32768    2,452,402 (25% data)        388              3.7417          5.2768
|V| = 32768    4,904,806 (50% data)        956              3.9885          4.8409
|V| = 32768    7,357,209 (75% data)        1418             4.0649          4.6
|V| = 32768    10,899,570 (100% data)      2028             4.107           4.4491
We now analyze the learning process itself. We plot the training set loss and
validation set loss for the different values of |V | (Figure 7.7 left) with 40 epochs and
using all the available data. As expected, the loss decreases after each epoch, with the validation loss, although slightly higher, following the same trend. When using 100% of the data we see no model overfitting. We can also observe that the higher |V| is, the higher the absolute values of both losses. This is not surprising because, as the number of words to predict becomes larger, the problem tends to become harder. Also,
because we keep the dimensionality of the embedding space constant (64 dimensions), it
becomes increasingly hard to represent and differentiate larger vocabularies in the same
hyper-volume. We believe this is a specially valuable indication for future experiments
and for deciding the dimensionality of the final embeddings to distribute.
On the right side of Figure 7.7 we show how the number of training (and validation)
examples affects the loss. For a fixed |V | = 32768 we varied the amount of data used
for training from 25% to 100%. Three trends are apparent. As we train with more
data, we obtain better validation losses. This was expected. The second trend is that
by using less than 50% of the data available the model tends to overfit the data, as
indicated by the consistent increase in the validation loss after about 15 epochs (check the dashed lines on the right side of Figure 7.7). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough not to show that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model.

Fig. 7.7 Continuous line represents loss in the training data while dashed line represents loss in the validation data. Left side: effect of increasing |V| using 100% of training data. Right side: effect of varying the amount of training data used with |V| = 32768.
Intrinsic Evaluation
Table 7.3 presents results for the three different tests described in Section 7.4.2. The first
(expected) result is that the coverage metrics increase with the size of the vocabulary
being embedded, i.e., |V |. Because the Word Equivalence test set was specifically
created for evaluating Twitter-based embeddings, when embedding |V| = 32768 words
we achieve almost 90% test coverage. On the other hand, for the Class Distinction test
set - which was created by taking the cross product of the test cases of each class in
Class Membership test set - we obtain very low coverage figures. This indicates that it
is not always possible to re-use previously compiled gold-standard data, and that it
will be important to compile gold-standard data directly from Twitter content if we
want to perform a more precise evaluation.
The effect of varying the cosine similarity decision threshold from 0.70 to 0.80 for
Class Membership test shows that the percentage of test cases that are classified as
correct drops significantly. However, the drop is more accentuated when training with
only a portion of the available data. The difference between the two alternative threshold values is even larger in the Word Equivalence test.
The Word Equivalence test, in which we consider two words equivalent if the cosine of the embedding vectors is higher than 0.95, revealed itself to be an extremely demanding test. Nevertheless, for |V| = 32768 the results are far superior, and for a much larger coverage, than for lower |V|. The same happens with the Class Membership test.
On the other hand, the Class Distinction test shows a different trend for the largest vocabulary, |V| = 32768, but the coverage for the other values of |V| is so low that it would not make sense to hypothesize about the reduced True Negative (TN) percentages obtained for the largest |V|. It would be necessary to confirm this behavior with even larger values of |V|. One might hypothesize that the ability to distinguish between classes requires larger thresholds when |V| is large. Also, we can speculate about the need to increase the number of dimensions to be able to encapsulate different semantic information for so many words.
Table 7.3 Evaluation of resulting embeddings using Class Membership, Class Distinction and Word Equivalence tests for different thresholds of cosine similarity.

Embeddings      Class Membership                        Class Distinction                   Word Equivalence
|V|, %data      coverage   Acc.@0.70   Acc.@0.80        coverage   TN@0.70   TN@0.80        coverage   Acc.@0.85   Acc.@0.95
2048, 25%       12.32%     30.71%      4.94%            1.20%      100%      100%           31.25%     26.67%      2.94%
2048, 50%       12.32%     29.13%      12.69%           1.20%      100%      100%           31.25%     26.67%      2.94%
2048, 75%       12.32%     29.13%      18.12%           1.20%      100%      100%           31.25%     33.33%      2.94%
2048, 100%      12.32%     32.28%      26.77%           1.20%      100%      100%           31.25%     33.33%      6.67%
8192, 25%       29.60%     14.17%      4.94%            6.54%      100%      100%           70.83%     14.71%      2.94%
8192, 50%       29.60%     22.41%      12.69%           6.54%      99%       100%           70.83%     20.59%      2.94%
8192, 75%       29.60%     27.51%      18.12%           6.54%      99%       100%           70.83%     20.59%      2.94%
8192, 100%      29.60%     33.77%      21.91%           6.54%      97%       100%           70.83%     29.41%      5.88%
32768, 25%      47.79%     17.73%      5.13%            18.31%     98%       100%           89.58%     16.28%      2.33%
32768, 50%      47.79%     52.30%      21.06%           18.31%     83%       98%            89.58%     34.88%      9.30%
32768, 75%      47.79%     85.15%      49.41%           18.31%     44%       88%            89.58%     58.14%      23.26%
32768, 100%     47.79%     95.59%      74.80%           18.31%     13%       57%            89.58%     72.09%      34.88%
Further Analysis regarding Evaluation Metrics
Despite already providing interesting practical clues for our goal of trying to embed a
larger vocabulary using more of the training data we have available, these results also
revealed that the intrinsic evaluation metrics we are using are overly sensitive to their
corresponding cosine similarity thresholds. This sensitivity poses serious challenges for
further systematic exploration of word embedding architectures and their corresponding
hyper-parameters, which was also observed in other recent works [208].
By using these absolute thresholds as criteria for deciding the similarity of words, we
create a dependency between the evaluation metrics and the geometry of the embedded
data. If we see the embedding data as a graph, this means that metrics will change
if we apply scaling operations to certain parts of the graph, even if its structure (i.e.
relative position of the embedded words) does not change.
For most practical purposes (including training downstream ML models) absolute
distances have little meaning. What is fundamental is that the resulting embeddings
are able to capture topological information: similar words should be closer to each
other than they are to words that are dissimilar to them (under the various criteria of
similarity we care about), independently of the absolute distances involved.
It is now clear that a key aspect for future work will be developing additional
performance metrics based on topological properties. We are in line with recent work [209], which proposes shifting evaluation from absolute values to more exploratory evaluations focusing on the weaknesses and strengths of the embeddings rather than on generic scores. For example, one metric could consist in checking whether, for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metric.
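One possible instantiation of such a threshold-free check is sketched below, assuming a word-to-vector mapping and hand-built class lists; it only illustrates the idea and is not an implemented evaluation suite.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def topological_score(classes, emb):
    """Fraction of words whose minimum same-class similarity exceeds the
    maximum similarity to any word of a different class."""
    ok, total = 0, 0
    for cls, words in classes.items():
        others = [w for c, ws in classes.items() if c != cls for w in ws if w in emb]
        for w in words:
            if w not in emb:
                continue
            same = [cosine(emb[w], emb[x]) for x in words if x != w and x in emb]
            diff = [cosine(emb[w], emb[x]) for x in others]
            if same and diff:
                total += 1
                ok += min(same) > max(diff)
    return ok / total if total else 0.0

classes = {"months": ["janeiro", "fevereiro"], "cities": ["lisboa", "porto"]}   # toy classes
emb = {w: np.random.rand(64) for ws in classes.values() for w in ws}
print(topological_score(classes, emb))
```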
7.4.4
Concluding Remarks
Producing word embeddings from tweets is challenging due to the specificities of the
vocabulary in the medium. We implemented a neural word embedding model that
embeds words based on n-gram information extracted from a sample of the Portuguese
Twitter stream, and which can be seen as a flexible baseline for further experiments
in the field. The work reported in this section is a preliminary study aimed at finding parameters for training word embeddings from Twitter, together with adequate evaluation tests and gold-standard data.
Results show that using less than 50% of the available training examples for each vocabulary size might result in overfitting. The resulting embeddings obtain reasonable performance on intrinsic evaluation tests when trained with a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. Nevertheless,
results exhibit a skewness in the cosine similarity scores that should be further explored
in future work. More specifically, the Class Distinction test set revealed itself to be challenging and opens the door to evaluating not only similarity between words but also dissimilarity between words of different semantic classes without using absolute score values.
Therefore, a key area of future exploration has to do with better evaluation resources and metrics. We have made an initial effort on this front. However, we believe that
developing new intrinsic tests, agnostic to absolute values of metrics and concerned
with topological aspects of the embedding space, and expanding gold-standard data
with cases tailored for user-generated content, is of fundamental importance for the
progress of this line of work.
Furthermore, we plan to make publicly available word embeddings trained from a
large sample of 300M tweets collected from the Portuguese Twitter stream. This will
require experimenting with and producing embeddings with higher dimensionality (to
avoid the cosine skewness effect) and training with even larger vocabularies. Also,
there is room for experimenting with some of the hyper-parameters of the model itself
(e.g. activation functions, dimensions of the layers), which we know have impact on
final results.
7.5 Summary of the Contributions
The work reported in this chapter makes the following contributions:
• A framework that supports research in Entity Retrieval and Text Mining tasks in the context of Online Reputation Monitoring. This framework is composed of two major components that can act as independent frameworks: RELink and TexRep.
• The RELink framework, which supports comprehensive research work in E-R retrieval, including the semi-automatic creation of test queries as well as Early Fusion-based approaches for E-R retrieval.
• The TexRep framework, which is able to collect texts from online media, such as Twitter or online news, identify entities of interest, and classify sentiment polarity and intensity. The framework supports multiple data aggregation methods, as well as visualization and modeling techniques that can be used for both descriptive analytics, such as analyzing how political polls evolve over time, and predictive analytics, such as predicting elections.
• A study of some practical aspects, namely vocabulary size, training data size and intrinsic evaluation, for training and publishing word embeddings from the Portuguese Twitter stream that can later be used for ORM-related tasks.
Chapter 8 Conclusions
In this thesis we have addressed two computational problems in Online Reputation
Monitoring: Entity Retrieval and Text Mining. Entities are the gravitational force that
drives the ORM process and consequently the work reported in this thesis gravitates
around entities and their occurrences across the Web. We researched and developed
methods for text-based extraction, entity-relationship retrieval, analysis and prediction
of entity-centric information spread across the Web.
The main objectives of this thesis were achieved resulting in several contributions
to the problem of Online Reputation Monitoring. Several competitive baselines were
developed which we believe represent significant progress in a research area where
open source work is scarce. However, there are still many issues to be addressed in the
future. Recent developments in Deep Neural Networks create opportunities to improve
performance in several tasks we addressed in this thesis. Once we have access to larger
quantities of training data it will be possible to easily adapt our research framework to
include these techniques.
8.1 Summary and Main Contributions
Entity-Relationship Retrieval
We have established that ORM benefits from entity retrieval capabilities and should not be constrained to classic data analytics reports. Users ought to be able to search for entity-centric information from social media and online news. Furthermore, reputation is not an isolated asset and depends also on the reputation of "neighboring" entities. We studied the problem of Entity-Relationship Retrieval using an IR-centric perspective and we made several contributions to this line of research:
• Generalization of the problem of entity-relationship search to cover entity types
and relationships represented by any attribute and predicate, respectively, rather
than a predefined set.
• A general probabilistic model for E-R retrieval using Bayesian Networks.
• Proposal of two design patterns that support retrieval approaches using the E-R
model.
• Proposal of an Entity-Relationship Dependence Model that builds on the basic Sequential Dependence Model (SDM) to provide extensible entity-relationship representations and dependencies, suitable for complex, multi-relation queries.
• Proposal of an indexing method that supports a retrieval approach to the above
problem.
• A semi-automatic method for generating E-R test collections, which resulted in
the RELink Query Collection comprising 600 E-R queries.
• Results of experiments at scale, with a comprehensive set of queries and corpora.
Entity-Relationship (E-R) Retrieval is a complex case of Entity Retrieval where
the goal is to search for multiple unknown entities and relationships connecting them.
Contrary to entity retrieval from structured knowledge graphs, IR-centric approaches
to E-R retrieval are more adequate in the context of ORM. This is due to the dynamic nature of the data sources, which are much more transient than other, more stable sources of information (e.g. Wikipedia) used in general Entity Retrieval.
Consequently, we developed E-R retrieval methods that do not rely on fixed and
predefined entity types and relationships, enabling a wider range of queries compared
to Semantic Web-based approaches.
We started by presenting a formal definition of E-R queries, where we assume that an E-R query can be decomposed into a sequence of sub-queries, each containing keywords related to a specific entity or relationship. Then we adopted a probabilistic
formulation of the E-R retrieval problem. When creating specific representations for
entities (e.g. context terms) and for pairs of entities (i.e. relationships) it is possible
to create a graph of probabilistic dependencies between sub-queries and entity plus
relationship representations. We use a Bayesian network to depict these dependencies
in a probabilistic graphical model. To the best of our knowledge this represents the
first probabilistic model of E-R retrieval.
However, these conditional probabilities cannot be computed directly from raw
documents in a collection. In fact, this is a condition inherent to the problem of Entity
Retrieval. Documents serve as proxies for entity and relationship representations and, consequently, we need to fuse information spread across multiple documents to be able to create those representations. We proposed two design patterns, Early Fusion and Late Fusion, inspired by Model 1 and Model 2 of Balog et al. [46]. However, in the context of ORM, we are only interested in Early Fusion.
Early Fusion aggregates context terms of entity and relationship occurrences to
create two dedicated indexes, the entity index and the relationship index. Once we
have the two indexes it is possible to apply any retrieval method to compute the
relevance scores of entity and relationship documents (i.e. representations) given the
E-R sub-queries. The joint probability to retrieve the final entity tuples is computed
using a factorization of the conditional probabilities, i.e., the individual relevance
scores.
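As a toy illustration of this factorization (not the actual Early Fusion implementation, which scores indexed entity and relationship representations with retrieval models such as Language Models or BM25), the sketch below combines per-sub-query scores of a hypothetical candidate tuple by summing their logarithms.

```python
import math

def log_relevance(sub_query, doc_terms):
    # Stand-in scorer based on smoothed term overlap; a real system would use LM/BM25
    # scores computed over the entity and relationship indexes.
    overlap = sum(doc_terms.count(t) for t in sub_query.lower().split())
    return math.log((overlap + 0.5) / (len(doc_terms) + 1.0))

def score_tuple(er_query, candidate):
    """er_query: [q_e1, q_rel, q_e2]; candidate: (e1_doc, rel_doc, e2_doc) as term lists."""
    return sum(log_relevance(q, doc) for q, doc in zip(er_query, candidate))

query = ["portuguese prime minister", "meeting with", "german chancellor"]
candidate = (["pedro", "passos", "coelho", "portuguese", "prime", "minister"],
             ["meeting", "summit", "with"],
             ["angela", "merkel", "german", "chancellor"])
print(score_tuple(query, candidate))
```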
On the other hand, Late Fusion consists in matching the E-R sub-queries directly on
a standard document index alongside the set of entity occurrences in each document. Once
we compute the individual relevance scores of each document given a E-R sub-query,
we then aggregate the entity occurrences of the top k results to compute the final
joint probability. When using traditional retrieval models, such as Language Models
or BM25, these design patterns can be used to create unsupervised baselines for E-R
retrieval.
Since our objective was to explore an Early Fusion approach to E-R retrieval, we developed a novel supervised Early Fusion-based model for E-R retrieval, the Entity-Relationship Dependence Model (ERDM). It uses a Markov Random Field to model term dependencies of E-R sub-queries and entity/relationship documents. ERDM can
be seen as an extension of the Sequential Dependence Model (SDM) [63] for ad-hoc
document retrieval in a way that it relies on query term dependencies but creates
a more complex graph structure that connects terms of multiple (sub-)queries and
multiple documents to compute the probability mass function under the MRF.
One of the difficulties we faced while researching E-R retrieval was the lack of test
collections. We therefore decided to contribute to this research problem by creating
a semi-automatic method for creating test collections. We realized that web tabular
data often include implicit relationships between entities that belong to the same row
in a table. We developed a table parser that extracts tuples of related entities from
Wikipedia Lists-of-lists-of-lists tables. We then extract metadata, such as table title
or column name, and provide it to editors, together with the list of entity tuples. We
asked editors to create E-R queries in which the list of entity tuples could serve as
relevance judgments. This process resulted in the creation and publication of the
RELink Query Collection comprising 600 E-R queries. We believe RELink QC will
foster research work in E-R retrieval.
We performed experiments at scale using the ClueWeb-09B Web corpus from which
we extracted and indexed more than 850 million entity and relationship occurrences.
We evaluated our methods using four different query sets comprising a total of 548 E-R
queries. As far as we know, this is the largest experiment in E-R retrieval, considering
the size of the query set and the data collection. Results show consistently better
performance of the ERDM model over the three proposed baselines. When comparing Language Models and BM25 as feature functions, we observed variance in performance depending on the query set. Furthermore, using unsupervised Early Fusion proved to
be very competitive when compared to ERDM, suggesting that it can be used in some
application scenarios where the overhead of computing sequential dependencies might
be unfeasible.
Entity Filtering and Sentiment Analysis
Entity Filtering and Sentiment Analysis are two fundamental Text Mining problems in
ORM. We participated in two well known external benchmark competitions in both
tasks resulting in state-of-the-art performance. We made the following contributions
to these two problems:
• A supervised learning approach for Entity Filtering on tweets, achieving state-of-the-art performance using a relatively small training set.
• The creation and public release of word embeddings trained from financial texts.
• A supervised learning approach for fine-grained sentiment analysis of financial
texts.
Entity Filtering can be seen as targeted named entity disambiguation. We developed
a supervised method that classifies tweets as relevant or non-relevant to a given target
entity. This task is fundamental in ORM as downstream tasks, such as prediction, can
be highly affected by noisy input data. We implemented a large set of features that
can be generated to describe the relationship between a tweet mentioning a entity and
a reference entity representation.
We relied on metadata, such as entity categories, text represented with TF-IDF,
similarity between tweets and Wikipedia entity articles, Freebase entity disambiguation, feature selection of terms based on frequency, and feature matrix transformation
using SVD. Although our approach can be perceived as relatively simple and low cost,
we achieved first place with an Accuracy over 0.90 at the Filtering Task of RepLab
2013, in a test set containing more than 90 thousand tweets and 61 different target
entities.
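For illustration only, a pipeline in the spirit of these features (a TF-IDF text representation reduced with SVD and fed to a linear classifier) could look like the sketch below; it is not the RepLab 2013 system, it omits the metadata and knowledge-base features, and the tweets and labels are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

tweets = ["apple released a new iphone today",          # toy examples for the entity "Apple Inc."
          "eating an apple pie with my grandmother",
          "apple stock rises after the keynote"]
labels = [1, 0, 1]                                       # 1 = relevant, 0 = not relevant

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),                 # bag of words and bigrams
    TruncatedSVD(n_components=2),                        # tiny here; hundreds in practice
    LinearSVC(),
)
clf.fit(tweets, labels)
print(clf.predict(["new apple store opening downtown"]))
```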
Regarding Sentiment Analysis, we decided to focus our efforts on a less well explored sub-area, namely financial texts. We participated in SemEval 2017 Task 5,
which focused on fine-grained sentiment analysis of financial news and microblogs. The
task consisted in predicting a real continuous variable from -1.0 to +1.0 representing
the polarity and intensity of sentiment concerning companies/stocks mentioned in short
texts. We modeled it as a regression analysis problem.
Previous work in this domain showed that financial sentiment is often depicted in an
implicit way. We created financial-specific word embeddings in order to obtain domain
specific syntactic and semantic relations between words in this context. We combined
traditional bag-of-words, lexical-based features and bag-of-embeddings to train a
regressor of both sentiment and intensity. Results showed that different combinations of features attained different performances on each sub-task. Nevertheless, we were
able to obtain cosine similarities above 0.65 in both sub-tasks and mean average errors
below 0.2 in a scale range of 2.0, representing less than 10% of the maximum possible
error.
Text-based Entity-centric Prediction
We explored two text-based prediction problems in the context of ORM, performing an analysis of the predictive power of entity-centric information in the news to predict entity popularity on Twitter, as well as a study of sentiment aggregate functions to predict political opinion. We made the following contributions in this research area:
• Analysis of the predictive power of online news regarding entity popularity on
Twitter for entities that are frequently mentioned on the news.
• Analysis of how to combine different sentiment aggregate functions to serve as
features for predicting political polls.
We are aware that entity popularity on social media can be influenced by endogenous
and exogenous factors but we are only interested in exploring the interplay between
online news and social media reactions. This could be useful for anticipating public
relations damage control or even for editorial purposes to maximize attention and
consequently, revenue. We explored different sets of signals extracted from online news mentioning entities that are frequently mentioned in the news, such as politicians or footballers. These signals could influence, or are at least correlated with, the future popularity of those entities on Twitter.
Results show that performance varies depending on the target entity. In general,
results are better in the case of predicting the popularity of politicians, due to the high unpredictability of live events associated with sports. This is a general conclusion of this study, as online news do not have predictive power for live events since Twitter reactions happen more quickly than the publication of the news in such cases. Results also show that the time of prediction affects the performance of the models. For instance, in the case of politicians the F1 score is higher when the time of prediction occurs after lunch time, which is evidence that in politics most of the news events that trigger social
media reactions are reported in the morning news.
The second predictive study we carried out consisted in using entity-centric sentiment polarity extracted from tweets to predict political polls. There is no consensus in previous research work on which sentiment aggregate function is more adequate to predict political results. We explored several sentiment aggregate functions described in the literature to assess which one, or which combination, would be more effective at predicting polls during the Portuguese bailout (2011-2013). In our study, we achieved the lowest mean average error using a combination of buzz aggregation functions to predict monthly poll variations instead of absolute values. On the other hand, the most important individual feature was an aggregate function consisting of the logarithm of the ratio of positively and negatively classified tweets.
A Framework for ORM
We also created a framework specifically tailored for ORM that puts together the
sub-tasks we tackled throughout this thesis. We believe this framework represents a
significant contribution and paves the way to future research in the computational
problems inherent to the process of monitoring reputation online. More precisely we
make the following contributions:
• A framework that supports research in Entity Retrieval and Text Mining tasks in the context of Online Reputation Monitoring. This framework is composed of two major components that can act as independent frameworks: RELink and TexRep.
• The RELink framework, which supports comprehensive research work in E-R retrieval, including the semi-automatic creation of test queries as well as Early Fusion-based approaches for E-R retrieval.
• The TexRep framework, which is able to collect texts from online media, such as Twitter or online news, identify entities of interest, and classify sentiment polarity and intensity. The framework supports multiple data aggregation methods, as well as visualization and modeling techniques that can be used for both descriptive analytics, such as analyzing how political polls evolve over time, and predictive analytics, such as predicting elections.
• A study of some practical aspects, namely vocabulary size, training data size and intrinsic evaluation, for training and publishing word embeddings from the Portuguese Twitter stream that can later be used for ORM-related tasks.
The framework is divided into two distinct components, one dedicated to Entity Retrieval and the other to Text Mining. In practice these two components can act as two separate frameworks. Both are adaptable and can be reused in different application scenarios, from computational journalism to finance or politics. The RELink framework is designed to facilitate experiments with E-R retrieval query collections. TexRep was designed with two main challenges in mind: 1) it should be able to cope with the Text Mining problems underlying ORM and 2) it should be flexible, adaptable and reusable in order to support the specificities of different application scenarios. We also presented two use cases of our framework for ORM. In the first we use RELink in the context of computational journalism, while in the second we described the design and implementation of the POPmine system, a use case of the proposed framework in the scope of the POPSTAR project.
Furthermore, we presented a study of the practical aspects of learning word embeddings from the Twitter stream. Our goal was to assess the feasibility of producing and publishing general purpose word embeddings for ORM. Results showed that using less than 50% of the available training examples for each vocabulary size might result in over-fitting. We obtained interesting performance on intrinsic evaluation when training with a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. We proposed a set of gold standard data for intrinsic evaluation of word embeddings from user generated content. Nevertheless, we realized that evaluation metrics using absolute values as thresholds might not be suitable due to the cosine skewness effect in large-dimensional embedding spaces. We propose to develop topological intrinsic evaluation metrics in future work.
8.2 Limitations and Future Work
One of the major obstacles we faced during the course of this thesis was the limited
availability of labeled data for training and evaluation of the different tasks we tackled.
This is a common limitation in the scope of Online Reputation Monitoring. Due to this
obstacle we did not have the chance to perform extensive experimentation using more
than one data source and language for each task. This aspect reduces the generalization
of the results obtained since they might be biased towards the available datasets we
had access to. Therefore, we leave for future work experimentation on each task with
multiple datasets using different data sources and languages to perform comparable
evaluations.
We also recognize that we tried to address many different tasks which reduced our
capability of addressing every task with the same level of depth. Nevertheless, we
believe that exploring several new tasks in the scope of ORM constitutes a strong
contribution to foster future research work in this area. During the course of this
thesis, we did not have the possibility of performing user studies to assess the global
usefulness of our framework for ORM. We would like to leave that as future work.
While we had the objective of applying E-R retrieval to online news and social media, which represent the natural data sources for ORM, it was not possible to evaluate our approaches using these types of data sources. Research work in E-R retrieval is still in its early stages and we believed it was necessary to first contribute to general E-R retrieval and leave for future work a specific evaluation in the context of ORM. We implemented and created a demo of the Early Fusion approach since it is unsupervised. However, it was not possible to apply ERDM to online news due to the lack of training queries and relevance judgments for parameter tuning. In either case, we aim to conduct a user study in the near future to collect queries and relevance judgments in the context of ORM.
Recent work in Deep Neural Networks creates opportunities to beat the baselines we created in this thesis; however, most of the tasks we addressed do not have enough labeled data to use these techniques. One of the most interesting avenues we would like to explore would be the use of neural networks as feature functions of the ERDM model. Since we have a dataset of more than 850 million entity and relationship extractions, this represents an ideal scenario for Deep Learning. We propose to use a window-based prediction task similar to the CBOW model for training word embeddings. Given a fixed window size, one would learn a neural network that would provide a ranked list of entities/relationships given an input query. We believe this approach would reduce the computational costs of the current ERDM feature functions since we would not need to keep two huge indexes at query time.
We would also like to explore different priors for entity and relationship documents within ERDM. For instance, creating source- and time-sensitive rankings would be useful when using transient information sources. Another promising avenue is transfer learning, especially due to the lack of training resources in the context of ORM. The possibility of bilingual training or cross-domain (e.g. politics to finance) knowledge transfer would constitute major progress in this area.
References
[1] Cees BM Van Riel, Charles J Fombrun, et al. Essentials of corporate communication: Implementing practices for effective reputation management. Routledge,
2007.
[2] Mats Alvesson. Organization: from substance to image? Organization studies,
11(3):373–394, 1990.
[3] Diana Maynard, Kalina Bontcheva, and Dominic Rout. Challenges in developing opinion mining tools for social media. Proceedings of @ NLP can u tag#
usergeneratedcontent, 2012.
[4] Gianluca Demartini, Claudiu S Firan, Tereza Iofciu, Ralf Krestel, and Wolfgang
Nejdl. Why finding entities in wikipedia is difficult, sometimes. Information
Retrieval, 13(5):534–567, 2010.
[5] Jeffrey Pound, Peter Mika, and Hugo Zaragoza. Ad-hoc object retrieval in the
web of data. In Proceedings of the 19th international conference on World wide
web, pages 771–780. ACM, 2010.
[6] Charles J Fombrun and Cees BM Van Riel. Fame & fortune: How successful
companies build winning reputations. FT Press, 2004.
[7] Don Stacks. A Practitioner's Guide to Public Relations Research, Measurement
and Evaluation. Business Expert Press, 2010.
[8] Krisztian Balog, Yi Fang, Maarten de Rijke, Pavel Serdyukov, Luo Si, et al.
Expertise retrieval. Foundations and Trends® in Information Retrieval, 6(2–3):
127–256, 2012.
[9] Tom Heath and Christian Bizer. Linked data: Evolving the web into a global
data space. Synthesis lectures on the semantic web: theory and technology, 1(1):
1–136, 2011.
[10] Mohamed Yahya, Denilson Barbosa, Klaus Berberich, Qiuyue Wang, and Gerhard
Weikum. Relationship queries on extended knowledge graphs. In Proceedings
of the Ninth ACM International Conference on Web Search and Data Mining,
pages 605–614. ACM, 2016.
[11] Anastasia Giachanou and Fabio Crestani. Like it or not: A survey of twitter
sentiment analysis methods. ACM Comput. Surv., 49(2):28:1–28:41, June 2016.
ISSN 0360-0300. doi: 10.1145/2938640.
[12] Michela Nardo, Marco Petracco-Giudici, and Minás Naltsidis. Walking down
wall street with a tablet: A survey of stock market predictions using the web.
Journal of Economic Surveys, 30(2), 2016.
[13] Jasmina Smailović, Miha Grčar, Nada Lavrač, and Martin Žnidaršič. Stream-based active learning for sentiment analysis in the financial domain. Information
Sciences, 285, 2014.
[14] Pedro Saleiro, Eduarda Mendes Rodrigues, Carlos Soares, and Eugénio Oliveira.
Texrep: A text mining framework for online reputation monitoring. New Generation Comput., 35(4):365–389, 2017. doi: 10.1007/s00354-017-0021-3.
[15] Pedro Saleiro, Natasa Milic-Frayling, Eduarda Mendes Rodrigues, and Carlos
Soares. Relink: A research framework and test collection for entity-relationship
retrieval. In Proceedings of the 40th International ACM SIGIR Conference on
Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan,
August 7-11, 2017, pages 1273–1276, 2017. doi: 10.1145/3077136.3080756.
[16] Pedro Saleiro, Natasa Milic-Frayling, Eduarda Mendes Rodrigues, and Carlos
Soares. Early fusion strategy for entity-relationship retrieval. In Proceedings
of the First Workshop on Knowledge Graphs and Semantics for Text Retrieval
and Analysis (KG4IR 2017) co-located with the 40th International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR 2017),
Shinjuku, Tokyo, Japan, August 11, 2017., pages 49–54, 2017.
[17] Pedro Saleiro, Luís Sarmento, Eduarda Mendes Rodrigues, Carlos Soares, and Eugénio C. Oliveira. Learning word embeddings from the portuguese twitter stream:
A study of some practical aspects. In Progress in Artificial Intelligence - 18th EPIA
Conference on Artificial Intelligence, EPIA 2017, Porto, Portugal, September 5-8,
2017, Proceedings, pages 880–891, 2017. doi: 10.1007/978-3-319-65340-2_71.
[18] Pedro Saleiro, Eduarda Mendes Rodrigues, Carlos Soares, and Eugénio Oliveira.
Feup at semeval-2017 task 5: Predicting sentiment polarity and intensity with
financial word embeddings. In Proceedings of the 11th International Workshop
on Semantic Evaluation (SemEval-2017), pages 904–908. Association for Computational Linguistics, 2017. doi: 10.18653/v1/S17-2155.
[19] Pedro Saleiro and Carlos Soares. Learning from the news: Predicting entity
popularity on twitter. In Advances in Intelligent Data Analysis XV - 15th
International Symposium, IDA 2016, Stockholm, Sweden, October 13-15, 2016,
Proceedings, pages 171–182, 2016. doi: 10.1007/978-3-319-46349-0_15.
[20] Pedro Saleiro, Jorge Teixeira, Carlos Soares, and Eugénio C. Oliveira. Timemachine: Entity-centric search and visualization of news archives. In Advances
in Information Retrieval - 38th European Conference on IR Research, ECIR
2016, Padua, Italy, March 20-23, 2016. Proceedings, pages 845–848, 2016. doi:
10.1007/978-3-319-30671-1_78.
[21] Pedro Saleiro, Luís Gomes, and Carlos Soares. Sentiment aggregate functions
for political opinion polling using microblog streams. In Proceedings of the
Ninth International C* Conference on Computer Science & Software Engineering,
C3S2E ’16, Porto, Portugal, July 20-22, 2016, pages 44–50, 2016. doi: 10.1145/
2948992.2949022.
[22] Pedro Saleiro, Silvio Amir, Mário J. Silva, and Carlos Soares. Popmine: Tracking
political opinion on the web. In 15th IEEE International Conference on Computer
and Information Technology, CIT 2015; 14th IEEE International Conference on
Ubiquitous Computing and Communications, IUCC 2015; 13th IEEE International Conference on Dependable, Autonomic and Secure Computing, DASC 2015;
13th IEEE International Conference on Pervasive Intelligence and Computing,
PICom 2015, Liverpool, United Kingdom, October 26-28, 2015, pages 1521–1526,
2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.228.
[23] Pedro Saleiro, Luis Rei, Arian Pasquali, Carlos Soares, Jorge Teixeira, Fábio Pinto,
Mohammad Nozari Zarmehri, Catarina Félix, and Pedro Strecht. POPSTAR at
replab 2013: Name ambiguity resolution on twitter. In Working Notes for CLEF
2013 Conference , Valencia, Spain, September 23-26, 2013., 2013.
[24] Theo BC Poiesz. The image concept: its place in consumer psychology. Journal
of Economic Psychology, 10(4):457–472, 1989.
[25] Gary H Jones, Beth H Jones, and Philip Little. Reputation as reservoir: Buffering
against loss in times of economic crisis. Corporate Reputation Review, 3(1):21–29,
2000.
[26] Stephen J Newell and Ronald E Goldsmith. The development of a scale to
measure perceived corporate credibility. Journal of Business Research, 52(3):
235–247, 2001.
[27] Charles Fombrun. The reptrak system. In Presented 10th Anniversary Conference
on Reputation, Image, Identity and Competitiveness, pages 25–28, 2006.
[28] Kurniawati Kurniawati, Graeme G Shanks, and Nargiza Bekmamedova. The
business impact of social media analytics. In ECIS, page 48, 2013.
[29] Matt Kaufmann, E Portmann, and Madjid Fathi. A concept of semantics
extraction from web data by induction of fuzzy ontologies. In Electro/Information
Technology (EIT), 2013 IEEE International Conference on, pages 1–6. IEEE,
2013.
[30] Edy Portmann. The FORA framework: a fuzzy grassroots ontology for online
reputation management. Springer Science & Business Media, 2012.
[31] Julio Gonzalo. Monitoring reputation in the wild online west. In Proceedings of
the 4th Spanish Conference on Information Retrieval, page 1. ACM, 2016.
[32] E. Amigó, J. Carrillo de Albornoz, I Chugur, A. Corujo, J. Gonzalo, T. Martín,
E. Meij, M. de Rijke, and D. Spina. Overview of replab 2013: Evaluating online
reputation monitoring systems. CLEF, 2013.
160
References
[33] Marija Matešić, Kristina Vučković, and Zdravko Dovedan. Should academia
care about online reputation management and monitoring? In MIPRO, 2010
Proceedings of the 33rd International Convention, pages 852–857. IEEE, 2010.
[34] Sina Samangooei, Trevor Cohn, Nicholas Gibbins, and Mahesan Niranjan. Trendminer: An architecture for real time analysis of social media text. In ICWSM,
2012.
[35] Ali Khalili, Sören Auer, and Axel-Cyrille Ngonga Ngomo. context–lightweight
text analytics using linked data. In European Semantic Web Conference, pages
628–643. Springer, 2014.
[36] Pedro Saleiro, Silvio Amir, Mário Silva, and Carlos Soares. Popmine: Tracking
political opinion on the web. In Computer and Information Technology; Ubiquitous
Computing and Communications; Dependable, Autonomic and Secure Computing;
Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE
International Conference on, pages 1521–1526. IEEE, 2015.
[37] Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, et al. Introduction to information retrieval, volume 1. Cambridge university press Cambridge,
2008.
[38] Gerard Salton. Automatic information organization and retrieval. 1968.
[39] Karen Sparck Jones. A statistical interpretation of term specificity and its
application in retrieval. Journal of documentation, 28(1):11–21, 1972.
[40] Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu,
Mike Gatford, et al. Okapi at trec-3. NIST SPECIAL PUBLICATION SP, 109:
109, 1995.
[41] S Fissaha Adafre, Maarten de Rijke, and E Tjong Kim Sang. Entity retrieval.
Recent Advances in Natural Language Processing (RANLP 2007), 2007.
[42] Haiqiang Chen, Huawei Shen, Jin Xiong, Songbo Tan, and Xueqi Cheng. Social
network structure behind the mailing lists: Ict-iiis at trec 2006 expert finding
track. In TREC. National Institute of Standards and Technology (NIST), 2006.
[43] Gianluca Demartini, Tereza Iofciu, and Arjen P De Vries. Overview of the inex
2009 entity ranking track. In Focused Retrieval and Evaluation, pages 254–264.
Springer, 2009.
[44] Krisztian Balog, Pavel Serdyukov, and Arjen P de Vries. Overview of the trec
2010 entity track. Technical report, DTIC Document, 2010.
[45] Krisztian Balog, Arjen P de Vries, Pavel Serdyukov, and Ji-Rong Wen. The first
international workshop on entity-oriented search (eos). In ACM SIGIR Forum,
volume 45, pages 43–50. ACM, 2012.
[46] Krisztian Balog, Leif Azzopardi, and Maarten De Rijke. Formal models for expert
finding in enterprise corpora. In Proceedings of the 29th annual international
ACM SIGIR conference on Research and development in information retrieval,
pages 43–50. ACM, 2006.
References
161
[47] Leif Azzopardi, Krisztian Balog, and Maarten de Rijke. Language modeling
approaches for enterprise tasks. In TREC. Citeseer, 2005.
[48] Nick Craswell, Arjen P de Vries, and Ian Soboroff. Overview of the trec 2005
enterprise track. In Trec, volume 5, pages 199–205, 2005.
[49] Zhao Ru, Yuehua Chen, Weiran Xu, and Jun Guo. Trec 2005 enterprise track
experiments at bupt. In TREC, 2005.
[50] Desislava Petkova and W Bruce Croft. Proximity-based document representation
for named entity retrieval. In Proceedings of the sixteenth ACM conference on
Conference on information and knowledge management, pages 731–740. ACM,
2007.
[51] Marc Bron, Krisztian Balog, and Maarten De Rijke. Example based entity search
in the web of data. In European Conference on Information Retrieval, pages
392–403. Springer, 2013.
[52] Nansu Zong, Sungin Lee, and Hong-Gee Kim. Discovering expansion entities for
keyword-based entity search in linked data. Journal of Information Science, 41
(2):209–227, 2015.
[53] Nikita Zhiltsov, Alexander Kotov, and Fedor Nikolaev. Fielded sequential dependence model for ad-hoc entity retrieval in the web of data. In Proceedings of
the 38th International ACM SIGIR Conference on Research and Development in
Information Retrieval, pages 253–262. ACM, 2015.
[54] Jeffrey Pound, Alexander K Hudek, Ihab F Ilyas, and Grant Weddell. Interpreting
keyword queries over web knowledge bases. In Proceedings of the 21st ACM
international conference on Information and knowledge management, pages 305–
314. ACM, 2012.
[55] Christina Unger, Lorenz Bühmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo,
Daniel Gerber, and Philipp Cimiano. Template-based question answering over
rdf data. In Proceedings of the 21st international conference on World Wide Web,
pages 639–648. ACM, 2012.
[56] Xiaonan Li, Chengkai Li, and Cong Yu. Entity-relationship queries over wikipedia.
ACM Transactions on Intelligent Systems and Technology (TIST), 3(4):70, 2012.
[57] Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, et al. Open
language learning for information extraction. In Proceedings of the 2012 Joint
Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534. Association for Computational
Linguistics, 2012.
[58] Jeffrey Xu Yu, Lu Qin, and Lijun Chang. Keyword search in databases. Synthesis
Lectures on Data Management, 1(1):1–155, 2009.
162
References
[59] Shady Elbassuoni, Maya Ramanath, Ralf Schenkel, Marcin Sydow, and Gerhard
Weikum. Language-model-based ranking for queries on rdf-graphs. In Proceedings
of the 18th ACM conference on Information and knowledge management, pages
977–986. ACM, 2009.
[60] Tao Cheng, Xifeng Yan, and Kevin Chen-Chuan Chang. Entityrank: searching entities directly and holistically. In Proceedings of the 33rd international
conference on Very large data bases, pages 387–398. VLDB Endowment, 2007.
[61] Jack G Conrad and Mary Hunter Utt. A system for discovering relationships by
feature extraction from text databases. In SIGIR’94, pages 260–270. Springer,
1994.
[62] Jason DM Rennie and Tommi Jaakkola. Using term informativeness for named
entity detection. In Proceedings of the 28th annual international ACM SIGIR
conference on Research and development in information retrieval, pages 353–360.
ACM, 2005.
[63] Donald Metzler and W Bruce Croft. A markov random field model for term
dependencies. In Proceedings of the 28th annual international ACM SIGIR
conference on Research and development in information retrieval, pages 472–479.
ACM, 2005.
[64] Fei Song and W Bruce Croft. A general language model for information retrieval. In Proceedings of the eighth international conference on Information and
knowledge management, pages 316–321. ACM, 1999.
[65] Donald Metzler and W Bruce Croft. Linear feature-based models for information
retrieval. Information Retrieval, 10(3):257–274, 2007.
[66] Samuel Huston and W Bruce Croft. A comparison of retrieval models using
term dependencies. In Proceedings of the 23rd ACM International Conference on
Conference on Information and Knowledge Management, pages 111–120. ACM,
2014.
[67] Fedor Nikolaev, Alexander Kotov, and Nikita Zhiltsov. Parameterized fielded
term dependence models for ad-hoc entity retrieval from knowledge graph. In
Proceedings of the 39th International ACM SIGIR conference on Research and
Development in Information Retrieval, pages 435–444. ACM, 2016.
[68] Paul Ogilvie and Jamie Callan. Combining document representations for knownitem search. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 143–150. ACM,
2003.
[69] Faegheh Hasibi, Krisztian Balog, and Svein Erik Bratsberg. Exploiting entity
linking in queries for entity retrieval. In Proceedings of the 2016 ACM on
International Conference on the Theory of Information Retrieval, pages 209–218.
ACM, 2016.
[70] Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti.
Collective annotation of wikipedia entities in web text. In SIGKDD. ACM, 2009.
References
163
[71] Damiano Spina, Enrique Amigó, and Julio Gonzalo. Filter keywords and majority
class strategies for company name disambiguation in twitter. In CLEF. Springer,
2011.
[72] AD Delgado Munoz, Raquel Martınez Unanue, Alberto Pérez Garcıa-Plaza, and
Vıctor Fresno. Unsupervised real-time company name disambiguation in twitter.
In ICWSM Workshop on Real-Time Analysis and Mining of Social Streams, pages
25–28, 2012.
[73] Maria Christoforaki, Ivie Erunse, and Cong Yu. Searching social updates for
topic-centric entities. In VLDS, pages 34–39, 2011.
[74] Viktor Hangya and Richárd Farkas. Filtering and polarity detection for reputation
management on tweets. In CLEF (Working Notes), 2013.
[75] Amparo Elizabeth Cano Basave, Andrea Varga, Matthew Rowe, Milan Stankovic,
and Aba-Sah Dadzie. Making sense of microposts (# msm2013) concept extraction
challenge. 2013.
[76] Leon Derczynski, Diana Maynard, Niraj Aswani, and Kalina Bontcheva.
Microblog-genre noise and impact on semantic annotation accuracy. In Proceedings of the 24th ACM Conference on Hypertext and Social Media, pages 21–30.
ACM, 2013.
[77] Xiaohua Liu, Yitong Li, Haocheng Wu, Ming Zhou, Furu Wei, and Yi Lu. Entity
linking for tweets. In ACL (1), pages 1304–1311, 2013.
[78] Mark A Greenwood, Niraj Aswani, and Kalina Bontcheva. Reputation profiling
with gate. In CLEF (Online Working Notes/Labs/Workshop), 2012.
[79] Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke van Erp, Genevieve
Gorrell, Raphaël Troncy, Johann Petrak, and Kalina Bontcheva. Analysis of
named entity recognition and linking for tweets. Information Processing &
Management, 51(2):32–49, 2015.
[80] Edgar Meij, Wouter Weerkamp, and Maarten de Rijke. Adding semantics to
microblog posts. In Proceedings of the fifth ACM international conference on
Web search and data mining, pages 563–572. ACM, 2012.
[81] Alexandre Davis, Adriano Veloso, Altigran S Da Silva, Wagner Meira Jr, and
Alberto HF Laender. Named entity disambiguation in streaming data. In ACL:
Long Papers-Volume 1, pages 815–824. Association for Computational Linguistics,
2012.
[82] Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. Linking named entities
in tweets with knowledge base via user interest modeling. In Proceedings of the
19th ACM SIGKDD international conference on Knowledge discovery and data
mining, pages 68–76. ACM, 2013.
[83] H. Kwak, H. Park C. Lee, and S. Moon. What is twitter, a social network or a
news media? In WWW ’10, pages 591–600. ACM, 2010.
164
References
[84] M. B. Habib and M. Van Keulen. Twitterneed: A hybrid approach for named
entity extraction and disambiguation for tweet. Natural language engineering, 22
(03), 2016.
[85] Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of
semantic knowledge. In Proceedings of the 16th international conference on World
Wide Web, pages 697–706. ACM, 2007.
[86] Paolo Ferragina and Ugo Scaiella. Tagme: on-the-fly annotation of short text
fragments (by wikipedia entities). In Proceedings of the 19th ACM international
conference on Information and knowledge management, pages 1625–1628. ACM,
2010.
[87] Andrea Moro, Alessandro Raganato, and Roberto Navigli. Entity linking meets
word sense disambiguation: a unified approach. Transactions of the Association
for Computational Linguistics, 2:231–244, 2014.
[88] Francesco Piccinno and Paolo Ferragina. From tagme to wat: a new entity
annotator. In Proceedings of the first international workshop on Entity recognition
& disambiguation, pages 55–62. ACM, 2014.
[89] Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang.
Learning entity representation for entity disambiguation. In ACL (2), pages
30–34, 2013.
[90] Wei Fang, Jianwen Zhang, Dilin Wang, Zheng Chen, and Ming Li. Entity
disambiguation by knowledge and text jointly embedding. CoNLL 2016, page
260, 2016.
[91] Jose G Moreno, Romaric Besançon, Romain Beaumont, Eva D’hondt, Anne-Laure
Ligozat, Sophie Rosset, Xavier Tannier, and Brigitte Grau. Combining word and
entity embeddings for entity linking. In European Semantic Web Conference,
pages 337–352. Springer, 2017.
[92] Bing Liu. Sentiment analysis and opinion mining. Synthesis lectures on human
language technologies, 5(1), 2012.
[93] Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M Mohammad, Alan
Ritter, and Veselin Stoyanov. Semeval-2015 task 10: Sentiment analysis in twitter.
Proceedings of SemEval-2015, 2015.
[94] Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. Nrc-canada: Building
the state-of-the-art in sentiment analysis of tweets. In SemEva, pages 321–327,
Atlanta, Georgia, USA, 2013. Association for Computational Linguistics.
[95] Efthymios Kouloumpis, Theresa Wilson, and Johanna D Moore. Twitter sentiment
analysis: The good the bad and the omg! Icwsm, 11:538–541, 2011.
[96] David Bamman and Noah A Smith. Contextualized sarcasm detection on twitter.
In Proceedings of the 9th International Conference on Web and Social Media,
pages 574–77. AAAI Menlo Park, CA, 2015.
References
165
[97] Bing Liu. Sentiment analysis and subjectivity. Handbook of natural language
processing, 2:627–666, 2010.
[98] Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. Sentiment strength
detection for the social web. Journal of the American Society for Information
Science and Technology, 63(1):163–173, 2012.
[99] Yoshua Bengio. Deep learning of representations: Looking forward. In Statistical
language and speech processing, pages 1–37. Springer, 2013.
[100] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean.
Distributed representations of words and phrases and their compositionality. In
NIPS, 2013.
[101] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y
Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In
Proceedings of the 49th Annual Meeting of the Association for Computational
Linguistics: Human Language Technologies-Volume 1, pages 142–150. Association
for Computational Linguistics, 2011.
[102] Igor Labutov and Hod Lipson. Re-embedding words. In ACL (2), pages 489–493,
2013.
[103] Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. Radicalenhanced chinese character embedding. In Neural Information Processing, pages
279–286. Springer, 2014.
[104] Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. Learning
sentiment-specific word embedding for twitter sentiment classification. In ACL
(1), pages 1555–1565, 2014.
[105] Gerard Salton, Anita Wong, and Chung-Shu Yang. A vector space model for
automatic indexing. Communications of the ACM, 18(11):613–620, 1975.
[106] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation.
the Journal of machine Learning research, 3:993–1022, 2003.
[107] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak,
and Zachary Ives. Dbpedia: A nucleus for a web of open data. Springer, 2007.
[108] Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer,
and Richard Harshman. Indexing by latent semantic analysis. Journal of the
American society for information science, 41(6):391, 1990.
[109] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global
vectors for word representation. In EMNLP, volume 14, pages 1532–1543, 2014.
[110] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A
neural probabilistic language model. Journal of machine learning research, 3
(Feb):1137–1155, 2003.
166
References
[111] Ronan Collobert and Jason Weston. A unified architecture for natural language
processing: Deep neural networks with multitask learning. In Proceedings of the
25th international conference on Machine learning, pages 160–167. ACM, 2008.
[112] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix
factorization. In Advances in neural information processing systems, pages
2177–2185, 2014.
[113] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in
continuous space word representations. In Hlt-naacl, volume 13, 2013.
[114] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski.
Rand-walk: A latent variable model approach to word embeddings. arXiv preprint
arXiv:1502.03520, 2015.
[115] Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin
Stoyanov. Semeval-2016 task 4: Sentiment analysis in twitter. Proceedings of
SemEval, pages 1–18, 2016.
[116] João Rodrigues, António Branco, Steven Neale, and João Silva. Lx-dsemvectors:
Distributional semantics models for portuguese. In International Conference on
Computational Processing of the Portuguese Language, pages 259–270. Springer,
2016.
[117] S.Asur R. Bandari and B. Huberman. The pulse of news in social media:
Forecasting popularity. In ICWSM ’12, 2012.
[118] J. Yang and J.Leskovec. Patterns of temporal variation in online media. In
WSDM ’11, pages 177–186. ACM, 2011.
[119] W. Weerkamp M. Tsagkias and M. De Rijke. Predicting the volume of comments
on online news stories. In CIKM ’09, pages 1765–1768. ACM, 2009.
[120] Xiangnan He, Ming Gao, Min-Yen Kan, Yiqun Liu, and Kazunari Sugiyama.
Predicting the popularity of web 2.0 items based on user comments. In Proceedings
of the 37th international ACM SIGIR conference on Research & development in
information retrieval, pages 233–242. ACM, 2014.
[121] Swapna Gottipati and Jing Jiang. Finding thoughtful comments from social
media. In COLING, volume 12, pages 995–1010, 2012.
[122] Annie Louis and Ani Nenkova. What makes writing great? first experiments on
article quality prediction in the science journalism domain. Transactions of the
Association for Computational Linguistics, 1:341–352, 2013.
[123] Carlos Castillo, Mohammed El-Haddad, Jürgen Pfeffer, and Matt Stempeck.
Characterizing the life cycle of online news stories using social media reactions.
In Proceedings of the 17th ACM conference on Computer supported cooperative
work & social computing, pages 211–223. ACM, 2014.
References
167
[124] Riley Crane and Didier Sornette. Robust dynamic classes revealed by measuring
the response function of a social system. Proceedings of the National Academy of
Sciences, 105(41):15649–15653, 2008.
[125] Janette Lehmann, Bruno Gonçalves, José J Ramasco, and Ciro Cattuto. Dynamical classes of collective attention in twitter. In Proceedings of the 21st
international conference on World Wide Web, pages 251–260. ACM, 2012.
[126] Daniel M Romero, Brendan Meeder, and Jon Kleinberg. Differences in the
mechanics of information diffusion across topics: idioms, political hashtags, and
complex contagion on twitter. In Proceedings of the 20th international conference
on World wide web, pages 695–704. ACM, 2011.
[127] Mikalai Tsytsarau, Themis Palpanas, and Malu Castellanos. Dynamics of news
events and social media reaction. In Proceedings of the 20th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 901–910.
ACM, 2014.
[128] Harold Dwight Lasswell. The comparative study of symbols: An introduction.
Number 1. Stanford University Press, 1952.
[129] Maxwell E McCombs and Donald L Shaw. The agenda-setting function of mass
media. Public opinion quarterly, 36(2), 1972.
[130] Matthew C Moen. Ronald reagan and the social issues: Rhetorical support for
the christian right. The Social Science Journal, 27(2):199–207, 1990.
[131] Daniel Riffe and Alan Freitag. A content analysis of content analyses: Twenty-five
years of journalism quarterly. Journalism & Mass Communication Quarterly, 74
(3), 1997.
[132] Kimberly A Neuendorf. The content analysis guidebook. Sage, 2002.
[133] Daniel J Hopkins and Gary King. A method of automated nonparametric content
analysis for social science. American Journal of Political Science, 54(1), 2010.
[134] Justin Grimmer and Brandon M. Stewart. Text as data: The promise and pitfalls
of automatic content analysis methods for political texts. Political Analysis, 2013.
[135] A. Bermingham and A. Smeaton. On using twitter to monitor political sentiment
and predict election results. Workshop at the International Joint Conference for
Natural Language Processing (IJCNLP), November 2011.
[136] Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M
Welpe. Predicting elections with twitter: What 140 characters reveal about
political sentiment. ICWSM, 10, 2010.
[137] Micol Marchetti-Bowick and Nathanael Chambers. Learning for microblogs with
distant supervision: Political forecasting with twitter. In Proceedings of the
13th Conference of the European Chapter of the Association for Computational
Linguistics, EACL ’12. Association for Computational Linguistics, 2012.
168
References
[138] Pawel Sobkowicz, Michael Kaschesky, and Guillaume Bouchard. Opinion mining
in social media: Modeling, simulating, and forecasting political opinions in the
web. Government Information Quarterly, 29(4):470 – 479, 2012. Social Media
in Government - Selections from the 12th Annual International Conference on
Digital Government Research.
[139] Avishay Livne, Matthew P Simmons, Eytan Adar, and Lada A Adamic. The
party is over here: Structure and content in the 2010 election. In ICWSM, 2011.
[140] Andranik Tumasjan, Timm O Sprenger, Philipp G Sandner, and Isabell M Welpe.
Predicting elections with twitter: What 140 characters reveal about political
sentiment. In Proceedings of the fourth international AAAI conference on weblogs
and social media, 2010.
[141] D Gayo-Avello. I wanted to predict elections with twitter and all i got was this
lousy paper a balanced survey on election prediction using twitter data. arXiv
preprint arXiv:1204.6441, 2012.
[142] Brendan O’Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A
Smith. From tweets to polls: Linking text sentiment to public opinion time
series. In Proceedings of the International AAAI Conference on Weblogs and
Social Media, 2010.
[143] Jessica Chung and Eni Mustafaraj. Can collective sentiment expressed on twitter
predict political elections. In Proceedings of the Twenty-Fifth AAAI Conference
on Artificial Intelligence. San Francisco, CA, USA, 2011.
[144] Panagiotis T. Metaxas, Eni Mustafaraj, and Dani Gayo-Avello. How (Not) to
Predict Elections. 2011 IEEE Third Int’l Conference on Privacy, Security, Risk
and Trust and 2011 IEEE Third Int’l Conference on Social Computing, October
2011. doi: 10.1109/PASSAT/SocialCom.2011.98.
[145] Daniel Gayo Avello, Panagiotis T Metaxas, and Eni Mustafaraj. Limits of
electoral predictions using twitter. In Proceedings of the International Conference
on Weblogs and Social Media, 2011.
[146] Daniel Gayo-Avello. A meta-analysis of state-of-the-art electoral prediction from
twitter data. Social Science Computer Review, page 0894439313493979, 2013.
[147] Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Found. Trends
Inf. Retr., 2(1-2), 2008.
[148] Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. Twitter sentiment
analysis: The good the bad and the omg. In Proceedings of the International
Conference on Weblogs and Social Media, 2011.
[149] Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter,
and Theresa Wilson. Semeval-2013 task 2: Sentiment analysis in twitter. In
Proceedings of the International Workshop on Semantic Evaluation (SemEval
2013), 2013.
References
169
[150] C. Johnson, P. Shukla, and S. Shukla. On classifying the political sentiment of
tweets. 2012.
[151] Nicholas A. Diakopoulos and David A. Shamma. Characterizing debate performance via aggregated twitter sentiment. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, CHI ’10. ACM, 2010.
[152] Eric Sanders and Antal Van Den Bosch. Relating political party mentions on
twitter with polls and election results. In DIR, pages 68–71, 2013.
[153] Marko Skoric, Nathaniel Poor, Palakorn Achananuparp, Ee-Peng Lim, and Jing
Jiang. Tweets and votes: A study of the 2011 singapore general election. In
System Science (HICSS), 2012 45th Hawaii International Conference on, pages
2583–2591. IEEE, 2012.
[154] Juan M Soler, Fernando Cuartero, and Manuel Roblizo. Twitter as a tool for
predicting elections results. In Proceedings of the 2012 International Conference
on Advances in Social Networks Analysis and Mining (ASONAM 2012), pages
1194–1200. IEEE Computer Society, 2012.
[155] Erik Tjong Kim Sang and Johan Bos. Predicting the 2011 dutch senate election
results with twitter. In Proceedings of the Workshop on Semantic Analysis in
Social Media, pages 53–60. Association for Computational Linguistics, 2012.
[156] Lu Chen, Wenbo Wang, and Amit P Sheth. Are twitter users equal in predicting
elections? a study of user groups in predicting 2012 us republican presidential
primaries. In Social informatics, pages 379–392. Springer, 2012.
[157] Joseph DiGrazia, Karissa McKelvey, Johan Bollen, and Fabio Rojas. More tweets,
more votes: Social media as a quantitative indicator of political behavior. PloS
one, 8(11):e79449, 2013.
[158] Colin Fink, Nathan Bos, Alexander Perrone, Erwu Liu, and Jonathon Kopecky.
Twitter, public opinion, and the 2011 nigerian presidential election. In Social
Computing (SocialCom), 2013 International Conference on, pages 311–320. IEEE,
2013.
[159] Manish Gaurav, Amit Srivastava, Anoop Kumar, and Scott Miller. Leveraging
candidate popularity on twitter to predict election outcome. In Proceedings of
the 7th Workshop on Social Network Mining and Analysis, page 7. ACM, 2013.
[160] Nicholas A Thapen and Moustafa M Ghanem. Towards passive political opinion
polling using twitter. In SMA BCS-SGAI, pages 19–34. Citeseer, 2013.
[161] Lei Shi, Neeraj Agarwal, Ankur Agrawal, Rahul Garg, and Jacob Spoelstra.
Predicting us primary elections with twitter. Workshop social network and social
media analysis: methods, models and applications, 2012.
[162] Michael J Jensen and Nick Anstead. Psephological investigations: Tweets, votes,
and unknown unknowns in the republican nomination process. Policy & Internet,
5(2):161–182, 2013.
170
References
[163] Fabio Franch. (wisdom of the crowds) 2: 2010 uk election prediction with social
media. Journal of Information Technology & Politics, 10(1):57–71, 2013.
[164] Nick Beauchamp. Predicting and interpolating state-level polling using twitter
textual data. In New directions in analyzing text as data workshop, 2013.
[165] Danish Contractor and Tanveer Afzal Faruquie. Understanding election candidate
approval ratings using social media data. In Proceedings of the 22nd international
conference on World Wide Web companion, pages 189–190. International World
Wide Web Conferences Steering Committee, 2013.
[166] Vasileios Lampos, Daniel Preotiuc-Pietro, and Trevor Cohn. A user-centric model
of voting intention from social media. In ACL (1), pages 993–1003, 2013.
[167] Micol Marchetti-Bowick and Nathanael Chambers. Learning for microblogs with
distant supervision: Political forecasting with twitter. In Proceedings of the
13th Conference of the European Chapter of the Association for Computational
Linguistics, pages 603–612. Association for Computational Linguistics, 2012.
[168] Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker
Tresp, and Gerhard Weikum. Natural language questions for the web of data.
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural
Language Processing and Computational Natural Language Learning, pages 379–
390. Association for Computational Linguistics, 2012.
[169] Uma Sawant and Soumen Chakrabarti. Learning joint query interpretation and
response ranking. In Proceedings of the 22nd international conference on World
Wide Web, pages 1099–1110. ACM, 2013.
[170] Judea Pearl. Bayesian networks: A model of self-activated memory for evidential
reasoning. In Proceedings of the 7th Conference of the Cognitive Science Society,
1985, pages 329–334, 1985.
[171] Shuo Zhang and Krisztian Balog. Design patterns for fusion-based object retrieval.
In European Conference on Information Retrieval, pages 684–690. Springer, 2017.
[172] Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. Methods
for exploring and mining tables on wikipedia. In Proceedings of the ACM SIGKDD
Workshop on Interactive Data Exploration and Analytics, pages 18–26. ACM,
2013.
[173] Oliver Lehmberg, Dominique Ritze, Robert Meusel, and Christian Bizer. A large
public corpus of web tables containing time and context metadata. In Proceedings
of the 25th International Conference Companion on World Wide Web, pages
75–76. International World Wide Web Conferences Steering Committee, 2016.
[174] Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. Facc1:
Freebase annotation of clueweb corpora, 2013.
[175] Krisztian Balog and Robert Neumayer. A test collection for entity search in
dbpedia. In Proceedings of the 36th international ACM SIGIR conference on
Research and development in information retrieval, pages 737–740. ACM, 2013.
References
171
[176] Gustavo Laboreiro, Luís Sarmento, Jorge Teixeira, and Eugénio Oliveira. Tokenizing micro-blogging messages using a text classification approach. In Proceedings
of the fourth workshop on Analytics for noisy unstructured text data, AND 10,
2010.
[177] Pedro Saleiro, Luis Rei, Arian Pasquali, Carlos Soares, Jorge Teixeira, Fábio
Pinto, Mohammad Nozari Zarmehri, Catarina Félix, and Pedro Strecht. Popstar
at replab 2013: Name ambiguity resolution on twitter. In CLEF (Working Notes),
2013.
[178] Enrique Amigó, Julio Gonzalo, and Felisa Verdejo. A general evaluation measure
for document organization tasks. In Proceedings SIGIR 2013, July .
[179] Claudiu Musat and Stefan Trausan-Matu. The impact of valence shifters on
mining implicit economic opinions. In International Conference on Artificial
Intelligence: Methodology, Systems, and Applications. Springer, 2010.
[180] Marjan Van de Kauter, Diane Breesch, and Véronique Hoste. Fine-grained
analysis of explicit and implicit sentiment in financial news articles. Expert
Systems with applications, 42(11), 2015.
[181] Keith Cortis, Andre Freitas, Tobias Daudert, Manuela Huerlimann, Manel
Zarrouk, and Brian Davis. Semeval-2017 task 5: Fine-grained sentiment analysis
on financial microblogs and news. Proceedings of SemEval, 2017.
[182] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation
of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[183] K. Gimpel, N. Schneider, B. O’Connor, D. Das, D. Mills, J. Eisenstein, M. Heilman, D. Yogatama, J. Flanigan, and Noah A Smith. Part-of-speech tagging
for twitter: Annotation, features, and experiments. In ACL HLT: short papersVolume 2, 2011.
[184] Andriy Bodnaruk, Tim Loughran, and Bill McDonald. Using 10-k text to gauge
financial constraints. Journal of Financial and Quantitative Analysis, 50(04),
2015.
[185] Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. Recognizing contextual
polarity in phrase-level sentiment analysis. In EMNLP, 2005.
[186] J. Deriu, M. Gonzenbach, F. Uzdilli, A. Lucchi, V. De Luca, and M. Jaggi.
Swisscheese at semeval-2016 task 4: Sentiment classification using an ensemble of
convolutional neural networks with distant supervision. Proceedings of SemEval,
2016.
[187] J. Reis, P. Olmo F. Benevenuto, H. Kwak R. Prates, and J. An. Breaking the
news: First impressions matter on online news. In ICWSM ’15, 2015.
[188] Matko Boanjak, Eduardo Oliveira, José Martins, Eduarda Mendes Rodrigues,
and Luís Sarmento. Twitterecho: a distributed focused crawler to support open
research with twitter data. In Proceedings of the 21st international conference
companion on World Wide Web, pages 1233–1240. ACM, 2012.
172
References
[189] A. Kohut, S. Keeter, C. Doherty, M. Dimock, A. Directors, and L. Christian.
Assessing the representativeness of public opinion surveys. 2012.
[190] Joao Filgueiras and Silvio Amir. Popstar at replab 2013: Polarity for reputation
classification. In Fourth International Conference of the CLEF initiative, CLEF,
volume 2013, 2013.
[191] Brendan O’Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A
Smith. From tweets to polls: Linking text sentiment to public opinion time series.
ICWSM, 11:122–129, 2010.
[192] M. Bošnjak, E. Oliveira, J. Martins, E. M. Rodrigues, and L. Sarmento. Twitterecho: a distributed focused crawler to support open research with twitter data.
ACM, 2012.
[193] Gianluca Demartini, Malik Muhammad Saad Missen, Roi Blanco, and Hugo
Zaragoza. Taer: Time aware entity retrieval. In CIKM, Toronto, Canada. ACM,
2010.
[194] Michael Matthews, Pancho Tolchinsky, Roi Blanco, Jordi Atserias, Peter Mika,
and Hugo Zaragoza. Searching through time in the new york times. In HumanComputer Interaction and Information Retrieval, pages 41–44, 2010.
[195] Krisztian Balog, Maarten de Rijke, Raymond Franz, Hendrike Peetz, Bart
Brinkman, Ivan Johgi, and Max Hirschel. Sahara: Discovering entity-topic
associations in online news. In ISWC, 2009.
[196] Omar Alonso, Klaus Berberich, Srikanta Bedathur, and Gerhard Weikum. Timebased exploration of news archives. HCIR 2010, 2010.
[197] Jorge Teixeira, Luis Sarmento, and Eugenio Oliveira. Semi-automatic creation of
a reference news corpus for fine-grained multi-label scenarios. In CISTI, 2011.
[198] Luís Sarmento, Sérgio Nunes, Jorge Teixeira, and Eugénio Oliveira. Propagating
fine-grained topic labels in news snippets. In IEEE/WIC/ACM WI-IAT, 2009.
[199] Carla Abreu, Jorge Teixeira, and Eugénio Oliveira. encadear encadeamento
automático de notícias. Linguistica, Informatica e Traducao: Mundos que se
Cruzam, Oslo Studies in Language 7(1), 2015, 2015.
[200] Pedro Saleiro and Luís Sarmento. Piaf vs adele: classifying encyclopedic queries
using automatically labeled training data. In OAIR, 2013.
[201] Jorge Teixeira, Luís Sarmento, and Eugénio Oliveira. A bootstrapping approach
for training a ner with conditional random fields. In Progress in Artificial
Intelligence. 2011.
[202] Mathieu Jacomy, Tommaso Venturini, Sebastien Heymann, and Mathieu Bastian.
Forceatlas2, a continuous graph layout algorithm for handy network visualization
designed for the gephi software. PLoS ONE, 2014.
References
173
[203] Silvio Amir, Miguel B. Almeida, Bruno Martins, João Filgueiras, and Mario J.
Silva. Tugas: Exploiting unlabelled data for twitter sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014),
pages 673–677, Dublin, Ireland, August 2014. Association for Computational
Linguistics. URL http://www.aclweb.org/anthology/S14-2120.
[204] Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity
with lessons learned from word embeddings. Transactions of the Association for
Computational Linguistics, 3:211–225, 2015.
[205] François Chollet. keras. https://github.com/fchollet/keras, 2015.
[206] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[207] Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. Improving zero-shot
learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568,
2014.
[208] Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. Problems
with evaluation of word embeddings using word similarity tasks. ACL 2016,
page 30, 2016.
[209] Anna Gladkova, Aleksandr Drozd, and Computing Center. Intrinsic evaluations
of word embeddings: What can we do better? ACL 2016, page 36, 2016.
| 2 |
International Journal of Computing and Business Research (IJCBR)
ISSN (Online) : 2229-6166
Volume 3 Issue 3 September 2012
TIME EFFICIENT APPROACH TO OFFLINE HAND WRITTEN
CHARACTER RECOGNITION USING ASSOCIATIVE MEMORY
NET
Tirtharaj Dash
B.Tech Final Year Student, Department of Information Technology
National Institute of Science and Technology
Berhampur-761008, India
Abstract: In this paper, an efficient Offline Hand Written Character
Recognition algorithm is proposed based on Associative Memory Net (AMN).
The AMN used in this work is basically auto associative. The implementation
is carried out completely in ‘C’ language. To make the system perform to its
best with minimal computation time, a Parallel algorithm is also developed
using an API package OpenMP. Characters are mainly English alphabets
(Small (26), Capital (26)) collected from system (52) and from different
persons (52). The characters collected from system are used to train the AMN
and characters collected from different persons are used for testing the
recognition ability of the net. The detailed analysis shows that the network recognizes the hand written characters with a recognition rate of 72.20% in the average case and 88.5% in the best case. The developed network consumes 3.57 sec (average) in the serial implementation and 1.16 sec (average) in the parallel implementation using OpenMP.
Keywords: Offline; Hand Written Character; Associative Memory Net;
OpenMP; Serial; Parallel.
1. Introduction
In the recent years, Hand Written Character Recognition has been a challenging and
interesting research area in the field of pattern recognition and image processing
(Impedovo et al., 1991; Mori et al., 1992). It contributes mainly to the Human-Computer
interaction and improves the interface between the two (Pradeep et al., 2011). Other
human cognition methods, viz. face, speech and thumb print recognition, are also great areas of research (Imtiaz and Fattah, 2011; Khurana and Singh, 2011; Kurian and Balakriahnan, 2012).
Generally, character recognition can be broadly characterized into two types: (i) offline and (ii) online. In the offline method, the pattern is captured as an image and taken for testing. In the online approach, each point of the pattern is a function of time, pressure, slant, strokes, etc. Each method is best suited to its own field of application. Yielding the best accuracy with minimal time cost is a crucial precondition for a pattern recognition system. Therefore, hand written character recognition continues to be a broad area of research.
In this work, an approach for offline character recognition is proposed using an Associative Memory Network (AMN). To make it time efficient, a parallel algorithm has also been developed for the implementation of the AMN using OpenMP (Open Multiprocessing) (www.openmp.org). An AMN is a neural network which can store patterns as memories. When the network is tested with a key pattern, it responds by producing the stored pattern that most closely resembles the key pattern. Based on the testing pattern, an AMN can be of two types: (i) auto-associative memory net or (ii) hetero-associative memory net. Both networks contain two layers, (a) an input layer and (b) an output layer. In the auto-associative memory net, the input and target patterns are the same (Sivanandam and Deepa, 2011), whereas in the hetero-associative memory net the two patterns are different. This work uses the auto-AMN, as the character to be tested is the same as the stored character. The characters considered in this work are English alphabets (both small and capital letters).
This paper is organized as follows. Section 1 presented a general introduction to character recognition systems and methods. Section 2 gives a brief literature review of some methods proposed for character recognition. Section 3 describes the proposed methodology of this work. Section 4 is the results and discussion section, which gives a detailed analysis of the work. The paper is concluded in Section 5 with a note on future work.
2. Literature Review
The available literature shows that various algorithms and techniques have been used to accomplish the task of character recognition. Some studies are described below. The sources of the literature are Google Scholar, Scopus and the IEEE library.
The Neural Network (NN) has been the backend for character classification in most of the methods, owing to its fast and reliable computation. The methods used in the front end could be (a) statistical approaches, (b) kernel methods, (c) support methods or (d) hybrids with fuzzy logic controllers.
A Multilayer Perceptron (MLP) was used for 'Bangla' alphabet recognition by Basu et al. (2005). The accuracies achieved in this work were 86.46% and 75.05% on the training and testing samples, respectively.
Manivannan and Neil (2010) proposed and demonstrated an optical correlator-neural network architecture for pattern recognition. English alphabets were used as patterns for the training and testing process.
Pal and Singh (2010) proposed an NN-based English character recognition system. In this work, an MLP with one hidden layer was used. About 500 tests were carried out to evaluate the performance of the design. The best-case accuracy obtained in this work was 94%.
Perwej and Chaturvedi (2011) worked on English alphabet recognition using an NN. In this work, binary pixels of the alphabets were used to train the NN. The accuracy achieved was found to be 82.5%.
Pal et al. (2007) proposed a modified quadratic classifier approach for handwritten numerals of six popular Indian scripts with a high level of recognition accuracy.
Dinesh et al. (2007) used horizontal and vertical strokes and end points as features for handwritten numerals. This method reported an accuracy rate of 90.5% in the best case. However, it relied on a thinning step, resulting in loss of features.
Yanhua and Chuanjun (2009) recommended a novel Chinese character recognition algorithm based on a minimum distance classifier. The algorithm worked with two classes of feature extraction: structure and statistics. The statistical features decided the primary class, and the structural features were used to identify the Chinese characters.
A good method of character recognition was proposed by Huiqin et al. (2011). In this work, a distribution-based algorithm was proposed, based on image segmentation and the distribution of pixels. A deflection correction method was adopted for flexibility as well as reduction of matching error. This work avoided the burden of extracting the skeleton from the character. The method gave excellent results and was robust.
3. Methodology
A step-wise methodology has been proposed which is demonstrated in Figure 1.
Figure 1: Proposed Methodology
Step-1: Collection of English alphabets (both small and capital) from (i) system and (ii)
persons (Hand Written)
Step-2: Extraction of pixels from the characters
Step-3: Implementation of auto AMN: (i) Training and (ii) Testing using both (a) serial
and (b) parallel algorithms.
Step-4: Comparison of results from serial and parallel processing with respect to time of
execution
3.1 Generation of English alphabets
English alphabets (both small and capital) are designed in the system using MS
Paint version 6.1 in Arial font size-28 (No Bold) in BMP file format. The dimension of the
bmp file is 31×39, with bit depth of 4. Some alphabets are given in Figure 2.
Figure 2: English alphabets of the system
Hand written English alphabets are collected, each one from a different person. The characters are given in Figure 3.
Figure 3: English alphabets collected from different persons
3.2 Extraction of Pixel from the characters
Pixels are extracted from the character images (bitmap files) using a standard image function of MATLAB version 10. The function is imread('filename.bmp'). The function extracts the decimal values associated with each pixel. The pixels are then stored in a text (.txt) file for experimental purposes.
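As a concrete illustration of this step (a sketch only, not code from the paper), the following C fragment reads such an exported text file of pixel values and maps them to the bipolar (±1) pattern used as AMN input in the next subsection. The file-name argument, the fixed pattern length of 31 × 39 = 1209 and the binarization threshold of 8 (for 4-bit pixel values) are assumptions made for illustration.

#include <stdio.h>

#define PATTERN_LEN 1209   /* assumed 31 x 39 pixel values per character */

/* Read whitespace-separated pixel values (as exported from MATLAB's imread)
   and map them to a bipolar pattern: low (dark) values -> +1, others -> -1.
   Returns the number of values read, or -1 if the file cannot be opened. */
static int load_bipolar_pattern(const char *path, int *pattern, int max_len)
{
    FILE *fp = fopen(path, "r");
    if (!fp) { perror(path); return -1; }
    int count = 0;
    double v;
    while (count < max_len && fscanf(fp, "%lf", &v) == 1)
        pattern[count++] = (v < 8.0) ? +1 : -1;   /* threshold value is illustrative */
    fclose(fp);
    return count;
}

A stored (system) character and a handwritten test character would each be loaded into an int array of length PATTERN_LEN before training and testing.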
3.3 Auto-associative memory net (Auto-AMN) implementation
3.3.1 Serial algorithm
INITIALIZE weight (W) to 0
SET the target pattern as the system's pattern
INPUT the handwritten pattern to the first layer of the AMN
FOR i = 1 to n
DO
    FOR j = 1 to n
    DO
        CALCULATE the weight as
            W_ij(new) = W_ij(old) + INPUT_i × TARGET_j
    END
END
FOR i = 1 to n
DO
    FOR j = 1 to n
    DO
        CALCULATE the net input to each output node as
            Y_in_j = Σ_{i=1}^{n} x_i W_ij
        IF (Y_in_j > 0)
            Y_j = +1
        ELSE
            Y_j = -1
    END
END
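For reference, the serial training and recall steps above translate to C roughly as follows. This is an illustrative sketch under the same bipolar encoding (the array names, the pattern length N and the use of double-precision weights are assumptions, not the paper's code): training accumulates the Hebbian outer-product weights, and recall applies the bipolar sign activation to the net input.

#define N 1209   /* assumed number of input/output units (31 x 39 pixels) */

/* Training: W_ij(new) = W_ij(old) + INPUT_i * TARGET_j
   (auto-association: the target is the stored system pattern). */
static void amn_train(double W[N][N], const int *input, const int *target)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            W[i][j] += (double)input[i] * (double)target[j];
}

/* Recall: Y_j = +1 if sum_i x_i * W_ij > 0, else -1. */
static void amn_recall(double W[N][N], const int *x, int *y)
{
    for (int j = 0; j < N; j++) {
        double yin = 0.0;
        for (int i = 0; i < N; i++)
            yin += (double)x[i] * W[i][j];
        y[j] = (yin > 0.0) ? +1 : -1;
    }
}

Since N × N doubles occupy roughly 11 MB, W should be allocated statically or on the heap rather than on the stack.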
3.3.2 Parallel algorithm
INITIALIZE weight (W) to 0
SET the target pattern as the system's pattern
INPUT the handwritten pattern to the first layer of the AMN
#pragma omp parallel shared(W, Yin, chunk, p) private(tid, i, j)
DO
    #pragma omp for schedule(static, chunk)
    FOR i = 1 to n
    DO
        FOR j = 1 to n
        DO
            CALCULATE the weight as
                W_ij(new) = W_ij(old) + INPUT_i × TARGET_j
        END
    END
    #pragma omp for schedule(static, chunk)
    FOR i = 1 to n
    DO
        FOR j = 1 to n
        DO
            CALCULATE the net input to each output node as
                Y_in_j = Σ_{i=1}^{n} x_i W_ij
            IF (Y_in_j > 0)
                Y_j = +1
            ELSE
                Y_j = -1
        END
    END
END
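The same two loop nests can be parallelized with OpenMP as sketched below (again an illustrative fragment, not the paper's code; the chunk size is an assumed value). Compiling with gcc -fopenmp enables the pragmas, and each thread processes a static chunk of the iterations, mirroring the schedule(static, chunk) clause above.

#include <omp.h>

#define N 1209     /* assumed number of input/output units (31 x 39 pixels) */
#define CHUNK 64   /* illustrative chunk size for static scheduling */

/* Parallel Hebbian training followed by parallel recall.  W must be
   allocated by the caller (statically or on the heap). */
static void amn_train_recall_omp(double W[N][N], const int *input,
                                 const int *target, const int *x, int *y)
{
    int i, j;
    #pragma omp parallel shared(W, input, target, x, y) private(i, j)
    {
        #pragma omp for schedule(static, CHUNK)
        for (i = 0; i < N; i++)          /* weight update: rows split over threads */
            for (j = 0; j < N; j++)
                W[i][j] += (double)input[i] * (double)target[j];

        /* implicit barrier here ensures W is fully updated before recall */
        #pragma omp for schedule(static, CHUNK)
        for (j = 0; j < N; j++) {        /* net input and sign activation per output unit */
            double yin = 0.0;
            for (i = 0; i < N; i++)
                yin += (double)x[i] * W[i][j];
            y[j] = (yin > 0.0) ? +1 : -1;
        }
    }
}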
3.3.3 System Specification
A computer system having 1 GB RAM and four processors is used for the complete work. The operating system is Ubuntu 10.04 (Linux). For automatic optimization by the compiler, the '-g' flag is used in the compilation command.
4. Results and Discussion
The contribution of this work is a detailed analysis of the recognition accuracy for all the handwritten English alphabets (52 in total). The time of computation has been noted for both the serial and parallel algorithms to compare the decision-making speed.
4.1 Recognition accuracy
Table 1 shows the results of testing the developed AMN on a set of hand written characters. It should be noted that the network is trained with the machine's alphabets and tested with the hand written alphabets. For reliability, each hand written character is checked 5 times and the matching percentage is the average of the 5 results.
Table 1: Recognition accuracy of AMN for offline Hand written character recognition
System’s
Alphabet
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
a
b
c
d
e
f
g
h
i
j
Hand Written Alphabet
for which highest match
is achieved
A
B
C
O
F
F
G
H
I
J
K
L
H
H
O
P
O
R
S
T
U
V
U
X
Y
Z
o
b
e
d
e
p
y
b
i
j
Recognition
Accuracy (%)
66.56
56.80
67.19
60.88
62.85
68.34
58.78
70.73
85.74
86.28
64.85
85.16
70.38
71.13
64.17
67.74
61.42
61.16
63.20
80.18
71.46
73.69
73.13
73.39
76.67
64.01
69.49
65.35
73.96
67.26
67.47
73.54
69.36
70.52
83.62
88.50
Time of Computation
(sec.)
Serial
Parallel
3.00
0.99
3.11
0.98
2.98
0.74
4.60
1.55
3.77
2.01
4.65
2.32
3.12
1.87
3.67
0.99
4.01
1.03
4.32
1.93
4.17
1.05
3.04
0.87
4.12
1.21
4.61
1.35
3.02
0.76
4.00
1.04
2.98
0.83
3.41
0.99
4.04
1.12
2.87
0.77
2.76
0.76
2.78
0.87
3.05
1.00
3.83
1.43
4.05
1.45
4.00
1.45
3.77
1.88
3.78
1.43
3.17
1.42
3.77
1.54
3.18
1.31
3.17
1.22
3.18
1.03
4.01
1.12
3.19
1.27
4.01
1.23
International Journal of Computing and Business Research (IJCBR)
ISSN (Online) : 2229-6166
Volume 3 Issue 3 September 2012
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
K
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
69.44
83.60
62.62
72.97
70.03
68.61
65.45
82.90
68.66
79.92
78.54
82.52
67.16
78.55
78.27
70.08
3.66
3.75
2.76
2.89
2.75
3.78
3.89
4.05
3.00
2.01
4.89
4.02
3.96
2.95
3.81
3.99
1.44
1.03
1.05
0.75
0.76
0.94
0.94
1.11
0.77
0.52
1.33
1.04
1.21
0.76
0.98
1.23
Table 1 can be viewed as a detailed analysis of the performance of the developed auto-AMN for offline hand written English alphabet recognition. The network recognizes the handwritten character 'j' with the highest matching of 88.50%. However, the network does not recognize some alphabets, such as 'D', 'E', 'M', 'N', 'Q', 'W', 'a', 'c', 'f', 'g', 'h', 'j' and 'k'; these alphabets are recognized as 'O', 'F', 'H', 'H', 'O', 'U', 'o', 'e', 'p', 'y', 'b', 'I' and 'K', respectively, with some matching error.
4.2 Level of matching of each alphabet
A plot is given in Figure 4 to show the level up to which each English alphabet is matched by the AMN. The alphabets which are not recognized are assigned 0% matching.
[Bar plot: Level of Matching (%), from 0 to 90, for each English alphabet A–Z and a–z.]
Figure 4: This plot shows the level of matching of each alphabet.
4.3 Time Efficiency
As already mentioned, the network is developed with two algorithms, (i) serial and (ii) parallel, so it is worth checking the timing variation in both cases. The plot in Figure 5 shows the speed-up achieved by the parallel algorithm.
[Line plot: decision making speed (sec.) of the Serial and Parallel algorithms versus alphabet serial number (1–52).]
Figure 5: Decision making speed by the Serial and Parallel algorithms.
5. Conclusion
In this paper, an offline English character recognition system has been proposed. The system is developed using an auto Associative Memory Net. To make the developed system faster and more reliable, a parallel algorithm has been developed and tested successfully. The experimental study showed that the system recognizes characters with an average recognition rate of 72.20%. The character 'j' is recognized with the highest accuracy rate of 88.5%. The average time required by the serial algorithm to recognize a character is 3.57 sec, whereas the parallel algorithm takes only 1.16 sec on average. Automatic checking of a sequence of characters by the network would play a great role in character recognition; the author is currently working on this issue.
References
1. Basu, S. et al. (2005) “Handwritten ‘Bangla’ alphabet recognition using an MLP based
classifier,” Proceeding of 2nd National Conference on Computer Processing of Bangla,
pp. 285-291.
2. Dinesh, A. U. et al. (2007) “Isolated handwritten Kannada numeral recognition using
structural feature and K-means cluster,” IISN, pp. 125-129.
3. Huiqin, L. et al. (2011) “The Research of Algorithm for Handwritten Character Recognition
in Correcting Assignment System,” Proceeding of 6th International Conference on Image
and Graphics (ICIG), pp.456-460.
4. Impedovo, S. et al. (1991) “Optical character recognition,” International Journal of
Pattern Recognition and Artificial Intelligence, Vol. 5(1-2), pp. 1-24.
5. Imtiaz, H. and Fattah, S. A. (2011) “A wavelet-domain local dominant feature selection
scheme for face recognition,” International Journal of Computing and Business
Research, Vol.3, Issue.2.
6. Khurana, P. and Singh, V. (2011) “A model for human cognition,” International Journal of
Computing and Business Research, Vol.2, Issue.3.
7. Kurian, C. and Balakriahnan, K. (2012) “Continuous speech recognition system for
Malayalam language using PLP cepstral coefficient,” Journal of Computing and
Business Research, Vol.3, Issue.1.
8. Manivannan, N. and Neil, M.A.A. (2010) “Optical correlator-neural network hybrid system
for many patterns recognition,” Proceeding of 8th International Symposium on Intelligent
Systems and Informatics (SISY), pp.249-251.
9. Mori, S. et al. (1992) “Historical review of OCR research and development,” Proceedings
of IEEE, vol. 80, pp. 1029-1058.
10. Pal, U. et al. (2007) “Handwritten numeral recognition of six popular scripts,” 9th
International conference on Document analysis Recognition, vol. 2, pp. 749-753.
11. Pal, A. and Singh, D. (2010) “Handwritten English Character Recognition Using Neural
Network,” International Journal of Computer Science and Communication, Vol.1, No.2,
pp. 141-144.
12. Perwej, Y. and Chaturvedi, A. (2011) “Neural Networks for Handwritten English Alphabet
Recognition,” International Journal of Computer Applications, Vol. 20, No. 7, pp. 1-5.
13. Pradeep, J. et al. (2011) “Diagonal based feature extraction for handwritten alphabet
recognition system using neural network,” International Journal of Computer Science
and Information Technology, Vol.3, No.1, pp. 27-38.
14. Sivanandam, S.N and Deepa, S.N. (2011) “Principles of Soft Computing,” Wiley-India
publisher, 2nd edition.
15. Yanhua, M. and Chuanjun, L. (2009) “A Recognition Algorithm for Chinese Character
Based on Minimum Distance Classifier,” Proceeding of 2nd International Workshop on
Computer Science and Engineering, vol.2, pp.246-249.
| 9 |
arXiv:1512.03987v3 [math.ST] 8 Oct 2016
On the Finite-Sample Analysis of Θ-estimators
Yiyuan She
Department of Statistics
Florida State University, Tallahassee, FL 32306
Abstract
In large-scale modern data analysis, first-order optimization methods are
usually favored to obtain sparse estimators in high dimensions. This paper
performs theoretical analysis of a class of iterative thresholding based estimators defined in this way. Oracle inequalities are built to show the nearly
minimax rate optimality of such estimators under a new type of regularity
conditions. Moreover, the sequence of iterates is found to be able to approach the statistical truth within the best statistical accuracy geometrically
fast. Our results also reveal different benefits brought by convex and nonconvex types of shrinkage.
1 Introduction
Big data naturally arising in machine learning, biology, signal processing, and many other areas call for scalable optimization in computation. Although for low-dimensional problems Newton or quasi-Newton methods converge fast and have efficient implementations, they typically do not scale well
to high dimensional data. In contrast, first-order optimization methods have recently attracted a great deal of attention from researchers in statistics, computer
science and engineering. They iterate based on the gradient (or a subgradient) of
the objective function, and have each iteration step being cost-effective. In high
dimensional statistics, a first-order algorithm typically proceeds in the following
manner
β^(t+1) = P ◦ (β^(t) − α∇l(β^(t))),     (1)
where P is an operator that is easy to compute, ∇l denotes the gradient of the
loss function l, and α gives the stepsize. Such a simple iterative procedure is
suitable for large-scale optimization, and converges in arbitrarily high dimensions
provided α is properly small.
P can be motivated from the perspective of statistical shrinkage or regularization and is necessary to achieve good accuracy when the dimensionality is moderate or high. For example, a proximity operator (Parikh and Boyd, 2014) is associated with a convex penalty function. But the problems of interest may not always
be convex. Quite often, P is taken as a certain thresholding rule Θ in statistical
learning, such as SCAD (Fan and Li, 2001). The resulting computation-driven estimators, which we call Θ-estimators, are fixed points of β = Θ(β − ∇l(β); λ).
To study the non-asymptotic behavior of Θ-estimators (regardless of the sample
size and dimensionality), we will establish some oracle inequalities.
During the last decade, people have performed rigorous finite-sample analysis of many high-dimensional estimators defined as globally optimal solutions to
some convex or nonconvex problems—see Bunea et al. (2007), Zhang and Huang
(2008), Bickel et al. (2009), Lounici et al. (2011), Zhang and Zhang (2012), She
(2014), among many others. Θ-estimators pose some new questions. First, although nicely, an associated optimization criterion can be constructed for any
given Θ-estimator, the objective may not be convex, and the estimator may not
correspond to any functional local (or global) minimum. Second, there are various types of Θ-estimators due to the abundant choices of Θ, but a comparative
study regarding their statistical performance in high dimensions is lacking in the
literature. Third, Θ-estimators are usually computed in an inexact way on big
datasets. Indeed, most practitioners (have to) terminate (1) before full computational convergence. These disconnects between theory and practice when using
iterative thresholdings motivate our work.
The rest of the paper is organized as follows. Section 2 introduces the Θ-estimators, the associated iterative algorithm (TISP), and some necessary notation.
Section 3 presents the main results, including some oracle inequalities, and sequential analysis of the iterates generated by TISP. Section 4 provides proof details.
2 Background and Notation
2.1 Thresholding functions
Definition 1 (Thresholding function). A thresholding function is a real valued
function Θ(t; λ) defined for −∞ < t < ∞ and 0 ≤ λ < ∞ such that (i)
Θ(−t; λ) = −Θ(t; λ); (ii) Θ(t; λ) ≤ Θ(t′ ; λ) for t ≤ t′ ; (iii) limt→∞ Θ(t; λ) =
∞; (iv) 0 ≤ Θ(t; λ) ≤ t for 0 ≤ t < ∞.
A vector version of Θ (still denoted by Θ) is defined componentwise if either
t or λ is replaced by a vector. From the definition,
Θ−1 (u; λ) := sup{t : Θ(t; λ) ≤ u}, ∀u > 0
(2)
must be monotonically non-decreasing and so its derivative is defined almost everywhere on (0, ∞). Given Θ, a critical number LΘ ≤ 1 can be introduced such
that dΘ−1 (u; λ)/ du ≥ 1 − LΘ for almost every u ≥ 0, or
LΘ := 1 − ess inf{ dΘ−1 (u; λ)/ du : u ≥ 0},
(3)
where ess inf is the essential infimum. For the perhaps most popular soft-thresholding
and hard-thresholding functions
ΘS (t; λ) = sgn(t)(|t| − λ)+ ,
ΘH (t; λ) = t1|t|≥λ ,
LΘ equals 0 and 1, respectively.
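As a quick illustration (not part of the paper), the two rules can be coded directly; the comments record the corresponding L_Θ values.

```python
import numpy as np

def theta_soft(t, lam):
    # Soft-thresholding: Theta_S(t; lam) = sgn(t) * (|t| - lam)_+ ; L_Theta = 0
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def theta_hard(t, lam):
    # Hard-thresholding: Theta_H(t; lam) = t * 1{|t| >= lam} ; L_Theta = 1
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) >= lam, t, 0.0)
```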
For any arbitrarily given Θ, we construct a penalty function PΘ (t; λ) as follows
P_Θ(t; λ) = ∫_0^{|t|} (Θ^{-1}(u; λ) − u) du = ∫_0^{|t|} (sup{s : Θ(s; λ) ≤ u} − u) du    (4)
for any t ∈ R. This penalty will be used to make a proper objective function for
Θ-estimators.
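The construction (4) can also be evaluated numerically for any monotone thresholding rule. The sketch below (an illustration only; the grid sizes and ranges are arbitrary choices, not from the paper) approximates Θ^{-1} by a supremum over a grid and integrates with the trapezoidal rule. For soft-thresholding it recovers λ|t|, as expected.

```python
import numpy as np

def penalty_from_threshold(theta, t, lam, s_max=None, grid=4001):
    """Approximate P_Theta(t; lam) = int_0^|t| (sup{s: Theta(s;lam) <= u} - u) du."""
    t = abs(float(t))
    if t == 0.0:
        return 0.0
    s_max = 10.0 * (t + lam) if s_max is None else s_max
    s = np.linspace(0.0, s_max, grid)                 # candidate s values
    th = theta(s, lam)                                # Theta(s; lam) on the grid
    u = np.linspace(0.0, t, grid)
    # Theta^{-1}(u; lam) = sup{s : Theta(s; lam) <= u}, approximated on the grid
    theta_inv = np.array([s[th <= ui].max() for ui in u])
    integrand = theta_inv - u
    # trapezoidal rule
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u)))
```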
The threshold τ (λ) := Θ−1 (0; λ) may not equal λ in general. For ease in
notation, in writing Θ(·; λ), we always assume that λ is the threshold parameter,
i.e., λ = τ (λ), unless otherwise specified. Then an important fact is that given λ,
any thresholding rule Θ satisfies Θ(t; λ) ≤ ΘH (t; λ), ∀t ≥ 0, due to property (iv),
from which it follows that
P_Θ(t; λ) ≥ P_H(t; λ),    (5)
where
P_H(t; λ) = ∫_0^{|t|} (Θ_H^{-1}(u; λ) − u) du = (−t²/2 + λ|t|)·1_(|t|<λ) + (λ²/2)·1_(|t|≥λ).    (6)
In particular, P_H(t; λ) ≤ P₀(t; λ) := (λ²/2)·1_(t≠0) and P_H(t; λ) ≤ P₁(t; λ) := λ|t|.
When Θ has discontinuities, such as t = ±λ in ΘH (t; λ), ambiguity may
arise in definition. To avoid the issue, we assume the quantity to be thresholded
never corresponds to any discontinuity of Θ. This assumption is mild because
practically used thresholding rules have few discontinuity points and such discontinuities rarely occur in real applications.
2.2 Θ-estimators
We assume a model
y = Xβ* + ε,    (7)
where X is an n × p design matrix, y is a response vector in Rn , β ∗ is the unknown coefficient vector, and ǫ is a sub-Gaussian random vector with mean zero
and scale bounded by σ, cf. Definition 2 in Section 4 for more detail. Then a
Θ-estimator β̂, driven by the computational procedure (1), is defined as a solution
to the Θ-equation
ρβ = Θ(ρβ + Xᵀy/ρ − XᵀXβ/ρ; λ),    (8)
where ρ, the scaling parameter, does not depend on β. Having ρ appropriately
large is crucial to guarantee the convergence of the computational procedure.
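The defining equation (8) can be checked numerically: a candidate β is (approximately) a Θ-estimator when the fixed-point residual below is (near) zero. The helper is only an illustration; its name and interface are not from the paper.

```python
import numpy as np

def theta_equation_residual(beta, X, y, theta, lam, rho):
    """Sup-norm residual of (8): rho*beta - Theta(rho*beta + X^T y/rho - X^T X beta/rho; lam)."""
    z = rho * beta + X.T @ y / rho - X.T @ (X @ beta) / rho
    return float(np.linalg.norm(rho * beta - theta(z, lam), ord=np.inf))
```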
All popularly used penalty functions are associated with thresholdings, such
as the ℓr (0 < r ≤ 1), ℓ2 , SCAD (Fan and Li, 2001), MCP (Zhang, 2010a),
capped ℓ1 (Zhang, 2010b), ℓ0 , elastic net (Zou and Hastie, 2005), Berhu (Owen,
2007; He et al., 2013), ℓ0 + ℓ2 (She, 2009), to name a few. Table 1 lists some
examples. From a shrinkage perspective, thresholding rules usually suffice in
statistical learning.
Equation (8) can be re-written in terms of the scaled design X̃ = X/ρ and the corresponding coefficient vector β̃ = ρβ:
β̃ = Θ(β̃ + X̃ᵀy − X̃ᵀX̃β̃; λ).    (9)
We will show that the λ in the scaled form does not have to adjust for the sample
size, which is advantageous in regularization parameter tuning.
A simple iterative procedure can be defined based on (8) or (9):
β̃^(t+1) = Θ(β̃^(t) + X̃ᵀy − X̃ᵀX̃β̃^(t); λ),  β^(t+1) = β̃^(t+1)/ρ,    (10)
which is called the Thresholding-based Iterative Selection Procedure (TISP) (She,
From Theorem 2.1 of She (2012), given an arbitrary Θ, TISP ensures the following function-value descent property when ρ ≥ ‖X‖₂/(2 − L_Θ):
f(β^(t+1); λ) ≤ f(β^(t); λ).    (11)
Here, the energy function (objective function) is constructed as
f(β; λ) = (1/2)‖Xβ − y‖₂² + Σ_{j=1}^{p} P(ρ|β_j|; λ),    (12)
where the penalty P can be PΘ as defined in (4), or more generally,
P(t; λ) = P_Θ(t; λ) + q(t; λ),    (13)
with q an arbitrary function satisfying q(t, λ) ≥ 0, ∀t ∈ R and q(t; λ) = 0
if t = Θ(s; λ) for some s ∈ R. Furthermore, we can show that when ρ >
kXk2 /(2 − LΘ ), any limit point of β (t) is necessarily a fixed point of (8), and
thus a Θ-estimator. See She (2012) for more detail. Note that f is not necessarily unique when Θ has discontinuities—for example, the capped ℓ1 penalty, P₀(t; λ) = (λ²/2)·1_(t≠0), and P_H are all associated with the same Θ_H. Because of the
many-to-one mapping from penalty functions to thresholding functions, iterating
(1) with a well-designed thresholding rule is perhaps more convenient than solving a nonconvex penalized optimization problem. Indeed, some penalties (like
SCAD) are designed from the thresholding viewpoint.
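A minimal TISP implementation following (10), with the energy (12) tracked so that the descent property (11) can be observed, is sketched below. The penalty P is supplied by the caller (e.g., the numerical construction above), and ρ is taken at least ‖X‖₂ so the scaled design has spectral norm at most 1; defaults and names are illustrative assumptions.

```python
import numpy as np

def tisp(X, y, theta, lam, rho=None, n_iter=200, penalty=None):
    """Thresholding-based Iterative Selection Procedure, cf. (10)."""
    n, p = X.shape
    rho = np.linalg.norm(X, 2) if rho is None else rho   # rho >= ||X||_2
    Xs = X / rho                                          # scaled design
    beta_t = np.zeros(p)                                  # beta-tilde = rho * beta
    energies = []
    for _ in range(n_iter):
        beta_t = theta(beta_t + Xs.T @ y - Xs.T @ (Xs @ beta_t), lam)
        if penalty is not None:                           # optional: track f in (12)
            beta = beta_t / rho
            f = 0.5 * np.sum((X @ beta - y) ** 2) + sum(
                penalty(abs(rho * bj), lam) for bj in beta)
            energies.append(f)
    return beta_t / rho, energies
```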
The following theorem shows that the set of Θ-estimators includes all locally optimal solutions of (1/2)‖Xβ − y‖₂² + Σ_{j=1}^{p} P_Θ(|β_j|; λ) =: f_Θ(β).
Theorem 1. Let β̂ be a local minimum point (or a coordinate-wise minimum
point) of fΘ (·). If Θ is continuous at β̂ + X T y − X T X β̂, β̂ must satisfy β =
Θ(β + X T y − X T Xβ; λ).
The converse is not necessarily true. Namely, Θ-estimators may not guarantee
functional local optimality, let alone global optimality. This raises difficulties in
statistical analysis. We will give a novel and unified treatment which can yield
nearly optimal error rate for various thresholdings.
Table 1: Some examples of thresholding functions and their associated quantities (Θ, L_Θ, P_Θ, and, where applicable, other penalties P associated with the same Θ).

Soft: Θ_S(t; λ) = (t − λ sgn(t))·1_(|t|>λ);  L_Θ = 0;  P_Θ(t; λ) = λ|t|.
Ridge (η ≥ 0): Θ(t; η) = t/(1 + η);  L_Θ = −η;  P_Θ(t; η) = (η/2)t².
Hard: Θ_H(t; λ) = t·1_(|t|>λ);  L_Θ = 1;  P_Θ(t; λ) = (−t²/2 + λ|t|)·1_(|t|<λ) + (λ²/2)·1_(|t|≥λ);  other P: min(λ|t|, λ²/2) ('capped ℓ1'), (λ²/2)·1_(t≠0) ('ℓ0').
Elastic net (η ≥ 0): Θ(t; λ) = ((t − λ sgn(t))/(1 + η))·1_(|t|≥λ);  L_Θ = −η;  P_Θ(t; λ) = λ|t| + (η/2)t².
Berhu (η ≥ 0): Θ(t; λ) = 0 if |t| < λ, = t − λ sgn(t) if λ ≤ |t| ≤ λ + λ/η, = t/(1 + η) if |t| > λ + λ/η;  L_Θ = 0;  P_Θ(t; λ) = λ|t| if |t| ≤ λ/η, = (η/2)t² + λ²/(2η) if |t| > λ/η.
Hard-ridge (η ≥ 0): Θ(t; λ) = (t/(1 + η))·1_(|t|>λ);  L_Θ = 1;  P_Θ(t; λ) = −t²/2 + λ|t| if |t| < λ/(1 + η), = (η/2)t² + λ²/(2(1 + η)) if |t| ≥ λ/(1 + η);  other P: (λ²/(2(1 + η)))·1_(t≠0) + (η/2)t² ('ℓ0 + ℓ2').
SCAD (a > 2): Θ(t; λ) = 0 if |t| ≤ λ, = t − λ sgn(t) if λ < |t| ≤ 2λ, = ((a − 1)t − aλ sgn(t))/(a − 2) if 2λ < |t| ≤ aλ, = t if |t| > aλ;  L_Θ = 1/(a − 1);  dP_Θ(t; λ)/dt = λ sgn(t) if |t| ≤ λ, = (aλ sgn(t) − t)/(a − 1) if λ < |t| ≤ aλ, = 0 if |t| > aλ.
MCP (γ ≥ 1): Θ(t; λ) = 0 if |t| < λ, = (t − λ sgn(t))/(1 − 1/γ) if λ ≤ |t| < γλ, = t if |t| ≥ γλ;  L_Θ = 1/γ;  P_Θ(t; λ) = (−t²/(2γ) + λ|t|)·1_(|t|<γλ) + (γλ²/2)·1_(|t|≥γλ) = (1/γ)P_H(t; γλ).
ℓr (0 < r < 1, ζ ≥ 0): Θ(t; ζ) = 0 if |t| ≤ ζ^(1/(2−r))(2 − r)(2 − 2r)^((r−1)/(2−r)), and otherwise Θ(t; ζ) = sgn(t)·max{θ : ζ^(1/(2−r))[r(1 − r)]^(1/(2−r)) ≤ θ ≤ |t|, θ + ζrθ^(r−1) = |t|} (the set is a singleton);  L_Θ = 1;  P_Θ(t; ζ) = ζ|t|^r.
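For illustration, two of the nonconvex rules in Table 1 can be implemented directly. This is a sketch based on the piecewise formulas above (not code from the paper); the default parameter values a = 3.7 and γ = 2 are common illustrative choices.

```python
import numpy as np

def theta_scad(t, lam, a=3.7):
    # SCAD thresholding (a > 2); L_Theta = 1/(a - 1)
    t = np.asarray(t, dtype=float)
    out = np.where(np.abs(t) <= lam, 0.0, t - lam * np.sign(t))
    mid = (np.abs(t) > 2 * lam) & (np.abs(t) <= a * lam)
    out = np.where(mid, ((a - 1) * t - a * lam * np.sign(t)) / (a - 2), out)
    return np.where(np.abs(t) > a * lam, t, out)

def theta_mcp(t, lam, gamma=2.0):
    # MCP thresholding (gamma >= 1); L_Theta = 1/gamma
    t = np.asarray(t, dtype=float)
    out = np.where(np.abs(t) < lam, 0.0,
                   (t - lam * np.sign(t)) / (1.0 - 1.0 / gamma))
    return np.where(np.abs(t) >= gamma * lam, t, out)
```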
3 Main Results
To address the problems in arbitrary dimensions (with possibly large p and/or n),
we aim to establish non-asymptotic oracle inequalities (Donoho and Johnstone,
1994). For any β = [β1 , . . . , βp ]T , define
J (β) = {j : β_j ≠ 0},  J(β) = |J (β)| = ‖β‖₀.    (14)
Recall P₁(t; λ) = λ|t|, P₀(t; λ) = (λ²/2)·1_(t≠0), P_H(t; λ) = (−t²/2 + λ|t|)·1_(|t|<λ) + (λ²/2)·1_(|t|≥λ). For convenience, we use P₁(β; λ) to denote λ‖β‖₁ when there is no ambiguity. P₀(β; λ) and P_H(β; λ) are used similarly. We denote by ≲ an inequality that holds up to a multiplicative constant.
Unless otherwise specified, we study scaled Θ-estimators satisfying equation (9), where β̃ = ρβ, X̃ = X/ρ, and ρ ≥ ‖X‖₂ (and so ‖X̃‖₂ ≤ 1). By abuse
of notation, we still write β for β̃, and X for X̃. As mentioned previously, we
always assume that Θ is continuous at β̂ + X T y − X T X β̂ in Sections 3.1 & 3.2;
similarly, Section 3.3 assumes that Θ is continuous at β (t) + X T y − X T Xβ (t) .
The past works on the lasso show that a certain incoherence requirement must
be assumed to obtain sharp error rates. In most theorems, we also need to make
similar assumptions to prevent the design matrix from being too collinear. We will
state a new type of regularity conditions, which are called comparison regularity
conditions, under which oracle inequalities and sequential statistical error bounds
can be obtained for any Θ.
3.1 PΘ -type oracle inequalities under R0
In this subsection, we use PΘ to bound the prediction error of Θ-estimators. Our regularity condition is stated as follows.
A SSUMPTION R0 (δ, ϑ, K, β, λ) Given X, Θ, β, λ, there exist δ > 0, ϑ > 0,
K ≥ 0 such that the following inequality holds for any β ′ ∈ Rp
ϑP_H(β′ − β; λ) + (L_Θ/2)‖β′ − β‖₂² ≤ ((2 − δ)/2)‖X(β′ − β)‖₂² + P_Θ(β′; λ) + KP_Θ(β; λ).    (15)
Roughly, (15) means that 2kX(β ′ − β)k22 can dominate LΘ kβ ′ − βk22 with
the help from P_Θ(β′; λ) and KP_Θ(β; λ) for some K > 0.
Theorem 2. Let β̂ be any Θ-estimator satisfying β = Θ(β + Xᵀy − XᵀXβ; λ) with λ = Aσ√(log(ep)) and A a constant. Then for any sufficiently large A, the following oracle inequality holds for β ∈ Rᵖ
E[‖Xβ̂ − Xβ*‖₂²] ≲ ‖Xβ − Xβ*‖₂² + P_Θ(β; λ) + σ²,    (16)
provided R0 (δ, ϑ, K, β, λ) is satisfied for some constants δ > 0, ϑ > 0, K ≥ 0.
Theorem 2 is applicable to any Θ. Let’s examine two specific cases. First,
consider LΘ ≤ 0, which indicates that PΘ is convex. Because PH ≤ PΘ and PH is
sub-additive: PH (t + s) ≤ PH (t) + PH (s) due to its concavity (Zhang and Zhang,
2012), R0 (δ, ϑ, K, β, λ) is always satisfied (for any δ ≤ 2, 0 < ϑ ≤ 1, K ≥ ϑ).
Corollary 1. Suppose Θ satisfies LΘ ≤ 0. Then, (16) holds for all corresponding
Θ-estimators, without requiring any regularity condition.
In the case of hard-thresholding or SCAD thresholding, PΘ (β; λ) does not
depend on the magnitude of β, and we can get a finite complexity rate in the
oracle inequality. Also, R0 can be slightly relaxed, by replacing KPΘ (β; λ) with
KP0 (β; λ) in (15). We denote the modified version by R′0 (δ, ϑ, K, β, λ).
Corollary 2. Suppose that Θ corresponds to a bounded nonconvex penalty satisfying PΘ (t; λ) ≤ Cλ2 , ∀t ∈ R, for some constant C > 0. Then in the setting of
Theorem 2,
E[‖Xβ̂ − Xβ*‖₂²] ≲ ‖Xβ − Xβ*‖₂² + σ²J(β) log(ep) + σ²,    (17)
provided R′0 (δ, ϑ, K, β, λ) is satisfied for some constants δ > 0, ϑ > 0, K ≥ 0.
Remark 1. The right-hand side of the oracle inequalities involves a bias term
kXβ − Xβ ∗ k22 and a complexity term PΘ (β; λ). Letting β = β ∗ in, say, (16), the
bias vanishes, and we obtain a prediction error bound of the order σ 2 J ∗ log(ep)
(omitting constant factors), where J ∗ denotes the number of nonzero components
in β ∗ . On the other hand, the existence of the bias term ensures the applicability
of our results to approximately sparse signals. For example, when β ∗ has many
small but nonzero components, we can use a reference β with a much smaller
support than J (β ∗ ) to get a lower error bound, as a benefit from the bias-variance
tradeoff.
Remark 2. When R0 holds with δ > 1, the proof of Theorem 2 shows that
the multiplicative constant for kXβ − Xβ ∗ k22 can be as small as 1. The corresponding oracle inequalities are called ‘sharp’ in some works (Koltchinskii et al.,
2011). This also applies to Theorem 3. Our proof scheme can also deliver highprobability form results, without requiring an upper bound of kXk2 .
Remark 3. Corollary 2 applies to all “hard-thresholding like” Θ, because when
Θ(t; λ) = t for |t| > cλ, PΘ (t; λ) ≤ c2 λ2 . It is worth mentioning that the error rate
of σ 2 J ∗ log(ep) cannot be significantly improved in a minimax sense. In fact, under the Gaussian noise contamination and some regularity conditions, there exist
constants C, c > 0 such that inf β̌ supβ∗ :J(β∗ )≤J E[kX(β̌ − β ∗ )k22 )/(CPo (J))] ≥
c > 0, where β̌ denotes an arbitrary estimator of β ∗ and Po (J) = σ 2 {J +
J log(ep/J)}. See, e.g., Lounici et al. (2011) for a proof. The bound in (17)
achieves the minimax optimal rate up to a mild logarithm factor for any n and
p.
3.2 P0 -type oracle inequalities under R1
This part uses P0 instead of PΘ to make an oracle bound. We will show that under
another type of comparison regularity conditions, all thresholdings can attain the
essentially optimal error rate given in Corollary 2. We will also show that in the
case of soft-thresholding, our condition is more relaxed than many other assumptions in the literature.
A SSUMPTION R1 (δ, ϑ, K, β, λ) Given X, Θ, β, λ, there exist δ > 0, ϑ > 0,
K ≥ 0 such that the following inequality holds for any β ′ ∈ Rp
ϑP_H(β′ − β; λ) + (L_Θ/2)‖β′ − β‖₂² + P_Θ(β; λ) ≤ ((2 − δ)/2)‖X(β′ − β)‖₂² + P_Θ(β′; λ) + Kλ²J(β).    (18)
Theorem 3. Let β̂ be a Θ-estimator and λ = Aσ√(log(ep)) with A a sufficiently large constant. Then E[‖Xβ̂ − Xβ*‖₂²] ≲ ‖Xβ − Xβ*‖₂² + λ²J(β) + σ² holds for any β ∈ Rᵖ if R1(δ, ϑ, K, β, λ) is satisfied for some constants δ > 0, ϑ > 0, K ≥ 0.
Remark 4. Some fusion thresholdings, like those associated with elastic net,
Berhu and Hard-Ridge (cf. Table 1), involve an additional ℓ2 shrinkage. In the situation, the complexity term in the oracle inequality should involve both J(β) and
kβk22 . We can modify our regularity conditions to obtain such ℓ0 +ℓ2 bounds using
the same proof scheme. The details are however not reported in this paper. In addition, our results can be extended to Θ-estimators with a stepsize parameter. Given
λ > 0 and 0 < α ≤ 1, suppose λα is introduced such that αPΘ (t; λ) = PΘ (t; λα )
for any t. Then, for any β̂ as a fixed point of β = Θ(β −αX T Xβ +αX T y; λα ),
an analogous result can be obtained (the only change is that LΘ is replaced by
LΘ /α).
To give some more intuitive regularity conditions, we suppose PΘ is concave
on [0, ∞). Examples include ℓr (0 ≤ r ≤ 1), MCP, SCAD, and so on. The concavity implies PΘ (t + s) ≤ PΘ (t) + PΘ (s), and so PΘ (βJ′ ; λ) − PΘ (βJ ; λ) ≤
PΘ ((β ′ − β)J ; λ) and PΘ (βJ′ c ; λ) = PΘ ((β ′ − β)J c ; λ), where J c is the complement of J and βJ is the subvector of β indexed by J . Then R1 is implied by
R′1 below for given J = J (β).
A SSUMPTION R′1 (δ, ϑ, K, J , λ) Given X, Θ, J , λ, there exist δ > 0, ϑ > 0,
K ≥ 0 such that for any ∆ ∈ Rp ,
P_Θ(∆_J; λ) + ϑP_H(∆_J; λ) + (L_Θ/2)‖∆‖₂² ≤ ((2 − δ)/2)‖X∆‖₂² + Kλ²J + P_Θ(∆_{J^c}; λ) − ϑP_H(∆_{J^c}; λ),    (19)
or
(1 + ϑ)P_Θ(∆_J; λ) + (L_Θ/2)‖∆‖₂² ≤ ((2 − δ)/2)‖X∆‖₂² + Kλ²J + (1 − ϑ)P_Θ(∆_{J^c}; λ).    (20)
When Θ is the soft-thresholding, it is easy to verify that a sufficient condition
for (20) is
(1 + ϑ)‖∆_J‖₁ ≤ K√J·‖X∆‖₂ + ‖∆_{J^c}‖₁,    (21)
for some ϑ > 0 and K ≥ 0. (21) has a simpler form than R1. In the following, we
give the definitions of the RE and the compatibility condition (Bickel et al., 2009;
van de Geer and Bühlmann, 2009) to make a comparison to (21).
A SSUMPTION RE(κRE , ϑRE , J ). Given J ⊂ [p], we say that X ∈ Rn×p satisfies RE(κRE , ϑRE , J ), if for positive numbers κRE , ϑRE > 0,
J‖X∆‖₂² ≥ κ_RE·‖∆_J‖₁²,    (22)
or more restrictively,
‖X∆‖₂² ≥ κ_RE·‖∆_J‖₂²,    (23)
for all ∆ ∈ Rᵖ satisfying
(1 + ϑ_RE)‖∆_J‖₁ ≥ ‖∆_{J^c}‖₁.    (24)
Assume RE(κ_RE, ϑ_RE, J ) holds. When (1 + ϑ_RE)‖∆_J‖₁ ≤ ‖∆_{J^c}‖₁, (21) holds trivially with ϑ = ϑ_RE; otherwise, (22) indicates (1 + ϑ)‖∆_J‖₁ ≤ K√J·‖X∆‖₂ with K = (1 + ϑ_RE)/√κ_RE. So intuitively, we have the following relationship:
(23) + (24) ⇒ (22) + (24) ⇒ (21) ⇒ (20) ⇒ (19) ⇒ (18).
In particular, R1 is less demanding than RE.
Next, let’s compare the regularity conditions required by ΘS and ΘH to achieve
the nearly optimal error rate. Recall R1 (δ, ϑ, K, β, λ) and R′0 (δ, ϑ, K, β, λ) in
Theorem 3 and Corollary 2, respectively:
ϑP_H(β′ − β; λ) + λ‖β‖₁ ≤ ((2 − δ)/2)‖X(β′ − β)‖₂² + λ‖β′‖₁ + Kλ²J,
ϑP_H(β′ − β; λ) + (1/2)‖β′ − β‖₂² ≤ ((2 − δ)/2)‖X(β′ − β)‖₂² + P_H(β′; λ) + Kλ²J.
R′0 (δ, ϑ, K, β, λ) implies R1 (δ, ϑ, K + 1, β, λ). Indeed, for ∆ = β ′ − β,
λ‖β‖₁ − λ‖β′‖₁ ≤ λ‖∆_J‖₁ − λ‖β′_{J^c}‖₁
  ≤ (1/2)λ²J + (1/2)‖∆_J‖₂² − P_H(β′_{J^c}; λ)
  ≤ (1/2)λ²J + (1/2)‖∆_J‖₂² − P_H(β′; λ) + P_H(β′_J; λ)
  ≤ (1/2)λ²J + (1/2)‖∆_J‖₂² − P_H(β′; λ) + P₀(β′_J; λ)
  ≤ λ²J + (1/2)‖∆_J‖₂² − P_H(β′; λ).
On the other hand, Corollary 2 studies when all ΘH -estimators have the optimal performance guarantee, while practically, one may initialize (10) with a
carefully chosen starting point.
Theorem 4. Given any Θ, there exists a Θ-estimator (which minimizes (12)) such
that (16) holds without requiring any regularity condition. In particular, if Θ
corresponds to a bounded nonconvex penalty as described in Corollary 2, then
there exists a Θ-estimator such that (17) holds free of regularity conditions.
Theorem 4 does not place any requirement on X. So it seems that applying
Θ_H may have some further advantages in practice. (How to efficiently pick a Θ_H-estimator to completely remove all regularity conditions is, however, beyond the scope of the current paper. For a possible idea of relaxing the conditions, see
Remark 6.)
Finally, we make a discussion of the scaling parameter ρ. Our results so far
are obtained after performing X ← X/ρ with ρ ≥ kXk2 . The prediction error is
invariant to the transformation. But it affects the regularity conditions.
Seen from (8), 1/ρ2 is related to the stepsize α appearing in (1), also known
as the learning rate in the machine learning literature. From the computational
results in Section 2.2, ρ must be large enough to guarantee TISP is convergent.
The larger the value of ρ is, the smaller the stepsize is (and so the slower the
convergence is). Based on the machine learning literature, slow learning rates are
always recommended when training a nonconvex learner (e.g., artificial neural
networks). Perhaps interestingly, in addition to computational efficiency reasons,
all our statistical analyses caution against using an extremely large scaling when
LΘ > 0. For example, R′0 (δ, ϑ, K, β, λ) for an unscaled X reads ϑPH (ρ(β ′ −
β); λ) + ρ2 kβ ′ − βk22 /2 ≤ (2 − δ)kX(β ′ − β)k22 /2 + PH (ρβ ′ ; λ) + Kλ2 J, which
becomes difficult to hold when ρ is very large. This makes the statistical error
bound break down easily. Therefore, a good idea is to have ρ just appropriately
large (mildly greater than kXk2 ). The sequential analysis of the iterates in the
next part also supports the point.
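In practice the discussion above amounts to a one-line preprocessing step; the sketch below picks ρ only mildly above ‖X‖₂ (the factor 1.01 is an arbitrary illustrative choice, not a value prescribed by the paper).

```python
import numpy as np

def scale_design(X, eps=0.01):
    """Return rho = (1 + eps) * ||X||_2 and the scaled design X / rho."""
    rho = (1.0 + eps) * np.linalg.norm(X, 2)
    return rho, X / rho
```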
3.3 Sequential Algorithmic Analysis
We perform statistical error analysis of the sequence of iterates defined by TISP:
β (t+1) = Θ(β (t) + X T y − X T Xβ (t) ; λ), where kXk2 ≤ 1 and β (0) is the starting point. The study is motivated from the fact that in large-scale applications,
Θ-estimators are seldom computed exactly. Indeed, why bother to run TISP till
computational convergence? How does the statistical accuracy improve (or deteriorate) at t increases? Lately, there are some key advances on the topic. For
example, Agarwal et al. (2012) showed that for convex problems (not necessarily
strongly convex), proximal gradient algorithms can be geometrically fast to approach a globally optimal solution β̂ within the desired statistical precision, under
a set of conditions. We however care about the statistical error between β (t) and
the genuine β ∗ in this work.
We will introduce two comparison regularity conditions (analogous to R0
and R1 ) to present both PΘ -type and P0 -type error bounds. Hereinafter, denote
(β T Aβ)1/2 by kβkA , where A is a positive semi-definite matrix.
A SSUMPTION S0 (δ, ϑ, K, β, β ′ , λ) Given X, Θ, β, β ′ , λ, there exist δ > 0,
ϑ > 0, K ≥ 0 such that the following inequality holds
ϑP_H(β′ − β; λ) + ((L_Θ + δ)/2)‖β′ − β‖₂² ≤ ‖X(β′ − β)‖₂² + P_Θ(β′; λ) + KP_Θ(β; λ).    (25)
A SSUMPTION S1 (δ, ϑ, K, β, β ′ , λ) Given X, Θ, β, β ′ , λ, there exist δ > 0,
ϑ > 0, K ≥ 0 such that the following inequality holds
ϑP_H(β′ − β; λ) + ((L_Θ + δ)/2)‖β′ − β‖₂² + P_Θ(β; λ) ≤ ‖X(β′ − β)‖₂² + P_Θ(β′; λ) + (K + 1)λ²J(β).    (26)
(25) and (26) require a bit more than (15) and (18), respectively, due to kXk2 ≤
1. The theorem and the corollary below perform sequential analysis of the iterates
and reveal the explicit roles of δ, ϑ, K (which can often be treated as constants).
Theorem 5. Suppose S0(δ, ϑ, K, β*, β^(t+1), λ) is satisfied for some δ > 0, ϑ > 0, K ≥ 0. Then for λ = Aσ√(log(ep)/((δ ∧ ϑ)ϑ)) with A sufficiently large, the following error bound holds with probability at least 1 − Cp^(−cA²):
((1 + δ)/2)‖β^(t+1) − β*‖²_(I−XᵀX) ≤ (1/2)‖β^(t) − β*‖²_(I−XᵀX) + (K + 1)P_Θ(β*; λ),    (27)
where C, c are universal positive constants.
Similarly, under the same choice of the regularization parameter, if S1(δ, ϑ, K, β*, β^(t), λ) is satisfied for some δ > 0, ϑ > 0, K ≥ 0, then (28) is true with probability at least 1 − Cp^(−cA²):
((1 + δ)/2)‖β^(t+1) − β*‖²_(I−XᵀX) ≤ (1/2)‖β^(t) − β*‖²_(I−XᵀX) + (K + 1)λ²J*.    (28)
Corollary 3. In the setting of Theorem 5, for any initial point β^(0) ∈ Rᵖ, we have
‖β^(t) − β*‖²_(I−XᵀX) ≤ κ^t ‖β^(0) − β*‖²_(I−XᵀX) + (κ/(1 − κ))K′P_Θ(β*; λ),    (29)
‖β^(t) − β*‖²_(I−XᵀX) ≤ κ^t ‖β^(0) − β*‖²_(I−XᵀX) + (κ/(1 − κ))K′λ²J*,    (30)
under S0(δ, ϑ, K, β*, β^(s), λ) and S1(δ, ϑ, K, β*, β^(s), λ), 0 ≤ s ≤ t − 1, respectively, with probability at least 1 − Cp^(−cA²). Here, κ = 1/(1 + δ), K′ = 2(K + 1).
Remark 5. We can get some sufficient conditions for S0 and S1 , similar to the
discussions made in Section 3.2. When kXk2 is strictly less than 1, (25) can be
relaxed to ϑPH (β ′ − β; λ) + (LΘ + δ)kβ ′ − βk22 /2 ≤ (2 + δ)kX(β ′ − β)k22 /2 +
PΘ (β ′ ; λ) + KPΘ (β; λ) for some δ > 0. The proof in Section 4.4 also gives
expectation-form results, with an additional additive term Cσ 2 /(δ ∧ ϑ) in the
upper bounds. Similar to Remark 4, we can also study Θ-iterates with stepsize α,
in which case the weighting matrix in (27)-(30) changes from I −X T X to I/α −
X T X, and the factor (LΘ + δ)/2 in (25) and (26) is replaced by (LΘ + δ)/(2α).
Remark 6. Theorem 5 still applies when δ, ϑ, K and λ are dependent on t. For
example, if we use a varying threshold sequence, i.e., β (t+1) = Θ(β (t) + X T y −
XᵀXβ^(t); λ^(t)), then (30) becomes
‖β^(t) − β*‖²_(I−XᵀX) ≤ κ^t ‖β^(0) − β*‖²_(I−XᵀX) + K′J* Σ_{s=0}^{t−1} κ^{t−s} λ_s².
This allows for much larger values of λs to be used in earlier iterations to attain
the same accuracy. It relaxes the regularity condition required by applying a fixed
threshold level.
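Remark 6 suggests running the iterations with a decreasing threshold sequence λ_s. A minimal variant of TISP implementing this idea is sketched below; the geometric decay schedule and default values are illustrative assumptions, not prescribed by the paper.

```python
import numpy as np

def tisp_decreasing_threshold(X, y, theta, lam_final, lam_init=None,
                              decay=0.9, n_iter=200, rho=None):
    """TISP iterations with thresholds lam_s decreasing geometrically to lam_final."""
    n, p = X.shape
    rho = np.linalg.norm(X, 2) if rho is None else rho
    Xs = X / rho
    beta_t = np.zeros(p)
    lam = 10.0 * lam_final if lam_init is None else lam_init
    for _ in range(n_iter):
        beta_t = theta(beta_t + Xs.T @ y - Xs.T @ (Xs @ beta_t), lam)
        lam = max(lam * decay, lam_final)   # larger thresholds in early iterations
    return beta_t / rho
```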
At the end, we re-state some results under ρ > kXk2 , to get more intuition
and implications. For a general X (unscaled), (30) reads
‖β^(t) − β*‖²_(ρ²I−XᵀX) ≤ κ^t ‖β^(0) − β*‖²_(ρ²I−XᵀX) + (κ/(1 − κ))K′σ²λ²J*.
Set ρ to be a number slightly larger than kXk2 , i.e., ρ = (1+ǫ)kXk2 , ǫ > 0. Then,
we know that the prediction error kXβ (t) − Xβ ∗ k22 decays geometrically fast to
O(σ 2J ∗ log(ep)) with high probability, when ǫ, δ, ϑ, K are viewed as constants; a
similar conclusion is true for the estimation error. This is simply due to
((ρ² − ‖X‖₂²)/‖X‖₂²)·‖β^(t) − β*‖²_(XᵀX) ≤ (ρ² − ‖X‖₂²)‖β^(t) − β*‖₂² ≤ ‖β^(t) − β*‖²_(ρ²I−XᵀX).
Accordingly, there is no need to run TISP till convergence—one can terminate
the algorithm earlier, at, say, tmax = log{ρ2 kβ (0) − β ∗ k2 /(Kσ 2 λ2 J ∗ )} /log(1/κ),
without sacrificing much statistical accuracy. The formula also reflects that the
quality of the initial point affects the required iteration number.
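The early-stopping rule can be turned into a small helper. All inputs except ρ are population quantities (‖β^(0) − β*‖, J*, σ, κ), so in practice they would have to be replaced by rough estimates; the function below is purely illustrative.

```python
import numpy as np

def early_stopping_iterations(rho, init_dist, K, sigma, lam, J_star, kappa):
    """t_max = log( rho^2 ||beta^(0)-beta*||^2 / (K sigma^2 lam^2 J*) ) / log(1/kappa)."""
    num = np.log((rho ** 2) * (init_dist ** 2) / (K * sigma ** 2 * lam ** 2 * J_star))
    return int(np.ceil(num / np.log(1.0 / kappa)))
```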
There are some related results in the literature. (i) As mentioned previously,
in a broad convex setting Agarwal et al. (2012) proved the geometric decay of
the optimization error kβ (t) − β̂k to the desired statistical precision, where β̂
is the convergent point. Loh and Wainwright (2015) extended the conclusion to
a family of nonconvex optimization problems, and they showed that when some regularity conditions hold, every local minimum point is close to the authentic β*.
In comparison, our results are derived toward the statistical error between β (t) and
β ∗ directly, without requiring all local minimum points to be statistically accurate.
(ii) Zhang (2010b) showed a similar fast-converging statistical error bound for an
elegant multi-stage capped-ℓ1 regularization procedure. However, the procedure
carries out an expensive ℓ1 optimization at each step. Instead, (10) involves a
simple and cheap thresholding, and our analysis covers any Θ.
Acknowledgement
The author would like to thank the editor, the associated editor and two anonymous referees for their careful comments and useful suggestions that improve the
quality of the paper. The author also appreciates Florentina Bunea for the encouragement. This work was supported in part by NSF grant DMS-1352259.
4 Proofs
Throughout the proofs, we use C, c, L to denote universal non-negative constants.
They are not necessarily the same at each occurrence. Given any matrix A, we
use R(A) to denote its column space. Denote by PA the orthogonal projection
matrix onto R(A), i.e., PA = A(AT A)+ AT , where + stands for the MoorePenrose pseudoinverse. Let [p] := {1, · · · , p}. Given J ⊂ [p], we use XJ to
denote a column submatrix of X indexed by J .
Definition 2. ξ is called a sub-Gaussian random variable if there exist constants C, c > 0 such that P{|ξ| ≥ t} ≤ Ce^(−ct²), ∀t > 0. The scale (ψ₂-norm) for
ξ is defined as σ(ξ) = inf{σ > 0 : E exp(ξ 2/σ 2 ) ≤ 2}. ξ ∈ Rp is called
a sub-Gaussian random vector with scale bounded by σ if all one-dimensional
marginals hξ, αi are sub-Gaussian satisfying khξ, αikψ2 ≤ σkαk2 , ∀α ∈ Rp .
Examples include Gaussian random variables and bounded random variables
such as Bernoulli. Note that the assumption that vec (ǫ) is sub-Gaussian does not
imply that the components of ǫ must be i.i.d.
We begin with two basic facts. Because they are special cases of Lemma 1
and Lemma 2 in She (2012), respectively, we state them without proofs.
Lemma 1. Given an arbitrary thresholding rule Θ, let P be any function satisfying P(θ; λ) − P(0; λ) = P_Θ(θ; λ) + q(θ; λ), where P_Θ(θ; λ) := ∫_0^{|θ|} (sup{s : Θ(s; λ) ≤ u} − u) du, q(θ; λ) is nonnegative and q(Θ(t; λ)) = 0 for all t. Then β̂ = Θ(y; λ) is always a globally optimal solution to min_β (1/2)‖y − β‖₂² + P(|β|; λ).
It is the unique optimal solution provided Θ(·; λ) is continuous at |y|.
Lemma 2. Let Q0 (β) = ky − βk22 /2 + PΘ (|β|; λ). Denote by β̂ the unique
minimizer of Q0 (β). Then for any δ, Q0 (β̂ + δ) − Q0 (β̂) ≥ (1 − LΘ )kδk22 /2.
4.1 Proof of Theorem 1
Let s(u; λ) := Θ−1 (u; λ) − u for u ≥ 0. Assume β̂ is a local minimum point (the
proof for a coordinate-wise minimum point follows the same lines). We write fΘ
as f for simplicity. Let δf(β; h) denote the Gateaux differential of f at β with increment h: δf(β; h) = lim_{ε→0+} (f(β + εh) − f(β))/ε. By the definition of P_Θ, δf(β; h) exists for any h ∈ Rᵖ. Let l(β) = (1/2)‖Xβ − y‖₂². We consider the following directional vectors: d^j = [d₁, · · · , d_p]ᵀ with d_j = ±1 and d_{j′} = 0, ∀j′ ≠ j. Then for any j,
δl(β; d^j) = d_j x_jᵀ(Xβ − y),    (31)
δP_Θ(β; d^j) = s(|β_j|)sgn(β_j)d_j if β_j ≠ 0, and δP_Θ(β; d^j) = s(|β_j|) if β_j = 0.    (32)
Due to the local optimality of β̂, δf(β̂; d^j) ≥ 0, ∀j. When β̂₁ ≠ 0, we obtain x₁ᵀ(Xβ̂ − y) + s(|β̂₁|; λ)sgn(β̂₁) = 0. When β̂₁ = 0, x₁ᵀ(Xβ̂ − y) + s(|β̂₁|; λ) ≥ 0 and −x₁ᵀ(Xβ̂ − y) + s(|β̂₁|; λ) ≥ 0, i.e., |x₁ᵀ(Xβ̂ − y)| ≤ s(|β̂₁|; λ) = Θ^{-1}(0; λ). To summarize, when f achieves a local minimum or a coordinate-wise minimum (or more generally, a local coordinate-wise minimum) at β̂, we have
β̂_j ≠ 0 ⇒ Θ^{-1}(|β̂_j|; λ)sgn(β̂_j) = β̂_j − x_jᵀ(Xβ̂ − y),    (33)
β̂_j = 0 ⇒ Θ(x_jᵀ(Xβ̂ − y); λ) = 0.    (34)
When Θ is continuous at β̂j − xTj (X β̂ − y), (33) implies that β̂j = Θ(β̂j −
xTj (X β̂ − y); λ). Hence β̂ must be a Θ-estimator satisfying β = Θ(β + X T y −
X T Xβ; λ).
4.2 Proofs of Theorem 2 and Theorem 3
Given Θ, let β̂ be any Θ-estimator, β be any p-dimensional vector (non-random)
and ∆ = β̂ − β. The first result constructs a useful criterion for β̂ on basis of
Lemma 1 and Lemma 2.
Lemma 3. Any Θ-estimator β̂ satisfies the following inequality for any β ∈ Rᵖ
(1/2)‖X(β̂ − β*)‖₂² + (1/2)∆ᵀ(XᵀX − L_Θ I)∆ ≤ (1/2)‖X(β − β*)‖₂² + P_Θ(β; λ) − P_Θ(β̂; λ) + ⟨ε, X∆⟩,    (35)
where ∆ = β̂ − β.
To handle ⟨ε, X∆⟩, we introduce another lemma.
Lemma 4. Suppose ‖X‖₂ ≤ 1 and let λ° = σ√(log(ep)). Then there exist universal constants A₁, C, c > 0 such that for any constants a ≥ 2b > 0, the following event
sup_{β∈Rᵖ} {2⟨ε, Xβ⟩ − (1/a)‖Xβ‖₂² − (1/b)P_H(β; √(ab)A₁λ°)} ≥ aσ²t    (36)
occurs with probability at most C exp(−ct)p^(−cA₁²), where t ≥ 0.
The lemma plays an important role in bounding the last stochastic term in (35).
Its proof is based on the following results.
Lemma 5. Suppose ‖X‖₂ ≤ 1. There exists a globally optimal solution β° to min_β (1/2)‖y − Xβ‖₂² + P_H(β; λ) such that for any j: 1 ≤ j ≤ p, either β_j° = 0 or |β_j°| ≥ λ.
Lemma 6. Given X ∈ R^(n×p) and J: 1 ≤ J ≤ p, define Γ′_J = {α ∈ Rᵖ : ‖α‖₂ ≤ 1, α ∈ R(X_J) for some J : |J | = J}. Let Po′(J) = σ²{J + log (p choose J)}. Then for any t ≥ 0,
P( sup_{α∈Γ′_J} ⟨ε, α⟩ ≥ tσ + √(L·Po′(J)) ) ≤ C exp(−ct²),    (37)
where L, C, c > 0 are universal constants.
Let R = sup_{1≤J≤p} sup_{∆∈Γ_J} {⟨ε, X∆⟩ − (1/(2b))P_H(∆; √(ab)A₁λ°) − (1/(2a))‖X∆‖₂²}, with λ°, A₁ given in Lemma 4. (The starting value of J is 1 because when J(∆) =
0, ⟨ε, X∆⟩ = 0.) Substituting it into (35) gives
(1/2)‖X(β̂ − β*)‖₂² + (1/2)∆ᵀ(2XᵀX − L_Θ I)∆
  ≤ (1/2)‖X(β − β*)‖₂² + P_Θ(β; λ) − P_Θ(β̂; λ) + (1/(2b))P_H(∆; √(ab)A₁λ°) + (1/(2a))‖X∆‖₂² + (1/2)‖X∆‖₂² + R
  ≤ (1/2)‖X(β − β*)‖₂² + P_Θ(β; λ) − P_Θ(β̂; λ) + (1/(2b))P_H(∆; √(ab)A₁λ°) + (1/2)(1 + 1/a)‖X∆‖₂² + R.
Because P(R ≥ aσ²t) ≤ C exp(−ct), we know E[R] ≲ aσ². Let λ = Aλ° with A = A₁√(ab) and set b ≥ 1/(2ϑ). The regularity condition R0(δ, ϑ, K, β, λ) implies that
(1/(2b))P_H(∆; λ) + (L_Θ/2)‖∆‖₂² ≤ ((2 − δ)/2)‖X∆‖₂² + P_Θ(β̂; λ) + KP_Θ(β; λ).    (38)
Choose a to satisfy a > 1/δ, a ≥ 2b. Combining the last two inequalities gives
E[‖X(β̂ − β*)‖₂²] ≤ ‖X(β − β*)‖₂² + 2(K + 1)P_Θ(β; λ) + E[(1 + 1/a − δ)‖X∆‖₂²] + 2E[R]
  ≲ ‖X(β − β*)‖₂² + P_Θ(β; λ) + σ²,    (39)
with the last inequality due to kX∆k22 ≤ (1+1/c)kX(β−β ∗ )k22 +(1+c)kX(β̂−
β ∗ )k22 for any c > 0.
The proof of Theorem 3 follows the lines of the proof of Theorem 2, with (38)
replaced by
(1/(2b))P_H(∆; λ) + (L_Θ/2)‖∆‖₂² + P_Θ(β; λ) ≤ ((2 − δ)/2)‖X∆‖₂² + P_Θ(β̂; λ) + Kλ²J(β),
and (39) replaced by
E[‖X(β̂ − β*)‖₂²] ≤ ‖X(β − β*)‖₂² + 2Kλ²J(β) + E[(1 + 1/a − δ)‖X∆‖₂²] + 2E[R]
  ≲ ‖X(β − β*)‖₂² + λ²J(β) + σ².
The details are omitted.
4.3 Proof of Theorem 4
From the proof of Lemma 5, there exists a Θ-estimator β̂ which minimizes f (β) =
l(β) + PΘ (β; λ). This means that the term 12 ∆T (X T X − LΘ I)∆ can be dropped
from (35). Following the lines of Section 4.2, (17) holds under a modified version
of R0 (δ, ϑ, K, β, λ), which replaces (15) with
ϑP_H(β′ − β; λ) ≤ ((1 − δ)/2)‖X(β′ − β)‖₂² + P_Θ(β′; λ) + KP_Θ(β; λ).    (40)
Using the sub-additivity of PH , we know that any design matrix satisfies (40) for
any 0 < ϑ ≤ 1, δ ≤ 1, K ≥ ϑ.
4.4 Proof of Theorem 5 and Corollary 3
Let f (β) = l(β) + PΘ (β; λ) where l(β) = 21 kXβ − yk22 .
Lemma 7. Let β (t+1) = Θ(β (t) + X T y − X T Xβ (t) ; λ). Then the following
‘triangle inequality’ holds for any β ∈ Rᵖ
((1 − L_Θ)/2)‖β^(t+1) − β‖₂² + (1/2)‖β^(t+1) − β^(t)‖²_(I−XᵀX) ≤ (1/2)‖β^(t) − β‖²_(I−XᵀX) + f(β) − f(β^(t+1)).
Letting β = β* in the lemma, we have
(1/2)‖β^(t+1) − β*‖²_(XᵀX+(1−L_Θ)I) + (1/2)‖β^(t+1) − β^(t)‖²_(I−XᵀX) + P_Θ(β^(t+1); λ) ≤ (1/2)‖β^(t) − β*‖²_(I−XᵀX) + ⟨ε, X(β^(t+1) − β*)⟩ + P_Θ(β*; λ).
Moreover, under S0 (δ, ϑ, K, β ∗ , β ′ , λ) with β ′ = β (t+1) ,
ϑP_H(β^(t+1) − β*; λ) + ((1 + δ)/2)‖β^(t+1) − β*‖₂² − KP_Θ(β*; λ) ≤ (1/2)‖β^(t+1) − β*‖²_(XᵀX+(1−L_Θ)I) + P_Θ(β^(t+1); λ) + (1/2)‖β^(t+1) − β*‖²_(XᵀX).
Combining the last two inequalities gives
((1 + δ)/2)‖β^(t+1) − β*‖²_(I−XᵀX) + (1/2)‖β^(t+1) − β^(t)‖²_(I−XᵀX) + (δ/2)‖β^(t+1) − β*‖²_(XᵀX) + ϑP_H(β^(t+1) − β*; λ)
  ≤ (1/2)‖β^(t) − β*‖²_(I−XᵀX) + (K + 1)P_Θ(β*; λ) + ⟨ε, X(β^(t+1) − β*)⟩.
Let Γ_J = {β ∈ Rᵖ : J(β) = J}, λ° = σ√(log(ep)). We define an event E with its complement given by
E^c := {sup_β {2⟨ε, Xβ⟩ − (1/a)‖Xβ‖₂² − (1/b)P_H(β; √(ab)A₁λ°)} ≥ 0}.
By Lemma 4, there exists a universal constant L such that for any A₁² ≥ L, a ≥ 2b > 0, P(E^c) ≤ Cp^(−cA₁²). Clearly, E implies
⟨ε, X(β^(t+1) − β*)⟩ ≤ (1/(2a))‖β^(t+1) − β*‖²_(XᵀX) + (1/(2b))P_H(β^(t+1) − β*; √(ab)A₁λ°).    (41)
Take b = 1/(2ϑ), a = 1/(δ ∧ ϑ), A₁ ≥ √L, and λ = A₁√(ab)λ°. Then, on E we get the desired statistical accuracy bound
((1 + δ)/2)‖β^(t+1) − β*‖²_(I−XᵀX) ≤ (1/2)‖β^(t) − β*‖²_(I−XᵀX) + (K + 1)P_Θ(β*; λ).
The bound under S1 can be similarly proved. Noticing that (41) holds for any
t, Corollary 3 is immediately true.
4.5 Proofs of Lemmas
4.5.1 Proof of Lemma 3
Let f(β) = l(β) + P_Θ(β; λ) with l(β) = (1/2)‖Xβ − y‖₂². Define
g(β, γ) = l(β) + ⟨∇l(β), γ − β⟩ + (1/2)‖γ − β‖₂² + P_Θ(γ; λ).
Given β, g(β, γ) can be expressed as
(1/2)‖γ − (β − ∇l(β))‖₂² + P_Θ(γ; λ) + c(β),    (42)
where c(β) depends on β only.
Let β̂ be a Θ-estimator satisfying β̂ = Θ(β̂ − X T X β̂ + X T y; λ). Based on
Lemma 1 and Lemma 2, we have
g(β̂, β̂ + ∆) − g(β̂, β̂) ≥ ((1 − L_Θ)/2)‖∆‖₂²,
from which it follows that
f(β̂ + ∆) − f(β̂) ≥ (1/2)∆ᵀ(XᵀX − L_Θ I)∆.
This holds for any ∆ ∈ Rp .
4.5.2 Proof of Lemma 4.
Let
l_H(β) = 2⟨ε, Xβ⟩ − (1/a)‖Xβ‖₂² − (1/b)P_H(β; √(ab)A₁λ°),
l_0(β) = 2⟨ε, Xβ⟩ − (1/a)‖Xβ‖₂² − (1/b)P₀(β; √(ab)A₁λ°),
and EH = {supβ∈Rp lH (β) ≥ atσ 2 }, and E0 = {supβ∈Rp l0 (β) ≥ atσ 2 }. Because
P0 ≥ PH , E0 ⊂ EH . We prove that EH = E0 . The occurrence of EH implies that
l_H(β°) ≥ atσ² for any β° defined by
β° ∈ arg min_β (1/a)‖Xβ‖₂² − 2⟨ε, Xβ⟩ + (1/b)P_H(β; √(ab)A₁λ°).
With a ≥ 2b > 0, Lemma 5 states that there exists at least one global minimizer β°° satisfying P_H(β°°; √(ab)A₁λ°) = P₀(β°°; √(ab)A₁λ°) and thus l_H(β°°) = l_0(β°°). This means that sup l_0(β) ≥ l_0(β°°) = l_H(β°°) ≥ atσ². So E_H ⊂ E_0, and it suffices to prove that E_0^c occurs with high probability, or more specifically, P(E_0) ≤ C exp(−ct)p^(−cA₁²).
Given 1 ≤ J ≤ p, define Γ_J = {β ∈ Rᵖ : J(β) = J}. Let R = sup_{1≤J≤p} sup_{β∈Γ_J} {⟨ε, Xβ⟩ − (1/(2b))P₀(β; √(ab)A₁λ°) − (1/(2a))‖Xβ‖₂²}. We will use Lemma 6 to bound its tail probability.
Let Po′(J) = σ²{J + log (p choose J)}. We claim that
P[ sup_{β∈Γ_J} {⟨ε, Xβ⟩ − (1/(2a))‖Xβ‖₂² − aL·Po′(J)} > atσ² ] ≤ C exp(−ct).    (43)
Indeed,
2⟨ε, Xβ⟩ − (1/a)‖Xβ‖₂² − 2aL·Po′(J)
  ≤ 2⟨ε, Xβ/‖Xβ‖₂⟩‖Xβ‖₂ − 2‖Xβ‖₂√(L·Po′(J)) − (1/(2a))‖Xβ‖₂²
  = 2‖Xβ‖₂(⟨ε, Xβ/‖Xβ‖₂⟩ − √(L·Po′(J))) − (1/(2a))‖Xβ‖₂²
  ≤ 2‖Xβ‖₂(⟨ε, Xβ/‖Xβ‖₂⟩ − √(L·Po′(J)))₊ − (1/(2a))‖Xβ‖₂²
  ≤ 2a(⟨ε, Xβ/‖Xβ‖₂⟩ − √(L·Po′(J)))₊²,    (44)
where the last inequality is due to Cauchy-Schwarz inequality. (43) now follows
from Lemma 6.
Set A₁ ≥ 4√L. We write P₀(β; λ°) with β ∈ Γ_J as P₀(J; λ°). Noticing some basic facts that (i) Po′(J) ≤ CJ log(ep) ≤ CP₀(J; λ°) due to Stirling's approximation, (ii) √((A₁²/2)P₀(J; λ°)) ≥ √(L·Po′(J)) + √(cA₁²P₀(J; λ°)) for some c > 0, and (iii) J log(ep) ≥ log p + J for any J ≥ 1, we get
P(R ≥ aσ²t)
  ≤ Σ_{J=1}^{p} P( a·sup_{β∈Γ_J} (⟨ε, Xβ/‖Xβ‖₂⟩ − √((A₁²/2)P₀(J; λ°)))₊² ≥ aσ²t )
  = Σ_{J=1}^{p} P( sup_{α∈Γ′_J} ⟨ε, α⟩ − √((A₁²/2)P₀(J; λ°)) ≥ σ√t )
  ≤ Σ_{J=1}^{p} P( sup_{α∈Γ′_J} ⟨ε, α⟩ − √(L·Po′(J)) ≥ √t·σ + √(cA₁²P₀(J; λ°)) )
  ≤ Σ_{J=1}^{p} C exp(−ct) exp{−cA₁²(J + log p)}
  ≤ C exp(−ct) Σ_{J=1}^{p} exp(−cA₁² log p) exp(−cA₁²J)
  ≤ C exp(−ct)p^(−cA₁²),
where the last inequality is due to the sum of a geometric series.
4.5.3 Proof of Lemma 5.
Similar to the proof of Lemma 3, we set f_H(β) = l(β) + P_H(β; λ) with l(β) = (1/2)‖Xβ − y‖₂², and construct g_H(β, γ) = f_H(γ) + (1/2)‖γ − β‖₂² − (l(γ) − l(β) − ⟨∇l(β), γ − β⟩). Under ‖X‖₂ ≤ 1, for any (β, γ),
g_H(β, γ) − f_H(γ) = (1/2)(γ − β)ᵀ(I − XᵀX)(γ − β) ≥ 0.
Let β° be a globally optimal solution to min_β f_H(β). Then γ° := Θ_H(β° − XᵀXβ° + Xᵀy; λ) gives
fH (γ o ) ≤ gH (β o , γ o ) ≤ gH (β o , β o ) = fH (β o ),
with the second inequality due to Lemma 1. Therefore, γ o must also be a global
minimizer of fH , and by definition, γ o demonstrates a threshold gap as desired.
4.5.4 Proof of Lemma 6.
By definition, {hǫ, αi : α ∈ Γ′J } is a stochastic process with sub-Gaussian increments. The induced metric on Γ′J is Euclidean: d(α1 , α2 ) = σkα1 − α2 k2 .
To bound the metric entropy log N (ε, Γ′J , d), where N (ε, Γ′J , d) is the smallest cardinality of an ε-net that covers Γ′J under d, we notice that α is in a Jdimensional ball in Rp . The number of such balls {PXJ ∩ Bp (0, 1) : J ⊂ [p]}
is at most Jp , where Bp (0, 1) denotes the unit ball in Rp . By a standard volume
argument (see, e.g., Vershynin (2012)),
log N(ε, Γ′_J, d) ≤ log( (p choose J)·(Cσ/ε)^J ) = log (p choose J) + J log(Cσ/ε),    (45)
where C is a universal constant. The conclusion follows from Dudley’s integral
bound (Talagrand, 2005).
4.5.5 Proof of Lemma 7
We use the notation in the proof of Lemma 3 with g defined in (42). By Lemma 1 and Lemma 2, we obtain g(β^(t), β) − g(β^(t), β^(t+1)) ≥ ((1 − L_Θ)/2)‖β^(t+1) − β‖₂², namely,
⟨∇l(β^(t)), β − β^(t+1)⟩ + P_Θ(β) − P_Θ(β^(t+1)) + (1/2)‖β − β^(t)‖₂² − (1/2)‖β^(t) − β^(t+1)‖₂² ≥ ((1 − L_Θ)/2)‖β^(t+1) − β‖₂².
To cancel the first-order term, we give two other inequalities based on second-order lower/upper bounds:
l(β) − l(β^(t)) − ⟨∇l(β^(t)), β − β^(t)⟩ ≥ (1/2)‖β^(t) − β‖²_(XᵀX),
l(β^(t)) + ⟨∇l(β^(t)), β^(t+1) − β^(t)⟩ − l(β^(t+1)) ≥ −(1/2)‖β^(t+1) − β^(t)‖²_(XᵀX).
Adding the three inequalities together gives the triangle inequality.
References
Agarwal, A., Negahban, S., and Wainwright, M. J. (2012). Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann.
Statist., 40(5):2452–2482.
Bickel, P. J., Ritov, Y., and Tsybakov, A. B. (2009). Simultaneous analysis of
lasso and dantzig selector. The Annals of Statistics, pages 1705–1732.
Bunea, F., Tsybakov, A. B., and Wegkamp, M. (2007). Sparsity oracle inequalities
for the lasso. Electronic Journal of Statistics, 1:169–194.
Donoho, D. and Johnstone, I. (1994). Ideal spatial adaptation via wavelet shrinkages. Biometrika, 81:425–455.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association,
96:1348–1360.
He, Y., She, Y., and Wu, D. (2013). Stationary sparse causality network learning.
J. Mach. Learn. Res., 14:3073–3104.
Koltchinskii, V., Lounici, K., and Tsybakov, A. B. (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist.,
39(5):2302–2329.
Loh, P.-L. and Wainwright, M. J. (2015). Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. J. Mach. Learn.
Res., 16(1):559–616.
24
Lounici, K., Pontil, M., Tsybakov, A. B., and van de Geer, S. (2011). Oracle
inequalities and optimal inference under group sparsity. Annals of Statistics,
39:2164–2204.
Owen, A. B. (2007). A robust hybrid of lasso and ridge regression. Prediction
and Discovery (Contemporary Mathematics), 443:59–71.
Parikh, N. and Boyd, S. (2014). Proximal algorithms. Foundations and Trends in
Optimization, 1(3):127–239.
She, Y. (2009). Thresholding-based iterative selection procedures for model selection and shrinkage. Electronic Journal of Statistics, 3:384–415.
She, Y. (2012). An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors. Computational Statistics and Data
Analysis, 9:2976–2990.
She, Y. (2014). Selective factor extraction in high dimensions. arXiv preprint
arXiv:1403.6212.
Talagrand, M. (2005). The Generic Chaining: Upper and Lower Bounds of
Stochastic Processes. Springer Monographs in Mathematics. Springer.
van de Geer, S. A. and Bühlmann, P. (2009). On the conditions used to prove
oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392.
Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices. Compressed sensing.
Zhang, C.-H. (2010a). Nearly unbiased variable selection under minimax concave
penalty. Ann. Statist., 38(2):894–942.
Zhang, C.-H. and Huang, J. (2008). The sparsity and bias of the Lasso selection
in high-dimensional linear regression. Ann. Statist, 36:1567–1594.
Zhang, C.-H. and Zhang, T. (2012). A general theory of concave regularization
for high dimensional sparse estimation problems. Statist. Sci., 27(4):576–593.
Zhang, T. (2010b). Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res., 11:1081–1107.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic
net. JRSSB, 67(2):301–320.
arXiv:1709.07708v1 [math.GR] 22 Sep 2017
COMPATIBLE ACTIONS AND NON-ABELIAN TENSOR
PRODUCTS
VALERIY G. BARDAKOV AND MIKHAIL V. NESHCHADIM
Abstract. For a pair of groups G, H we study pairs of actions G on H and H on
G such that these pairs are compatible and non-abelian tensor products G ⊗ H are
defined.
1. Introduction
R. Brown and J.-L. Loday [1, 2] introduced the non-abelian tensor product G ⊗ H
for a pair of groups G and H following works of C. Miller [6], and A. S.-T. Lue [5].
The investigation of the non-abelian tensor product from a group theoretical point
of view started with a paper by R. Brown, D. L. Johnson, and E. F. Robertson [3].
The non-abelian tensor product G ⊗ H depends not only on the groups G and H
but also on the action of G on H and on the action of H on G. Moreover these
actions must be compatible (see the definition in Section 2). In the present paper we
study the following question: what actions are compatible?
The paper is organized as follows. In Section 2, we recall the definition of the non-abelian tensor product, formulate some of its properties, and answer a question of V. Thomas by proving that there exist a nilpotent group G and a group H such that in G ⊗ H the derivative subgroup [G, H] is equal to G. In Section 3 we study the following question: if a group H acts on a group G by automorphisms, is it possible to define an action of G on H such that this pair of actions is compatible? Some necessary conditions for compatibility of actions are given, and in some cases we prove a formula for the second action when the first one is given. In Section 4 we construct pairs of compatible actions for arbitrary groups, and for 2-step nilpotent groups we give a partial answer to the question from Section 3. In Section 5 we study groups of the form G ⊗ Z2 and describe compatible actions.
2. Preliminaries
In this article we will use the following notations. For elements x, y in a group G,
the conjugation of x by y is xy = y −1 xy; and the commutator of x and y is [x, y] =
x−1 xy = x−1 y −1 xy. We write G′ for the derived subgroup of G, i.e. G′ = [G, G]; Gab
for the abelianized group G/G′ ; the second hypercenter ζ2 G of G is the subgroup of
G such that
ζ2 G/ζ1 G = ζ1 (G/ζ1 G),
Date: March 24, 2018.
2010 Mathematics Subject Classification. Primary 20E22; Secondary 20F18, 20F28.
Key words and phrases. tensor product; compatible action, nilpotent group.
1
2
V. G. BARDAKOV AND M. V. NESHCHADIM
where ζ1 G = Z(G) is the center of a group G.
Recall the definition of the non-abelian tensor product G ⊗ H of groups G and H
(see [1, 2]). It is defined for a pair of groups G and H where each one acts on the
other (on right)
G × H −→ G, (g, h) 7→ g h ; H × G −→ H, (h, g) 7→ hg
and on itself by conjugation, in such a way that for all g, g1 ∈ G and h, h1 ∈ H,
g^(h^(g1)) = ((g^(g1^(-1)))^h)^(g1)  and  h^(g^(h1)) = ((h^(h1^(-1)))^g)^(h1).
In this situation we say that G and H act compatibly on each other. The non-abelian
tensor product G ⊗ H is the group generated by all symbols g ⊗ h, g ∈ G, h ∈ H,
subject to the relations
gg1 ⊗ h = (gg1 ⊗ hg1 )(g1 ⊗ h) and g ⊗ hh1 = (g ⊗ h1 )(gh1 ⊗ hh1 )
for all g, g1 ∈ G, h, h1 ∈ H.
In particular, as the conjugation action of a group G on itself is compatible, then
the tensor square G⊗G of a group G may always be defined. Also, the tensor product
G ⊗ H is defined if G and H are two normal subgroups of some group M and actions
are conjugations in M .
The following proposition is well known. We give a proof only for fullness.
Proposition 2.1. 1) Let G and H be abelian groups. Independently on the action of
G on H and H on G, the group G ⊗ H is abelian.
2) (See [2, Proposition 2.4]) Let G and H be arbitrary groups. If the actions of G
on H and H on G are trivial, then the group G ⊗ H ∼
= Gab ⊗Z H ab is the abelian
tensor product.
Proof. 1) We have the equality
(g ⊗ h)g1 ⊗h1 = g [g1 ,h1] ⊗ h[g1 ,h1 ] ,
where g[g1 ,h1 ] is the action of the commutator [g1 , h1 ] ∈ G by conjugation on g, but
G is abelian and g [g1 ,h1 ] = g. Analogously, h[g1 ,h1] = h. Hence, G ⊗ H is abelian.
2) From the previous formula and triviality actions we have
−1 g1 h1 −1 g1 h1
−1 h−1
1 g 1 h1
g 1 h1
[g1 ,h1 ]
g1−1 h−1
1
= g h1 = g.
= g g1
= g g1
g
=g
= g g1
Analogously, h[g1 ,h1] = h. Hence, G ⊗ H is abelian.
Recall the presentation of the non-abelian tensor product as a central extension (see [4]).
The derivative subgroup of G by H is called the following subgroup
DH (G) = [G, H] = hg −1 g h | g ∈ G, h ∈ Hi.
The map κ : G ⊗ H −→ DH (G) defined by κ(g ⊗ h) = g −1 gh is a homomorphism, its
kernel A = ker(κ) is the central subgroup of G ⊗ H and G acts on G ⊗ H by the rule
(g ⊗ h)x = gx ⊗ hx , x ∈ G, i.e. there exists the short exact sequence
1 −→ A −→ G ⊗ H −→ DH (G) −→ 1.
COMPATIBLE ACTIONS AND NON-ABELIAN TENSOR PRODUCTS
3
In this case A can be viewed as Z[DH (G)]-module via conjugation in G ⊗ H, i.e.
under the action induced by setting
a · g = x−1 ax, a ∈ A, x ∈ G ⊗ H, κ(x) = g.
The following proposition answers the following question, which V. Thomas formulated in a letter to the authors: is there a non-abelian tensor product G ⊗ H such that [G, H] = G?
Proposition 2.2. Let G = Fn /γk Fn , k ≥ 2, be a free nilpotent group of rank n ≥ 2
and H = Aut(G) is its automorphism group. Then DH (G) = [G, H] = G.
Proof. Let Fn be a free group of rank n ≥ 2 with the basis x1 , . . . , xn , G = Fn /γk Fn
be a free k − 1-step nilpotent group for k ≥ 2. Let G acts trivially on H and elements
of H act by automorphisms on G. It is easy to see that these actions are compatible.
Let us show that in this case [G, H] = G. To do it, let us prove that x1 lies in
[G, H]. Take ϕ1 ∈ H = Aut(G), which acts on the generators of G by the rules:
ϕ1
ϕ1
ϕ1
1
xϕ
1 = x1 , x2 = x2 x1 , x3 = x3 , . . . , xn = xn .
Then
ϕ1
−1 ϕ1
−1 ϕ1
−1 ϕ1
x−1
1 x1 = 1, x2 x2 = x1 , x3 x3 = 1, . . . , xn xn = 1.
Hence the generator x1 lies in [G, H]. Analogously, x2 , x3 , . . . , xn lie in [G, H]. This
completes the proof.
3. What actions are compatible?
In this section we study
Question 1. Let a group H acts on a group G by automorphisms. Is it possible to
define an action of G on H such that this pair of actions are compatible?
Consider some examples.
Example 3.1. Let us take G = {1, a, a²} ≅ Z3 and H = {1, b, b²} ≅ Z3. Depending on the actions, there are three cases.
1) If the action of H on G and the action of G on H are trivial, then by the second
part of Proposition 2.1, G ⊗ H = Z3 ⊗_Z Z3 ≅ Z3 is the abelian tensor product.
2) Let H act non-trivially on G, i.e. a^b = a², and let the action of G on H be trivial.
It is not difficult to check that G and H act compatibly on each other. To find
DH (G) = [G, H] we calculate
[a, b] = a−1 ab = a2 a2 = a.
Hence, DH (G) = G. But DG (H) = 1.
By the definition, G ⊗ H is generated by elements
a ⊗ b, a2 ⊗ b, a ⊗ b2 , a2 ⊗ b2 .
Using the defining relations:
gg1 ⊗ h = (g g1 ⊗ hg1 )(g1 ⊗ h), g ⊗ hh1 = (g ⊗ h1 )(gh1 ⊗ hh1 ),
4
V. G. BARDAKOV AND M. V. NESHCHADIM
we find
a2 ⊗b = (aa ⊗ba )(a⊗b) = (a⊗b)2 , a⊗b2 = (a⊗b)(ab ⊗bb ) = (a⊗b)(a2 ⊗b) = (a⊗b)3 .
On the other side
1 = a2 a ⊗ b = (a2 ⊗ ba )(a ⊗ b) = (a ⊗ b)3 .
Hence,
a ⊗ b2 = a2 ⊗ b2 = 1
and in this case we have the same result: Z3 ⊗ Z3 = Z3 .
3) Let H act non-trivially on G, i.e. a^b = a², and let G act non-trivially on H. In this case G and H do not act compatibly on each other. Indeed,
a^(b^a) = a^(b²) = (a²)^b = a,
but
((a^(a^(-1)))^b)^a = (a^b)^a = (a²)^a = a².
Hence, the equality a^(b^a) = ((a^(a^(-1)))^b)^a does not hold.
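The failure in case 3) can also be checked mechanically. The following Python sketch (not part of the paper) writes Z3 additively as {0, 1, 2}, encodes the assignments a^b = a² and b^a = b² formally as maps, and brute-forces the first compatibility relation; it reports a violating triple, consistent with the computation above.

```python
from itertools import product

n = 3  # Z3 written additively: a^k <-> k in G, b^k <-> k in H

def act_H_on_G(g, h):
    # a^b = a^2, extended formally: g^h = g * 2^h (mod 3)
    return (g * pow(2, h, n)) % n

def act_G_on_H(h, g):
    # b^a = b^2, extended formally: h^g = h * 2^g (mod 3)
    return (h * pow(2, g, n)) % n

def first_relation_holds():
    # Check g^(h^{g1}) == ((g^{g1^{-1}})^h)^{g1} for all g, g1 in G, h in H.
    for g, g1, h in product(range(n), repeat=3):
        lhs = act_H_on_G(g, act_G_on_H(h, g1))
        conj = (-g1 + g + g1) % n          # conjugation in G (trivial, G abelian)
        rhs = (-g1 + act_H_on_G(conj, h) + g1) % n
        if lhs != rhs:
            return False, (g, g1, h)
    return True, None

print(first_relation_holds())   # -> (False, ...): the actions are not compatible
```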
Let G, H be some groups. Actions of G on H and H on G are defined by homomorphisms
β : G → Aut(H), α : H → Aut(G),
and by definition
gh = gα(h) , hg = hβ(g) , g ∈ G, h ∈ H.
The actions (α, β) are compatible, if
g^(α(h^(β(g1)))) = ((g^(g1^(-1)))^(α(h)))^(g1)  and  h^(β(g^(α(h1)))) = ((h^(h1^(-1)))^(β(g)))^(h1)
for all g, g1 ∈ G, h, h1 ∈ H. In this case we will say that the pair (α, β) is compatible.
Rewrite these equalities in the form
α(h^(β(g1))) = ĝ1^(-1) α(h) ĝ1    (1)
and
β(g^(α(h1))) = ĥ1^(-1) β(g) ĥ1,    (2)
where ĝ is the inner automorphism of G induced by conjugation by g, i.e.
ĝ : g1 ↦ g^(-1) g1 g,  g, g1 ∈ G,
and analogously, ĥ is the inner automorphism of H induced by conjugation by h, i.e.
ĥ : h1 ↦ h^(-1) h1 h,  h, h1 ∈ H.
COMPATIBLE ACTIONS AND NON-ABELIAN TENSOR PRODUCTS
5
Theorem 3.2. 1) If the pair (α, β) defines compatible actions of H on G and G on
H, then the following inclusions hold
NAut(G) (α(H)) ≥ Inn(G),
NAut(H) (β(G)) ≥ Inn(H).
Here Inn(G) and Inn(H) are the subgroups of inner automorphisms.
2) If α : H → Aut(G) is an embedding and NAut(G) (α(H)) ≥ Inn(G), then defining
β : G → Aut(H) by the formula
β(g) : h ↦ α^(-1)(ĝ^(-1) α(h) ĝ),  h ∈ H,
we get the compatible actions (α, β).
Proof. The first claim immediately follows from the relations (1), (2).
To prove the second claim it is enough to check (2), or, equivalently, the equality
h^(β(g^(α(h1)))) = ((h^(h1^(-1)))^(β(g)))^(h1).    (3)
Using the definition of β, rewrite the left side of (3):
h^(β(g^(α(h1)))) = α^(-1)( \widehat{g^{α(h1)}}^(-1) α(h) \widehat{g^{α(h1)}} ).    (4)
Rewrite the right side of (3):
((h^(h1^(-1)))^(β(g)))^(h1) = h1^(-1) (h1 h h1^(-1))^(β(g)) h1 = h1^(-1) α^(-1)(ĝ^(-1) α(h1 h h1^(-1)) ĝ) h1.    (5)
From (4) and (5):
α^(-1)( \widehat{g^{α(h1)}}^(-1) α(h) \widehat{g^{α(h1)}} ) = h1^(-1) α^(-1)(ĝ^(-1) α(h1 h h1^(-1)) ĝ) h1.
Applying the homomorphism α:
\widehat{g^{α(h1)}}^(-1) α(h) \widehat{g^{α(h1)}} = α( h1^(-1) α^(-1)(ĝ^(-1) α(h1 h h1^(-1)) ĝ) h1 )
  = α(h1)^(-1) ĝ^(-1) α(h1 h h1^(-1)) ĝ α(h1)
  = α(h1)^(-1) ĝ^(-1) α(h1) α(h) α(h1)^(-1) ĝ α(h1)
  = \widehat{g^{α(h1)}}^(-1) α(h) \widehat{g^{α(h1)}}.
In the last equality we used the formula
α(h1)^(-1) ĝ α(h1) = \widehat{g^{α(h1)}}.
Hence, the equality (3) holds.
Question 2. Are the inclusions
NAut(G) (α(H)) ≥ Inn(G),
NAut(H) (β(G)) ≥ Inn(H)
sufficient for compatibility of the pair (α, β)?
6
V. G. BARDAKOV AND M. V. NESHCHADIM
4. Compatible actions for nilpotent groups
At first, recall the following definition.
Definition 4.1. Let G and H be groups and G1 E G, H1 E H are their normal
subgroups. We will say that G is comparable with H with respect to the pair (G1, H1),
if there are homomorphisms
ϕ : G −→ H,
ψ : H −→ G,
such that
x ≡ ψϕ(x) (mod G1),  y ≡ ϕψ(y) (mod H1)
for all x ∈ G, y ∈ H, i.e.
x^(-1) · ψϕ(x) ∈ G1,  y^(-1) · ϕψ(y) ∈ H1.
Note that if G1 = 1, H1 = 1, then ϕ, ψ are mutually inverse isomorphisms.
The following theorem holds.
Theorem 4.2. Let G, H be groups and there exist homomorphisms
ϕ : G −→ H,
ψ : H −→ G,
such that
x ≡ ψϕ(x)(mod ζ2 G), y ≡ ϕψ(y)(mod ζ2 H)
for all x ∈ G, y ∈ H. Then the action of G on H and the action of H on G by the
rules
xy = ψ(y)−1 xψ(y), y x = ϕ(x)−1 yϕ(x), x ∈ G, y ∈ H,
are compatible, i.e. the following equalities hold
x^(y^(x1)) = ((x^(x1^(-1)))^y)^(x1),  y^(x^(y1)) = ((y^(y1^(-1)))^x)^(y1),  x, x1 ∈ G, y, y1 ∈ H.
Proof. Let us prove that the following relation holds
x^(y^(x1)) = ((x^(x1^(-1)))^y)^(x1).
For this, denote the left hand side of this relation by L and transform it:
L = x^(y^(x1)) = x^(ϕ(x1)^(-1) y ϕ(x1)) = ψ(ϕ(x1)^(-1) y^(-1) ϕ(x1)) x ψ(ϕ(x1)^(-1) y ϕ(x1))
  = (ψϕ(x1))^(-1) ψ(y)^(-1) (ψϕ(x1)) x (ψϕ(x1))^(-1) ψ(y) (ψϕ(x1))
  = (c(x1)^(-1) x1^(-1) ψ(y)^(-1) x1 c(x1)) x (c(x1)^(-1) x1^(-1) ψ(y) x1 c(x1)).
Here ψϕ(x1) = x1 c(x1), c(x1) ∈ ζ2 G. Since c(x1) ∈ ζ2 G, the commutator [x1^(-1) ψ(y) x1, c(x1)] lies in the center of G. Hence
L = x^(x1^(-1) ψ(y) x1).
Denote the right hand side of this relation by R and transform it:
R = ((x^(x1^(-1)))^y)^(x1) = ((x^(x1^(-1)))^(ψ(y)))^(x1) = x^(x1^(-1) ψ(y) x1).
We see that L = R, i.e. the first relation from the definition of compatible actions holds. The check of the second relation is similar.
COMPATIBLE ACTIONS AND NON-ABELIAN TENSOR PRODUCTS
7
From this theorem we obtain a partial answer to Question 1 for 2-step nilpotent groups.
Corollary 4.3. If G, H are 2-step nilpotent groups, then any pair of homomorphisms ϕ : G −→ H, ψ : H −→ G defines a compatible pair of actions.
Problem 1. Let G and H be free 2-step nilpotent groups. By Corollary 4.3, any pair
of homomorphisms (ϕ, ψ), where ϕ ∈ Hom(G, H), ψ ∈ Hom(H, G) defines a tensor
product M (ϕ, ψ) = G ⊗ H. Give a classification of the groups M (ϕ, ψ).
Note that for arbitrary groups Corollary 4.3 does not hold. Indeed, let G = hx1 , x2 i,
H = hy1 , y2 i be free groups of rank 2. Define the homomorphisms
ϕ : G −→ H,
ψ : H −→ G
by the rules
ϕ(x1 ) = y1 , ϕ(x2 ) = y2 ,
ψ(y1 ) = ψ(y2 ) = 1.
Then
y2^(x1) = y2^(ϕ(x1)) = y2^(y1) ≠ y2,
i.e. the conditions of compatible actions do not hold.
5. Tensor products G ⊗ Z2
Note that the group Aut(Z2 ) is trivial and hence, any group G acts on Z2 only
trivially.
This section is devoted to the answer on the following question.
Question 3. Let G be a group and ψ ∈ Aut(G) be an automorphism of order 2. Let
Z2 = ⟨ϕ⟩ and α : Z2 −→ Aut(G) such that α(ϕ) = ψ. Under what conditions is the pair (α, 1) compatible?
If ψ ∈ Aut(G) is trivial automorphism, then by the second part of Proposition 2.1
G ⊗ Z2 = Gab ⊗Z Z2 is an abelian tensor product. In the general case we have
Proposition 5.1. Let
1) G be a group,
2) Z2 = hϕi be a cyclic group of order two with the generator ϕ,
3) α : Z2 −→ Aut(G) be a homomorphism, β = 1 : G → Aut(Z2 ) be the trivial
homomorphism,
Then the pair of actions (α, β) is compatible if and only if for every g ∈ G one has
g α(ϕ) = gc(g),
where c(g) is a central element of G such that c(g)α(ϕ) = c(g)−1 . In particular, if the
center of G is trivial, then G ⊗ Z2 = Gab ⊗Z Z2 .
8
V. G. BARDAKOV AND M. V. NESHCHADIM
Proof. Since Inn(G) normalizes α(Z2 ), then for every g ∈ G holds
ĝ^(-1) α(ϕ) ĝ = α(ϕ).
Using this equality for an arbitrary element x ∈ G we get
(g^(-1) g^(α(ϕ))) x^(α(ϕ)) (g^(-1) g^(α(ϕ)))^(-1) = x^(α(ϕ)).
Since x^(α(ϕ)) is an arbitrary element of G, c(g) is a central element of G. Applying α(ϕ) to the equality g^(α(ϕ)) = g c(g) we have
g = g^(α(ϕ)²) = g^(α(ϕ)) c(g)^(α(ϕ)) = g c(g) c(g)^(α(ϕ)),
that is, c(g)^(α(ϕ)) = c(g)^(-1).
For an arbitrary abelian group A we know that A ⊗Z Z = A. The following
proposition is some analog of this property for non-abelian tensor product.
Proposition 5.2. Let A be an abelian group and Z2 = ⟨ϕ⟩ the cyclic group of order 2, where ϕ acts on the elements of A in the following manner:
a^ϕ = a^(-1),  a ∈ A.
Then the non-abelian tensor product A ⊗ Z2 is defined and there is an isomorphism A ⊗ Z2 ≅ A.
Proof. It is not difficult to check that defined actions are compatible.
Since A acts on Z2 trivially and A is abelian, then the defining relations of the
tensor product:
aa1 ⊗ h = (aa1 ⊗ ha1 )(a1 ⊗ h),
a, a1 ∈ A,
h ∈ Z2 ,
have the form
aa1 ⊗ h = (a ⊗ h)(a1 ⊗ h) = (a1 ⊗ h)(a ⊗ h).
(1)
The relations
a ⊗ hh1 = (a ⊗ h1 )(ah1 ⊗ hh1 ),
a ∈ A,
h, h1 ∈ Z2 ,
give only one non-trivial relation
1 = a ⊗ ϕ2 = (a ⊗ ϕ)(a−1 ⊗ ϕ),
a ∈ A,
which follows from (1).
Since the set of relations (1) is a full system of relations for A ⊗ Z2, there exists a natural isomorphism of A ⊗ Z2 onto A defined by the formula
a ⊗ ϕ 7→ a,
a ∈ A.
Acknowledgement. The authors gratefully acknowledge the support from the
RFBR-16-01-00414 and RFBR-15-01-00745. Also, we thank S. Ivanov, A. Lavrenov
and V. Thomas for the interesting discussions and useful suggestions.
COMPATIBLE ACTIONS AND NON-ABELIAN TENSOR PRODUCTS
9
References
[1] R. Brown, J.-L. Loday, Excision homotopique en basse dimension, C. R. Acad. Sci. Paris Ser.
I Math. 298 (15) (1984), 353–356.
[2] R. Brown, J.-L. Loday, Van Kampen theorems for diagrams of spaces, Topology 26 (3) (1987),
311–335, with an appendix by M. Zisman.
[3] R. Brown, D. L. Johnson, E. F. Robertson, Some computations of non-abelian tensor products
of groups, J. Algebra, 1987, 111, 177–202.
[4] G. Donadze, M. Larda, V. Thomas, More on the non-abelian tensor product and the Bogomolov
multiplier, Preprint, 2015, 16 pp.
[5] A. S.-T. Lue, The Ganea map for nilpotent groups, J. London Math. Soc. 14, 309–312, (1976).
[6] C. Miller, The second homology group of a group; relations among commutators, Proceedings
AMS 3, 588–595, (1952).
Sobolev Institute of Mathematics, Novosibirsk 630090, Russia,
Novosibirsk State University, Novosibirsk 630090, Russia,
Novosibirsk State Agrarian University, Dobrolyubova street, 160, Novosibirsk, 630039,
Russia,
E-mail address: [email protected]
Sobolev Institute of Mathematics and Novosibirsk State University, Novosibirsk
630090, Russia,
E-mail address: [email protected]
Spanning Tree Congestion and Computation of Generalized Győri-Lovász Partition

arXiv:1802.07632v1 [cs.DS] 21 Feb 2018

L. Sunil Chandran*1, Yun Kuen Cheung**2, and Davis Issac2

1 Department of Computer Science and Automation, Indian Institute of Science, India. [email protected]
2 Max Planck Institute for Informatics, Saarland Informatics Campus, Germany. [email protected], [email protected]

* This work was done while this author was visiting Max Planck Institute for Informatics, Saarbrücken, Germany, supported by an Alexander von Humboldt Fellowship.
** Part of the work was done while this author was a visitor at the Courant Institute, NYU. The visit was funded in part by New York University.
Abstract. We study a natural problem in graph sparsification, the Spanning Tree Congestion (STC) problem. Informally, the STC problem seeks a spanning tree with no tree-edge routing too many of the original edges. The root of this problem dates back to at least 30 years ago, motivated by applications in network design, parallel computing and circuit design. Variants of the problem have also seen algorithmic applications as a preprocessing step of several important graph algorithms.
For any general connected graph with n vertices and m edges, we show that its STC is at most O(√(mn)), which is asymptotically optimal since we also demonstrate graphs with STC at least Ω(√(mn)). We present a polynomial-time algorithm which computes a spanning tree with congestion O(√(mn) · log n). We also present another algorithm for computing a spanning tree with congestion O(√(mn)); this algorithm runs in sub-exponential time when m = ω(n log² n).
For achieving the above results, an important intermediate theorem is a generalized Győri-Lovász theorem, for which Chen et al. [14] gave a non-constructive proof. We give the first elementary and constructive proof by providing a local search algorithm with running time O*(4^n), which is a key ingredient of the above-mentioned sub-exponential time algorithm. We discuss a few consequences of the theorem concerning graph partitioning, which might be of independent interest.
We also show that for any graph which satisfies certain expanding properties, its STC is at most O(n), and a corresponding spanning tree can be computed in polynomial time. We then use this to show that a random graph has STC Θ(n) with high probability.
1 Introduction
Graph Sparsification/Compression generally describes a transformation of a large
input graph into a smaller/sparser graph that preserves a certain feature (e.g.,
distance, cut, congestion, flow) either exactly or approximately. The algorithmic
value is clear, since the smaller graph might be used as a preprocessed input to
an algorithm, so as to reduce subsequent running time and memory requirement.
In this paper, we study a natural problem in graph sparsification, the Spanning
Tree Congestion (STC) problem. Informally, the STC problem seeks a spanning
tree with no tree-edge routing too many of the original edges. The problem is
well-motivated by network design applications, where designers aim to build
sparse networks that meet traffic demands, while ensuring no connection (edge)
is too congested. Indeed, the root of this problem dates back to at least 30 years
ago under the name of “load factor” [8,36], with natural motivations from parallel
computing and circuit design applications. The STC problem was formally defined
by Ostrovskii [30] in 2004, and since then a number of results have been presented.
The probabilistic version of the STC problem, coined as probabilistic capacity
mapping, also finds applications in several important graph algorithm problems,
e.g., the Min-Bisection problem.
Two canonical goals for graph sparsification problems are to understand the
trade-off between the sparsity of the output graph(s) and how well the feature is
preserved, and to devise (efficient) algorithms for computing the sparser graph(s).
These are also our goals for the STC problem. We focus on two scenarios: (A)
general connected graphs with n vertices and m edges, and (B) graphs which
exhibit certain expanding properties:
– For (A), we show that the spanning tree congestion (STC) is at most O(√(mn)), which is a factor of Ω(√(m/n)) better than the trivial bound of m. We present a polynomial-time algorithm which computes a spanning tree with congestion O(√(mn) · log n). We also present another algorithm for computing a spanning tree with congestion O(√(mn)); this algorithm runs in sub-exponential time when m = ω(n log² n). For almost all ranges of average degree 2m/n, we also demonstrate graphs with STC at least Ω(√(mn)).
– For (B), we show that the expanding properties permit us to devise a polynomial-time algorithm which computes a spanning tree with congestion O(n). Using this result, together with a separate lower-bound argument, we show that a random graph has STC Θ(n) with high probability.
For achieving the results for (A), an important intermediate theorem is the
generalized Győri-Lovász theorem, which was first proved by Chen et al. [14].
Their proof uses advanced techniques in topology and homology theory, and is
non-constructive.
Definition 1.1. In a graph G = (V, E), a k-connected-partition is a k-partition of V into V_1 ∪ ··· ∪ V_k, such that for each j ∈ [k], G[V_j] is connected.
Theorem 1.2 ([14, Theorems 25, 26]). Let G = (V, E) be a k-connected graph. (For brevity, we say "k-connected" for "k-vertex-connected" henceforth.) Let w be a weight function w : V → R⁺. For any U ⊂ V, let w(U) := Σ_{v∈U} w(v). Given any k distinct terminal vertices t_1, ..., t_k, and k positive integers T_1, ..., T_k such that for each j ∈ [k], T_j ≥ w(t_j) and Σ_{i=1}^k T_i = w(V), there exists a k-connected-partition of V into V_1 ∪ ··· ∪ V_k, such that for each j ∈ [k], t_j ∈ V_j and w(V_j) ≤ T_j + max_{v∈V} w(v) − 1.
One of our main contributions is to give the first elementary and constructive proof, by providing a local search algorithm with running time O*(4^n) (the O* notation hides all polynomial factors in the input size):
Theorem 1.3. (a) There is an algorithm which, given a k-connected graph, computes a k-connected-partition satisfying the conditions stated in Theorem 1.2 in time O*(4^n).
(b) If we need a (⌊k/2⌋ + 1)-partition instead of a k-partition (the input graph remains assumed to be k-connected), the algorithm's running time improves to O*(2^{O((n/k) log k)}).
We make three remarks. First, the O*(2^{O((n/k) log k)})-time algorithm is a key ingredient of our algorithm for computing a spanning tree with congestion O(√(mn)). Second, since Theorem 1.2 guarantees the existence of such a partition, the problem of computing such a partition is not a decision problem but a search problem. Our local search algorithm shows that this problem is in the complexity class PLS [20]; we raise its completeness in PLS as an open problem. Third, the running times do not depend on the weights.
The STC Problem, Related Problems and Our Results. Given a connected graph G = (V, E), let T be a spanning tree. For an edge e = (u, v) ∈ E, its detour with respect to T is the unique path from u to v in T; let DT(e, T) denote the set of edges in this detour. The stretch of e with respect to T is |DT(e, T)|, the length of its detour. The dilation of T is max_{e∈E} |DT(e, T)|. The edge-congestion of an edge e ∈ T is ec(e, T) := |{f ∈ E : e ∈ DT(f, T)}|, i.e., the number of edges in E whose detours contain e. The congestion of T is cong(T) := max_{e∈T} ec(e, T). The spanning tree congestion (STC) of the graph G is STC(G) := min_T cong(T), where T runs over all spanning trees of G.
We note that there is an equivalent cut-based definition for edge-congestion, which we will use in our proofs. For each tree-edge e ∈ T, removing e from T results in two connected components; let U_e denote one of the components. Then ec(e, T) := |E(U_e, V \ U_e)|.
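To make the cut-based definition concrete, here is a small illustrative Python sketch (ours, not part of the paper) that computes cong(T) directly from ec(e, T) = |E(U_e, V \ U_e)|; the function name and input format are our own conventions.

```python
# Compute the congestion of a given spanning tree T of G via the cut-based definition.
from collections import defaultdict

def spanning_tree_congestion(n, graph_edges, tree_edges):
    """n: number of vertices (0..n-1); graph_edges, tree_edges: lists of pairs (u, v)."""
    tree_adj = defaultdict(set)
    for u, v in tree_edges:
        tree_adj[u].add(v)
        tree_adj[v].add(u)

    def side_of_cut(e):
        # Vertices reachable from e[0] in T - e; this is the side U_e of the cut.
        u, v = e
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in tree_adj[x]:
                if (x, y) in ((u, v), (v, u)):
                    continue  # skip the removed tree edge
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    congestion = 0
    for e in tree_edges:
        U = side_of_cut(e)
        ec = sum(1 for a, b in graph_edges if (a in U) != (b in U))
        congestion = max(congestion, ec)
    return congestion

# Example: the 4-cycle with a Hamiltonian-path spanning tree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
tree = [(0, 1), (1, 2), (2, 3)]
print(spanning_tree_congestion(4, edges, tree))  # prints 2
```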
Various types of congestion, stretch and dilation problems are studied in
computer science and discrete mathematics. In these problems, one typically
seeks a spanning tree (or some other structure) with minimum congestion or
dilation. We mention some of the well-known problems, where minimization is
done over all the spanning trees of the given graph:
1. The Low Stretch Spanning Tree (LSST) problem is to find a spanning tree
which minimizes the total stretch of all the edges of G. [3] It is easy to
see that minimizing the total stretch is equivalent to minimizing the total
edge-congestion of the selected spanning tree.
2. The STC problem is to find a spanning tree of minimum congestion. [30]
3. Tree Spanner Problem is to find a spanning tree of minimum dilation. [13]
The more general Spanner problem is to find a sparser subgraph of minimum
distortion. [4]
There are other congestion and dilation problems which do not seek a spanning
tree, but some other structure. The most famous among them are the Bandwidth
problem and the Cutwidth problem; see the survey [34] for more details.
Among the problems mentioned above, several strong results were published in connection with the LSST problem. Alon et al. [3] had shown a lower bound of Ω(max{n log n, m}). Upper bounds have been derived and many efficient algorithms have been devised; the current best upper bound is Õ(m log n) [3,15,1,22,2]. Since total stretch is identical to total edge-congestion, the best upper bound for the LSST problem automatically implies an Õ((m/n) log n) upper bound on the average edge-congestion. But in the STC problem, we are concerned with the maximum edge-congestion; as we shall see, for some graphs, the maximum edge-congestion has to be a factor of Ω̃(√(n³/m)) larger than the average edge-congestion.
In comparison, there were not many strong and general results for the STC problem, though it has been studied extensively over the past 13 years. The problem was formally proposed by Ostrovskii [30] in 2004. Prior to this, Simonson [36] had studied the same parameter under a different name to approximate the cutwidth of outerplanar graphs. A number of graph-theoretic results were presented on this topic [31,25,24,23,10]. Some complexity results were also presented recently [29,9], but most of these results concern special classes of graphs. The most general result regarding the STC of general graphs is an O(n√n) upper bound by Löwenstein, Rautenbach and Regen in 2009 [27], and a matching lower bound by Ostrovskii in 2004 [30]. Note that the above upper bound is not interesting when the graph is sparse, since there is also a trivial upper bound of m. In this paper we come up with a strong improvement to these bounds after 8 years:
Theorem (informal): For a connected graph G with n vertices and m edges, its spanning tree congestion is at most O(√(mn)). In terms of the average degree d_avg = 2m/n, we can state this upper bound as O(n√(d_avg)). There is a matching lower bound.
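As a rough numerical illustration (ours): for n = 10^4 and m = 10^6, the trivial bound is m = 10^6, while √(mn) = 10^5, an improvement by a factor of √(m/n) = 10; equivalently, d_avg = 200 and n√(d_avg) ≈ 1.4 · 10^5, which agrees with √(mn) up to the constant factor.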
Our proof for achieving the O(√(mn)) upper bound is constructive. It runs in exponential time in general; for graphs with m = ω(n log² n) edges, it runs in sub-exponential time. By using an algorithm of Chen et al. [14] for computing a single-commodity confluent flow from a single-commodity splittable flow, we improve the running time to polynomial, but with a slightly worse upper bound guarantee of O(√(mn) · log n).
Motivated by an open problem raised by Ostrovskii [32] concerning the STC of random graphs, we formulate a set of expanding properties, and prove that for any graph satisfying these properties, its STC is at most O(n). We devise a polynomial time algorithm for computing a spanning tree with congestion O(n) for such graphs. This result, together with a separate lower-bound argument, permits us to show that for a random graph G(n, p) with 1 ≥ p ≥ c log n / n for some small constant c > 1, its STC is Θ(n) with high probability, thus resolving the open problem raised by Ostrovskii completely. (Note that the STC problem is relevant only for connected graphs. Since the threshold function for graph connectivity is log n / n, this result applies for almost all of the relevant range of values of p.)
Min-Max Graph Partitioning and the Generalized Győri-Lovász Theorem. The powerful Theorem 1.2 clearly has implications for graph partitioning. We discuss a number of its consequences which might be of wider interest.
Graph partitioning/clustering is a prominent topic in graph theory/algorithms, and has a wide range of applications. A popular goal is to partition the vertices
into sets such that the number of edges across different sets is small. While the
min-sum objective, i.e., minimizing the total number of edges across different
sets, is more widely studied, in various applications, the more natural objective
is the min-max objective, i.e., minimizing the maximum number of edges leaving
each set. The min-max objective is our focus here.
Depending on applications, there are additional constraints on the sets in
the partition. Two natural constraints are (i) balancedness: the sets are (approximately) balanced in sizes, and (ii) induced-connectivity: each set induces
a connected subgraph. The balancedness constraint appears in the application
of domain decomposition in parallel computing, while the induced-connectivity
constraint is motivated by divide-and-conquer algorithms for spanning tree construction. Imposing both constraints simultaneously is not feasible for every
graph; for instance, consider the star graph with more than 6 vertices and one
wants a 3-partition. Thus, it is natural to ask, for which graphs do partitions
satisfying both constraints exist. Theorem 1.2 implies a simple sufficient condition
for existence of such partitions.
By setting the weight of each vertex in G to be its degree, and using the
elementary fact that the maximum degree ∆(G) ≤ n ≤ 2m/k for any k-connected
graph G on n vertices and m edges, we have
Proposition 1.4. If G is a k-connected graph with m edges, then there exists
a k-connected-partition, such that the total degree of vertices in each part is at
most 4m/k. Consequently, the min-max objective is also at most 4m/k.
Due to expander graphs, this bound is optimal up to a small constant factor.
This proposition (together with Lemma 4.1) implies the following crucial lemma
for achieving some of our results.
Lemma 1.5. Let G be a k-connected graph with m edges. Then STC(G) ≤ 4m/k.
Proposition 1.4 can be generalized to include approximate balancedness in
terms of number of vertices. By setting the weight of each vertex to be cm/n
plus its degree in G, we have
Proposition 1.6. Given any fixed c > 0, if G is a k-connected graph with m edges and n vertices, then there exists a k-connected-partition such that the total degree of vertices in each part is at most (2c + 4)m/k, and the number of vertices in each part is at most ((2c + 4)/c) · (n/k).
Further Related Work. Concerning the STC problem, Okamoto et al. [29] gave an O*(2^n) algorithm for computing the exact STC of a graph. The probabilistic version of the STC problem, coined as probabilistic capacity mapping, is an important tool for several graph algorithm problems, e.g., the Min-Bisection problem. Räcke [33] showed that in the probabilistic setting, distance and capacity are interchangeable, which briefly says that a general upper bound for one objective implies the same general upper bound for the other. Thus, due to the above-mentioned results on LSST, there is an upper bound of Õ(log n) on the maximum average congestion. Räcke's result also implies an O(log n) approximation algorithm for the Min-Bisection problem, improving upon the O(log^{3/2} n) approximation algorithm of Feige and Krauthgamer [16]. However, in the deterministic setting, such an interchanging phenomenon does not hold: there is a simple tight bound Θ(n) for dilation, but for congestion it can be as high as Θ(n√n). For the precise definitions, more background and key results about the concepts we have just discussed, we recommend the writing of Andersen and Feige [5].
Graph partitioning/clustering is a prominent research topic with wide applications, so it comes as no surprise that a lot of work has been done on various aspects of the topic; we refer readers to the two extensive surveys by Schaeffer [35] and by Teng [41]. Kiwi, Spielman and Teng [21] formulated the min-max k-partitioning problem and gave bounds for classes of graphs with small separators, which were then improved by Steurer [38]. On the algorithmic side, many of the related problems are NP-hard, so the focus is on devising approximation algorithms. Sparked by the seminal work of Arora, Rao and Vazirani [6] on sparsest cut and of Spielman and Teng [37] on local clustering, graph partitioning/clustering algorithms with various constraints have attracted attention across theory and practice; we refer readers to [7] for a fairly recent account of the development. The min-sum objective has been extensively studied; the min-max objective, while striking as the more natural objective in some applications, has received much less attention. The only algorithmic works on this objective (and its variants) are Svitkina and Tardos [40] and Bansal et al. [7]. None of the above work addresses the induced-connectivity constraint.
The classical version of the Győri-Lovász Theorem (i.e., where the vertex weights are uniform) was proved independently by Győri [17] and Lovász [26]. Lovász's proof uses homology theory and is non-constructive. Győri's proof is elementary and is implicitly constructive, but he did not analyze the running time. Polynomial time algorithms for constructing the k-partition were devised for k = 2, 3 [39,42], but no non-trivial finite-time algorithm was known for general graphs with k ≥ 4. (In 1994, there was a paper by Ma and Ma in Journal of Computer Science and Technology which claimed a poly-time algorithm for all k. However, according to a recent study [18], Ma and Ma's algorithm can fall into an endless loop. Also, Győri said the algorithm should be wrong; see [28].) Recently, Hoyer and Thomas [19] provided a clean presentation of Győri's proof by introducing their own terminology, which we use for our constructive proof of Theorem 1.2.
Notation. Given a graph G = (V, E), an edge set F ⊆ E and 2 disjoint vertex subsets V1 , V2 ⊂ V , we let F (V1 , V2 ) := { e = {v1 , v2 } ∈ F | v1 ∈ V1 and v2 ∈ V2 }.
2 Technical Overview
To prove the generalized Győri-Lovász theorem constructively, we follow the
same framework of Győri’s proof [17], and we borrow terminology from the
recent presentation by Hoyer and Thomas [19]. But it should be emphasized that
proving our generalized theorem is not straightforward: in Győri's proof, at each stage a single vertex is moved from one set to another to make progress, while making sure that the former set remains connected. In our setting, in addition to this, we also have to ensure that the weights in the partitions do not exceed the specified limits; hence not every vertex that can be moved from one set to another is a candidate for being transferred. The proof is presented in Section 3.
As discussed, a crucial ingredient for our upper bound results is Lemma 1.5,
which is a direct corollary of the generalized Győri-Lovász theorem. The lemma
takes care of the highly-connected cases; for other cases we provide a recursive way
to construct a low congestion spanning tree; see Section 4 for details. For showing
our lower bound for general graphs, the challenge is to maintain high congestion
while keeping density small. To achieve this, we combine three expander graphs
with little overlapping between them, and we further make those overlapped
vertices of very high degree. This will force a tree-edge adjacent to the centroid
of any spanning tree to have high congestion; see Section 5 for details.
We formulate a set of expanding properties which permit constructing a spanning tree with a better congestion guarantee in polynomial time. The basic idea is simple: start with a vertex v of high degree as the root. Now try to grow the tree by repeatedly attaching new vertices to it, while keeping the invariant that the subtrees rooted at the neighbours of v are roughly balanced in size; each such subtree is called a branch. But when trying to grow the tree in a balanced way, we will soon realize that as the tree grows, all the remaining vertices may be adjacent only to a small number of "heavy" branches. To help the balanced growth, the algorithm will identify a transferable vertex which is in a heavy branch; it and its descendants in the tree can then be transferred to a "lighter" branch. Another technique is to use multiple rounds of matching between vertices in the tree and the remaining vertices to attach new vertices to the tree. This helps ensure that no subtree grows in an uncontrolled manner. By showing that a random graph satisfies the expanding properties with appropriate parameters, we show that a random graph has STC Θ(n) with high probability.
3 Generalized Győri-Lovász Theorem
We prove Theorem 1.3 in this section. Observe that the classical Győri-Lovász Theorem follows from Theorem 1.2 by taking w(v) = 1 for all v ∈ V and T_j = n_j (the prescribed part sizes) for all j ∈ [k]. We note that a perfect generalization where one requires that w(V_j) = T_j is not possible: think of the case when all vertex weights are even integers, while some T_j is odd.
Let G = (V, E) be a k-connected graph on n vertices and m edges, and let w : V → R⁺ be a weight function. For any subset U ⊆ V, w(U) := Σ_{u∈U} w(u). Let w_max := max_{v∈V} w(v).
3.1 Key Combinatorial Notions
We first highlight the key combinatorial notions used for proving Theorem 1.3;
see Figures 1 and 2 for illustrations of some of these notions.
Fitted Partial Partition. First, we introduce the notion of a fitted partial partition (FPP). An FPP A is a tuple of k pairwise disjoint subsets of V, (A_1, ..., A_k), such that for each j ∈ [k]:
1. t_j ∈ A_j,
2. G[A_j] is connected, and
3. w(A_j) ≤ T_j + w_max − 1 (we say the set is fitted for satisfying this inequality).
We say an FPP is a Strict Fitted Partial Partition (SFPP) if A_1 ∪ ··· ∪ A_k is a proper subset of V. We say the set A_j is light if w(A_j) < T_j, and we say it is heavy otherwise. Note that there exists at least one light set in any SFPP, for otherwise w(A_1 ∪ ··· ∪ A_k) ≥ Σ_{j=1}^k T_j = w(V), which means A_1 ∪ ··· ∪ A_k = V. Also note that by taking A_j = {t_j}, we have an FPP, and hence at least one FPP exists.
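For concreteness, the following small Python sketch (ours, not from the paper) checks whether a candidate tuple of vertex sets is an FPP for given terminals, bounds T_j and weights; the function name and input conventions are assumptions.

```python
# Verify the three FPP conditions for a candidate tuple (A_1, ..., A_k).
def is_fitted_partial_partition(adj, parts, terminals, T_bounds, w):
    """adj: adjacency dict; parts: list of vertex sets; w: dict of vertex weights."""
    w_max = max(w.values())
    seen = set()
    for A_j, t_j, T_j in zip(parts, terminals, T_bounds):
        if t_j not in A_j:                      # condition 1: t_j in A_j
            return False
        if seen & A_j:                          # the sets must be pairwise disjoint
            return False
        seen |= A_j
        # condition 2: G[A_j] is connected (DFS inside A_j starting from t_j)
        reached, stack = {t_j}, [t_j]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in A_j and y not in reached:
                    reached.add(y)
                    stack.append(y)
        if reached != A_j:
            return False
        # condition 3: fitted, i.e. w(A_j) <= T_j + w_max - 1
        if sum(w[v] for v in A_j) > T_j + w_max - 1:
            return False
    return True
```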
Configuration. For a set A_j in an FPP A and a vertex v ∈ A_j \ {t_j}, we define the reservoir of v with respect to A, denoted by R_A(v), as the set of vertices in the same connected component as t_j in G[A_j] \ {v}. Note that v ∉ R_A(v).
For a heavy set A_j, a sequence of vertices (z_1, ..., z_p) for some p ≥ 0 is called a cascade of A_j if z_1 ∈ A_j \ {t_j} and z_{i+1} ∈ A_j \ R_A(z_i) for all 1 ≤ i < p. The cascade is called a null cascade if p = 0, i.e., if the cascade is empty. Note that for a light set we do not need to define its cascade, since we do not use it in the proof. (See Figure 1.)
A configuration C_A is defined as a pair (A, D), where A = (A_1, ..., A_k) is an FPP, and D is a set of cascades, which consists of exactly one cascade (possibly a null cascade) for each heavy set in A. A vertex that is in some cascade of the configuration is called a cascade vertex.
Given a configuration, we define rank and level inductively as follows. Any vertex in a light set is said to have level 0. For i ≥ 0, a cascade vertex is said to have rank i + 1 if it has an edge to a level-i vertex but does not have an edge to any level-i′ vertex for i′ < i. A vertex u is said to have level i, for i ≥ 1, if u ∈ R_A(v) for some rank-i cascade vertex v, but u ∉ R_A(w) for any cascade vertex w whose rank is less than i. A vertex that is not in R_A(v) for any cascade vertex v is said to have level ∞.
A configuration is called a valid configuration if for each heavy set A_j, rank is defined for each of its cascade vertices and the rank is strictly increasing in the cascade, i.e., if (z_1, ..., z_p) is the cascade, then rank(z_1) < ··· < rank(z_p). Note that by taking A_j = {t_j} and taking the null cascade for each heavy set (in this case A_j is heavy if w(t_j) = T_j), we get a valid configuration. (See Figure 2.)

Fig. 1. Given a configuration (A, D) and a heavy set A_j in A, the figure shows a cascade (z_1, z_2, z_3) for the heavy set A_j and several reservoirs of the cascade vertices. For any z_ℓ, z_ℓ ∉ R_A(z_ℓ). A cascade vertex z_ℓ is a cut-vertex of G[A_j], i.e., G[A_j \ {z_ℓ}] is disconnected. The removal of z_ℓ from A_j leads to at least two connected components in G[A_j \ {z_ℓ}], and the connected component containing t_j is the reservoir of z_ℓ. We identify t_j = z_0, but we clarify that a terminal vertex is never in a cascade. Each epoch between z_ℓ and z_{ℓ+1}, and also the epoch above z_3, is a subset of vertices B ⊂ A_j, where z_ℓ ∈ B and G[B] is connected. Note that in general, it is possible that there is no vertex above the last cascade vertex.
Configuration Vectors and Their Total Ordering. For any vertex, we define its neighborhood level as the smallest level of any vertex adjacent to it. A vertex v of level ℓ is said to satisfy the maximality property if each vertex adjacent to it either is a rank-(ℓ + 1) cascade vertex, has a level of at most ℓ + 1, or is one of the terminals t_j for some j. For any ℓ ≥ 0, a valid configuration is called an ℓ-maximal configuration if all vertices having level at most ℓ − 1 satisfy the maximality property. Note that by definition, any valid configuration is a 0-maximal configuration.
For a configuration C_A = ((A_1, ..., A_k), D), we define S_A := V \ (A_1 ∪ ··· ∪ A_k). An edge uv is said to be a bridge in C_A if u ∈ S_A, v ∈ A_j for some j ∈ [k], and level(v) ≠ ∞.
Fig. 2. An instance of a valid configuration. Every blue segment/curve represents an edge from a cascade vertex to a vertex in some reservoir or light set. Every cascade vertex connected to a light set has rank 1, and all vertices in the epoch immediately below a rank-1 cascade vertex are of level 1. Inductively, every cascade vertex connected to a vertex of level i has rank i + 1, and all vertices in the epoch immediately below a rank-i cascade vertex are of level i. All vertices above the last cascade vertex of each cascade have level ∞.
A valid configuration C_A is said to be ℓ-good if the highest rank of a cascade vertex in C_A is exactly ℓ (if there are no cascade vertices, then we take the highest rank as 0), C_A is ℓ-maximal, and all bridges uv in C_A (if any) are such that u ∈ S_A and level(v) = ℓ. Note that taking A_j = {t_j} and taking the null cascade for each heavy set gives a 0-good configuration.
For each configuration C_A = (A, D), we define a configuration vector as below:
(L_A, N_A^0, N_A^1, N_A^2, ..., N_A^n),
where L_A is the number of light sets in A, and N_A^ℓ is the total number of level-ℓ vertices in C_A.
Next, we define an ordering on configuration vectors. Let C_A and C_B be configurations. We say C_A >_0 C_B if
– L_A < L_B, or
– L_A = L_B and N_A^0 > N_B^0.
We say C_A =_0 C_B if L_A = L_B and N_A^0 = N_B^0. We say C_A ≥_0 C_B if C_A =_0 C_B or C_A >_0 C_B. We say C_A =_ℓ C_B if L_A = L_B and N_A^{ℓ′} = N_B^{ℓ′} for all ℓ′ ≤ ℓ.
For 1 ≤ ℓ ≤ n, we say C_A >_ℓ C_B if
– C_A >_{ℓ−1} C_B, or
– C_A =_{ℓ−1} C_B and N_A^ℓ > N_B^ℓ.
We say C_A ≥_ℓ C_B if C_A =_ℓ C_B or C_A >_ℓ C_B. We say C_A > C_B (C_A is strictly better than C_B) if C_A >_n C_B.
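This total order is essentially lexicographic; the following Python sketch (ours, not from the paper) implements the comparison on configuration vectors (L, N^0, ..., N^n).

```python
# The total order on configuration vectors: fewer light sets is better;
# ties are broken lexicographically by the level counts, larger being better.
def strictly_better(vec_a, vec_b):
    """vec = (L, N0, N1, ..., Nn); returns True iff vec_a > vec_b in the paper's order."""
    L_a, counts_a = vec_a[0], vec_a[1:]
    L_b, counts_b = vec_b[0], vec_b[1:]
    if L_a != L_b:
        return L_a < L_b          # fewer light sets wins
    return counts_a > counts_b    # Python tuple comparison is lexicographic,
                                  # with a larger N^l preferred at the first difference

# e.g. (2, 5, 0, 0) beats (3, 9, 9, 9), and (2, 5, 1, 0) beats (2, 5, 0, 7)
assert strictly_better((2, 5, 0, 0), (3, 9, 9, 9))
assert strictly_better((2, 5, 1, 0), (2, 5, 0, 7))
```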
3.2 Proof of Theorem 1.3
We use two technical lemmas about configuration vectors and their orderings
to prove Theorem 1.3(a). The proof of Theorem 1.3(b) follows closely with the
proof of Theorem 1.3(a), but makes use of an observation about the rank of a
vertex in the local search algorithm, to give an improved bound on the number
of configuration vectors navigated by the algorithm.
Lemma 3.1. Given any ℓ-good configuration C_A = (A = (A_1, ..., A_k), D_A) that does not have a bridge, we can find an (ℓ + 1)-good configuration C_B = (B = (B_1, B_2, ..., B_k), D_B) in polynomial time such that C_B > C_A.
Proof. Since C_A is ℓ-maximal, any vertex that is at level ℓ′ < ℓ satisfies the maximality property. So, for satisfying (ℓ + 1)-maximality, we only need to worry about the vertices that are at level ℓ. Let X_j be the set of all vertices x ∈ A_j such that x is adjacent to a level-ℓ vertex, level(x) ≥ ℓ + 1 (i.e., level(x) = ∞, as the highest rank of any cascade vertex is ℓ), x ≠ t_j, and x is not a cascade vertex of rank ℓ.
We claim that there exists at least one j for which Xj is not empty. If that is
not the case, then we exhibit a cut set of size at most k − 1. For each j such that
Aj is a heavy set with a non-null cascade, let yj be the highest ranked cascade
vertex in Aj . For each j such that Aj is a heavy set with a null cascade, let yj be
tj . Let Y be the set of all yj such that Aj is a heavy set. Note that |Y | ≤ k − 1
as A is an SFPP and hence has at least one light set. Let Z∞ be the set of all
vertices in V \ Y that have level ∞ and Z be the remaining vertices in V \ Y .
Since A is an SFPP, S_A ≠ ∅, and since all vertices in S_A have level ∞, we have that Z_∞ ≠ ∅. Z is not empty because there exists at least one light set in A and
the vertices in a light set have level 0. We show that there is no edge between
Z∞ and Z in G. Suppose there exists an edge uv such that u ∈ Z∞ and v ∈ Z.
If u ∈ SA , then uv is a bridge which is a contradiction by our assumption that
CA does not have a bridge. Hence u ∈ Aj for some j ∈ [k]. Note that Aj has to
be a heavy set, otherwise u has level 0. We have that u is not a cascade vertex
(as all cascade vertices with level ∞ are in Y) and u ≠ t_j (as all t_j such that
level(tj ) = ∞ are in Y ). Also, v is not of level ` as otherwise, u ∈ Xj but we
assumed X_j is empty. But then, v has level at most ℓ − 1, u has level ∞, and there is an edge uv. This means that C_A was not ℓ-maximal, which is a contradiction.
Thus, there exists at least one j for which Xj is not empty.
For any j such that X_j ≠ ∅, there is at least one vertex x_j such that X_j \ {x_j} ⊆ R_A(x_j). Now we give the configuration C_B as follows. We set
B_j = A_j for all j ∈ [k]. For each heavy set A_j such that X_j ≠ ∅, we take the
cascade of Bj as the cascade of Aj appended with xj . For each heavy set Aj such
that Xj = ∅, we take the cascade of Bj as the cascade of Aj . It is easy to see
that CB is (` + 1)-maximal as each vertex that had an edge to level-` vertices in
CA is now either a rank ` + 1 cascade vertex or a level-(` + 1) vertex or is tj for
some j. Also, notice that all the new cascade vertices that we introduce (i.e., the
xj ’s) have their rank as ` + 1 and there is at least one rank ` + 1 cascade vertex
as Xj is not empty for some j. Since there were no bridges in CA , all bridges in
CB has to be from SB to a vertex having level ` + 1. Hence, CB is (` + 1)-good.
All vertices that had level at most ` in CA retained their levels in CB . And, at
least one level-∞ vertex of CA became a level-(` + 1) vertex in CB because the
cascade vertex that was at rank ` becomes level-(` + 1) vertex now in at least
one set. Since CA had no level-(` + 1) vertices, this means that CB > CA .
Lemma 3.2. Given an ℓ-good configuration C_A = (A = (A_1, ..., A_k), D_A) having a bridge, we can find in polynomial time a valid configuration C_B = (B = (B_1, ..., B_k), D_B) such that one of the following holds:
– C_B >_ℓ C_A, and C_B is an ℓ-good configuration, or
– C_B ≥_{ℓ−1} C_A, there is a bridge u′v′ in C_B such that u′ ∈ S_B and level(v′) ≤ ℓ − 1, and C_B is an (ℓ − 1)-good configuration.
Proof. Let uv be a bridge where u ∈ SA . Let Aj ∗ be the set containing v. Note
that level(v) = ℓ because C_A is ℓ-good. We keep B_j = A_j for all j ≠ j*. But we
modify Aj ∗ to get Bj ∗ as described below. We maintain that if Aj is a heavy set
then Bj is also a heavy set for all j, and hence maintain that LB ≤ LA .
Case 1: Aj ∗ is a light set (i.e., when ` = 0). We take Bj ∗ = Aj ∗ ∪ {u}. For
all j such that Bj is a heavy set, cascade of Bj is taken as the null cascade. We
have w(Aj ∗ ) ≤ Tj − 1 because Aj ∗ is a light set. So, w(Bj ∗ ) = w(Aj ∗ ) + w(u) ≤
(Tj − 1) + wmax , and hence Bj ∗ is fitted. Also, G[Bj ∗ ] is connected and hence
(B1 , . . . , Bk ) is an FPP. We have CB >0 CA because either Bj ∗ became a heavy
set in which case LB < LA , or it is a light set in which case LB = LA and
NB0 > NA0 . It is easy to see that CB is 0-good.
Case 2: Aj ∗ is a heavy set i.e., when ` ≥ 1.
Case 2.1: w(Aj ∗ ∪ {u}) ≤ Tj + wmax − 1. We take Bj ∗ = Aj ∗ ∪ {u}. For each
j such that Bj is a heavy set (Aj is also heavy set for such j), the cascade of Bj
is taken as the cascade of Aj . G[Bj ∗ ] is clearly connected and Bj ∗ is fitted by
assumption of the case that we are in. Hence B is indeed an FPP. Observe that
all vertices that had level ℓ′ ≤ ℓ in C_A still have level ℓ′ in C_B. Since level(v) was ℓ in C_A by the ℓ-goodness of C_A, u also has level ℓ in C_B, while u had level ∞ in C_A. Hence, C_B >_ℓ C_A. It is also easy to see that C_B remains ℓ-good.
Case 2.2: w(Aj ∗ ∪ {u}) ≥ Tj + wmax . Let z be the cascade vertex of rank ` in
Aj ∗ . Note that Aj ∗ should have such a cascade vertex as v ∈ Aj ∗ has level `. Let
R̄ be Aj ∗ \ (RA (z) ∪ z), i.e., R̄ is the set of all vertices in Aj ∗ \ {z} with level ∞.
We initialize Bj ∗ := Aj ∗ ∪ {u}. Now, we delete vertices one by one from Bj ∗ in a
specific order until Bj ∗ becomes fitted. We choose the order of deleting vertices
such that G[Bj ∗ ] remains connected. Consider a spanning tree τ of G[R̄ ∪ {z}].
τ has at least one leaf, which is not z. We delete this leaf from Bj ∗ and τ . We
repeat this process until τ is just the single vertex z or Bj ∗ becomes fitted. If Bj ∗
is not fitted even when τ is the single vertex z, then delete z from Bj ∗ . If Bj ∗ is
still not fitted then delete u from Bj ∗ . Note that at this point Bj ∗ ⊂ Aj ∗ and
hence is fitted. Also, note that G[Bj ∗ ] remains connected. Hence (B1 , . . . , Bk ) is
an FPP. Bj ∗ does not become a light set because Bj became fitted when the
last vertex was deleted from it. Before this vertex was deleted, it was not fitted
and hence had weight at least Tj ∗ + wmax before this deletion. Since the last
vertex deleted has weight at most wmax , Bj ∗ has weight at least Tj ∗ and hence
is a heavy set. Now we branch into two subcases for defining the cascades.
Case 2.2.1: z ∈ Bj∗ (i.e, z was not deleted from Bj ∗ in the process above).
For each j such that Bj is a heavy set, the cascade of Bj is taken as the cascade
of Aj . Since a new ` level vertex u is added and all vertices that had level at
most ` retain their level, we have that CB >` CA . It is also easy to see that CB
remains `-good.
Case 2.2.2: z ∉ B_{j*} (i.e., z was deleted from B_{j*}). For each j such that B_j
is a heavy set, the cascade of Bj is taken as the cascade of Aj but with the
rank-ℓ cascade vertex (if it has any) deleted from it. C_B ≥_{ℓ−1} C_A because all vertices that were at level ℓ − 1 or smaller retain their levels. Observe that there are no bridges in C_B to vertices that are at a level at most ℓ − 2, all vertices at a level at most ℓ − 2 still maintain the maximality property, and we did not introduce any cascade vertices. Hence, C_B is (ℓ − 1)-good. It only remains to prove that there is a bridge u′v′ in C_B such that level(v′) ≤ ℓ − 1. We know z ∈ S_B. Since z was a rank-ℓ cascade vertex in C_A, z had an edge to some z′ such that z′ had level ℓ − 1 in C_A. Observe that the level of z′ is at most ℓ − 1 in C_B as well. Hence, taking u′v′ = zz′ completes the proof.
Proof of Theorem 1.3(a). We always maintain a configuration C_A = (A, D_A)
that is `-good for some ` ≥ 0. If the FPP A is not an SFPP at any point, then
we are done. So assume A is an SFPP.
We start with the 0-good configuration where Aj = {tj } and the cascades
of all heavy sets are null cascades. If our current configuration CA is an `-good
configuration that has no bridge, then we use Lemma 3.1 to get a configuration
CB such that CB > CA and B is (` + 1)-good. We take CB as the new current
configuration CA . If our current configuration CA is an `-good configuration with
a bridge, then we get an ℓ′-good configuration C_B for some ℓ′ ≥ 0 such that C_B > C_A by repeatedly applying Lemma 3.2 at most ℓ times. So in either case, we get a strictly better configuration that is ℓ′-good for some ℓ′ ≥ 0 in polynomial
time. We call this an iteration of our algorithm.
Notice that the number of iterations possible is at most the number of distinct configuration vectors possible. It is easy to see that the number of distinct configuration vectors with highest rank at most r is at most the binomial coefficient C(n + r − 1, n). Since the rank of any vertex is at most n, the number of iterations of our algorithm is at most (k + 1) · C(2n, n), which is at most n · 4^n. Since each iteration runs in polynomial time as guaranteed by the two lemmas, the required running time is O*(4^n).
When the algorithm terminates, the FPP given by the current configuration
is not an SFPP and this gives the required partition.
Proof of Theorem 1.3(b). Since any k-connected graph is also (⌊k/2⌋ + 1)-vertex connected, the algorithm will give the required partition due to Theorem 1.3(a). We only need to prove the better running time claimed by Theorem 1.3(b). For this, we show that the highest rank attained by any vertex during the algorithm is at most 2n/(k − 2). Since the number of distinct configuration vectors with highest rank r is at most C(n + r − 1, n), we then have that the running time is O*(C(n + 2n/(k − 2) − 1, n)), which is O*(2^{O((n/k) log k)}), as claimed. Hence, it only remains to prove that the highest rank is at most 2n/(k − 2).
For this, observe that in an ℓ-good configuration, for each 0 ≤ i < ℓ, the union of all vertices having level i and the set of (⌊k/2⌋ + 1) terminals together forms a cutset. Since the graph is k-connected, this means that for each 0 ≤ i < ℓ, the number of vertices having level i is at least k/2 − 1. The required bound on the rank easily follows.
4 Upper Bounds for Spanning Tree Congestion
We first state the following easy lemma, which together with Proposition 1.4 implies Lemma 1.5.
Lemma 4.1. In a graph G = (V, E), let t_1 be a vertex, and let t_2, ..., t_ℓ be any (ℓ − 1) neighbours of t_1. Suppose that there exists an ℓ-connected-partition V = V_1 ∪ ··· ∪ V_ℓ such that for all j ∈ [ℓ], t_j ∈ V_j, and the sum of the degrees of the vertices in each V_j is at most D. Let τ_j be an arbitrary spanning tree of G[V_j]. Let e_j denote the edge {t_1, t_j}. Let τ be the spanning tree of G defined as τ := (τ_1 ∪ ··· ∪ τ_ℓ) ∪ {e_2, ..., e_ℓ}. Then τ has congestion at most D.
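The construction in Lemma 4.1 is straightforward to realize; below is an illustrative Python/networkx sketch (ours, not the paper's code), assuming a valid ℓ-connected-partition 'parts' and terminals with t_j adjacent to t_1 for j ≥ 2 are already given.

```python
import networkx as nx

def tree_from_connected_partition(G, parts, terminals):
    """parts: list of vertex sets [V_1, ..., V_l]; terminals: [t_1, ..., t_l], t_j in V_j."""
    tau = nx.Graph()
    tau.add_nodes_from(G)
    for V_j in parts:
        # tau_j: an arbitrary spanning tree of G[V_j]
        tau.add_edges_from(nx.minimum_spanning_tree(G.subgraph(V_j)).edges())
    t1 = terminals[0]
    for t_j in terminals[1:]:
        tau.add_edge(t1, t_j)      # the edges e_j = {t_1, t_j}
    assert nx.is_tree(tau)         # sanity check, valid only for a proper partition
    return tau
```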
Theorem 4.2. For any connected graph G = (V, E), there is an algorithm which computes a spanning tree with congestion at most 8√(mn) in O*(2^{O(n log n / √(m/n))}) time.
Theorem 4.3. For any connected graph G = (V, E), there is a polynomial time algorithm which computes a spanning tree with congestion at most 16√(mn) · log n.
The two algorithms follow the same framework, depicted in Algorithm 1. It is a recursive algorithm; the parameter m̂ is a global parameter, which is the number of edges in the input graph G at the first level of the recursion; let n̂ denote the number of vertices in this graph.
The only difference between the two algorithms is in Line 15 and how this step is executed, with a trade-off between the running time of the step, T(m̂, n_H, m_H), and the guarantee D(m̂, n_H, m_H). For proving Theorem 4.2, we use Theorem 1.3(b), Proposition 1.4 and Lemma 4.1, yielding D(m̂, n_H, m_H) ≤ 8 m_H √(n_H/m̂) and T(m̂, n_H, m_H) = O*(2^{O(n_H log n_H / √(m̂/n_H))}). For proving Theorem 4.3, we make use of an algorithm in Chen et al. [14], which yields D(m̂, n_H, m_H) ≤ 16 m_H √(n_H/m̂) · log n_H and T(m̂, n_H, m_H) = poly(n_H, m_H).
Algorithm 1: FindLCST(H, m̂)
Input: A connected graph H = (V_H, E_H) on n_H vertices and m_H edges
Output: A spanning tree τ of H
1  if m_H ≤ 8√(m̂ n_H) then
2      return an arbitrary spanning tree of H
3  end
4  k ← ⌈√(m̂ / n_H)⌉
5  Y ← a global minimum vertex cut of H
6  if |Y| < k then
7      X ← the smallest connected component in H[V_H \ Y]   (see Figure 3)
8      Z ← V_H \ (X ∪ Y)
9      τ_1 ← FindLCST(H[X], m̂)
10     τ_2 ← FindLCST(H[Y ∪ Z], m̂)   (H[Y ∪ Z] is connected as Y is a global min cut)
11     return τ_1 ∪ τ_2 ∪ (an arbitrary edge between X and Y)
12  else
13     t_1 ← an arbitrary vertex in V_H
14     Pick ⌊k/2⌋ neighbours of t_1 in the graph H; denote them by t_2, t_3, ..., t_{⌊k/2⌋+1}. Let e_j denote the edge t_1 t_j for 2 ≤ j ≤ ⌊k/2⌋ + 1.   (see Figure 4)
15     Compute a (⌊k/2⌋ + 1)-connected-partition of H, denoted by V_1 ∪ ··· ∪ V_{⌊k/2⌋+1}, such that for each j ∈ [⌊k/2⌋ + 1], t_j ∈ V_j, and the total degree (w.r.t. graph H) of the vertices in each V_j is at most D(m̂, n_H, m_H). Let the time needed be T(m̂, n_H, m_H).
16     For each j ∈ [⌊k/2⌋ + 1], τ_j ← an arbitrary spanning tree of G[V_j]
17     return (τ_1 ∪ ··· ∪ τ_{⌊k/2⌋+1}) ∪ (e_2 ∪ ··· ∪ e_{⌊k/2⌋+1})
18  end
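To convey the recursive structure of Algorithm 1 in executable form, here is a deliberately simplified Python/networkx sketch (ours). The subroutine balanced_connected_partition is a stub standing in for Line 15 (the Győri-Lovász / confluent-flow step) and is not implemented here; edge cases (e.g., complete subgraphs, for which a minimum vertex cut does not exist) are ignored.

```python
import math
import networkx as nx

def find_lcst(H, m_hat, balanced_connected_partition):
    n_H, m_H = H.number_of_nodes(), H.number_of_edges()
    if m_H <= 8 * math.sqrt(m_hat * n_H):                       # Lines 1-3
        return nx.minimum_spanning_tree(H)
    k = math.ceil(math.sqrt(m_hat / n_H))                       # Line 4
    Y = nx.minimum_node_cut(H)                                  # Line 5
    if len(Y) < k:                                              # low connectivity
        rest = H.subgraph(set(H) - Y)
        X = min(nx.connected_components(rest), key=len)         # Line 7
        tau1 = find_lcst(H.subgraph(X), m_hat, balanced_connected_partition)
        tau2 = find_lcst(H.subgraph(set(H) - set(X)), m_hat, balanced_connected_partition)
        u, v = next((a, b) for a, b in H.edges()
                    if (a in X) != (b in X) and (a in Y or b in Y))
        tau = nx.union(tau1, tau2)
        tau.add_edge(u, v)                                      # Line 11
        return tau
    # high connectivity: partition around t_1 and floor(k/2) of its neighbours
    t1 = next(iter(H))
    others = list(H[t1])[: k // 2]
    parts = balanced_connected_partition(H, [t1] + others)      # stub for Line 15
    tau = nx.Graph()
    tau.add_nodes_from(H)
    for V_j in parts:
        tau.add_edges_from(nx.minimum_spanning_tree(H.subgraph(V_j)).edges())
    for t_j in others:
        tau.add_edge(t1, t_j)                                   # the edges e_j
    return tau
```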
In the rest of this section, we first discuss the algorithm in Chen et al., then
we prove Theorem 4.3. The proof of Theorem 4.2 is almost identical, and is
deferred to Appendix A.2.
Single-Commodity Confluent Flow and The Algorithm of Chen et
al. In a single-commodity confluent flow problem, the input includes a graph
G = (V, E), a demand function w : V → R⁺ and ℓ sinks t_1, ..., t_ℓ ∈ V. For each
v ∈ V , a flow of amount w(v) is routed from v to one of the sinks. But there is a
restriction: at every vertex u ∈ V , the outgoing flow must leave u on at most 1
edge, i.e., the outgoing flow from u is unsplittable. The problem is to seek a flow
satisfying the demands which minimizes the node congestion, i.e., the maximum
incoming flow among all vertices. Since the incoming flow is maximum at one of the sinks, it is equivalent to minimizing the maximum flow received among all sinks. (Here, we assume that no flow entering a sink will leave.)

Fig. 3. The scenario in Algorithm 1 when the graph has low connectivity. The vertex set Y is a global minimum vertex cut of the graph. The vertex set X is the smallest connected component after the removal of Y, and Z is the union of all the other connected components.

Fig. 4. The scenario in Algorithm 1 when the graph has high connectivity.
Single-commodity splittable flow problem is almost identical to single-commodity
confluent flow problem, except that the above restriction is dropped, i.e., now
the outgoing flow at u can split along multiple edges. Note that here, the maximum incoming flow might not be at a sink. It is known that single-commodity
splittable flow can be solved in polynomial time. For brevity, we drop the phrase
“single-commodity” from now on.
Theorem 4.4 ([14, Section 4]). Suppose that, for a given graph G, demand w and ℓ sinks, there is a splittable flow with node congestion q. Then there exists a polynomial time algorithm which computes a confluent flow with node congestion at most (1 + ln ℓ)q for the same input.
Corollary 4.5. Let G be a k-connected graph with m edges. Then for any ℓ ≤ k and for any ℓ vertices t_1, ..., t_ℓ ∈ V, there exists a polynomial time algorithm which computes an ℓ-connected-partition V = V_1 ∪ ··· ∪ V_ℓ such that for all j ∈ [ℓ], t_j ∈ V_j, and the total degree of the vertices in each V_j is at most 4(1 + ln ℓ)m/ℓ.
Corollary 4.5 follows from Theorem 4.4 and Proposition 1.4. See Appendix A.1
for details.
Congestion Analysis. We view the whole recursion process as a recursion tree. There is no endless loop, since down every path in the recursion tree the number of vertices in the input graphs is strictly decreasing. On the other hand, note that a leaf of the recursion tree results either from (i) the input graph H of that call satisfying m_H ≤ 8√(m̂ n_H), or (ii) Lines 13–17 being executed. An internal node appears only when the vertex-connectivity of the input graph H is low, and it makes two recursion calls.
We prove the following statement by induction from the bottom up: for each graph which is the input to some call in the recursion tree, the returned spanning tree of that call has congestion at most 16√(m̂ n_H) log n_H.
We first handle the two basis cases (i) and (ii). In case (i), FindLCST returns an arbitrary spanning tree, and the congestion is bounded by m_H ≤ 8√(m̂ n_H). In case (ii), by Corollary 4.5 and Lemma 4.1, FindLCST returns a tree with congestion at most 16 m_H √(n_H/m̂) · log n_H ≤ 16√(m̂ n_H) · log n_H.
Next, let H be the input graph to a call which is represented by an internal node of the recursion tree. Recall the definitions of X, Y, Z, τ_1, τ_2 in the algorithm. Let |X| = x. Note that 1 ≤ x ≤ n_H/2. Then by the induction hypothesis, the congestion of the returned spanning tree is at most
max{ congestion of τ_1 in H[X], congestion of τ_2 in H[Y ∪ Z] } + |X| · |Y|
≤ 16√(m̂(n_H − x)) log(n_H − x) + (√(m̂/n_H) + 1) · x.   (1)
Viewing x as a real variable and taking the derivative, it is easy to see that the above expression is maximized at x = 1. Thus the congestion is at most 16√(m̂(n_H − 1)) log(n_H − 1) + √(m̂/n_H) + 1 ≤ 16√(m̂ n_H) log n_H, as desired by Theorem 4.3.
Runtime Analysis. At every internal node of the recursion tree, the algorithm makes two recursive calls with two vertex-disjoint and strictly smaller (w.r.t. vertex size) inputs. The dominating knitting cost is in Line 5, the computation of a global minimum vertex cut, which is well known to be doable in polynomial time. Since at every leaf of the recursion tree the running time is polynomial, by standard analysis of divide-and-conquer algorithms, the running time of the whole algorithm is polynomial, which completes the proof of Theorem 4.3.
5 Lower Bound for Spanning Tree Congestion
Here, we give a lower bound on spanning tree congestion which matches our upper bound.
Theorem 5.1. For any sufficiently large n, and for any m satisfying n²/2 ≥ m ≥ max{16n log n, 100n}, there exists a connected graph with N = (3 − o(1))n vertices and M ∈ [m, 7m] edges, for which the spanning tree congestion is at least Ω(√(mn)).
We start with the following lemma, which states that for a random graph G(n, p), when p is sufficiently large, its edge expansion is Θ(np) with high probability. The proof of the lemma uses only fairly standard arguments and is deferred to Appendix A.3.
Lemma 5.2. For any integer n ≥ 4 and 1 ≥ p ≥ 32 · (log n)/n, let G(n, p) denote the random graph with n vertices, in which each edge occurs independently with probability p. Then with probability at least 1 − O(1/n), (i) the random graph is connected, (ii) the number of edges in the random graph is between pn²/4 and pn², and (iii) for each subset of vertices S with |S| ≤ n/2, the number of edges leaving S is at least (p/2) · |S| · (n − |S|).
In particular, for any sufficiently large integer n, when n²/2 ≥ m ≥ 16n log n, by setting p = 2m/n², there exists a connected graph with n vertices and [m/2, 2m] edges, such that for each subset of vertices S with |S| ≤ n/2, the number of edges leaving S is at least (m/(2n)) · |S| = Θ(m/n) · |S|. We denote such a graph by H(n, m).
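A quick empirical sanity check of the expansion bound in Lemma 5.2 (ours, with toy parameter values chosen so that p ≥ 32 log n / n):

```python
# Sample G(n, p) and spot-check the edge-expansion bound on a few random subsets.
import math
import random
import networkx as nx

n = 400
p = 0.6                              # 32 * ln(n) / n is about 0.48, so p qualifies
G = nx.gnp_random_graph(n, p, seed=1)

for _ in range(5):
    size = random.randint(1, n // 2)
    S = set(random.sample(list(G.nodes()), size))
    leaving = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    print(size, leaving, leaving >= (p / 2) * size * (n - size))   # expected: True
```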
We discuss our construction here (see Figure 5) before delving into the proof. The vertex set V is the union of three vertex subsets V_1, V_2, V_3, such that |V_1| = |V_2| = |V_3| = n, |V_1 ∩ V_2| = |V_2 ∩ V_3| = √(m/n), and V_1, V_3 are disjoint. In each of V_1, V_2 and V_3, we embed H(n, m). The edge sets are denoted E_1, E_2, E_3 respectively. Up to this point, the construction is similar to that of Ostrovskii [30], except that we use H(n, m) instead of a complete graph.
The new component in our construction is adding the following edges. For each vertex v ∈ V_1 ∩ V_2, add an edge between v and every vertex in (V_1 ∪ V_2) \ {v}. The set of these edges is denoted F_1. Similarly, for each vertex v ∈ V_3 ∩ V_2, add an edge between v and every vertex in (V_3 ∪ V_2) \ {v}. The set of these edges is denoted F_3. This new component is crucial: without it, we could only prove a lower bound of Ω(m/√n) = Ω(√(mn) · √m/n).
Proof of Theorem 5.1. Let G = (V, E) be the graph constructed as above. The whole graph has 3n − 2√(m/n) vertices. The number of edges is at least m (due to the edges in E_1 and E_3), and is at most 6m + 2√(m/n) · 2n = 6m + 4√(mn), which is at most 7m for all sufficiently large n.
Fig. 5. Our lower-bound construction for spanning tree congestion. V_1, V_2, V_3 are three vertex subsets of the same size. In each of the subsets, we embed an expander H(n, m). There is a small overlap between V_2 and V_1, V_3, while V_1, V_3 are disjoint. For any vertex v_1 ∈ V_1 ∩ V_2, we add edges between it and any other vertex in V_1 ∪ V_2; similarly, for any vertex v_3 ∈ V_3 ∩ V_2, we add edges (not shown in the figure) between it and any other vertex in V_3 ∪ V_2.

It is well known that for any tree on n vertices, there exists a vertex x, called a centroid of the tree, such that removing x decomposes the tree into connected components, each of size at most n/2. Now, consider any spanning tree of the
given graph, and let u be a centroid of the tree. Without loss of generality, we can assume that u ∉ V_1; otherwise we swap the roles of V_1 and V_3. The removal of u (and its adjacent edges) from the tree decomposes the tree into a number of connected components. Any of these components which intersects V_1 must contain at least one vertex of V_1 ∩ V_2; thus the number of such components is at most √(m/n), and hence there exists one of them, denoted by U_j, such that
b_1 := |U_j ∩ V_1| ≥ n/√(m/n) = n√(n/m).
Let e_j denote the tree-edge that connects u to U_j. Then there are three cases:
Case 1: n√(n/m) ≤ b_1 ≤ n − n√(n/m). Due to the property of H(n, m), the congestion of e_j is at least Θ(m/n) · min{b_1, n − b_1} ≥ Θ(√(mn)).
Case 2: b_1 > n − n√(n/m) and |U_j ∩ V_1 ∩ V_2| ≤ (1/2) · √(m/n). Let W := (V_1 ∩ V_2) \ U_j. Note that by this case's assumption, |W| ≥ (1/2) · √(m/n). Due to the edge subset F_1, the congestion of e_j is at least
|F_1(W, V_1 \ W)| ≥ (1/2) · √(m/n) · (n/2) = Θ(√(mn)).
Case 3: b_1 > n − n√(n/m) and |U_j ∩ V_1 ∩ V_2| > (1/2) · √(m/n). Let W′ := U_j ∩ V_1 ∩ V_2, and let Z := (V_2 \ V_1) ∩ U_j.
Note that b_1 > n − n√(n/m) ≥ 9n/10. Suppose |Z| ≥ 6n/10; then |U_j| > 9n/10 + 6n/10 > |V|/2, a contradiction to the assumption that u is a centroid. Thus, |Z| < 6n/10. Due to the edge subset F_2, the congestion of e_j is at least
|F_2(W′ ∪ Z, V_2 \ (W′ ∪ Z))| ≥ |W′| · (n − |W′| − |Z|) ≥ (1/2) · √(m/n) · (n − √(m/n) − 6n/10) = Θ(√(mn)).
6 Graphs with Expanding Properties
For any vertex subsets U, W ⊂ V, let N_W(U) denote the set of vertices in W which are adjacent to a vertex in U. Let N(U) := N_{V\U}(U).
Definition 6.1. A graph G = (V, E) on n vertices is an (n, s, d_1, d_2, d_3, t)-expanding graph if the following four conditions are satisfied:
(1) for each vertex subset S with |S| = s, |N(S)| ≥ d_1 n;
(2) for each vertex subset S with |S| ≤ s, |N(S)| ≥ d_2 |S|;
(3) for each vertex subset S with |S| ≤ n/2 and for any subset S′ ⊂ S, |N_{V\S}(S′)| ≥ |S′| − t;
(4) for each vertex subset S, |E(S, V \ S)| ≤ d_3 |S|.
Theorem 6.2. For any connected graph G which is an (n, s, d_1, d_2, d_3, t)-expanding graph, there is a polynomial time algorithm which computes a spanning tree with congestion at most
d_3 · [ 4 · max{ s + 1, ⌈3d_1 n / d_2⌉ } · (1/(2d_1))^{log_{(2−δ)} 2} + t ],  where δ = t/(d_1 n).
Next, we present the polynomial time algorithm in Theorem 6.2 and its
analysis.
Algorithm. Let G be an (n, s, d_1, d_2, d_3, t)-expanding graph. By Condition (2), every vertex has degree at least d_2. Let v_0 be a vertex of degree d ≥ d_2, and let v_1, ..., v_d be its d neighbours. We maintain a tree T rooted at v_0 such that T = T_1 ∪ T_2 ∪ ··· ∪ T_d ∪ {v_0 v_1, v_0 v_2, ..., v_0 v_d}, where T_1, T_2, ..., T_d are trees rooted at v_1, v_2, ..., v_d respectively. We call the T_i's branches. (See Figure 6.) We start with each branch T_i = {v_i}. In order to minimize congestion, we grow T in a balanced way, i.e., we maintain that the T_i's are roughly of the same size. A branch is saturated if it contains at least max{ s + 1, ⌈3d_1 n / d_2⌉ } vertices.
At any point of time, let V_T be the set of vertices in T and let V̄_T := V \ V_T be the set of vertices not in T. Often, we will move a subtree of a saturated branch T_i to an unsaturated branch T_j to ensure balance. For any x ∈ V_T, let T_x denote the subtree of T rooted at x. A vertex x of a saturated branch T_i is called transferable (to branch T_j) if x has a neighbour y in T_j and the tree T_j ∪ {xy} ∪ T_x is unsaturated. (See Figure 7.)
Fig. 6. The tree T and its branches.

Fig. 7. Transfer of a subtree from a saturated branch to an unsaturated branch.
The algorithm is divided into two phases, which are described below. Throughout the algorithm, whenever a branch T_i gets modified, T gets modified accordingly, and whenever T gets modified, V_T and V̄_T get modified accordingly.
Phase 1: Repeatedly do one of the following two actions, until |V_T| ≥ d_1 n. (We will prove that the precondition of at least one of the actions is satisfied if |V_T| < d_1 n.)
1. If there exists a b ∈ V̄_T such that b has a neighbour a in some unsaturated branch T_i: add the vertex b and the edge ab to branch T_i.
2. If there exists at least one transferable vertex (see Figure 7): find the transferable vertex x such that T_x is the smallest. Let T_{i*} be the branch currently containing x, T_j be a branch to which it is transferable, and y be an arbitrarily chosen neighbour of x in T_j.
(a) Remove the subtree T_x from T_{i*} and add it to T_j with x as a child of y.
(b) Pick a b ∈ V̄_T that has a neighbour a (arbitrarily chosen, if many) either in T_{i*} or in T_j. (We will show in the analysis that such a b exists.) We add vertex b and edge ab to the branch containing a (i.e., to T_{i*} or T_j).
Phase 2: While V̄_T ≠ ∅, repeat: find a maximum matching of G[V_T, V̄_T], the bipartite graph formed by the edges of G between V_T and V̄_T. Let M be the matching. Add all edges of M to T.
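Phase 2 is a sequence of maximum bipartite matchings; the following Python/networkx sketch (ours, not the paper's code) shows one way to implement it, with the tree vertices maintained as a set and the matching computed by Hopcroft-Karp.

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def phase_two(G, tree_edges, V_T):
    """tree_edges (list) and V_T (set) describe the tree built in Phase 1; both are updated."""
    outside = set(G) - V_T
    while outside:
        B = nx.Graph()
        B.add_nodes_from(V_T, bipartite=0)
        B.add_nodes_from(outside, bipartite=1)
        B.add_edges_from((u, v) for u, v in G.edges()
                         if (u in V_T) != (v in V_T))
        matching = hopcroft_karp_matching(B, top_nodes=V_T)
        for u in list(V_T):
            v = matching.get(u)
            if v is not None and v in outside:
                tree_edges.append((u, v))   # attach the matched outside vertex v below u
                V_T.add(v)
                outside.discard(v)
    return tree_edges, V_T
```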
In the analysis below, we say that a tree is saturated if it contains at least A
vertices; we will determine its appropriate value by the end of the analysis.
Analysis of Phase 1. We claim that during Phase 1, i.e., if |V_T| < d_1 n, the precondition of either step 1 or step 2 is satisfied. We also show the existence of a vertex b as specified in step 2b, whenever step 2b is reached. Given these and the fact that a vertex of V̄_T is moved to V_T (either in step 1 or in step 2b) during each round of Phase 1, we have that Phase 1 runs correctly and terminates after a linear number of rounds.
During Phase 1, we will also maintain the invariant that each branch has at
most A vertices; thus, each saturated branch has exactly A vertices. We call this
invariant the balancedness. Note that balancedness is not violated due to step 1,
as the new vertex is added to an unsaturated branch. It is not violated during
step 2 as the branches Ti∗ and Tj (as defined in step 2) become unsaturated at
the end of the step.
We define the hidden vertices of T (denoted by H ≡ H_T) as follows: they are the vertices which are not adjacent to any vertex outside the tree, i.e., to any vertex in V̄_T. If there is an unsaturated branch with a non-hidden vertex, clearly the precondition of step 1 is satisfied. So, let us assume that all the vertices in all unsaturated branches are hidden. In such a case, we show that the precondition of step 2 is satisfied if |V_T| < d_1 n.
We argue that in this case |H| ≤ s: otherwise, take a subset H′ ⊂ H of cardinality s; then by condition (1), N(H′), which is contained in V_T, has cardinality at least d_1 n, a contradiction.
Since |VT | < d1 n, the number of saturated branches is at most d1 n/A. To
ensure that at least one unsaturated branch exists, we set A such that d1 n/A < d2 .
Let U denote the set of vertices in all unsaturated branches. Since all vertices
in U are hidden vertices, |U | ≤ s. Then by condition (2), |N (U )| ≥ d2 |U |. Note
that the vertices in N (U ) are all in the saturated branches. By the pigeon-hole
principle, there exists a saturated branch containing at least
N (U )/(d1 n/A) ≥
Ad2 |U |
d1 n
vertices of N (U ). By setting A ≥ 3dd12n , the above calculation guarantees the
existence of a saturated branch containing at least 3|U | ≥ |U | + 2 vertices of
N (U ); let Ti be such a branch.
In Ti , pick a vertex x ∈ Ti ∩ N (U ) such that Tx does not contain any vertex in N (U ), except x. Then the size of Tx is at most A − |N (U ) ∩ Ti | + 1 ≤ A − (|U | + 1).
Let y ∈ U be a vertex which is adjacent to x and Tj be the branch containing
y. Since Tj has at most |U | vertices, x is a transferable vertex (to Tj ). Thus
precondition of step 2 is satisfied.
We further set A > s so that in each saturated branch, there is at least one
unhidden vertex. In particular, Ti has an unhidden vertex, which is adjacent to
some b ∈ V \ VT . The vertex b is either adjacent to a vertex in Tx , or to a vertex in
Ti \ Tx as required in step 2b.
Analysis of Phase 2. Since G is connected, M is non-empty in each iteration
of Phase 2, and hence Phase 2 terminates in linear number of rounds. At the end
of Phase 2, since VT is empty, T is clearly a spanning tree. It only remains to
estimate the congestion of this spanning tree. Towards this, we state the following
modified Hall’s theorem, which is an easy corollary of the standard Hall’s theorem.
Lemma 6.3. In a bipartite graph (L, R) with |L| ≤ |R|, for any vertex w ∈ L, let R(w) denote the neighbours of w in R, and for any W ⊆ L, let R(W ) := ∪w∈W R(w). Suppose that there exists t ≥ 0 such that for any W ⊆ L, we have |R(W )| ≥ |W | − t. Then the bipartite graph admits a matching of size at least |L| − t.
Recall that Phase 2 consists of multiple rounds of finding a maximum matching between VT and V \ VT . As long as |VT | ≤ n/2, condition (3) (with S = VT ) plus the modified Hall's theorem (with L = VT and R = V \ VT ) guarantees that in each round, at least

|VT | − t ≥ (1 − t/(d1 n)) · |VT | =: (1 − δ)|VT |

vertices of V \ VT are matched. Thus, after at most ⌈log_{2−δ}(1/(2d1))⌉ rounds of matching, |VT | ≥ n/2. After reaching |VT | ≥ n/2, condition (3) (with S = V \ VT ) plus the modified Hall's theorem (with L = V \ VT and R = VT ) guarantees that after one more round of matching, at most t vertices are left in V \ VT .
By the end of Phase 1, each branch had at most A vertices. After each round of matching, the cardinality of each branch at most doubles. Thus, the maximum possible number of vertices in each branch after running the whole algorithm is at most

A · 2^{⌈log_{2−δ}(1/(2d1))⌉ + 1} + t ≤ 4A · (1/(2d1))^{log_{2−δ} 2} + t,

and hence the STC is at most

d3 · [ 4A · (1/(2d1))^{log_{2−δ} 2} + t ].
Recall that we need A to satisfy d1 n/A < d2 , A ≥ 3d1 n/d2 and A > s. Thus we set A := max{ s + 1 , ⌈3d1 n/d2 ⌉ }.
6.1 Random Graph
Let G ∈ G(n, p) where p ≥ c0 log n/n, and c0 = 64. The following lemmas
show that with high probability G is an (n, s, d1 , d2 , d3 , t)-expanding graph with
s = Θ(1/p), d1 = Θ(1), d2 = Θ(np), d3 = Θ(np), t = Θ(1/p) (and hence
δ = o(1)). The proofs of the lemmas are deferred to Appendix B.2.
Lemma 6.4. For any S ⊆ V (G) such that |S| = ⌈1/p⌉, we have |N (S)| ≥ c2 n with probability at least 1 − e^{−n/16} , where c2 = 1/25.
Lemma 6.5. For any S ⊆ V (G) such that |S| ≤ 1/p, we have |N (S)| ≥ c3 np|S|
with probability at least 1 − O(1/n2 ), where c3 = 1/16.
Lemma 6.6. For all A ⊆ V (G) such that |A| ≤ n/2, and for all S ⊆ A, with
probability at least 1 − e−n , S has at least |S| − c4 /p neighbors in V (G) \ A,
where c4 = 12.
Lemma 6.7. For all S ⊆ V (G), the cut size |E(S, V (G) \ S)| is at most np|S|
with probability at least 1 − n−c0 /4 .
Plugging the bounds from above lemmas into Theorem 6.2, together with a
separate lower bound argument (Theorem B.2 in Appendix B.1), we have the
following theorem; in Appendix B.1, we also present a non-algorithmic proof of
this theorem.
Theorem 6.8. If G ∈ G(n, p) where p ≥ 64 log n/n, then with probability at
least 1 − O(1/n), its STC is Θ(n).
7 Discussion and Open Problems
In this paper, we provide a thorough understanding, both combinatorial and algorithmic, of the spanning tree congestion of general graphs and random graphs. In the course of doing so, we also provide the first constructive proof of the generalized Győri-Lovász theorem, which might be of independent interest.
Following are some natural open problems:
Following are some natural open problems:
– Finding the spanning tree with minimum congestion is NP-hard; indeed, Bodlaender et al. [9] showed a (9/8 − ε)-approximation NP-hardness for the STC problem. Does a constant or poly-logarithmic factor approximation polynomial time algorithm exist?
– We present an algorithm for computing a spanning tree achieving congestion at most O(√(mn)). The algorithm runs in sub-exponential time when m = ω(n log² n). Is there a polynomial time algorithm for constructing such a spanning tree?
– For a k-connected graph, a connected k-partition where all parts are of size
at most O((n/k) log k) can be found in polynomial time due to an algorithm
of Chen et al. [14]. Can we improve the sizes of parts to O(n/k)?
– Is finding Győri-Lovász partition PLS-complete? If not, is it polynomial time
solvable?
References
1. Ittai Abraham, Yair Bartal, and Ofer Neiman. Nearly tight low stretch spanning
trees. In FOCS 2008, pages 781–790, 2008.
2. Ittai Abraham and Ofer Neiman. Using petal-decompositions to build a low stretch
spanning tree. In STOC 2012, pages 395–406, 2012.
3. Noga Alon, Richard M. Karp, David Peleg, and Douglas B. West. A graph-theoretic
game and its application to the k-server problem. SIAM J. Comput., 24(1):78–100,
1995.
4. Ingo Althöfer, Gautam Das, David P. Dobkin, Deborah Joseph, and José Soares. On
sparse spanners of weighted graphs. Discrete & Computational Geometry, 9:81–100,
1993.
5. Reid Andersen and Uriel Feige. Interchanging distance and capacity in probabilistic
mappings. CoRR, abs/0907.3631, 2009.
6. Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric
embeddings and graph partitioning. J. ACM, 56(2):5:1–5:37, 2009.
7. Nikhil Bansal, Uriel Feige, Robert Krauthgamer, Konstantin Makarychev,
Viswanath Nagarajan, Joseph Naor, and Roy Schwartz. Min-max graph partitioning
and small set expansion. SIAM J. Comput., 43(2):872–904, 2014.
8. Sandeep N. Bhatt, Fan R. K. Chung, Frank Thomson Leighton, and Arnold L.
Rosenberg. Optimal simulations of tree machines (preliminary version). In FOCS
1986, pages 274–282, 1986.
9. Hans L. Bodlaender, Fedor V. Fomin, Petr A. Golovach, Yota Otachi, and Erik Jan
van Leeuwen. Parameterized complexity of the spanning tree congestion problem.
Algorithmica, 64(1):85–111, 2012.
10. Hans L. Bodlaender, Kyohei Kozawa, Takayoshi Matsushima, and Yota Otachi.
Spanning tree congestion of k-outerplanar graphs.
Discrete Mathematics,
311(12):1040–1045, 2011.
11. Béla Bollobás. Random Graphs. Cambridge University Press, 2001.
12. Béla Bollobás and Andrew Thomason. Random graphs of small order. North-Holland Mathematics Studies, 118:47–97, 1985.
13. Leizhen Cai and Derek G. Corneil. Tree spanners. SIAM J. Discrete Math.,
8(3):359–387, 1995.
14. Jiangzhuo Chen, Robert D. Kleinberg, László Lovász, Rajmohan Rajaraman, Ravi
Sundaram, and Adrian Vetta. (almost) tight bounds and existence theorems for
single-commodity confluent flows. J. ACM, 54(4):16, 2007.
15. Michael Elkin, Yuval Emek, Daniel A. Spielman, and Shang-Hua Teng. Lower-stretch spanning trees. SIAM J. Comput., 38(2):608–628, 2008.
16. Uriel Feige and Robert Krauthgamer. A polylogarithmic approximation of the
minimum bisection. SIAM J. Comput., 31(4):1090–1118, 2002.
17. E. Győri. On division of graphs to connected subgraphs. Colloq. Math. Soc. Janos
Bolyai, 18:485–494, 1976.
18. Ludovic Hofer and Thibaud Lambert. Study of the article: An O(k2 n2 ) algorithm
to find a k-partition in a k-connected graph. 2014.
19. Alexander Hoyer and Robin Thomas. The Győri-Lovász theorem. arXiv,
abs/1605.01474, 2016. URL: http://arxiv.org/abs/1706.08115.
20. David S. Johnson, Christos H. Papadimitriou, and Mihalis Yannakakis. How easy
is local search? J. Comput. Syst. Sci., 37(1):79–100, 1988.
21. Marcos A. Kiwi, Daniel A. Spielman, and Shang-Hua Teng. Min-max-boundary
domain decomposition. Theor. Comput. Sci., 261(2):253–266, 2001.
22. Ioannis Koutis, Gary L. Miller, and Richard Peng. A nearly O(m log n) time solver
for SDD linear systems. In FOCS 2011, pages 590–598, 2011.
23. Kyohei Kozawa and Yota Otachi. Spanning tree congestion of rook’s graphs.
Discussiones Mathematicae Graph Theory, 31(4):753–761, 2011.
24. Kyohei Kozawa, Yota Otachi, and Koichi Yamazaki. On spanning tree congestion
of graphs. Discrete Mathematics, 309(13):4215–4224, 2009.
25. Hiu Fai Law, Siu Lam Leung, and Mikhail I. Ostrovskii. Spanning tree congestions
of planar graphs. Involve, 7(2):205–226, 2014.
26. László Lovász. A homology theory for spanning trees of a graph. Acta Math. Acad.
Sci. Hungaricae, 30(3–4):241–251, 1977.
27. Christian Löwenstein, Dieter Rautenbach, and Friedrich Regen. On spanning tree
congestion. Discrete Math., 309(13):4653–4655, 2009.
28. Shin-Ichi Nakano, Md. Saidur Rahman, and Takao Nishizeki. A linear-time algorithm for four-partitioning four-connected planar graphs. Inf. Process. Lett.,
62(6):315–322, 1997.
29. Yoshio Okamoto, Yota Otachi, Ryuhei Uehara, and Takeaki Uno. Hardness results
and an exact exponential algorithm for the spanning tree congestion problem. J.
Graph Algorithms Appl., 15(6):727–751, 2011.
30. M. I. Ostrovskii. Minimal congestion trees. Discrete Math., 285:219–226, 2004.
31. M. I. Ostrovskii. Minimum congestion spanning trees in planar graphs. Discrete
Math., 310:1204–1209, 2010.
32. M. I. Ostrovskii. Minimum congestion spanning trees in bipartite and random
graphs. Acta Mathematica Scientia, 31(2):634 – 640, 2011.
33. Harald Räcke. Optimal hierarchical decompositions for congestion minimization in
networks. In STOC, pages 255–264, 2008.
34. André Raspaud, Ondrej Sýkora, and Imrich Vrto. Congestion and dilation, similarities and differences: A survey. In SIROCCO 2000, pages 269–280, 2000.
35. Satu Elisa Schaeffer. Graph clustering. Computer Science Review, 1(1):27–64, 2007.
36. Shai Simonson. A variation on the min cut linear arrangement problem. Mathematical Systems Theory, 20(4):235–252, 1987.
37. Daniel A. Spielman and Shang-Hua Teng. A local clustering algorithm for massive
graphs and its application to nearly linear time graph partitioning. SIAM J.
Comput., 42(1):1–26, 2013.
38. David Steurer. Tight bounds for the min-max boundary decomposition cost of
weighted graphs. In SPAA 2006, pages 197–206, 2006.
39. Hitoshi Suzuki, Naomi Takahashi, and Takao Nishizeki. A linear algorithm for
bipartition of biconnected graphs. Inf. Process. Lett., 33(5):227–231, 1990.
40. Zoya Svitkina and Éva Tardos. Min-max multiway cut. In APPROX-RANDOM
2004, pages 207–218, 2004.
41. Shang-Hua Teng. Scalable algorithms for data and network analysis. Foundations
and Trends in Theoretical Computer Science, 12(1-2):1–274, 2016.
42. Koichi Wada and Kimio Kawaguchi. Efficient algorithms for tripartitioning triconnected graphs and 3-edge-connected graphs. In Graph-Theoretic Concepts in
Computer Science, 19th International Workshop, WG ’93, Utrecht, The Netherlands,
June 16-18, 1993, Proceedings, pages 132–143, 1993.
A Missing Proofs in Sections 4 and 5
A.1 Proof of Corollary 4.5
First of all, we set the demand of each vertex in the flow problem to be the degree of the vertex in G, and t1 , . . . , tℓ as the sinks in the flow problem.
By Proposition 1.4, there exists an ℓ-connected-partition U1 ∪ · · · ∪ Uℓ such that for all j ∈ [ℓ], tj ∈ Uj , and the total degree of the vertices in each Uj is at most 4m/ℓ. With this, by routing the demand of a vertex in Uj to tj via an arbitrary path in G[Uj ] only, we construct a splittable flow with node congestion at most 4m/ℓ. By Theorem 4.4, one can construct a confluent flow with node congestion at most 4(1 + ln ℓ)m/ℓ in polynomial time.
Obviously, in the confluent flow, all the flow originating from one vertex goes completely into one sink. Set Vj to be the set of vertices such that the flows originating from these vertices go into tj . It is then routine to check that V1 ∪ · · · ∪ Vℓ is our desired ℓ-connected-partition.
A.2 Proof of Theorem 4.2
Instead of giving the full proof, we point out the differences from the proof of
Theorem 4.3.
First, in handling the basis case (ii), by Theorem 1.3(b), Proposition 1.4 and Lemma 4.1, we have an improved upper bound on the congestion of the returned tree, which is 8mH /√(m̂/nH ) ≤ 8√(m̂ nH ). Thus, (1) can be improved to

8√(m̂(nH − x)) + √(m̂/nH ) · x.

Again, by viewing x as a real variable and taking the derivative, it is easy to see that the above expression is maximized at x = 1. So the above bound is at most

8√(m̂(nH − 1)) + √(m̂/nH ) ≤ 8√(m̂ nH ), as desired.
Concerning the running time, it is clear that in the worst case, it is dominated
by some calls to the algorithm in Theorem 1.3(b). Note that the number of such
calls is at most n̂, since each call to the algorithm is on a disjoint set of vertices.
There remains one concern, which is the connectedness of H[Y ∪ Z]. Suppose, to the contrary, that H[Y ∪ Z] is not connected. Let C be one of its connected components, chosen so that it contains the least number of vertices from Y . Then C contains at most ⌊|Y |/2⌋ vertices from Y , i.e., |C ∩ Y | < |Y |. Note that C ∩ Y is
a vertex cut set of the graph H, thus contradicting that Y is a global minimum
vertex cut set.
A.3 Proof of Lemma 5.2
It is well known that the requirements (i) and (ii) are satisfied with probability 1 − o(1/n) [11]. For each subset S with |S| ≤ n/2, by the Chernoff bound,

P[ |E(S, V \ S)| ≤ (p/2) · |S| · (n − |S|) ] ≤ e^{−p|S|(n−|S|)/8} ≤ e^{−pn|S|/16} .
Since p ≥ 32 · log n/n, the above probability is at most n^{−2|S|} . Then by a union bound, the probability that (iii) is not satisfied is at most

Σ_{s=1}^{⌊n/2⌋} (n choose s) · n^{−2s} ≤ Σ_{s=1}^{⌊n/2⌋} n^{s} · n^{−2s} ≤ Σ_{s=1}^{⌊n/2⌋} n^{−s} ≤ 2/n.

B Spanning Tree Congestion of Random Graphs
B.1 Non-Algorithmic Proof of Theorem 6.8
We first present a simple non-algorithmic proof that random graph has STC Θ(n)
with high probability. Theorem B.1 gives the upper bound and Theorem B.2
gives the lower bound. The proof of Theorem B.1 uses Lemma 1.5 and the fact
that for random graphs, vertex-connectivity and minimum degree are equal with
high probability. Theorem B.1 does not give an efficient algorithm.
Theorem B.1. If G ∈ G(n, p) where p ≥ 8 log n/n, then the spanning tree
congestion of G is at most 16n with probability at least 1 − o(1/n).
Proof. It is known that the threshold probability for a random graph being
k-connected is the same as the threshold probability for it having minimum degree
at least k [12]. Since p ≥ 8 log n/n, using Chernoff bound and taking union
bound over all vertices gives that G has minimum degree at least np/2 with
probability at least 1 − o(1/n). Hence G is (np/2)-connected with probability at
least 1 − o(1/n). We also have that the number of edges in G is at most 2n2 p
with probability at least 1 − o(1/n). Now, by using Lemma 1.5, we have that with
probability at least 1 − o(1/n), the spanning tree congestion is at most 16n.
Theorem B.2. If G ∈ G(n, p) where p ≥ 32 log n/n, then the spanning tree
congestion of G is Ω(n) with probability 1 − O(1/n).
Proof. By using Chernoff Bounds and applying union bound, it is easy to show
that with probability 1 − o(1/n), every vertex of G has degree at most c1 np for a
sufficiently large constant c1 . Also, by Lemma 5.2, with probability 1 − O(1/n),
properties (i) and (iii) of that lemma hold. In the proof below, we condition on the above mentioned highly probable events.
Take a spanning tree T of G which gives the minimum congestion. Let u be
a centroid of the tree T , i.e., each connected component of T \ {u} has at most
n/2 vertices. If there is a connected component with number of vertices at least
n/4, then define this connected component as T′ . Else, all connected components have at most n/4 vertices. In this case, let T′ be the forest formed by the union of a minimum number of connected components of T \ {u} such that |T′ | ≥ n/4. It is easy to see that |T′ | ≤ n/2. Also, the number of edges in T from V (T′ ) to V (T ) \ V (T′ ) is at most degG (u), which is at most c1 np.
By property (iii) of Lemma 5.2, the number of edges between V (T′ ) and V (G) \ V (T′ ) is Ω(n² p). Each of these edges of G between V (T′ ) and V (G) \ V (T′ ) has to contribute to the congestion of at least one of the edges in T between V (T′ ) and V (G) \ V (T′ ). Now since T′ sends at most c1 np tree edges to other parts of T , it follows that there exists one edge in T with congestion at least Ω(n² p)/(c1 np) = Ω(n), as claimed.
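The centroid used in this argument always exists and can be found in linear time from subtree sizes; a minimal sketch (assuming the tree is given as an adjacency dictionary) is the following.

    def tree_centroid(tree_adj):
        # Returns a vertex u such that every connected component of T \ {u}
        # has at most n/2 vertices, by minimizing the largest component size.
        n = len(tree_adj)
        root = next(iter(tree_adj))
        parent, order, stack = {root: None}, [], [root]
        while stack:                          # iterative DFS to fix an order
            v = stack.pop()
            order.append(v)
            for w in tree_adj[v]:
                if w != parent[v]:
                    parent[w] = v
                    stack.append(w)
        size = {v: 1 for v in tree_adj}
        for v in reversed(order):             # children are summed before parents
            if parent[v] is not None:
                size[parent[v]] += size[v]
        best, best_val = None, n + 1
        for v in tree_adj:
            comps = [size[w] for w in tree_adj[v] if parent.get(w) == v] + [n - size[v]]
            if max(comps) < best_val:
                best, best_val = v, max(comps)
        return best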
B.2 Random Graph Satisfies Expanding Properties
Constants. For easy reference, we list out the constants used.
c0 = 64, c2 = 1/25, c3 = 1/16, c4 = 12
Proof of Lemma 6.4. Let S̄ = V (G) \ S. The probability that a fixed vertex in S̄ does not have an edge to S is at most (1 − p)^{|S|} ≤ (1 − p)^{1/p} ≤ e^{−1} . Since |S̄| ≥ n − 2/p ≥ n − 2n/(c0 log n) ≥ 31n/32, the expected value of |N (S)| is at least (31/32) n (1 − e^{−1} ) ≥ n/2. Hence, using the Chernoff bound, the probability that |N (S)| < c2 n = n/25 is at most e^{−n/8} . Since the number of such S is at most n^{2/p} ≤ 2^{2n/c0} = 2^{n/32} , we have the lemma by applying the union bound.
Proof of Lemma 6.5. Let S̄ = V (G) \ S. Since |S| ≤ 1/p ≤ n/ log n, we have |S̄| ≥ n/2 for sufficiently large n. Divide S̄ into groups of size ⌈1/(p|S|)⌉. The probability that such a group does not have an edge to S is at most (1 − p)^{|S| · (1/(p|S|))} ≤ 1/e. The expected number of groups having an edge to S is at least (np|S|/2)(1 − 1/e) ≥ np|S|/4. Thus, by the Chernoff bound, the probability that |N (S)| ≤ np|S|/16 is at most e^{−np|S|/16} ≤ 2^{−c0 |S| log n/16} ≤ 2^{−4|S| log n} . The number of sets of size |S| is at most 2^{|S| log n} . Hence, taking the union bound over all S with |S| ≤ 1/p, we get the required lemma.
Proof of Lemma 6.6. First, we prove that for all C, D ⊆ V (G) such that |C| ≥ n/4, |D| ≥ c4 /p, and C ∩ D = ∅, there exists at least one edge between C and D with high probability. The probability that there is no edge between such a fixed C and D is at most (1 − p)^{(n/4)(c4 /p)} ≤ e^{−c4 n/4} . The number of pairs of such C and D is at most 2^{2n} . Hence, by taking the union bound, the probability that the claim holds for all C and D is at least 1 − e^{2n − (c4 n/4)} ≥ 1 − e^{−n} .
Using the above claim, we prove that for all S ⊆ A, S has at least |S| − c4 /p neighbors in Ā := V (G) \ A with high probability. Suppose there is an S which violates the claim. Note that we can assume |S| ≥ c4 /p, because otherwise the claim is vacuously true. Let B := Ā \ N (S). There cannot be any edges between S and B. Also, |B| ≥ (n/2) − (|S| − (c4 /p)). So, |B| is at least c4 /p, and when |B| < n/4, |S| is at least n/4. Hence, using the previous claim, there is an edge between S and B with probability at least 1 − e^{−n} . This is a contradiction, and hence our claim is true with probability at least 1 − e^{−n} .
30
L. Sunil Chandran, Yun Kuen Cheung, and Davis Issac
Proof of Lemma 6.7. Let C(S) denote |E(S, V (G) \ S)|. For a fixed vertex subset S, the expected value of C(S) is at most np|S|. Therefore, the probability that C(S) > np|S| ≥ c0 |S| log n is at most n^{−c0 |S|/2} , using Chernoff bounds. The probability that C(S) ≤ np|S| for all sets S of size k is at least 1 − n^{−c0 k/2 + k} ≥ 1 − n^{−c0 /2 + 1} , using a union bound and k ≥ 1. The probability that C(S) ≤ np|S| for all vertex subsets S is at least 1 − n^{−c0 /2 + 2} , using a union bound over all k ∈ [n].
arXiv:1711.08848v2 [cs.CV] 29 Nov 2017
Real-Time Seamless Single Shot 6D Object Pose Prediction
Bugra Tekin
EPFL
Sudipta N. Sinha
Microsoft Research
Pascal Fua
EPFL
[email protected]
[email protected]
[email protected]
Abstract
We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster – 50 fps on a Titan X (Pascal) GPU – and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm.
For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.
1. Introduction
Real-time object detection and 6D pose estimation is crucial for augmented reality, virtual reality, and robotics. Currently, methods relying on depth data acquired by RGB-D cameras are quite robust [1, 3, 4, 11, 13]. However, active depth sensors are power hungry, which makes 6D object detection methods for passive RGB images more attractive for mobile and wearable cameras. There are many fast keypoint and edge-based methods [21, 31, 35] that are effective for textured objects. However, they have difficulty handling weakly textured or untextured objects and processing low-resolution video streams, which are quite common when dealing with cameras on wearable devices.
Deep learning techniques have recently been used to address these limitations [10, 25]. BB8 [25] is a 6D object detection pipeline made of one CNN to coarsely segment the object and another to predict the 2D locations of the projections of the object's 3D bounding box given the segmentation, which are then used to compute the 6D pose using a PnP algorithm [16]. The method is effective but slow due to its multi-stage nature. SSD-6D [10] is a different pipeline that relies on the SSD architecture [19] to predict 2D bounding boxes and a very rough estimate of the object's orientation in a single step. This is followed by an approximation to predict the object's depth from the size of its 2D bounding box in the image, to lift the 2D detections to 6D. Both BB8 and SSD-6D require a further pose refinement step for improved accuracy, which increases their running times linearly with the number of objects being detected.
In this paper, we propose a single-shot deep CNN architecture that takes the image as input and directly detects the 2D projections of the 3D bounding box vertices. It is end-to-end trainable and accurate even without any a posteriori refinement. And since we do not need this refinement step, we also do not need a precise and detailed textured 3D object model that is needed by other methods [10, 25]. We only need the 3D bounding box of the object shape for training. This can be derived from other easier to acquire and approximate 3D shape representations.
We demonstrate state-of-the-art accuracy on the LINEMOD dataset [8], which has become a de facto standard benchmark for 6D pose estimation. However, we are much faster than the competing techniques, by a factor of more than five when dealing with a single object. Furthermore, we pay virtually no time-penalty when handling several objects and our running time remains constant, whereas that of other methods grows proportionally to the number of objects, which we demonstrate on the OCCLUSION dataset [1].
Therefore, our contribution is an architecture that yields a fast and accurate one-shot 6D pose prediction without requiring any post-processing. It extends single shot CNN architectures for 2D detection in a seamless and natural way to the 6D detection task. Our implementation is based on YOLO [28] but the approach is amenable to other single-shot detectors such as SSD [19] and its variants.
2. Related Work
We now review existing work on 6D pose estimation
ranging from classical feature and template matching methods to newer end-to-end trainable CNN-based methods.
Classical methods. Traditional RGB object instance
recognition and pose estimation works used local keypoints
and feature matching. Local descriptors needed by such
methods were designed for invariance to changes in scale,
rotation, illumination and viewpoints [21, 31, 35]. Such
methods are often fast and robust to occlusion and scene
clutter. However, they only reliably handle textured objects in high resolution images [15]. Other related methods
include 3D model-based registration [17, 20], Hausdorff
matching [9], oriented Chamfer matching for edges [18]
and 3D chamfer matching for aligning 3D curve-based
models to images [26].
RGB-D methods. The advent of commodity depth cameras has spawned many RGB-D object pose estimation
methods [1, 3, 4, 11, 14, 23, 32, 38]. For example, Hinterstoisser et al. proposed template matching algorithms suitable for both color and depth images [7, 8]. Rios et al. [30]
extended their work using discriminative learning and cascaded detections for higher accuracy and efficiency respectively. RGB-D methods were used on indoor robots for 3D
object recognition, pose estimation, grasping and manipulation [3, 4, 5, 13, 14, 39]. Brachmann et al. [1] proposed
using regression forests to predict dense object coordinates,
to segment the object and recover its pose from dense correspondences. They also extended their method to handle
uncertainty during inference and deal with RGB images [2].
Zach et al. [37] explored fast dynamic programming based
algorithms for RGB-D images.
CNN-based methods. In recent years, research in most
pose estimation tasks has been dominated by CNNs. Techniques such as Viewpoints and Keypoints [34] and Render
for CNN [33] cast object categorization and 3D pose estimation into classification tasks, specifically by discretizing
the pose space. In contrast, PoseNet [12] proposes using a
CNN to directly regress from a RGB image to a 6D pose,
albeit for camera pose estimation, a slightly different task.
Since PoseNet outputs a translational and a rotational component, the two associated loss terms have to be balanced
carefully by tuning a hyper-parameter during training.
To avoid this problem, the newer PoseCNN architecture [36] is trained to predict 6D object pose from a single
RGB image in multiple stages, by decoupling the translation and rotation predictors. A geodesic loss function more
suitable for optimizing over 3D rotations have been suggested in [22]. Another way to address this issue has recently emerged. In [10, 25], the CNNs do not directly predict object pose. Instead, they output 2D coordinates, 2D
masks, or discrete orientation predictions from which the
6D pose can be inferred. Because all the predictions are
in the 2D image, the problem of weighting different loss
terms goes away. Also training becomes numerically more
stable, resulting in better performance on the L INE M OD
dataset [8]. We also adopt this philosophy in our work.
In parallel to these developments, on the 2D object detection task, there has been a progressive trend towards single
shot CNN frameworks as an alternative to two-staged methods such as Faster-RCNN [29] that first find a few candidate
locations in the image and then classifies them as objects
or background. Recently, single shot architectures such as
YOLO [27, 28] and SSD [19] have been shown to be fast
and accurate. SSD has been extended to predict the object’s
identity, its 2D bounding box in the image and a discrete estimate of the object’s orientation [10, 24]. In this paper, we
go beyond such methods by extending a YOLO-like architecture [28] to directly predict a few 2D coordinates from
which the full 6D object pose can be accurately recovered.
3. Approach
With our goal of designing an end-to-end trainable network that predicts the 6D pose in real-time, we were inspired by the impressive performance of single shot 2D object detectors such as YOLO [27, 28]. This led us to design the CNN architecture [27, 28] shown in Fig. 1. We designed our network to predict the 2D projections of the corners of the 3D bounding box around our objects. The main insight was that YOLO was originally designed to regress 2D bounding boxes, and that to predict the projections of the 3D bounding box corners in the image, only a few more 2D points had to be predicted for each object instance in the image.
Then given these 2D coordinates and the 3D ground control
points for the bounding box corners, the 6D pose can be calculated algebraically with an efficient PnP algorithm [16].
BB8 [25] takes a similar approach. However, they first find
a 2D segmentation mask around the object and present a
cropped image to a second network that predicts the eight
2D corners in the image. We now describe our network architecture and explain various aspects of our approach in
details.
3.1. Model
We formulate the 6D pose estimation problem in terms
of predicting the 2D image coordinates of virtual 3D control points associated with the 3D models of our objects of
interest. Given the 2D coordinate predictions, we calculate
the object’s 6D pose using a PnP algorithm. We parameterize the 3D model of each object with 9 control points.
For these control points, we select the 8 corners of the tight
3D bounding box fitted to the 3D model, similar to [25]. In
addition, we use the centroid of the object’s 3D model as
the 9th point. This parameterization is general and can be
Figure 1. Overview: (a) The proposed CNN architecture. (b) An example input image with four objects. (c) The S × S grid showing cells
responsible for detecting the four objects. (d) Each cell predicts 2D locations of the corners of the projected 3D bounding boxes in the
image. (e) The 3D output tensor from our network, which represents for each cell a vector consisting of the 2D corner locations, the class
probabilities and a confidence value associated with the prediction.
used for any rigid 3D object with arbitrary shape and topology. In addition, these 9 control points are guaranteed to be
well spread out in the 2D image and could be semantically
meaningful for many man-made objects.
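As a concrete illustration (a sketch of ours, not the authors' code, taking the vertex mean as the centroid and an axis-aligned box in the object's coordinate frame), the 9 control points can be computed from the model vertices as follows.

    import numpy as np

    def control_points(vertices):
        # vertices: (N, 3) array of 3D model points in the object coordinate frame.
        # Returns the 8 corners of the axis-aligned tight 3D bounding box plus
        # the vertex centroid, as a (9, 3) array.
        mn, mx = vertices.min(axis=0), vertices.max(axis=0)
        corners = np.array([[x, y, z] for x in (mn[0], mx[0])
                                      for y in (mn[1], mx[1])
                                      for z in (mn[2], mx[2])])
        return np.vstack([corners, vertices.mean(axis=0)])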
Our model takes as input a single full color image, processes it with a fully-convolutional architecture shown in
Figure 1(a) and divides the image into a 2D regular grid
containing S × S cells as shown in Figure 1(c). In our
model, each grid location in the 3D output tensor will be associated with a multidimensional vector, consisting of predicted 2D image locations of the 9 control points, the class
probabilities of the object and an overall confidence value.
At test time, predictions at cells with low confidence values, ie. where the objects of interest are not present, will be
pruned.
The output target values for our network are stored in a
3D tensor of size S × S × D visualized in Fig. 1(e). The
target values for an object at a specific spatial cell location i ∈ S × S is placed in the i-th cell in the 3D tensor
in the form of a D dimensional vector vi . When N objects are present in different cells, we have N such vectors,
v1 , v2 , . . . , vn in the 3D tensor. We train our network to
predict these target values. The 9 control points in our case
are the 3D object model’s center and bounding box corners
but could be defined in other ways as well. To train our net-
work, we only need to know the 3D bounding box of the
object, not a detailed mesh or an associated texture map.
As in YOLO, it is crucial that a trained network is able
to predict not only the precise 2D locations but also high
confidence values in regions where the object is present and
low confidence where it isn’t present. In case of 2D object
detection, YOLO uses for its confidence values, an intersection over union (IoU) score associated with the predicted
(and true 2D rectangles) in the image. In our case, the objects are in 3D and to compute an equivalent IoU score with
two arbitrary cuboids, we would need to calculate a 3D convex hull corresponding to their intersections. This would
be tedious and would slow down training. Therefore, we
take a different approach. We model the predicted confidence value using a confidence function shown in Figure 2.
The confidence function, c(x), returns a confidence value
for a predicted 2D point denoted by x based on its distance
DT (x) from the ground truth i.e. target 2D point. Formally,
we define the confidence function c(x) as follows:
c(x) = e^{α(1 − DT (x)/dth )}   if DT (x) < dth ,
c(x) = 0                        otherwise.        (1)
Figure 2. Confidence c(x) as a function of the distance DT (x) between a predicted point and the true point.
The distance DT (x) is defined as the 2D Euclidean distance in the image space. To achieve precise localization with this function, we choose a sharp exponential function with a cut-off value dth instead of a monotonically decreasing linear function. The sharpness of the exponential function is defined by the parameter α. In practice, we apply the confidence function to all the control points and calculate the mean value and assign it as the confidence. As
mentioned earlier, we also predict C conditional class probabilities at each cell. The class probability is conditioned on
the cell containing an object. Overall, our output 3D tensor
depicted in Figure 1(e) has dimension S × S × D, where the
2D spatial grid corresponding to the image dimensions has
S × S cells and each such cell has a D dimensional vector.
Here, D = 9×2+C +1, because we have 9 (xi , yi ) control
points, C class probabilities and one confidence value.
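A direct transcription of Eq. (1) and of the mean-confidence computation described above is given below (a sketch; the values α = 2 and dth = 30 pixels are the ones reported in Section 4).

    import numpy as np

    def mean_confidence(pred_pts, true_pts, alpha=2.0, d_th=30.0):
        # pred_pts, true_pts: (9, 2) arrays of predicted / ground-truth 2D
        # control-point locations in pixels.  Eq. (1) per point, then averaged.
        d = np.linalg.norm(pred_pts - true_pts, axis=1)           # D_T(x)
        c = np.where(d < d_th, np.exp(alpha * (1.0 - d / d_th)), 0.0)
        return float(c.mean())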
Our network architecture follows the fully convolutional
YOLO v2 architecture [28]. Thus, our network has 23
convolutional layers and 5 max-pooling layers. Similar to
YOLO v2, we choose S = 13 and have a 13 × 13 2D spatial grid on which we make our predictions. We also allow
higher layers of our network to use fine-grained features by
adding a passthrough layer. Specifically, we bring features
from an earlier layer at resolution 26 × 26, apply batch normalization and resize the input image during training onthe-fly. As the network downsamples the image by a factor
of 32, we change the input resolution to a multiple of 32
randomly chosen from the set {320, 352, . . . , 608} to be robust to objects of different size.
3.2. Training Procedure
Our final layer outputs class probabilities, (x, y) coordinate locations for the control points, and the overall confidence score. During training, this confidence value is computed on the fly using the function defined in Eq. 1 to measure the distance between the current coordinate predictions
and the ground-truth, DT (x). We predict offsets for the
2D coordinates with respect to (cx , cy ), the top-left corner
of the associated grid cell. For the centroid, we constrain
this offset to lie between 0 and 1. However, for the corner points, we do not constrain the network's output, as those points should be allowed to fall outside the cell. The predicted control point (gx , gy ) is defined as

gx = f (x) + cx ,    (2)
gy = f (y) + cy ,    (3)
where f (·) is chosen to be a 1D sigmoid function in case
of the centroid and the identity function in case of the eight
corner points. This has the effect of forcing the network
to first find the approximate cell location for the object and
later refine its eight corner locations. We minimize the following loss function to train our complete network.
L = λpt Lpt + λconf Lconf + λid Lid    (4)
Here, the terms Lpt , Lconf and Lid denote the coordinate
loss, confidence loss and the classification loss respectively.
We use mean-squared error for the coordinate and confidence losses, and cross entropy for the classification loss.
As suggested in [27, 28], to improve model stability, we
downweight the confidence loss for cells that don’t contain
objects by setting λconf to 0.1. For cells that contain objects, we set λconf to 5.0.
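A minimal PyTorch-style sketch of Eq. (4) with the cell-dependent confidence weighting just described (an illustration only, not the released training code; the tensor shapes and the masking scheme are simplifying assumptions) could look as follows.

    import torch
    import torch.nn.functional as F

    def total_loss(pred_pts, true_pts, pred_conf, true_conf, pred_cls, true_cls,
                   obj_mask, lambda_pt=1.0, lambda_id=1.0):
        # Assumed shapes: pred_pts/true_pts (B, S*S, 18), pred_conf/true_conf (B, S*S),
        # pred_cls (B, S*S, C), true_cls (B, S*S) long, obj_mask (B, S*S) float with
        # 1.0 for cells that contain an object (at least one such cell assumed).
        lam_conf = 5.0 * obj_mask + 0.1 * (1.0 - obj_mask)     # cell-dependent weight
        m = obj_mask.unsqueeze(-1)
        l_pt = F.mse_loss(pred_pts * m, true_pts * m)          # coordinate loss
        l_conf = (lam_conf * (pred_conf - true_conf) ** 2).mean()
        cells = obj_mask.bool()                                # classify object cells only
        l_id = F.cross_entropy(pred_cls[cells], true_cls[cells])
        return lambda_pt * l_pt + l_conf + lambda_id * l_id

The weights lambda_pt and lambda_id are left at 1.0 here as an assumption; the paper specifies only the confidence weights.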
When multiple objects are located close to each other in
the 3D scene, they are more likely to appear close together
in the images or be occluded by each other. In these cases,
certain cells might contain multiple objects. To be able to
predict the pose of such multiple objects that lie in the same
cell, we allow up to 5 candidates per cell and therefore predict five sets of control points per cell. Similarly to [28], we
precompute with k-means, five anchor boxes that define the
size, ie. the width and height of a 2D rectangle tightly fitted
to a masked region around the object in the image. During training, we assign whichever anchor box has the most
similar size to the current object as the responsible one to
predict the 2D coordinates for that object.
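For illustration, the anchor precomputation can be sketched as follows (plain Euclidean k-means on the box widths and heights; this is a simplification, since [28] clusters with an IoU-based distance).

    import numpy as np

    def anchor_boxes(box_wh, k=5, iters=100, seed=0):
        # box_wh: (N, 2) widths and heights of 2D rectangles tightly fitted to
        # the masked object regions; returns k anchor (width, height) pairs.
        box_wh = np.asarray(box_wh, dtype=float)
        rng = np.random.default_rng(seed)
        centers = box_wh[rng.choice(len(box_wh), size=k, replace=False)].copy()
        for _ in range(iters):
            labels = ((box_wh[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = box_wh[labels == j].mean(axis=0)
        return centers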
3.3. Pose Prediction
We detect and estimate the pose of objects in 6D by invoking our network only once. At test time, we estimate the
class-specific confidence scores for each object by multiplying the class probabilities and the score returned by the confidence function. Each grid cell produces predictions in one
network evaluation and cells with predictions with low confidence are pruned using a confidence threshold. For large
objects and objects whose projections lie at the intersection
of two cells, multiple cells are likely to predict highly confident detections. To obtain a more robust and well localized
pose estimate, we inspect the cells in the 3×3 neighborhood
of the cell which has the maximum confidence score. We
combine the individual corner predictions of these adjacent
cells by computing a weighted average of the individual detections, where the weights used are the confidence scores
of the associated cells.
At run-time, the network gives the 2D projections of the
object’s centroid and corners of its 3D bounding box along
with the object identity. We estimate the 6D pose from
the correspondences between the 2D and 3D points using
a Perspective-n-Point (PnP) pose estimation method [16].
In our case, PnP uses only 9 such control point correspondences and provides an estimate of the 3D rotation R and
3D translation t of the object in camera coordinates.
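This last step can be reproduced with OpenCV's PnP solver; a minimal sketch (assuming a (9, 3) array of control points in the object frame, the corresponding (9, 2) predictions and the 3×3 camera intrinsics K, with no lens distortion) is given below. The EPnP flag corresponds to the algorithm of [16].

    import cv2
    import numpy as np

    def pose_from_correspondences(pts_3d, pts_2d, K):
        # Returns the 3x3 rotation R and translation tvec of the object in
        # camera coordinates from the 9 control-point correspondences.
        ok, rvec, tvec = cv2.solvePnP(np.asarray(pts_3d, np.float32),
                                      np.asarray(pts_2d, np.float32),
                                      np.asarray(K, np.float32), None,
                                      flags=cv2.SOLVEPNP_EPNP)
        R, _ = cv2.Rodrigues(rvec)     # rotation matrix from the Rodrigues vector
        return R, tvec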
4. Implementation Details
We initialize the parameters of our network by training
the original network on the ImageNet classification task. As
the pose estimates in the early stages of training are inaccurate, the confidence values computed using Eq. 1 are initially unreliable. To remedy this, we pretrain our network
parameters by setting the regularization parameter for confidence to 0. Subsequently, we train our network by setting
λconf to 5 for the cells that contain an object, and to 0.1
otherwise, to have more reliable confidence estimates in the
early stages of the network. In practice, we set the sharpness
of the confidence function α to 2 and the distance threshold
to 30 pixels. We use stochastic gradient descent for optimization. We start with a learning rate of 0.001 and divide
the learning rate by 10 at every 100 epochs. To avoid overfitting, we use extensive data augmentation by randomly
changing the hue, saturation and exposure of the image by
up to a factor of 1.2. We also randomly scale and translate the image by up to a factor of 20% of the image size.
Our implementation is based on PyTorch. We will make our
code publicly available for the sake of reproducibility.
5. Experiments
We first evaluate our method for estimating the 6D pose
of single objects and then we evaluate it in the case where
multiple objects are present in the image. We use the same
datasets and evaluation protocols as in [2, 10, 25], which we
review below. We then present and compare our results to
the state of the art methods.
5.1. Datasets
We test our approach on two datasets that were designed
explicitly to benchmark 6D object pose estimation algorithms. We describe them briefly below.
LineMod [8] has become a de facto standard benchmark
for 6D object pose estimation of textureless objects in cluttered scenes. The central object in each RGB image is assigned a ground-truth rotation, translation, and ID. A full
3D mesh representing the object is also provided.
Object      | Brachmann [2] w/o | BB8 [25] w/o | OURS w/o | Brachmann [2] w/ | BB8 [25] w/
Ape         | -    | 95.3 | 92.10 | 85.2 | 96.6
Benchvise   | -    | 80.0 | 95.06 | 67.9 | 90.1
Cam         | -    | 80.9 | 93.24 | 58.7 | 86.0
Can         | -    | 84.1 | 97.44 | 70.8 | 91.2
Cat         | -    | 97.0 | 97.41 | 84.2 | 98.8
Driller     | -    | 74.1 | 79.41 | 73.9 | 80.9
Duck        | -    | 81.2 | 94.65 | 73.1 | 92.2
Eggbox      | -    | 87.9 | 90.33 | 83.1 | 91.0
Glue        | -    | 89.0 | 96.53 | 74.2 | 92.3
Holepuncher | -    | 90.5 | 92.86 | 78.9 | 95.3
Iron        | -    | 78.9 | 82.94 | 83.6 | 84.8
Lamp        | -    | 74.4 | 76.87 | 64.0 | 75.8
Phone       | -    | 77.6 | 86.07 | 60.6 | 85.3
Average     | 69.5 | 83.9 | 90.37 | 73.7 | 89.3
Table 1. Comparison of our approach with state-of-the-art algorithms on LineMod in terms of 2D reprojection error. We report
percentages of correctly estimated poses. Bold face numbers denote the best overall methods, bold italic numbers denote the best
methods among those that do not use refinement as opposed to
the ones that use, if different. Note that even though we do not
rely on the knowledge of a detailed 3D object model our method
consistently outperforms the baselines.
OCCLUSION [1] is a multi-object detection and pose estimation dataset that contains additional annotations for all
objects in a subset of the LineMod images. As its name suggests, several objects in the images are severely occluded
due to scene clutter, which makes pose estimation extremely
challenging. With the exception of [10, 25], it has primarily
been used to test algorithms that require depth images.
5.2. Evaluation Metrics
We use three standard metrics to evaluate 6D pose accuracy, namely – 2D reprojection error, average 3D distance of
model vertices (referred to as ADD metric), and IoU score
as in [2, 10, 25]. In all cases, we calculate the accuracy
as the percentage of correct pose estimates for certain error
thresholds.
When using the reprojection error, we consider a pose
estimate to be correct when the mean distance between the
2D projections of the object’s 3D mesh vertices using the
estimate and the ground truth pose is less than 5 pixels [2].
This measures the closeness of the true image projection of
the object to that obtained by using the estimated pose. This
metric is suitable for augmented reality applications.
When comparing 6D poses using the ADD metric, we
take a pose estimate to be correct if the mean distance between the true coordinates of 3D mesh vertices and those
estimated given the pose is less than 10% of the object’s diameter [8]. For most objects, this is approximately a 2cm
threshold but for smaller objects, such as ape, the threshold drops to about 1cm. For rotationally symmetric objects
whose pose can only be computed up to one degree of rotational freedom, we modify slightly the metric as in [2, 8]
and compute

s = (1/|M|) Σ_{x1 ∈ M} min_{x2 ∈ M} ‖(R x1 + t) − (R̂ x2 + t̂)‖,    (5)
where (R, t) are the ground-truth rotation and translation,
(R̂, t̂) the predicted ones, and M the vertex set of the 3D
model. We use this metric when evaluating the pose accuracy for the rotationally invariant objects, eggbox and glue
as in [2, 8]. To compute the IoU metric, we measure the
overlap between the projections of the 3D model given the
ground-truth and predicted pose and accept a pose as correct
if the overlap is larger than 0.5.
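These metrics translate directly into a few lines of numpy; the sketch below (ours) implements the ADD test and the symmetric variant of Eq. (5), with the nearest-neighbour search done brute force for clarity.

    import numpy as np

    def add_error(R, t, R_hat, t_hat, model_pts):
        # Mean 3D distance between model vertices under the true and predicted poses.
        gt = model_pts @ R.T + t
        pr = model_pts @ R_hat.T + t_hat
        return np.linalg.norm(gt - pr, axis=1).mean()

    def add_s_error(R, t, R_hat, t_hat, model_pts):
        # Symmetric variant of Eq. (5): each true point is matched to the closest
        # predicted point (brute-force nearest neighbour, fine for small meshes).
        gt = model_pts @ R.T + t
        pr = model_pts @ R_hat.T + t_hat
        d = np.linalg.norm(gt[:, None, :] - pr[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def add_correct(R, t, R_hat, t_hat, model_pts, diameter):
        # Accept the pose if the ADD error is below 10% of the object diameter.
        return add_error(R, t, R_hat, t_hat, model_pts) < 0.1 * diameter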
5.3. Single Object Pose Estimation
We first estimate the 6D pose of the central object in the
RGB only LineMod images, without reference to the depth
ones. We compare our approach to those of [2, 10, 25],
which operate under similar conditions.
In this dataset, the training images are selected such that
the relative orientation between corresponding pose annotations is larger than a threshold. To avoid being influenced
by the scene context, we segment the training images using the segmentation masks provided with the dataset and
replace the background by a random image from the PASCAL VOC dataset [6].
We use exactly the same training/test splits as in [25].
We report our results in terms of 2D reprojection error in
Table 1 and 6D pose error in Table 2. We provide example
pose predictions of our approach in Figure 3.
5.3.1
w/o Refinement
Brachmann BB8 SSD-6D
Object
[2]
[25] [10]
Ape
27.9
0
Benchvise
62.0 0.18
Cam
40.1 0.41
Can
48.1 1.35
Cat
45.2 0.51
Driller
58.6 2.58
Duck
32.8
0
Eggbox
40.0
8.9
Glue
27.0
0
Holepuncher
42.4 0.30
Iron
67.0 8.86
Lamp
39.9 8.20
Phone
35.2 0.18
Average
32.3
43.6 2.42
w/ Refinement
OURS Brachmann BB8 SSD-6D
[2]
[25] [10]
21.62
33.2
40.4
65
81.80
64.8
91.8
80
36.57
38.4
55.7
78
68.80
62.9
64.1
86
41.82
42.7
62.6
70
63.51
61.9
74.4
73
27.23
30.2
44.3
66
69.58
49.9
57.8 100
80.02
31.2
41.2 100
42.63
52.8
67.2
49
74.97
80.0
84.7
78
71.11
67.0
76.5
73
47.74
38.1
54.0
79
55.95
50.2
62.7
79
Table 2. Comparison of our approach with state-of-the-art algorithms on LineMod in terms of ADD metric. We report percentages of correctly estimated poses. Bold face numbers denote the
best overall methods, bold italic numbers denote the best methods
among those that do not use refinement as opposed to the ones that
use, if different.
Object      | [10] @10% | OURS @10% | [10] @30% | OURS @30% | [10] @50% | OURS @50%
Ape         | 0    | 21.62 | 5.62  | 70.67 | 19.95 | 88.10
Benchvise   | 0.18 | 81.80 | 2.07  | 91.07 | 10.62 | 98.85
Cam         | 0.41 | 36.57 | 34.52 | 81.57 | 63.54 | 94.80
Can         | 1.35 | 68.80 | 61.43 | 99.02 | 85.49 | 99.90
Cat         | 0.51 | 41.82 | 36.87 | 90.62 | 64.04 | 98.80
Driller     | 2.58 | 63.51 | 56.01 | 99.01 | 84.86 | 99.80
Duck        | 0    | 27.23 | 5.56  | 70.70 | 32.65 | 89.39
Eggbox      | 8.9  | 69.58 | 24.61 | 81.31 | 48.41 | 98.31
Glue        | 0    | 80.02 | 14.18 | 89.00 | 26.94 | 97.20
Holepuncher | 0.30 | 42.63 | 18.23 | 85.54 | 38.75 | 96.29
Iron        | 8.86 | 74.97 | 59.26 | 98.88 | 88.31 | 99.39
Lamp        | 8.20 | 71.11 | 57.64 | 98.85 | 81.03 | 99.62
Phone       | 0.18 | 47.74 | 35.55 | 91.07 | 61.22 | 98.85
Average     | 2.42 | 55.95 | 31.65 | 88.25 | 54.29 | 96.78
Table 3. Comparison of our approach with SSD-6D [10] without
refinement using different thresholds for the 6D pose metric.
5.3.1 Comparative Accuracy
6D Accuracy in terms of projection error. In Table 1,
we compare our results to those of Brachmann et al. [2] and
to BB8 [25]. Both of these competing methods involve a
multi-stage pipeline that comprises a 2D detection step followed by pose prediction and refinement. Since we do not
have a refinement stage, we show in the table their results
without and with it. In both cases, we achieve better 6D
pose estimation accuracies.
In Table 4, we perform a similar comparison with SSD6D [10], whose authors report their projection accuracy in
terms of the IoU metric. That method also requires a posteriori refinement and our results are again better in both
cases, even though SSD-6D relies on a large training set
of rendered images that are sampled over a wide range of
viewpoints and locations.
6D Accuracy in terms of the ADD metric. In Tables 2 and 3, we compare our method against the others in terms of the average of the 3D distances, as described in Section 5.2. In Table 2, we give numbers before and after refinement for the competing methods. Before refinement, we outperform all the methods by a significant margin of at least 12%. After refinement, our pose estimates are still better than Brachmann et al. [2]. By assuming the additional knowledge of a full 3D CAD model and using it to further refine the pose, BB8¹ and SSD-6D² boost their pose estimation accuracy.
Without any bells and whistles, our approach achieves state-of-the-art pose estimation accuracy in all the metrics without refinement. When compared against methods that rely on the additional knowledge of full 3D CAD models and pose refinement, it still achieves state-of-the-art performance in 2D projection error and IoU metrics and yields comparable accuracy in the ADD metric. Our approach could be used in conjunction with such refinement strategies to further increase the accuracy, however this comes at a heavy computational cost as we describe below.
¹ The authors do not report results without refinement, however they provided us with the accuracy numbers reported in Table 2.
² The authors were not able to provide their accuracy numbers without refinement for this metric, but made their code publicly available. We ran their code with provided pretrained models to obtain the 6D pose errors.
Object      | SSD-6D [10] w/o | OURS w/o | SSD-6D [10] w/
Ape         | 98.46 | 99.81 | 99
Benchvise   | 100   | 99.90 | 100
Cam         | 99.53 | 100   | 99
Can         | 100   | 99.81 | 100
Cat         | 99.34 | 99.90 | 99
Duck        | 99.04 | 100   | 98
Glue        | 97.24 | 99.81 | 98
Holepuncher | 98.95 | 99.90 | 99
Iron        | 99.65 | 100   | 99
Lamp        | 99.38 | 100   | 99
Phone       | 99.91 | 100   | 100
Average     | 99.22 | 99.92 | 99.4
Driller     | -     | 100   | 99
Eggbox      | -     | 99.91 | 99
Table 4. Comparison of our approach against [10] on LineMod
using IoU metric. The authors of [10] were able to provide us the results of their approach w/o the refinement.
5.3.2 Accuracy / Speed Trade-off
In Table 5, we report the computational efficiency of our
approach for single object pose estimation in comparison
to the state-of-the-art approaches [2, 10, 25]. Our approach
runs at real-time performance in contrast to the existing approaches which fall short of it. In particular, our algorithm
runs at least 5 times faster than the state-of-the-art techniques for single object pose estimation.
As can be seen in Table 2, pose refinement in Brachmann et al. increases the accuracy significantly by 17.9% at an additional run-time of 100 milliseconds per object. BB8 also gets a substantial improvement of 19.1% in accuracy at an additional run-time of 21 milliseconds per object. Even
without correcting for the pose error, our approach outperforms Brachmann et al. and yields close accuracy to BB8
while being 16 times faster for single object pose estimation. As discussed also in [10], the unrefined poses computed from the bounding boxes of the SSD 2D object detector, are rather approximate. We confirmed this by running
their publicly available code with the provided pretrained
models. We report the accuracy numbers without the refinement using the ADD metric in Table 3 for different thresholds. While providing a good initialization for the subsequent pose processing, the pose estimates of SSD-6D without refinement are much less accurate than our approach.
The further refinement increases the pose estimation accuracy significantly, however at the cost of an additional computational time of 24 milliseconds per object. Moreover, in contrast to our approach, the refinement requires the knowledge of the full
3D object CAD model.
In Figure 3, we show example results of our method
on the LINEMOD dataset. We include more visual results of our
method in the supplementary material.
Method               | Overall speed for 1 object | Refinement runtime
Brachmann et al. [2] | 2 fps                      | 100 ms/object
Rad & Lepetit [25]   | 3 fps                      | 21 ms/object
Kehl et al. [10]     | 10 fps                     | 24 ms/object
OURS                 | 50 fps                     | -
Table 5. Comparison of the overall computational runtime of our
approach for a single object in comparison to [2, 10, 25]. We further provide the computational runtime induced by the pose refinement stage of [2, 10, 25]
5.4. Multiple Object Pose Estimation
Our approach provides accurate 6D poses with real-time performance. Upon one network invocation, our only computational overhead is an efficient PnP algorithm which operates on just 9 points per object. Furthermore, we do not require full 3D colored object models to further refine our initial pose estimates. Our approach is therefore scalable to handle multiple objects as shown in Figure 5 and has only a negligible computational overhead of PnP (0.2 milliseconds/object), while the competing approaches [10] have a linear runtime growth.
We use the OCCLUSION dataset to compare our approach to Brachmann et al. [2] for multi-object detection and report pose estimation accuracy as in [25]. The identity of the objects cannot be assumed to be known a priori and has to be guessed. To this end, the method of [25] assumes that it has access to image crops based on the ground-truth 2D bounding boxes.³ We make no such assumptions. Instead, we jointly detect the object in 2D, estimate its identity and predict its 6D pose. We generate our training images with the approach explained in Section 5.2. We further augment the LineMod training data by adding into the images objects extracted from other training sequences. We report our pose estimation accuracy in Figure 4 and demonstrate that even without assuming ground-truth information as in the case of [25], our method yields satisfactory pose accuracy in the case of severe occlusions. For object detection purposes, we consider an estimate to be correct if its detection IoU is larger than 0.5. Note that here the detection IoU corresponds to the overlap of the 2D bounding boxes of the object, rather than the overlap of the projected masks as is the case for the IoU metric defined in Sec 5.2. In Table 6, we report a mean average precision (MAP) of 0.48, which is similar to the accuracy reported by [2] and outperforms the ones reported by [7, 10].
³ This is not explicitly stated in [25], but the authors confirmed this to us in private email communication.

Method                     | MAP
Hinterstoisser et al. [7]  | 0.21
Brachmann et al. [2]       | 0.51
Kehl et al. [10]           | 0.38
OURS                       | 0.48
Table 6. The detection experiment on the Occlusion dataset [2]. (Left) Precision-recall plot. (Right)
Figure 3. Pose estimation results of our approach. Note that our method can recover the 6D pose in these challenging scenarios, which
involve significant amounts of clutter, occlusion and orientation ambiguity. In the last column, we show failure cases due to motion blur,
severe occlusion and specularity (this figure is best viewed on a computer screen).
Figure 4. Percentage of correctly estimated poses as a function of the projection error for different objects of the Occlusion dataset [2].
Figure 5. The runtime of our approach with increasing number of objects as compared to that of [10].
We also evaluated the accuracy and speed of our approach for different input resolutions. As explained in Section 3.1, we adopt a multi-scale training procedure and change the input resolution during training randomly as in [28]. This allows us to be able to change the input resolution at test-time and predict from images with higher resolution. This is especially useful for predicting the pose of small objects more robustly. As we do not have an initial step for 2D object detection and produce image crops which are then resized to higher resolutions for pose prediction as in [25], our approach requires better handling of the small objects. In Table 7, we compare the accuracy and computational efficiency of our approach for different input resolutions. With only a 1-2% decrease in accuracy we can reach a runtime of 94 fps, and the runtime virtually remains the same for estimating the pose of multiple objects.

Input resolution | 2D projection metric | Speed
416 × 416        | 89.71                | 94 fps
480 × 480        | 90.00                | 67 fps
544 × 544        | 90.37                | 50 fps
608 × 688        | 90.65                | 43 fps
Table 7. Accuracy/speed trade-off of our method on the LINEMOD dataset. Accuracy reported is the percentage of correctly estimated poses with respect to the 2D projection error. The same network model is used for all four input resolutions. Timings are on a NVIDIA Titan X (Pascal) GPU.
6. Conclusion
We have proposed a new CNN architecture for fast and
accurate single-shot 6D pose prediction that naturally extends the single shot 2D object detection paradigm to 6D
object detection. Our network predicts 2D locations of the
projections of the objects 3D bounding box corners which
involves predicting just a few more 2D points than for 2D
bounding box regression. Given the predicted 2D corner
projections, the 6D pose is computed via an efficient PnP
method. For high accuracy, existing CNN-based 6D object detectors all refine their pose estimates during postprocessing, a step that requires an accurate 3D object model
and also incurs a runtime overhead per detected object. In
contrast, our single shot predictions are very accurate which
alleviates the need for refinement. Due to this, our method
is not dependent on access to 3D object models and there
is virtually no overhead when estimating the pose of multiple objects. Our method is real-time; it runs at 50 – 94 fps
depending on the image resolution. This makes it substantially faster than existing methods.
Acknowledgements. We would like to thank Mahdi Rad
and Vincent Lepetit for fruitful discussions and providing
the results of their method in Table 2. Also, we thank
Wadim Kehl, Fabian Manhardt and Slobodan Ilic for helpful discussions and for their help in evaluating their algorithm without postprocessing in Table 4.
References
[1] E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and
C. Rother. Learning 6D Object Pose Estimation Using 3D Object
Coordinates. In ECCV, 2014. 1, 2, 5, 10
[2] E. Brachmann, F. Michel, A. Krull, M. Ying Yang, S. Gumhold,
et al. Uncertainty-Driven 6D Pose Estimation of Objects and
Scenes from a Single RGB Image. In CVPR, 2016. 2, 5, 6, 7,
8
[3] C. Choi and H. I. Christensen. 3D Textureless Object Detection
and Tracking: An Edge-Based Approach. In IROS, 2012. 1, 2
[4] C. Choi and H. I. Christensen. RGB-D Object Pose Estimation in
Unstructured Environments. Robotics and Autonomous Systems,
2016. 1, 2
[5] A. Collet, M. Martinez, and S. S. Srinivasa. The MOPED Framework: Object Recognition and Pose Estimation for Manipulation.
The International Journal of Robotics Research, 2011. 2
[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and
A. Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. IJCV, 2010. 6, 10
[7] S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige,
N. Navab, and V. Lepetit. Multimodal Templates for Real-Time
Detection of Texture-less Objects in Heavily Cluttered Scenes. In
ICCV, 2011. 2, 7
[8] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab. Model Based Training, Detection and Pose Estimation of Texture-less 3D Objects in Heavily Cluttered Scenes.
In ACCV, 2012. 1, 2, 5, 6, 10
[9] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing Images using the Hausdorff Distance. TPAMI, 1993. 2
[10] W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab. SSD-6D:
Making RGB-Based 3D Detection and 6D Pose Estimation Great
Again. In ICCV, 2017. 1, 2, 5, 6, 7, 8
[11] W. Kehl, F. Milletari, F. Tombari, S. Ilic, and N. Navab. Deep
Learning of Local RGB-D Patches for 3D Object Detection and
6D Pose Estimation. In ECCV, 2016. 1, 2
[12] A. Kendall, M. Grimes, and R. Cipolla. PoseNet: A Convolutional
Network for Real-Time 6-DOF Camera Relocalization. In ICCV,
2015. 2
[13] K. Lai, L. Bo, X. Ren, and D. Fox. A Large-Scale Hierarchical
Multi-View RGB-D Object Dataset. In ICRA, 2011. 1, 2
[14] K. Lai, L. Bo, X. Ren, and D. Fox. A Scalable Tree-Based Approach for Joint Object and Pose Recognition. In AAAI, 2011. 2
[15] V. Lepetit and P. Fua. Monocular Model-Based 3D Tracking of
Rigid Objects: A Survey. Foundations and Trends in Computer
Graphics and Vision, 2005. 2
[16] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: An Accurate O(n)
Solution to the PnP problem. IJCV, 2009. 1, 2, 5
[17] Y. Li, L. Gu, and T. Kanade. Robustly Aligning a Shape Model and
Its Application to Car Alignment of Unknown Pose. TPAMI, 2011.
2
[18] M.-Y. Liu, O. Tuzel, A. Veeraraghavan, and R. Chellappa. Fast
Directional Chamfer Matching. In CVPR, 2010. 2
[19] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and
A. C. Berg. SSD: Single Shot MultiBox Detector. In ECCV, 2016.
1, 2
[20] D. G. Lowe. Fitting Parameterized Three-Dimensional Models to
Images. TPAMI, 1991. 2
[21] D. G. Lowe. Object Recognition from Local Scale-Invariant Features. In ICCV, 1999. 1, 2
[22] S. Mahendran, H. Ali, and R. Vidal. 3D Pose Regression using
Convolutional Neural Networks. CVPRW, 2017. 2
[23] F. Michel, A. Kirillov, E. Brachmann, A. Krull, S. Gumhold,
B. Savchynskyy, and C. Rother. Global Hypothesis Generation for
6D Object Pose Estimation. In CVPR, 2017. 2
[24] P. Poirson, P. Ammirato, C.-Y. Fu, W. Liu, J. Kosecka, and A. C.
Berg. Fast Single Shot Detection and Pose Estimation. In 3DV,
2016. 2
[25] M. Rad and V. Lepetit. BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging
Objects without Using Depth. In ICCV, 2017. 1, 2, 5, 6, 7, 8, 11,
12
[26] K. Ramnath, S. N. Sinha, R. Szeliski, and E. Hsiao. Car Make and
Model Recognition using 3D Curve Alignment. In WACV, 2014. 2
[27] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You Only
Look Once: Unified, Real-Time Object Detection. In CVPR, 2016.
1, 2, 4
[28] J. Redmon and A. Farhadi. YOLO9000: Better, Faster, Stronger.
CVPR, 2017. 1, 2, 4, 8
[29] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards
Real-Time Object Detection with Region Proposal Networks. In
NIPS, 2015. 2
[30] R. Rios-Cabrera and T. Tuytelaars. Discriminatively Trained Templates for 3d Object Detection: A Real Time Scalable Approach.
In ICCV, 2013. 2
[31] F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. 3D Object
Modeling and Recognition using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints. IJCV, 2006. 1, 2
[32] J. Sock, S. H. Kasaei, L. S. Lopes, and T.-K. Kim. Multi-view 6D
Object Pose Estimation and Camera Motion Planning using RGBD
Images. In ICCV, 2017. 2
[33] H. Su, C. R. Qi, Y. Li, and L. J. Guibas. Render for CNN: Viewpoint Estimation in Images Using CNNs trained with Rendered 3D
Model Views. In ICCV, 2015. 2
[34] S. Tulsiani and J. Malik. Viewpoints and Keypoints. In CVPR,
2015. 2
[35] D. Wagner, G. Reitmayr, A. Mulloni, T. Drummond, and
D. Schmalstieg. Pose Tracking from Natural Features on Mobile
Phones. In ISMAR, 2008. 1, 2
[36] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox. PoseCNN: A
Convolutional Neural Network for 6D Object Pose Estimation in
Cluttered Scenes. arXiv preprint arXiv:1711.00199, 2017. 2
[37] C. Zach, A. Penate-Sanchez, and M.-T. Pham. A Dynamic Programming Approach for Fast and Robust Object Pose Recognition
from Range Images. In CVPR, 2015. 2
[38] H. Zhang and Q. Cao. Combined Holistic and Local Patches for
Recovering 6D Object Pose. In ICCV, 2017. 2
[39] M. Zhu, K. G. Derpanis, Y. Yang, S. Brahmbhatt, M. Zhang,
C. Phillips, M. Lecce, and K. Daniilidis. Single Image 3D Object Detection and Pose Estimation for Grasping. In ICRA, 2014.
2
Supplemental Material:
“Real-Time Seamless Single Shot 6D Object Pose Prediction”
In the supplemental material, we provide details on how
the training images were prepared and on the proposed confidence-weighted prediction step. We also present qualitative results on OCCLUSION [1] and LINEMOD [8].
Training Images. As discussed in the main paper, we
segment the foreground object in the images in the training
set, using the segmentation masks provided and paste the
segmented image over a random image taken from the PASCAL VOC dataset [6]. Examples of such images, which are
given as input to the network at training time are shown in
Figure 6. This operation of removing the actual background
prevents the network from learning the scene context and is
essential in order to achieve proper generalization.
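A minimal sketch of this compositing step is given below. It is our own illustration, not code released with the paper; it assumes the object image, its binary segmentation mask and a background image (e.g. sampled from PASCAL VOC) are already loaded as arrays of the same size.

```python
import numpy as np


def composite_training_image(obj_rgb, obj_mask, background_rgb):
    """Paste the segmented foreground object over a random-background image.

    obj_rgb:        HxWx3 uint8 image containing the object
    obj_mask:       HxW boolean array, True on foreground pixels
    background_rgb: HxWx3 uint8 background image, assumed already resized to HxW
    """
    out = background_rgb.copy()
    out[obj_mask] = obj_rgb[obj_mask]      # keep object pixels, replace the real background
    return out


# Hypothetical usage with randomly generated arrays standing in for real images
rng = np.random.default_rng(0)
obj = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[200:300, 250:400] = True              # fake segmentation mask
bg = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
train_img = composite_training_image(obj, mask, bg)
```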
Confidence-weighted prediction. In the final step of our
method, we compute a weighted sum of multiple sets of
predictions for the corners and the centroid, using associated confidence values as weights. On LINEMOD, this gave
a 1–2% improvement in accuracy with the 2D projection
metric. The first step involves scanning the full 17×17 grid
to find the cell with the highest confidence for each potential object. We then consider a 3 × 3 neighborhood around
it on the grid and prune the cells with confidence values
lower than the detection threshold of 0.5. On the remaining
cells, we compute a confidence-weighted average of the associated predicted 18-dimensional vectors, where the eight
corner points and the centroid have been stacked to form
the vector. The averaged coordinates are then used in the
PnP method. This sub-pixel refinement on the grid usually
improves the pose of somewhat large objects that occupy
several adjoining cells in the grid. Figure 7 shows an example where the ape object lies between two adjoining cells
and the confidence weighting improves the pose accuracy.
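The sketch below is our own illustration of this weighted-averaging step (array names are hypothetical); the 17 × 17 grid, the 3 × 3 neighborhood, the 0.5 threshold and the 18-dimensional corner/centroid vectors follow the description above.

```python
import numpy as np

def confidence_weighted_prediction(conf, coords, threshold=0.5):
    """conf:   17x17 array of cell confidences for one object.
    coords: 17x17x18 array; per cell, the 8 corner points + centroid stacked as (x1, y1, ..., x9, y9).
    Returns the confidence-weighted 18-dimensional prediction, or None if no cell passes the threshold."""
    gy, gx = np.unravel_index(np.argmax(conf), conf.shape)   # best cell on the grid
    y0, y1 = max(gy - 1, 0), min(gy + 2, conf.shape[0])      # 3x3 neighborhood (clipped at borders)
    x0, x1 = max(gx - 1, 0), min(gx + 2, conf.shape[1])
    c = conf[y0:y1, x0:x1]
    v = coords[y0:y1, x0:x1]
    keep = c >= threshold                                    # prune low-confidence cells
    if not keep.any():
        return None
    w = c[keep]
    return (w[:, None] * v[keep]).sum(axis=0) / w.sum()      # confidence-weighted average

# Hypothetical usage; the averaged coordinates would then be fed to the PnP step
rng = np.random.default_rng(1)
conf = rng.uniform(0, 1, size=(17, 17))
coords = rng.uniform(0, 544, size=(17, 17, 18))
pred_18d = confidence_weighted_prediction(conf, coords)
```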
Figure 7. (Left) The 17×17 grid on a 544×544 image. (Middle)
Confidence values for predictions of the ape object on the grid.
(Right) Cropped view of our pose estimate (shown in blue) and
the ground truth (shown in green). Here, three cells next to the
best cell have good predictions and their combination gives a more
accurate pose than the best prediction alone (best viewed in color).
Figure 6. (Top) Using segmentation masks given in LINEMOD, we extract the foreground objects in our training images and composite them over random images from PASCAL VOC [6]. (Bottom) We also augment the training set by combining images of multiple objects taken from different training images.
Qualitative Results. We show qualitative results from the OCCLUSION [1] and LINEMOD [8] datasets in Figures 8 to 13. These examples show that our method is robust to severe occlusions, rotational ambiguities in appearance, reflections, viewpoint change and scene clutter.
Figure 8. Results on the OCCLUSION dataset. Our method is quite robust against severe occlusions in the presence of scene clutter and rotational pose ambiguity for symmetric objects. (a) Input images, (b) 6D pose predictions of multiple objects, (c) a magnified view of the individual 6D pose estimates of six different objects is shown for clarity. In each case, the 3D bounding box is rendered on the input image. The following color coding is used: APE (gold), BENCHVISE (green), CAN (red), CAT (purple), DRILLER (cyan), DUCK (black), GLUE (orange), HOLEPUNCHER (blue). In addition to the objects from the OCCLUSION dataset, we also visualize the pose predictions of the Benchvise object from the LINEMOD dataset. As in [25], we do not evaluate on the Eggbox object, as more than 70% of close poses are not seen in the training sequence. This image is best viewed on a computer screen.
Figure 9. Results on the OCCLUSION dataset. Our method is quite robust against severe occlusions in the presence of scene clutter and rotational pose ambiguity for symmetric objects. (a) Input images, (b) 6D pose predictions of multiple objects, (c) a magnified view of the individual 6D pose estimates of six different objects is shown for clarity. In each case, the 3D bounding box is rendered on the input image. The following color coding is used: APE (gold), BENCHVISE (green), CAN (red), CAT (purple), DRILLER (cyan), DUCK (black), GLUE (orange), HOLEPUNCHER (blue). In addition to the objects from the OCCLUSION dataset, we also visualize the pose predictions of the Benchvise object from the LINEMOD dataset. As in [25], we do not evaluate on the Eggbox object, as more than 70% of close poses are not seen in the training sequence. This image is best viewed on a computer screen.
Figure 10. Example results on the LINEMOD dataset: (left) APE, (middle) BENCHVISE, (right) CAM. The projected 3D bounding boxes
are rendered over the image and they have been cropped and resized for ease of visualization. The blue cuboid is rendered using our pose
estimate whereas the green cuboid is rendered using the ground truth object pose. Note that the input image dimension is 640 × 480 pixels
and the objects are often quite small. Noticeable scene clutter and occlusion makes these examples challenging.
Figure 11. Example results on the LINEMOD dataset: (left) CAN, (middle) CAT, (right) DRILLER. The projected 3D bounding boxes are
rendered over the image and they have been cropped and resized for ease of visualization. The blue cuboid is rendered using our pose
estimate whereas the green cuboid is rendered using the ground truth object pose. Note that the input image dimension is 640 × 480 pixels
and the objects are often quite small. Noticeable scene clutter and occlusion makes these examples challenging.
Figure 12. Example results on the LINEMOD dataset: (left) DUCK, (middle) EGGBOX, (right) GLUE. The projected 3D bounding boxes
are rendered over the image and they have been cropped and resized for ease of visualization. The blue cuboid is rendered using our pose
estimate whereas the green cuboid is rendered using the ground truth object pose. Note that the input image dimension is 640 × 480 pixels
and the objects are often quite small. Noticeable scene clutter and occlusion makes these examples challenging.
Figure 13. Example results on the LINEMOD dataset: (left) HOLEPUNCHER, (middle) IRON, (right) LAMP and PHONE. The projected 3D
bounding boxes are rendered over the image and they have been cropped and resized for ease of visualization. The blue cuboid is rendered
using our pose estimate whereas the green cuboid is rendered using the ground truth object pose. Note that the input image dimension is
640 × 480 pixels and the objects are often quite small. Noticeable scene clutter and occlusion makes these examples challenging.
Deep Residual Text Detection Network for Scene
Text
Xiangyu Zhu, Yingying Jiang, Shuli Yang, Xiaobing Wang, Wei Li, Pei Fu, Hua Wang and Zhenbo Luo
Machine Learning Lab
Samsung R&D Institute of China, Beijing
Beijing, China
{xiangyu.zhu, yy.jiang} @samsung.com
Abstract—Scene text detection is a challenging problem in
computer vision. In this paper, we propose a novel text detection
network based on prevalent object detection frameworks. In
order to obtain stronger semantic feature, we adopt ResNet as
feature extraction layers and exploit multi-level features by combining hierarchical convolutional networks. A vertical proposal mechanism is utilized to avoid proposal classification, while a regression layer is retained to improve localization accuracy. Our approach, evaluated on the ICDAR2013 dataset, achieves a 0.91 F-measure, which outperforms previous state-of-the-art results in scene text detection.
Keywords—Scene text detection; Deep Residual Networks; CTPN
I. INTRODUCTION
Text detection is an important part of text content analysis, especially for reading natural text in the wild. Scene text detection is becoming increasingly attractive to researchers with the development of smartphones and the tremendous demand for text recognition in Augmented Reality (AR). Unlike traditional document text, detecting scene text is a much more challenging task due to illumination changes, perspective distortion and complex backgrounds.
In the last few decades, a series of methods [1, 2, 3] has been proposed to deal with this problem, achieving considerable performance. These methods can be categorized into Sliding Window based methods and Connected Component (CC) based methods. Sliding Window based methods use sliding windows to search the image densely for candidate text regions and classify text/non-text regions with traditional machine learning tools. This kind of method can be quite slow as a consequence of the dense search and multi-scale windows. Compared to the former, CC-based methods have drawn more attention until recently. They involve several steps, typically three. First, CCs are extracted from images as character candidates. Second, a character classifier is trained to remove the non-text CCs. Finally, the remaining CCs are grouped into text lines by clustering or rules. Maximally Stable Extremal Regions (MSER) is one of the most popular CC-based methods and has been reported to achieve outstanding performance on the ICDAR2013 benchmark [4]. However, the following limitations constrain its further improvement: words consisting of a single character are ignored by the grouping rules for the sake of precision, characters with low color contrast cannot be extracted by MSER, and the post-processing is complex.
Convolutional Neural Network (CNN) approaches have led to a great breakthrough in object detection. Region proposal CNN (R-CNN) [5] was the first attempt to classify proposals with a CNN. Then Faster R-CNN [6] was proposed, where a sub-network named RPN was designed to generate proposals autonomously from feature maps and a few additional convolution layers. Faster R-CNN used VGG-16 [7] as the baseline for feature map extraction and proposal classification until the deep residual network (ResNet) [8] was presented. ResNet was reported to give better performance on PASCAL VOC 2007 [9] and ILSVRC 2016 compared to VGG-16 and GoogLeNet [10, 11]. Moreover, the structure of ResNet is fully convolutional, without heavy fully connected layers. The ResNet version of Faster R-CNN was observed to perform better.
Inspired by the great progress in object detection, a few CNN based methods [12, 13, 14, 15, 16, 17] have been proposed to address scene text detection. The Connectionist Text Proposal Network (CTPN) [12] is a novel framework based on Faster R-CNN, which benefits from an additional recurrent neural network and a vertical proposal mechanism.
In this paper, we propose a framework called Residual Text detection Network (RTN). RTN is inspired by ResNet and the CTPN vertical proposal mechanism. First, ResNet is used to generate strong semantic features instead of traditional networks like VGG-16. Rather than a naive layer replacement, we combine multi-level features to produce a hierarchy residual feature; the outstanding performance is mainly attributable to this stronger semantic feature. Second, the vertical proposal mechanism is adopted and an additional regression part is used to improve localization accuracy; this step is implemented by a two-stage training strategy. The method achieves a 91.54% F-measure on ICDAR2013.
II. RELATED WORK
A. Object detection
With the success of deep convolutional networks in image recognition, R-CNN was proposed to classify region proposals via a CNN. After R-CNN, related object detection approaches developed rapidly, for example SPP-net, Fast R-CNN, Faster R-CNN and R-FCN. Faster R-CNN is a mature, prevalent framework that is trained and tested end to end. The framework consists of three parts. (1) Feature map generation: feature maps representing semantic information are extracted by a deep convolutional network; VGG-16 was used in Faster R-CNN. (2) Proposal generation: a simple convolutional network named the Region Proposal Network (RPN) was designed to generate candidate regions from the feature maps. (3) Region classification and regression: by sharing features, region proposals are projected to their locations in the feature maps, and a following Fast R-CNN structure outputs the final results by classification and regression.

Fig. 1. Architecture of the Residual Text detection Network (RTN) (components shown in the diagram: ResNet, hierarchy residual feature, vertical mechanism RPN with 1x1 convolution and BLSTM, 2k box coordinate regression, PSROI pooling).
Influenced by the latest progress in image recognition, other deeper convolutional networks have been transplanted into this framework in place of VGG-16 [7], including GoogLeNet [10, 11] and ResNet [8]. ResNet has proved to be a superior convolutional network to GoogLeNet and VGG-16 in the ImageNet classification task. R-FCN [18] is a fully convolutional architecture that combines ResNet and Faster R-CNN. FPN (Feature Pyramid Network) [19] exploits the multi-scale pyramid of ResNet, and the framework using FPN won the COCO [20] detection challenge 2016.
Besides the Faster R-CNN based pipeline, the Single Shot MultiBox Detector (SSD) [21] and You Only Look Once (YOLO) [22] are two representative and promising works. SSD is one of the first attempts to utilize multi-level convolutional features, while YOLO is extremely fast compared with all the methods mentioned above. However, they do not outperform the Faster R-CNN pipeline by a significant margin.
B. CNN based text detection
The general object detection pipeline can be transplanted to the text detection realm directly, and CNN based text detection has gradually become the most promising approach. Zhang [14] proposed a fully convolutional network for text detection in arbitrary orientations in the style of semantic segmentation; it achieved an F-measure of 0.74 on ICDAR2013. DeepText [13] proposed an Inception-RPN and multi-level region-of-interest pooling based on the Faster R-CNN framework and achieved a 0.85 F-measure on ICDAR2013. Inspired by SSD, Liao [15] presented an approach called TextBoxes, which utilizes multi-level joint predictions and word recognition.
CTPN [12] is a unique network that abandons the Fast R-CNN classification and regression, and can be treated as a stand-alone RPN augmented with a Recurrent Neural Network (RNN). With a 0.88 F-measure, it achieved the previous state-of-the-art on ICDAR2013 among published papers. Nevertheless, it was just a prototype for detection using an RNN, and its fixed-width proposals are harmful to localization accuracy.
III. RESIDUAL TEXT DETECTION NETWORK
The architecture of the Residual Text detection Network (RTN) is shown in Fig. 1. It consists of three parts: a hierarchy residual feature map for feature extraction, a vertical mechanism RPN for proposal prediction, and a bounding box regression part for higher localization accuracy.
A. Hierarchy residual feature map
In our framework, we use ResNet to derive feature maps from the original images. A feature map is a set of features in 2D formation, similar to handcrafted features, and it is fed to the RPN and the regression part. ResNet consists of 5 concatenated blocks (i.e., conv1, conv2_x, conv3_x, conv4_x and conv5_x). Conv4_x has the same stride as the VGG-16 output (16 pixels). In R-FCN [18], region proposals were predicted from conv4_x; the authors considered the conv4_x feature maps semantically strong enough and comparable to the VGG-16 feature maps. However, VGG-16 differs from ResNet in structure, so a simple replacement of VGG-16 with ResNet does not work properly. Unlike VGG-16 based detectors, typical ResNet based detection does not share the same feature map between the RPN and the regression part: conv4_x is used to generate proposals in the RPN while conv5_x is used for regression. In this kind of method, the RPN is unable to use the deeper semantic features. By visualizing the feature maps of conv3_x, conv4_x, conv5_x and VGG-16, we find that conv3_x contains too many low-level features, while conv4_x and conv5_x are competitive with VGG-16 at first glance. We carried out a series of experiments on the Faster R-CNN baseline using conv3_x, conv4_x and conv5_x respectively. The framework using conv3_x detected edges and lines instead of objects and required much more computation due to the larger feature map size, which is strong evidence that conv3_x contains too many low-level features to be used directly. On the contrary, the baselines using conv4_x and conv5_x detected text correctly. However, the framework using conv5_x fails to detect small text due to the coarse resolution of its feature maps: although conv5_x represents deeper features, its resolution is half that of conv4_x. Even when we adopt the "à trous algorithm" [23] to compensate for the stride difference, the performance is still unsatisfactory. Using conv5_x as the only feature map might be insufficient for text detection, but abandoning the deeper representation seems to be an unwise choice. We believe that using conv5_x in a proper way will contribute to proposal prediction.
A naive idea would be to predict multi-scale proposals on conv4_x and conv5_x respectively, as previous approaches such as SSD and TextBoxes did. In this way, we could not only detect fine-scale text and be robust to scale variance, but also utilize deeper feature representations. Nevertheless, it is inconvenient to assess the reliability of multi-scale proposals without an additional classification and, since we introduce a vertical mechanism into the RPN, this becomes a rather complicated problem.
To deal with this, we combine the hierarchy feature maps (conv4_x and conv5_x) to produce a new hierarchy feature map. In this way, we can use both the conv4_x and conv5_x feature maps simultaneously, and the task of identifying which feature maps are more reliable is assigned to the convolution layers. As shown in Fig. 2, for an input image of size 224 × 224, after several convolutional layers conv4_x and conv5_x produce feature maps of size 14 × 14 and 7 × 7, corresponding to strides of 16 and 32 pixels. A deconvolution layer is used to upsample conv5_x so that the shape of conv5_x (res5c) matches conv4_x (res4b22) exactly. We attach a convolution layer with a 1 × 1 kernel to each branch, which works as learnable weights for combining conv5_x and conv4_x. Our experiments show that the hierarchy feature leads to an improvement in both precision and recall.
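A minimal PyTorch-style sketch of this combination is shown below. It is our own illustration of the idea, not the paper's released code; the channel sizes and layer names are assumptions. Conv5_x is upsampled by a deconvolution, both branches pass through 1 × 1 convolutions acting as learnable combination weights, and the results are added element-wise.

```python
import torch
import torch.nn as nn

class HierarchyFeature(nn.Module):
    """Combine conv4_x (stride 16) and conv5_x (stride 32) into one hierarchy feature map."""

    def __init__(self, c4_channels=1024, c5_channels=2048, out_channels=1024):
        super().__init__()
        # Upsample conv5_x by a factor of 2 so its spatial size matches conv4_x
        self.deconv5 = nn.ConvTranspose2d(c5_channels, c5_channels, kernel_size=2, stride=2)
        # 1x1 convolutions working as learnable weights for the combination
        self.proj4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.proj5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)

    def forward(self, conv4_x, conv5_x):
        up5 = self.deconv5(conv5_x)                    # e.g. 7x7 -> 14x14 for a 224x224 input
        return self.proj4(conv4_x) + self.proj5(up5)   # element-wise addition

# Hypothetical usage for a 224x224 input image
feat4 = torch.randn(1, 1024, 14, 14)   # res4b22
feat5 = torch.randn(1, 2048, 7, 7)     # res5c
hierarchy = HierarchyFeature()(feat4, feat5)   # -> shape (1, 1024, 14, 14)
```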
Fig. 2. Hierarchy residual network architecture. First, conv5_x (7 × 7) is upsampled by a deconvolution layer so that its shape matches conv4_x (14 × 14). Second, a convolutional layer with a 1 × 1 kernel is attached to both conv5_x and conv4_x. Finally, the hierarchy feature is produced by element-wise addition.
B. Vertical mechanism RPN
In Faster R-CNN, a series of CNN layers is used to classify proposals; this structure is called Fast R-CNN [24]. CTPN, however, abandoned the Fast R-CNN structure: the RPN outputs vertical proposals directly, without further classification and regression. The RPN can be treated as a general object detection system, and if the detection task is to distinguish only one category from the background (two categories in total), the RPN alone is already competent for text detection. Relying on the vertical proposal mechanism and a recurrent neural network, CTPN [12] was able to detect text without Fast R-CNN. That mechanism also makes the final model much smaller.
In our approach, we adopt this vertical mechanism in the RPN. Anchors and ground truth are divided into fixed-width (16 pixels) boxes, as shown in Fig. 3. In particular, the spaces between ground-truth boxes are treated as negative samples, which enables the method to output results at the word level. Sequences of vertical proposals are predicted by the RPN; a threshold is applied to remove non-text vertical proposals, and the remaining adjacent text proposals are connected together to produce text-line proposals.
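The sketch below is our own simplified illustration of this connection step; the 16-pixel proposal width comes from the text, while the score threshold and the horizontal gap tolerance are assumed parameters.

```python
def connect_vertical_proposals(proposals, score_thr=0.7, max_gap=16):
    """proposals: list of (x, y1, y2, score) fixed-width (16 px) vertical proposals.
    Returns text-line boxes (x1, y1, x2, y2) built from adjacent kept proposals."""
    kept = sorted(p for p in proposals if p[3] >= score_thr)   # drop non-text proposals
    lines, current = [], []
    for p in kept:
        if current and p[0] - current[-1][0] > max_gap:        # horizontal gap too large: close the line
            lines.append(current)
            current = []
        current.append(p)
    if current:
        lines.append(current)
    boxes = []
    for group in lines:
        xs = [p[0] for p in group]
        boxes.append((min(xs), min(p[1] for p in group), max(xs) + 16, max(p[2] for p in group)))
    return boxes

# Hypothetical usage: three adjacent proposals and one isolated low-score proposal
props = [(0, 10, 40, 0.9), (16, 12, 42, 0.8), (32, 11, 41, 0.85), (200, 50, 90, 0.3)]
print(connect_vertical_proposals(props))   # -> [(0, 10, 48, 42)]
```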
Fig. 3. Yellow boxes: ground truth of vertical proposals. Green boxes: spaces between words, which are treated as negative samples.
C. Bounding box regression
By connecting vertical proposals, we obtain text-line proposals as the result. Nevertheless, fixed-width proposals may lead to inaccurate localization when the beginning and the end of the vertical proposals do not exactly fit the text; for small text the problem becomes more serious. Unlike general object detection, this inaccuracy influences recognition tremendously. If parts of the characters are not included in the bounding box, they may be omitted or wrongly recognized. On the contrary, a loose bounding box contains much background, which may be recognized as additional characters. In conclusion, a tight and exact bounding box is significant for text detection and recognition.
To achieve this goal, we introduce bounding box regression to obtain exact coordinates, just as Faster R-CNN and R-FCN do in their frameworks. In this paper, we follow the Fast R-CNN structure. Once the text-line proposals are obtained as in Section B, the bounding box offset of every proposal is calculated. However, classification is not included in this part; only regression remains. A further classification is unnecessary, and experiments show it is harmful to performance. This is because the recurrent neural network we adopt in the RPN has a tendency to connect words into text lines; after we set the word level as the network's learning goal, text-line level proposals might be classified as negative results.
The bounding box regression loss is defined as

$$L_{loc}(t^u, v) = \sum_{i \in \{x, w\}} \mathrm{smooth}_{L_1}(t_i^u - v_i)$$

$$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$

In these functions, v = (v_x, v_w) is the ground-truth bounding box and t^u = (t_x, t_w) are the predicted coordinates, where x and w stand for the x coordinate and the width. The smooth L1 function is used for regression. This loss function is almost the same as the one used in Fast R-CNN, except that offsets for two coordinates (x, w) are predicted instead of four (x, y, h, w). It is unnecessary to regress the y coordinate (y) and the height (h), which is already done in the RPN layers for every single vertical proposal.
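A direct NumPy transcription of this loss is given below; it is our own illustration and the variable names are ours.

```python
import numpy as np

def smooth_l1(x):
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def loc_loss(t_u, v):
    """t_u, v: arrays of shape (N, 2) holding the predicted and ground-truth (x, w) offsets.
    y and h are not regressed here: they are handled by the RPN for each vertical proposal."""
    return smooth_l1(t_u - v).sum()

# Hypothetical usage
t_u = np.array([[0.3, -1.8], [0.1, 0.4]])
v = np.array([[0.0, -0.5], [0.2, 0.5]])
print(loc_loss(t_u, v))   # 0.5*0.3^2 + (1.3-0.5) + 0.5*0.1^2 + 0.5*0.1^2 = 0.855
```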
We develop a two-stage training strategy to implement this further regression:
Stage one. The hierarchy residual feature and the vertical mechanism RPN are trained; the learning rates of the regression parts are set to 0.
Stage two. The regression parts are trained individually; the learning rates of ResNet, the hierarchy residual feature and the RPN are set to 0. A normal RPN, as presented in Faster R-CNN, is used to generate anchors and train the regression parts; it is not used in the test model.
D. Training and testing details
Our model was trained on 15,000 natural images collected and labeled by ourselves. These images were labeled at the word level and resized to the (600, 1000) scale. There is no overlap between these images and any public dataset available on the internet. Because of the extreme similarity between the ICDAR2013 training set and testing set, the ICDAR2013 training set was not included, in order to prevent over-fitting to the ICDAR2013 testing set.
The training ground truth is labeled at the word level and then divided into vertical ground-truth boxes of fixed width (16 pixels) in the proposal layer, corresponding to the vertical proposals mentioned above. The spaces between words are labeled as negative samples, and anchors with an IoU > 0.5 overlap with space samples are assigned a negative label. About 10% of the negative samples are spaces. By adding space samples, the network tends to output word-level proposals rather than text-line level ones.
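The following sketch is our own illustration of this data preparation; the 16-pixel width is from the text, while the grid alignment and helper names are assumptions.

```python
def split_word_box(box, width=16):
    """box: (x1, y1, x2, y2) word-level ground truth.
    Returns fixed-width vertical ground-truth boxes covering the word."""
    x1, y1, x2, y2 = box
    start = (x1 // width) * width              # align slices to the 16-px grid of the proposal layer
    return [(x, y1, x + width, y2) for x in range(start, x2, width)]

def space_box(left_word, right_word):
    """Region between two consecutive words, used as a negative ('space') sample."""
    return (left_word[2], min(left_word[1], right_word[1]),
            right_word[0], max(left_word[3], right_word[3]))

# Hypothetical usage
w1, w2 = (20, 10, 75, 40), (100, 12, 150, 42)
print(split_word_box(w1))     # 16-px vertical GT boxes for the first word
print(space_box(w1, w2))      # (75, 10, 100, 42), labeled negative during training
```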
IV. EXPERIMENTS
A. Evaluation of hierarchy residual feature
For CTPN, 3,000 natural images were collected and labeled for training, far fewer than ours. In order to show that our improvement is a consequence of a stronger semantic feature map rather than of more training data, we implemented our own version of CTPN and trained it on our 15,000 images. All the experiments reported below were trained on the same amount of data.
In this experiment, we used VGG-16 and ResNet-101 as backbones for feature extraction. Feature maps generated by different layers were evaluated, including conv5 of VGG-16, conv4_x (res4b22) of ResNet-101 and the hierarchy residual feature map (res4b22 + res5c) used in RTN.
Table 1 shows the performance on ICDAR2013. We use the CTPN framework as the baseline and evaluate the different feature maps mentioned above; all other parameters and the subsequent processing are the same. We evaluated these methods at two scales, namely (600, 1000) and (960, 1280). Scale (600, 1000) means that the shortest side of the image is no more than 600 pixels and the longest side cannot exceed 1000 pixels; the same applies to scale (960, 1280).
One observation is that all these feature maps are competitive at scale (600, 1000). However, at scale (960, 1280) the margins between the methods become considerable. We ran the open-source test code 2 provided by the author of CTPN, marked as CTPN-Tianzhi: the larger scale did not benefit performance and, on the contrary, the F-score degraded, while our CTPN implementation with VGG-16 improved only slightly in F-score. In conclusion, a larger test scale is not always helpful for detection and localization. Nevertheless, by simply replacing VGG-16 with ResNet-101 conv4_x (res4b22), the F-score improved from 88.75% to 90.32%, which shows that ResNet-101 has a superior feature representation compared to VGG-16, as other papers [8, 18] report. Furthermore, the baseline with the hierarchy residual feature map (res4b22 + res5c) achieved the best performance with an F-score of 91.17%, improving recall by 5 points compared to the original CTPN.
The results show that the baseline with the hierarchy residual feature achieves the best performance in both recall and precision at scale (960, 1280), which is convincing evidence of a stronger semantic feature.
We evaluated RTN on the ICDAR2013 benchmark, which consists of 233 focused text images taken in the wild. The evaluation criteria are provided by the ICDAR2015 Robust Reading Competition website 1, as in previous works.
First, the effectiveness of the hierarchy residual feature map is verified against other prevalent feature extraction layers. Then, the additional regression layers are shown to be helpful for localization accuracy. Finally, the method is compared to other published methods and achieves state-of-the-art performance.
TABLE 1. EVALUATING THE BASELINE WITH DIFFERENT FEATURE MAPS ON ICDAR2013

Method         Backbone     Scale        Feature map      Precision   Recall    F-score
CTPN-Tianzhi   VGG-16       (600,1000)   conv5            92.98%      82.98%    87.69%
CTPN-Tianzhi   VGG-16       (960,1280)   conv5            91.02%      82.98%    86.81%
CTPN           VGG-16       (600,1000)   conv5            92.56%      83.96%    88.06%
CTPN           VGG-16       (960,1280)   conv5            91.52%      86.14%    88.75%
CTPN           ResNet-101   (600,1000)   res4b22          93.38%      82.76%    87.75%
CTPN           ResNet-101   (960,1280)   res4b22          93.62%      88.09%    90.32%
RTN            ResNet-101   (600,1000)   res4b22+res5c    93.65%      83.14%    88.08%
RTN            ResNet-101   (960,1280)   res4b22+res5c    93.64%      88.82%    91.17%

1. http://rrc.cvc.uab.es
2. https://github.com/tianzhi0549/CTPN/
Fig.4. Example detection results of our RTN on the ICDAR2013 benchmark. The first row of images is the result before
connection and regression. The second row of images is the result after vertical proposal connection and regression.
TABLE 2. REGRESSION IMPROVEMENT BY ADDITIONAL REGRESSION

Method               Precision   Recall    F-score
RTN_no_regression    93.64%      88.82%    91.17%
RTN_regression       94.20%      89.02%    91.54%

TABLE 3. COMPARISON WITH STATE-OF-THE-ART PUBLICATIONS ON ICDAR2013

Method                      Precision   Recall   F-score
Yin [1]                     0.88        0.66     0.76
Faster R-CNN baseline [6]   0.86        0.75     0.80
R-FCN [18]                  0.90        0.76     0.83
Multi-Oriented FCN [14]     0.88        0.78     0.83
SegLink [17]                0.87        0.83     0.85
DeepText [13]               0.87        0.83     0.85
TextBoxes [15]              0.89        0.83     0.86
CCTN [16]                   0.90        0.83     0.86
CTPN [12]                   0.93        0.83     0.88
Proposed RTN                0.94        0.89     0.91
B. Regression improvement
Text-line proposals connected from fixed-width vertical proposals are inaccurate at both the beginning and the end. Moreover, the evaluation criteria are extremely strict: a detection bounding box can be judged as a false positive if its boundary exceeds the ground truth slightly. This means that such inaccuracy can degrade performance in both recall and precision, even if the text is detected correctly.
Through bounding box regression, we are able to deal with this problem properly. As shown in Table 2, RTN with regression improves the F-score by about 0.4%, and both recall and precision benefit from this additional regression.
C. Evaluation of RTN on ICDAR2013
After proving the effectiveness of the hierarchy residual feature and the additional regression, we compare RTN with other published methods on ICDAR2013. This single-model approach does not use multi-scale training or multi-scale testing. The running time for each image is about 0.8 s on a GPU. Fig. 4 shows examples of detection results on ICDAR2013.
First, we compared RTN with methods described in recent publications. CNN based text detection methods were compared, including TextBoxes, DeepText, Multi-Oriented FCN, CCTN, SegLink and CTPN; prevalent object detection frameworks such as Faster R-CNN and R-FCN were also evaluated. Table 3 shows that RTN achieved the best performance by a large margin.
Second, we submitted our results to the ICDAR2015 Robust Reading Competition website and compared RTN with other competitors on CHALLENGE 2.1, which is also evaluated on the ICDAR2013 dataset. RTN with a single model ranked third, with a slight margin (0.3% in F-score) compared to "Tencent Youtu" and "NLPR-CASIA".
V. CONCLUSIONS
In this paper, a deep residual text detection network is proposed based on a prevalent object detection framework. First, a stronger semantic feature is obtained by using deep residual networks and combining multi-level features from different convolutional layers. Then, a vertical proposal mechanism inspired by CTPN is introduced in the RPN. Finally, an additional regression step is used to improve localization accuracy.
TABLE 4. COMPARISON WITH STATE-OF-THE-ART SUBMISSIONS ON COMPETITION WEBSITES

Method          Precision   Recall    F-score
Tencent Youtu   94.26%      89.53%    91.84%
NLPR-CASIA      94.63%      89.17%    91.82%
RTN             94.20%      89.02%    91.54%
RRPN-4          95.19%      87.31%    91.08%
MSRA_v1         93.67%      88.58%    91.06%
Baidu IDL       92.83%      87.11%    89.88%
REFERENCES
[1] X. Yin, X. Yin, K. Huang, and H. Hao, "Robust Text Detection in Natural Scene Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 970–983, 2014.
[2] L. Sun, Q. Huo, W. Jia, et al., "A robust approach for text detection from natural scene images," Pattern Recognition, vol. 48, no. 9, pp. 2906–2920, 2015.
[3] X. Yin, X. Yin, et al., "Effective text localization in natural scene images with MSER, geometry-based grouping and AdaBoost," International Conference on Pattern Recognition (ICPR), pp. 725–728, 2012.
[4] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. Gomez i Bigorda, S. Robles Mestre, J. Mas, D. Fernandez Mota, J. Almazán Almazán, and L. P. de las Heras, "ICDAR 2013 robust reading competition," 12th International Conference on Document Analysis and Recognition (ICDAR), 2013, pp. 1115–1124.
[5] R. Girshick, et al., "Region-Based Convolutional Networks for Accurate Object Detection and Segmentation," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 38, no. 1, 2016, pp. 142–158.
[6] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," Advances in Neural Information Processing Systems (NIPS), 2015.
[7] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," ICLR, 2015.
[8] K. He, et al., "Deep Residual Learning for Image Recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
[9] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, "The Pascal Visual Object Classes (VOC) Challenge," IJCV, pp. 303–338, 2010.
[10] C. Szegedy, W. Liu, et al., "Going deeper with convolutions," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9.
[11] C. Szegedy, V. Vanhoucke, S. Ioffe, et al., "Rethinking the Inception Architecture for Computer Vision," Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826, 2016.
[12] Z. Tian, W. Huang, T. He, P. He, and Y. Qiao, "Detecting Text in Natural Image with Connectionist Text Proposal Network," ECCV, 2016.
[13] Z. Zhong, L. Jin, S. Zhang, and Z. Feng, "DeepText: A Unified Framework for Text Proposal Generation and Text Detection in Natural Images."
[14] Z. Zhang, C. Zhang, W. Shen, and C. Yao, "Multi-Oriented Text Detection with Fully Convolutional Networks," Computer Vision and Pattern Recognition (CVPR), 2016.
[15] M. Liao, B. Shi, X. Bai, X. Wang, and W. Liu, "TextBoxes: A Fast Text Detector with a Single Deep Neural Network," AAAI, 2017.
[16] T. He, W. Huang, Y. Qiao, and J. Yao, "Accurate Text Localization in Natural Image with Cascaded Convolutional Text Network," Technical report, arXiv:1603.09423, March 2016.
[17] B. Shi, X. Bai, and S. Belongie, "Detecting Oriented Text in Natural Images by Linking Segments," Computer Vision and Pattern Recognition (CVPR), 2017.
[18] J. Dai, Y. Li, K. He, and J. Sun, "R-FCN: Object Detection via Region-based Fully Convolutional Networks," Conference on Neural Information Processing Systems (NIPS), 2016.
[19] T.-Y. Lin, P. Dollár, R. Girshick, and K. He, "Feature Pyramid Networks for Object Detection," Computer Vision and Pattern Recognition (CVPR), 2017.
[20] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," arXiv:1405.0312, 2014.
[21] W. Liu, D. Anguelov, D. Erhan, and C. Szegedy, "SSD: Single Shot MultiBox Detector," ECCV, 2016.
[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, 2016.
[23] S. Mallat, A wavelet tour of signal processing. Academic press, 1999.
[24] R. Girshick , “Fast R-CNN” IEEE International Conference on
Computer Vision (ICCV), pp. 1440-1448 ,2015
Fiducial, confidence and objective Bayesian posterior
distributions for a multidimensional parameter
arXiv:1612.01882v1 [math.ST] 6 Dec 2016
Piero Veronese and Eugenio Melilli
Bocconi University, Milano, Italy
Abstract
We propose a way to construct fiducial distributions for a multidimensional parameter using a step-by-step conditional procedure related to the inferential importance of the components of the parameter. For discrete models, in which the nonuniqueness of the fiducial distribution is well known, we propose to use the geometric
mean of the “extreme cases” and show its good behavior with respect to the more
traditional arithmetic mean. Connections with the generalized fiducial inference approach developed by Hannig and with confidence distributions are also analyzed. The
suggested procedure strongly simplifies when the statistical model belongs to a subclass of the natural exponential family, called conditionally reducible, which includes
the multinomial and the negative-multinomial models. Furthermore, because fiducial
inference and objective Bayesian analysis are both attempts to derive distributions
for an unknown parameter without any prior information, it is natural to discuss
their relationships. In particular, the reference posteriors, which also depend on the
importance ordering of the parameters are the natural terms of comparison. We
show that fiducial and reference posterior distributions coincide in the location-scale
models, and we characterize the conditionally reducible natural exponential families
for which this happens. The discussion of some classical examples closes the paper.
Keywords: Confidence distribution, Jeffreys prior, Location-scale parameter model,
Multinomial model, Natural exponential family, Reference prior.
1 Introduction
Fiducial distributions, after having been introduced by Fisher (1930, 1935) and widely
discussed (and criticized) in the subsequent years, have been de facto brushed aside for a
long time and only recently they have obtained new vitality. The original idea of Fisher
was to construct a distribution for a parameter which includes all the information given
by the data, without resorting to the Bayes theorem. This is obtained by transferring the
randomness from the observed quantity given by the statistical model to the parameter.
Originally Fisher considered a continuous sufficient statistic S with distribution function
Fθ , depending on a real parameter θ. Let qα (θ) denote the quantile of order α of Fθ and
let s be a realization of S. If qα (θ) is increasing in θ (i.e., Fθ is decreasing in θ), the
statement s < qα (θ) is equivalent to θ > qα−1 (s) and thus Fisher assumes qα−1 (s) as the
quantile of order 1 − α of a distribution which he names fiducial. The set of all quantiles
qα−1 (s), α ∈ (0, 1), establishes the fiducial distribution function Hs (θ) so that
$$H_s(\theta) = 1 - F_\theta(s) \qquad \text{and} \qquad h_s(\theta) = \frac{\partial}{\partial\theta} H_s(\theta) = -\frac{\partial}{\partial\theta} F_\theta(s). \qquad (1)$$
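To fix ideas, the following sketch (an illustration added here, not part of the original text) evaluates (1) numerically for a normal model with known variance, where all numerical values are arbitrary, and checks it against the closed form N(s/n, σ²/n) reported in Table 1 below.

```python
import numpy as np
from scipy import stats

# Model: S = sum of n i.i.d. N(mu, sigma^2) observations with sigma known, so S ~ N(n*mu, n*sigma^2);
# F_mu(s) is decreasing in mu, hence (1) gives H_s(mu) = 1 - F_mu(s).
n, sigma, s = 10, 2.0, 35.0                      # hypothetical sample size, known sigma, observed sum

mu = np.linspace(0.0, 7.0, 501)
H = 1.0 - stats.norm.cdf(s, loc=n * mu, scale=sigma * np.sqrt(n))   # H_s(mu) from (1)

# Closed form: the fiducial distribution of mu is N(s/n, sigma^2/n)
H_closed = stats.norm.cdf(mu, loc=s / n, scale=sigma / np.sqrt(n))
print(np.max(np.abs(H - H_closed)))              # zero up to rounding

# Fiducial density h_s(mu) by numerical differentiation of H_s(mu)
h = np.gradient(H, mu)
print(np.trapz(mu * h, mu))                      # fiducial mean, close to s/n = 3.5
```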
Of course Hs , and its density hs , must be properly modified if Fθ is increasing in θ.
Fisher (1973, cap.VI) also provides some examples of multivariate fiducial distributions obtained by a “step-by-step” procedure, but he never develops a general and
rigorous theory. This fact, along with the problem to cover discrete models, the presence
of some inconsistencies of the fiducial distribution (e.g. the marginalization paradox, see
Dawid & Stone, 1982), and the difficulties in its interpretation, gave rise to a quite strong
negative attitude towards Fisher proposal.
In the renewed interest for the fiducial approach a relevant role is played by the
generalized fiducial inference introduced and developed by Hannig (2009, 2013), see also
Hannig et al. (2016) for a review. He provides a formal and mathematically rigorous
definition which has a quite general applicability. The crucial element of his definition is
a data-generating equation X = G(U, θ), which links the unknown parameter θ and the
observed data X through a random element U having a known distribution. Roughly
speaking, by shifting the randomness of U from X to θ (inverting G with respect to
θ after having fixed X = x), the distribution given by the statistical model leads to a
distribution for the parameter θ. Contrary to the original idea of Fisher, the generalized
fiducial distribution is non-unique and Hannig widely discusses this point. Applications to different statistical models can be found for instance in Hannig et al. (2007),
Hannig & Iyer (2008) and Wandler & Hannig (2012).
Other recent contributions to the topic of fiducial distributions are given by Taraldsen & Lindqvist
(2013), Martin & Liu (2013) and Veronese & Melilli (2015), henceforth V&M (2015). In
this last paper the authors derive fiducial distributions for a parameter in a discrete or
continuous real natural exponential family (NEF), and discuss some of their properties
with particular emphasis on the frequentist coverage of the fiducial intervals.
In the past fiducial distributions have often been associated with confidence distributions even if these latter have a different meaning. A modern definition of confidence
distribution is given in Schweder & Hjort (2002) and Singh et al. (2005), see the book by
Schweder & Hjort (2016) for a complete and updated review on confidence distributions
and their connections with fiducial inference. It is important to emphasize that a confidence distribution must be regarded as a function of the data with reasonable properties
from a purely frequentist point of view. A confidence distribution is conceptually similar
to a point estimator: as there exist several unbiased estimators, several confidence distributions can be provided for the same parameter and choosing among them can be done
resorting to further optimality criteria. Thus the confidence distribution theory allows
to compare, in a quite general setting, formal distributions for the parameter derived by
different statistical procedures.
In this paper we suggest a way to construct a unique distribution for a multidimensional parameter, indexing discrete and continuous models, following a step-by-step
procedure similar to that used by Fisher (1973) in some examples. We call it fiducial distribution, but we look at it simply as a distribution on the parameter space in the spirit
of the confidence distribution theory. The key-point of the construction is the procedure
by conditioning: the distribution of the data is factorized as a product of one-dimensional
laws and, for each of these, the fiducial density for a real parameter component, possibly
conditional on other components, is obtained. The joint fiducial density for the parameter is then defined as the product of the (conditional) one-dimensional fiducial densities.
It is well known that Fisher’s fiducial argument presents several drawbacks in higher
dimensions, essentially because one cannot recover the fiducial distribution for a function of the parameters starting from the joint fiducial distribution, see Schweder & Hjort
(2016, Ch. 6 and 9). Our approach, when it can be applied, presents the advantage
to construct sequentially the fiducial distribution directly on the parameters of interest and different fiducial distributions can be obtained focusing on different parameters
of interest. Also, it should be noticed that a general definition of confidence distribution for a multidimensional parameter does not exist and more attention is given to the
construction of approximate confidence curves for specific nested families of regions, see
Schweder & Hjort (2016, Ch. 9 and Sec. 15.4).
Interestingly, our joint fiducial distribution coincides in many cases with the Bayesian
posterior obtained using the reference prior. This fact motivates the second goal of the
paper: to investigate the relationships between the objective Bayesian posteriors and
the suggested fiducial distributions. Objective Bayesian analysis, see e.g. Berger (2006),
essentially studies how to perform a good Bayesian inference, especially for moderate
sample size, when one is unwilling or unable to assess a subjective prior. Under this
approach, the prior distribution is derived directly from the model and thus it is labeled as objective. The reference prior, introduced by Bernardo (1979) and developed by
Berger & Bernardo (1992), is the most successful default prior proposed in the literature. For a multidimensional parameter the reference prior depends on the grouping and
ordering of its components and, in general, no longer coincides with the Jeffreys prior.
This is the reference prior only for a real parameter and it is unsatisfactory otherwise, as
well known.
Lindley (1958) was the first to discuss the connections between fiducial and posterior
distributions for a real parameter, when a real continuous sufficient statistic exists. V&M
(2015) extend this result to real discrete NEFs, characterizing all families admitting a
fiducial prior, i.e. a prior leading to a posterior coinciding with the fiducial distribution.
This prior is strictly related to the Jeffreys prior. We show here that when the parameter
is multidimensional this relationship no longer holds and a new one is established with
the reference prior. In particular we prove results for location-scale parameter models
and conditionally reducible NEFs, a subclass of NEFs defined in Consonni & Veronese
(2001).
The paper is structured as follows. Section 2 reviews some basic facts on fiducial
and confidence distributions for real NEFs and on generalized fiducial distributions. The
proposal for constructing a step-by-step multivariate fiducial distribution is presented in
Section 3, which also discusses: the relationships with confidence distributions (Section
3.1), the use of the geometric mean of fiducial densities for solving the non-uniqueness
problem in discrete models (Section 3.2), the connections with the generalized fiducial
inference and the consistency with the sufficiency principle (Section 3.3). Section 3.4 studies the fiducial distributions for conditionally reducible NEFs and provides their explicit
expression for a particular subclass which includes the multinomial and the negativemultinomial model. Section 4 analyzes the relationships between the fiducial distributions
and the reference posteriors, in particular for location-scale parameter models (Section
4.1) and NEFs (Section 4.2), characterizing those which admit the fiducial prior. Section
5 discusses further examples in which fiducial and reference posteriors coincide. Section 6
concludes the paper presenting some possible asymptotic extensions. Finally, Appendix
A1 collects some useful technical results on conditionally reducible NEFs, while Appendix
A2 includes the proofs of all the results stated in the paper.
2 Preliminary results
The modern definition of confidence distribution for a real parameter φ of interest, see
Schweder & Hjort (2002, 2016) and Singh et al. (2005), can be formulated as follows:
Definition 1. Let {Fφ,λ , φ ∈ Φ ⊆ R, λ ∈ Λ} be a parametric model for data X ∈ X ; here
φ is the parameter of interest and λ is a nuisance parameter. A function C : X × Φ → R
is a confidence distribution for φ if C(x, ·) is a distribution function for each x ∈ X
and C(X, φ0 ) has a uniform distribution in (0, 1) under Fφ0 ,λ0 , where (φ0 , λ0 ) is the true
parameter value.
The relevant requirement in the previous definition is the uniformity of the distribution, which ensures the correct coverage of the confidence intervals. As seen in Section
1, the confidence distribution theory must be placed in a purely frequentist context
and allows to compare distributions on the parameter space, obtained using different approaches. Finally, the definition of confidence distribution can be generalized by requiring
that the uniformity assumption holds only asymptotically.
Strictly linked to the notion of confidence distribution is that of confidence curve,
defined, for each observed X = x, as the function φ → cc(φ) = |1 − 2C(x, φ)|; see
Schweder & Hjort (2016). This function gives the extremes of equal-tail confidence inter-
vals for any level 1 − α, allowing a fast and clear comparison of confidence distributions
with respect to their interval length. When the parameter of interest is multidimensional,
how to extend the definitions of confidence distribution and confidence curve is much less
clear and various proposals have been made, see Schweder & Hjort (2002, 2016) and
Singh et al. (2005).
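As a small numerical illustration of Definition 1 and of the confidence curve (added here for concreteness; the normal model and all numbers are our own choices), take C(x, µ) = Φ(√n (µ − x̄)/σ) for the mean of a N(µ, σ²) sample with σ known; then C(X, µ₀) is exactly uniform under the true parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, sigma, mu0 = 20, 1.0, 3.0

def C(xbar, mu):
    # confidence distribution for the mean of a N(mu, sigma^2) sample, sigma known
    return stats.norm.cdf(np.sqrt(n) * (mu - xbar) / sigma)

# Uniformity of C(X, mu0) under the true parameter mu0 (Definition 1), checked by simulation
xbar = rng.normal(mu0, sigma / np.sqrt(n), size=100_000)
u = np.sort(C(xbar, mu0))
print(np.max(np.abs(u - (np.arange(1, u.size + 1) - 0.5) / u.size)))   # small: close to U(0, 1)

# Confidence curve for one observed sample: cc(mu) = |1 - 2 C(x, mu)|;
# the level set cc(mu) = 1 - alpha delimits the equal-tail (1 - alpha) confidence interval
xbar_obs = 3.2
mu_grid = np.linspace(2.5, 4.0, 301)
cc = np.abs(1 - 2 * C(xbar_obs, mu_grid))
```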
As detailed in Section 1, Hannig (2009) has proposed the notion of generalized fiducial
distribution, which is based on a data-generating equation X = G(θ, U). Because several
functions G can generate the same statistical model, and not all the resulting fiducial
distributions are reasonable in terms of properties or computational tractability, Hannig
(2013, Sec. 5) gives some hints on the choice of a default function G. In particular, if
X = (X1 , . . . , Xn ) is an independent identically distributed (i.i.d.) random sample from
an (absolutely) continuous distribution function Fθ , with density fθ , θ ∈ Rd , he suggests
to use Xi = Fθ−1 (Ui ), i = 1, . . . , n, where Ui are i.i.d. uniform random variables on (0, 1)
and Fθ−1 is the inverse (or generalized inverse) of Fθ . If other regularity assumptions are
satisfied, the generalized fiducial distribution for θ can be written as
$$r(\theta) = \frac{f_\theta(x)\, J(x, \theta)}{\int_\Theta f_\theta(x)\, J(x, \theta)\, d\theta}, \qquad (2)$$
where the expression of J(x, θ), given in Hannig (2013, formula (3.7)), is
$$J(x, \theta) = \sum_{\{(i_1, \ldots, i_d)\,:\, 1 \le i_1 < \cdots < i_d \le n\}} \frac{\det\!\big(\tfrac{d}{d\theta}\,(F_\theta(x_{i_1}), \ldots, F_\theta(x_{i_d}))\big)}{\prod_{j=1}^{d} f_\theta(x_{i_j})}. \qquad (3)$$
In (3) the numerator of the ratio is the determinant of the matrix whose kj-entry is
∂Fθ (xij )/∂θk . This procedure leads to the Fisher definition of fiducial density (1) when
n = d = 1.
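As a toy illustration of (2)–(3) (added here; it is not part of the original text), consider d = 1 with an i.i.d. N(µ, 1) sample, so that the sum in (3) runs over single indices and the 1 × 1 "determinant" is just dF_µ(x_i)/dµ, which we take in absolute value as in a Jacobian; all numerical values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=8)        # i.i.d. N(mu, 1) sample, mu unknown
mu = np.linspace(-2.0, 6.0, 2001)                 # grid for the parameter

# J(x, mu) from (3) with d = 1: |dF_mu(x_i)/dmu| / f_mu(x_i) = phi(x_i - mu)/phi(x_i - mu) = 1
# for each single index, so J is constant and equal to n.
dF_dmu = -stats.norm.pdf(x[:, None] - mu[None, :])            # d Phi(x - mu) / d mu
J = (np.abs(dF_dmu) / stats.norm.pdf(x[:, None] - mu[None, :])).sum(axis=0)

lik = np.prod(stats.norm.pdf(x[:, None] - mu[None, :]), axis=0)   # f_mu(x)
r = lik * J
r /= np.trapz(r, mu)                              # generalized fiducial density (2)

# With J constant in mu, r reduces to the normalized likelihood, i.e. the N(xbar, 1/n) density
target = stats.norm.pdf(mu, loc=x.mean(), scale=1 / np.sqrt(len(x)))
print(np.max(np.abs(r - target)))                 # numerically negligible
```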
Hannig (2013, Example 4) explicitly recognizes the advise of Wilkinson (1977) that
the choice of a fiducial distribution should depend on the parameter of interest and uses
the well known example of d independent normal distributions N(µi , 1), in which the
parameter of interest is θ = ( Σ_{i=1}^{d} µ_i^2 )^{1/2}. He shows that the default data-generating
equations Xi = µi + Ui , i = 1, . . . , d, lead to a fiducial distribution which has good
frequentist properties for inference on the µ’s, but very bad ones when the interest is on
θ, as already recognized by Stein (1959). Thus Hannig suggests an ad hoc alternative
equation, which leads to a better solution. Notice that our general procedure, suggested
in the next section, constructs a fiducial distribution starting directly from the parameter
of interest and do not required the choice a priori of a data generating function.
Fiducial distributions and their properties, with particular emphasis on the frequentist
coverage of the fiducial intervals, for a discrete or a continuous real regular NEF, are
discussed in V&M (2015). More specifically, consider the sufficient statistic S associated
with a sample of size n and denote by S its support. Let Fθ (s) be the distribution
function of S and pθ (s) = exp {θs − nM (θ)} the corresponding density (with respect to
a measure ν). Let a = inf S, b = sup S and define S ∗ = [a, b) if ν(a) > 0, otherwise
S ∗ = (a, b). Then, for s ∈ S ∗ , Petrone & Veronese (2010) have proved that
$$H_s(\theta) = \begin{cases} 0 & \theta \le \inf\Theta \\ 1 - F_\theta(s) & \inf\Theta < \theta < \sup\Theta \\ 1 & \theta \ge \sup\Theta \end{cases} \qquad (4)$$
is a fiducial distribution function for the natural parameter θ. It follows that the fiducial
density of θ is
$$h_s(\theta) = \frac{\partial}{\partial\theta} H_s(\theta) = -\frac{\partial}{\partial\theta} F_\theta(s) = \int_{(-\infty, s]} \big(n M'(\theta) - t\big)\, p_\theta(t)\, d\nu(t). \qquad (5)$$
It is important to underline, and simple to verify, that the distribution function Hs is
also a confidence distribution (only asymptotically, in the discrete case), according to
Definition 1.
Notice that, for discrete NEFs, Fθ (s) = Prθ {S ≤ s} and Prθ {S < s} do not coincide
and thus, besides Hs in (4), one could define a left fiducial distribution as
$$H_s^{\ell}(\theta) = 1 - \Pr_\theta\{S < s\}. \qquad (6)$$
For convenience, sometimes Hs will be called right fiducial distribution. A standard way
to overcome this non-uniqueness is referring to the half-correction device (see Schweder & Hjort,
2016, pag. 62) which amounts to consider the mixture HsA (θ) = (Hs (θ) + Hsℓ (θ))/2 =
Prθ {S > s} + Prθ {S = s}/2, whose density is the arithmetic mean of hs (θ) and hℓs (θ).
Instead, we will suggest to average h_s and h_s^ℓ using their geometric mean h_s^G (suitably
normalized) and show that it presents better properties than h_s^A (Section 3.2) and a more
direct connection with objective Bayesian inference (Section 4.2), even if, operationally,
the difference is usually not particularly big.
Table 1 provides the fiducial distributions obtained in V&M (2015) for some important
discrete and continuous NEFs, which will be used in the forthcoming examples. It also
establishes the abbreviations used in the paper for the standard distributions.
Table 1: Fiducial distributions for some real NEFs
N(µ, σ 2 )
Sufficient
Fiducial
statistic
P
Hs (µ) : N(s/n, σ 2 /n)
S=
distributions
i Xi
(σ 2 known)
N(µ, σ 2 )
S=
P
S=
P
i
Xi
Hs (λ) : Ga(nα, s)
S=
P
i
log(Xi /x0 )
Hs (λ) : Ga(n, s)
S=
P
i
Xic
Hs (λ) : Ga(n, s)
S=
P
i
Xi
Hs (p) : Be(s + 1, nm − s)
i (Xi
− µ)2
Hs (σ 2 ): In-Ga(n/2, s/2)
(µ known)
Ga(α, λ)
(α known)
Pa(λ, x0 )
(x0 known)
We(λ, c)
(c known)
Bi(m, p)
Hsℓ (p) : Be(s, nm − s + 1)
(m known)
HsG (p) : Be(s + 1/2, nm − s + 1/2)
Po(µ)
S=
P
i
Xi
Hs (µ) : Ga(s + 1, n)
Hsℓ (µ) : Ga(s, n)
HsG (µ) : Ga(s + 1/2, n)
Ne-Bi(m, p)
S=
P
i
Xi
Hs (p) : Be(nm, s + 1)
Hsℓ (p) : Be(nm, s)
(m known)
HsG (p) : Be(nm, s + 1/2)
The following notations are used: Ga(α, λ) for a gamma distribution with shape α and mean α/λ;
In-Ga(α, λ) for an inverse-gamma distribution (if X ∼ Ga(α, λ) then 1/X ∼ In-Ga(α, λ)); Be(α, β) for
a beta distribution with parameters α and β; Bi(m, p) for a binomial distribution with m trials and success probability p; Ne-Bi(m, p) for a negative-binomial with m successes and success probability p; Po(µ)
for the Poisson distribuition with mean µ; Pa(λ, x0 ) for a Pareto distribution with density λxλ0 x−λ−1 ,
x > x0 > 0, λ > 0; We(λ, c) for a Weibull distribution with density cλxc−1 exp(−λxc), x, λ, c > 0.
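As a quick check of the discrete entries of Table 1 (our own sketch, assuming scipy), the Poisson row can be verified directly: with S the sum of n Poisson(µ) observations, Hs(µ) = 1 − Fµ(s) equals the Ga(s + 1, n) distribution function and Hsℓ(µ) = 1 − Prµ{S < s} equals the Ga(s, n) one.

```python
import numpy as np
from scipy.stats import poisson, gamma

n, s = 5, 7                                 # illustrative values
mu = np.linspace(0.01, 6, 200)
right = 1 - poisson.cdf(s, n * mu)          # Hs(mu) = 1 - F_mu(s)
left = 1 - poisson.cdf(s - 1, n * mu)       # Hs^l(mu) = 1 - Pr_mu{S < s}
print(np.allclose(right, gamma.cdf(mu, a=s + 1, scale=1 / n)))  # True
print(np.allclose(left, gamma.cdf(mu, a=s, scale=1 / n)))       # True
```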
3 Fiducial distributions for multidimensional parameters
A natural way to construct a suitable fiducial distribution for a multidimensional parameter is to follow the step-by-step procedure used by Fisher (1973) in some examples. The
key-point of our proposal stems from the factorization of the sampling distribution as a
product of one-dimensional conditional laws. For each of these the fiducial density for a
real component of the parameter, possibly conditional on other components, is defined. It
is well known that different factorizations of sampling distributions can produce different
joint fiducial distributions, see e.g. Dempster (1963). However, we do not consider this
aspect a drawback of the procedure if it is linked to the inferential importance ordering
of the parameter components implied by the factorization. For example, if a parameter θ ∈ R2 is transformed in such a way that φ is the parameter of interest and λ the
nuisance, the obvious ordering is (φ, λ) and a suitable factorization must be defined accordingly, see Example 4 (ctd.) in this section for an illustration. The crucial role played
by the ordering of the parameters according to their inferential importance is widely acknowledged in objective Bayesian inference, in which reference priors are different for
different orderings, see Section 4.
In order to construct a fiducial distribution, we consider two basic transformations:
one involving the sample data X = (X1 , . . . , Xn ), having a distribution parameterized by θ = (θ1 , . . . , θd ), d ≤ n, and one involving θ. Given X, consider a statistic
T = (T1 , . . . , Tm ), d ≤ m ≤ n, with density pθ (t), which summarizes X without losing
information on θ. T can be a sufficient statistic or a one-to-one transformation of X.
Split T in (T[d] , T−[d] ), where T[d] = (T1 , . . . , Td ) and T−[d] = (Td+1 , . . . , Tm ), and suppose that T−[d] is ancillary for θ. As a consequence pθ (t) = pθ (t[d] |t−[d] )p(t−[d] ) and all
the information on θ provided by X is included in the conditional distribution of T[d]
given T−[d] .
Assume now that there exists a one-to-one smooth reparameterization from θ to φ,
with φ1 , . . . , φd ordered with respect to their importance, such that
pφ(t[d]|t−[d]) = Π_{k=1}^d pφd−k+1(tk|t[k−1], t−[d]; φ[d−k]).   (7)
The density pφd−k+1 (tk |t[k−1] , t−[d] ; φ[d−k] ), with the corresponding distribution function
Fφd−k+1 (tk |t[k−1] , t−[d] ; φ[d−k] ), must be interpreted as the conditional distribution of Tk
given (T[k−1] = t[k−1], T−[d] = t−[d] ), parameterized by φd−k+1 , assuming φ[d−k] known.
In the following, we will always assume that all the one-dimensional conditional distribution functions Fφj ’s involved in the analysis are monotone and differentiable in φj and
have limits 0 and 1 when φj tends to the boundaries of its domain. Notice that this is
always true if Fφj belongs to a NEF, see (4). Under these assumptions, the joint fiducial
density of φ is obtained as
ht(φ) = Π_{k=1}^d ht[k],t−[d](φd−k+1|φ[d−k]),   (8)
and
ht[k],t−[d](φd−k+1|φ[d−k]) = | (∂/∂φd−k+1) Fφd−k+1(tk|t[k−1], t−[d]; φ[d−k]) |.   (9)
Several applications of this procedure to well known models will be provided in Section
5. Here we illustrate some interesting features of the fiducial distribution (8).
i) The existence of an ancillary statistic is not necessary if there exists a sufficient statistic
with the same dimension of the parameter (m = d). An important case is m = d = 1 so
that formula (8) and (9) reduce to ht (φ) = |∂Fφ (t)/∂φ|, the original formula suggested
by Fisher (1930).
ii) If one is only interested in φ1 , it follows from (7) that it is enough to consider
ht(φ1) = | (∂/∂φ1) Fφ1(td|t[d−1], t−[d]) |,
which, depending on all observations, does not lose any sample information. A typical
choice for Td is given by the maximum likelihood estimator φb1 of φ1 and thus, when φb1 is
not sufficient, we have to consider the distribution of φb1 given the ancillary statistic t−[d] .
Similarly, if one is interested in φ1 , φ2 , it is enough to consider ht (φ1 ) · ht[d−1] ,t−[d] (φ2 |φ1 ),
and so on.
iii) When an ancillary statistic T−[d] is needed, the fiducial distribution (8) is invariant
with respect to any one-to-one transformation of T−[d] . All the sampling distributions
are conditional on it and thus any transformation establishes the same constraints; see
Section 4.1 for an example.
iv) The construction by successive conditioning makes the fiducial distribution invariant under the so called one-to-one lower triangular transformation of T[d] , for fixed
T−[d] . More precisely, we consider a transformation T∗ = (T∗[d] , T−[d] ) such that Tk∗ =
gk (T[k] , T−[d] ), for k = 1, . . . , d. To see this, assuming for instance t∗k = gk (t[k] , t−[d] )
increasing in tk , it is sufficient to show that
Prφd−k+1 (Tk∗ ≤ t∗k | T∗[k−1] = t∗[k−1] , T−[d] = t−[d] ; φ[d−k] )
= Prφd−k+1 (gk (T[k] , T−[d] ) ≤ gk (t[k] , t−[d] ) | T∗[k−1] = t∗[k−1] , T−[d] = t−[d] ; φ[d−k] )
= Prφd−k+1 (Tk ≤ tk | T[k−1] = t[k−1] , T−[d] = t−[d] ; φ[d−k] ).
It follows immediately that T and T∗ lead to the same fiducial distribution.
v) If (T[k−1] , T−[d] ) is sufficient for φ[d−k] , for each k, then the conditional distribution
of Tk given (T[k−1] = t[k−1] , T−[d] = t−[d] ) does not depend on φ[d−k] and the fiducial
distribution (8) becomes the product of the “marginal” fiducial distributions of the φk ’s.
As a consequence, (9) can be used alone to make inference on φd−k+1 and the fiducial
distribution does not depend on the inferential ordering of the parameters. An important
case in which this happens will be discussed in Section 3.4.
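The step-by-step construction (8)-(9) can also be sketched numerically; the snippet below (our own schematic illustration, not code from the paper) treats the case d = 2, with the conditional distribution functions supplied by the user and the derivatives in (9) replaced by central finite differences. The names F_phi1 and F_phi2 are hypothetical placeholders.

```python
# Schematic sketch of (8)-(9) for d = 2, under the stated assumptions.
def fiducial_density_2d(F_phi1, F_phi2, t1, t2, eps=1e-6):
    """F_phi1(t2, t1, phi1): cdf of T2 given T1 = t1, indexed by phi1 only.
       F_phi2(t1, phi2, phi1): cdf of T1, indexed by phi2 with phi1 treated as known.
       Returns h_t(phi1, phi2) as in (8)-(9), up to numerical error."""
    def h(phi1, phi2):
        d1 = (F_phi1(t2, t1, phi1 + eps) - F_phi1(t2, t1, phi1 - eps)) / (2 * eps)
        d2 = (F_phi2(t1, phi2 + eps, phi1) - F_phi2(t1, phi2 - eps, phi1)) / (2 * eps)
        return abs(d1) * abs(d2)   # product of the two conditional fiducial densities
    return h
```

For instance, the two-sample Poisson setting of Section 5.2 fits this scheme once T1 is replaced by T1 + T2 as the conditioning statistic.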
We close this section by establishing the invariance property of the fiducial distribution
ht (φ) under a lower triangular transformation, i.e. a transformation from φ to λ =
(λ1 , . . . , λd ), say, which maintains the same decreasing ordering of importance in the
components of the two vectors.
Proposition 1. If φ = φ(λ) is a one-to-one lower triangular continuously differentiable function from Λ to Φ, then the fiducial distribution hφt(φ), obtained applying (8) to the model pφ(t), and the fiducial distribution hλt(λ), obtained applying (8) to the model pλ(t) = pφ(λ)(t), are such that, for each measurable A ⊂ Φ,
∫_A hφt(φ) dφ = ∫_{λ−1(A)} hλt(λ) dλ.   (10)

3.1 Relationships with confidence distributions
Given a real NEF, Hs (θ) in (4) is an exact or approximate confidence distribution if
the observations are continuous or discrete, respectively. It is possible to verify that the
same is true for the marginal fiducial distribution of the main parameter of interest φ1 in
the more general definition (8). Indeed, the distribution function of φ1 is Ht (φ1 ) = 1 −
Fφ1 (td |t[d−1] , t−[d] ), so that the first requirement in Definition 1 is clearly satisfied, thanks
to the assumption on the distribution function given after formula (7). For what concerns
the uniformity condition, assuming that Fφ1 is decreasing in φ1 (if it is increasing, replace
1 − Fφ1 with Fφ1 ), we have, for u ∈ (0, 1) and arbitrary φ,
Prφ{ Ht[d],t−[d](φ1) ≤ u } = 1 − Prφ{ Fφ1(td|T[d−1], T−[d]) < 1 − u }
= 1 − ∫ Prφ{ Fφ1(td|t[d−1], t−[d]) < 1 − u } dFφ(t[d−1], t−[d]) = u,
because, by construction, the integrand is equal to 1 − u for all fixed (t[d−1] , t−[d] ).
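This uniformity can be illustrated by simulation (our own sketch, assuming numpy/scipy). For an i.i.d. Exp(λ) sample, Table 1 gives Hs(λ): Ga(n, s); evaluating this distribution function at the true λ over repeated samples should produce approximately uniform values on (0, 1).

```python
import numpy as np
from scipy.stats import gamma, kstest

rng = np.random.default_rng(0)
n, lam0, reps = 8, 2.5, 5000                     # illustrative values
s = rng.gamma(shape=n, scale=1 / lam0, size=reps)  # sufficient statistic S = sum of the X_i
u = gamma.cdf(lam0, a=n, scale=1 / s)              # Hs(lambda0) for each simulated sample
print(kstest(u, "uniform"))                        # large p-value: no evidence against uniformity
```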
3.2 The discrete case: the geometric mean of the left and right fiducial densities
As mentioned in Section 2, for a discrete statistic S with distribution depending on a
real parameter θ, we suggest using the geometric mean of the right and left fiducial densities, hsG(θ) = c^{−1}(hs(θ)hsℓ(θ))^{1/2}, where c is the normalizing constant, instead of their arithmetic mean hsA(θ).
A first justification of the use of the geometric mean of densities is suggested by Berger et al. (2015), who mention its property of being the density “closest” to hs and hsℓ with respect to the Kullback-Leibler divergence, as specified in the following proposition. We give a simple proof of this fact, without resorting to the calculus of variations. Recall that, given two densities p and q, having the same support and the same dominating measure ν, the Kullback-Leibler divergence of p from q is defined as KL(q|p) = ∫ q(x) log(q(x)/p(x)) dν(x).
Proposition 2. Consider two densities p1 and p2 with the same support. The density
q which minimizes KL(q|p1 ) + KL(q|p2 ) is given by q = pG ∝ (p1 p2 )1/2 , which is the
(normalized) geometric mean of p1 and p2 .
Furthermore, Krishnamoorthy & Lee (2010) observe that a distribution for θ, whose
aim is to give a synthesis of two fiducial distributions, should “stochastically” lie between
them. In our setting, the extreme distributions are Hs and Hsℓ . This property is surely
satisfied by the arithmetic mean, because Hs (θ) < HsA (θ) < Hsℓ (θ) uniformly with respect
to θ, for each s belonging to the set S0 for which both Hs and Hsℓ can be defined. The
same inequalities are true for HsG under mild assumptions. As usual, here we assume
that Hs (θ) is defined as 1 − Fθ (s).
Proposition 3. Let pθ , θ ∈ Θ ⊆ R, be the probability mass function of a real observation
S, having a continuous derivative with respect to θ. For each s ∈ S0 , assume that the
function
γs(θ) = (∂pθ(s)/∂θ) / ( −∂Fθ(s)/∂θ ) = (∂pθ(s)/∂θ) / hs(θ)
is decreasing on Θ. Then Hs (θ) < HsG (θ) < Hsℓ (θ) uniformly on Θ.
The assumptions required in the previous proposition are satisfied by many important
models. For example we have the following
Corollary 1. If pθ is the probability mass function of a real NEF, then Hs (θ) < HsG (θ) <
Hsℓ (θ) uniformly on Θ.
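A quick numerical illustration of Corollary 1 for the Poisson model (ours, assuming scipy; the ordering in the natural parameter θ carries over to µ = e^θ since the reparameterization is monotone): using the Table 1 forms, the three fiducial distribution functions are ordered as Hs < HsG < Hsℓ uniformly on the grid.

```python
import numpy as np
from scipy.stats import gamma

n, s = 4, 6                                    # illustrative values
mu = np.linspace(0.05, 8, 400)
H_right = gamma.cdf(mu, a=s + 1, scale=1 / n)  # Hs
H_geom = gamma.cdf(mu, a=s + 0.5, scale=1 / n) # Hs^G
H_left = gamma.cdf(mu, a=s, scale=1 / n)       # Hs^l
print(np.all(H_right < H_geom), np.all(H_geom < H_left))   # True True
```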
We now discuss the relationship between HsG and HsA .
Proposition 4. Let pθ , θ ∈ Θ ⊆ R, be the probability mass function of a real observation
S, satisfying the following assumptions in addition to those stated in Proposition 3:
lim_{θ→inf Θ} γs(θ) = +∞;   lim_{θ→sup Θ} γs(θ) = −1.
Then, for each s ∈ S0 , there exists θ ∗ ∈ Θ (depending on s) such that HsG (θ) < HsA (θ)
for θ < θ ∗ and HsG (θ) > HsA (θ) for θ ≥ θ ∗ .
The result in Proposition 4 is important in connection with confidence intervals,
because it shows that HsG gives, for a fixed level, a confidence interval smaller than that
obtained from HsA (θ); see Figure 1 (graph 2) for an example.
Notice that the assumptions on γs (θ) in Proposition 4 are fulfilled by a real NEF
with natural parameter space Θ = R, as it occurs in the binomial and Poisson models.
However, these assumptions are not necessary to ensure the stated behavior of HsG and
HsA , that we conjecture to be quite general, as the following example shows.
Example 1. Consider an i.i.d. sample of size n from a logarithmic distribution with
parameter θ ∈ (0, 1) with probability mass function
pθ(x) = θ^x / (−x log(1 − θ)) · I_{1,2,...}(x).
The sufficient statistic T = Σ_{i=1}^n Xi is distributed as
pθ(t) = n! |s(t, n)| θ^t / ( t! (−log(1 − θ))^n ) · I_{n,n+1,...}(t),
where s(t, n) is the Stirling number of the first kind with arguments t and n, see
Johnson et al. (2005). The distribution of T belongs to a real NEF with Fθ (t) decreasing
in θ, so that the fiducial distribution function Ht , for t = n, n + 1, . . . and θ ∈ (0, 1), is
Ht(θ) = 1 − Fθ(t) = 1 − Σ_{j=n}^t n! |s(j, n)| θ^j / ( j! (−log(1 − θ))^n ).
For this model
γt(θ) = −(∂pθ(t)/∂θ) / (∂Fθ(t)/∂θ) = − |s(t, n)| θ^{t−1} (nθ + t(1 − θ) log(1 − θ)) / [ t! Σ_{j=n}^t |s(j, n)| θ^{j−1} (nθ + j(1 − θ) log(1 − θ))/j! ].
It can be seen that, for each t ≥ n, γt is decreasing in θ and
lim_{θ→0+} γt(θ) = +∞,   lim_{θ→1−} γt(θ) = − ( Σ_{j=n}^t |s(j, n)| t! / ( |s(t, n)| j! ) )^{−1} ∈ (−1, 0).
Nevertheless, the fiducial distributions HtG and HtA behave as stated in Proposition 4,
see Figure 1 (graph 1).
Finally, we justify our preference for HsG versus HsA showing that its confidence risk
under quadratic penalty, as defined in Schweder & Hjort (2016, Sec. 5.3), is uniformly
better for all the important discrete models reported in Table 1. The confidence risk
[Figure 1. Graph 1: fiducial distributions for a sample from the logarithmic distribution (n = 10, t = 12): htℓ(θ) (red), htA(θ) (green), htG(θ) (yellow), ht(θ) (blue). Graph 2: confidence curves for htA(θ) (green) and htG(θ) (yellow).]
R(µ, Hs) for the mean parameter µ and a confidence (or fiducial) distribution Hs under quadratic penalty is
R(µ, Hs) = ∫ (µ′ − µ)² dHs(µ′) = Eµ(VarHs(µ)) + Eµ(µ̂ − µ)²,
where VarHs (µ) denotes the variance of µ under Hs , Eµ the expected value with respect
to the distribution of S and µ̂ = E Hs (µ) is the mean of µ under Hs . Now, recalling
that for the binomial and the negative-binomial distribution in Table 1, assuming m = 1
for simplicity, we have µ = p and µ = (1 − p)/p, it is easy to verify that for both
these models and the Poisson model µ̂ is the same under HsG and HsA . As a consequence,
R(µ, HsA) − R(µ, HsG) = Eµ(VarHsA(µ)) − Eµ(VarHsG(µ)), which becomes (4(n+1)(n+2))^{−1}, (4(n−1)(n−2))^{−1} and (4n²)^{−1}, for the three models above, respectively. All these values
are strictly positive for each n uniformly in µ.
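The Poisson case of this comparison reduces to elementary arithmetic (our own check): with m = 1, µ̂ is the same under HsA and HsG, and the gap between the variances equals 1/(4n²) for every s, hence R(µ, HsA) − R(µ, HsG) = (4n²)^{−1} uniformly in µ.

```python
# Poisson model: Hs^G is Ga(s+1/2, n); Hs^A is the equal-weight mixture of Ga(s+1, n) and Ga(s, n).
n, s = 7, 11                                   # illustrative values
var_G = (s + 0.5) / n**2                       # variance under Hs^G
m_r, m_l = (s + 1) / n, s / n                  # means of the two mixture components
var_A = 0.5 * ((s + 1) / n**2 + s / n**2) + 0.25 * (m_r - m_l) ** 2   # mixture variance
print(var_A - var_G, 1 / (4 * n**2))           # equal (up to rounding)
```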
Let us now consider the fiducial distribution for a multivariate parameter defined in
(8). For each discrete component of the product, starting from Prφd−k+1 {Tk ≤ tk |T[k−1] =
t[k−1] , T−[d] = t−[d] ; φ[d−k] } = Fφd−k+1 (tk |t[k−1] , t−[d] ; φ[d−k] ) and Prφd−k+1 {Tk < tk |T[k−1] =
t[k−1] , T−[d] = t−[d] ; φ[d−k] }, it is possible to define a right and a left fiducial distribution,
respectively, and hence their geometric and arithmetic means. Notice that each compo-
nent of (8) involves a one-dimensional parameter and a real observation (the remaining
quantities being fixed), so that Propositions 3 and 4 can be applied. Multivariate fiducial distributions for discrete observations can thus be obtained by combining these univariate distributions in the various possible ways. In particular, we will consider Ht(φ), obtained as the product of all the right univariate conditional fiducial distributions, Htℓ(φ),
obtained as the product of all the left univariate conditional fiducial distributions, HtA (φ),
defined as the product of the d mixtures HtA[k],t−[d] = (Ht[k],t−[d] + Hℓt[k],t−[d])/2 and finally HtG(φ), corresponding to the density htG(φ) obtained as the product of the d geometric means hGt[k],t−[d] ∝ (ht[k],t−[d] · hℓt[k],t−[d])^{1/2}. Notice that htG(φ) coincides with the geometric mean of all the 2^d fiducial densities derived as described above.
3.3 Fiducial inference and the sufficiency principle
The step-by-step procedure introduced at the beginning of Section 3 gives a generalized
fiducial distribution, according to Hannig (2009), if one considers as data-generating
equation T = G(φ, U) with
Tk = Gk(φ, U[k], U−[d]) for k = 1, . . . , d, and Tk = Uk for k = d + 1, . . . , m,
where U is a random vector with a completely known distribution. The functions Gk
can be explicitly obtained iteratively as follows:
T1 = G1(φ, U1, U−[d]) = F−1φd( U1 | U−[d]; φ[d−1] );
T2 = G2(φ, U1, U2, U−[d]) = F−1φd−1( U2 | G1(φ, U1, U−[d]), U−[d]; φ[d−2] );
and so on.
It is interesting to observe that the generalized fiducial distribution r(θ) given in (2)
does not necessarily satisfy the sufficiency principle. This can be verified immediately
by looking at Example 2 in Hannig (2013), in which a uniform distribution on (θ, θ²)
is considered and r(θ) does not depend on the Xi ’s only through the sufficient statistic
S = (X(1) , X(n) ), where X(i) denotes the i-th order statistic. Despite its simple form,
this model is highly irregular, but the inconsistency with the sufficiency principle of
the generalized fiducial distribution r(θ) can also occur for more standard models. In
particular, if a real continuous sufficient statistic S for a real parameter exists, one could
derive two different fiducial distributions starting from S or from the whole sample. A
simple example of this issue can be easily constructed considering a beta model with
parameters 2 and θ. Another interesting example is the following.
Example 2. Let X = (X1 , . . . , Xn ) be an i.i.d. sample from a truncated exponential
density pθ (x) = θe−θx /(1 − e−θ ), 0 < x < 1, θ ∈ R − {0}. This density is not defined for
θ = 0, but it can be completed by continuity setting p0 (x) = 1. The distribution function
of Xi is Fθ (xi ) = (1 − e−θxi )/(1 − e−θ ), 0 < xi < 1, so that from (3) we have
J(x, θ) = s/θ + e^{−θ}/( θ(1 − e^{−θ}) ) Σ_{i=1}^n (1 − e^{−θxi}),
where s = Σ_{i=1}^n xi. Thus, using (2), we obtain
r(θ) ∝ [ θ^{n−1} e^{−θs}/(1 − e^{−θ})^{n+1} ] ( s(1 − e^{−θ}) + e^{−θ} Σ_{i=1}^n (1 − e^{−θxi}) ),  θ ∈ R,   (11)
[Figure 2. Graph 1: fiducial densities r(θ; x1 = 0.05, x2 = 0.95) (red), r(θ; x1 = 0.5, x2 = 0.5) (green), h(θ; s = 1) (blue). Graph 2: fiducial densities r(θ; x1 = 0.02, x2 = 0.48) (red), r(θ; x1 = 0.2, x2 = 0.3) (green), h(θ; s = 0.5) (blue).]
[Figure 3. Confidence curves for r(θ; x1 = 0.5, x2 = 0.5) (red) and h(θ; s = 1) (blue).]
which depends on the values of the specific xi ’s. Consider now the sufficient statistic
S = Σ_{i=1}^n Xi and, for simplicity, assume n = 2. The density of S is
pθ(s) = θ² e^{−θs} s / (1 − e^{−θ})² for 0 < s ≤ 1,  pθ(s) = θ² e^{−θs} (2 − s) / (1 − e^{−θ})² for 1 < s < 2,
and the generalized fiducial density (11) reduces to hs (θ) = ∂Fθ (s)/∂θ. In Figure 2 we
report the fiducial densities r and hs for different values of (x1 , x2 ) and s = x1 + x2 . For
s = 1 all densities are symmetric with the mode in 0, while the dispersion is increasing
in |x1 − x2 |, so that the more concentrated fiducial density is obtained for x1 = x2 = 0.5.
However, for s 6= 1, the densities have different modes and are shifted to the left when x1
increases. In all cases the fiducial density hs is in the middle of the various cases. Notice
that hs has all the good properties discussed in V&M (2015) and, in particular, it is a
confidence distribution because the model belongs to a NEF. The confidence intervals
corresponding to hs (θ) are slightly smaller than those corresponding to r(θ), as can be
seen from the confidence curves reported in Figure 3. For instance, when x1 = x2 = 0.5,
the 95% confidence intervals are (-4.191,4.191) and (-4.399,4.399), for hs (θ) and r(θ),
respectively.
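The comparison of Example 2 is easy to reproduce numerically (our own sketch, assuming numpy; values of (x1, x2) are illustrative): r(θ) is evaluated, unnormalized, from (11), while hs(θ) is obtained from the density of S by numerical integration and a finite-difference derivative of Fθ(s).

```python
import numpy as np

def r_unnorm(theta, x):
    # unnormalized generalized fiducial density (11), n = len(x)
    s = sum(x)
    lead = theta**(len(x) - 1) * np.exp(-theta * s) / (1 - np.exp(-theta))**(len(x) + 1)
    return lead * (s * (1 - np.exp(-theta)) + np.exp(-theta) * sum(1 - np.exp(-theta * xi) for xi in x))

def F_S(s, theta, grid=4000):
    # cdf of S = X1 + X2 for the truncated exponential, by integrating p_theta
    u = np.linspace(1e-9, s, grid)
    dens = np.where(u <= 1, u, 2 - u) * theta**2 * np.exp(-theta * u) / (1 - np.exp(-theta))**2
    return float(np.sum((dens[1:] + dens[:-1]) * np.diff(u)) / 2)

def h_s(s, theta, eps=1e-4):
    # fiducial density based on the sufficient statistic: |dF_theta(s)/d theta|
    return abs(F_S(s, theta + eps) - F_S(s, theta - eps)) / (2 * eps)

# shapes can be compared after normalizing r_unnorm over a grid of theta values
print(r_unnorm(1.0, [0.5, 0.5]), h_s(1.0, 1.0))
```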
The computation of the fiducial distribution Ht defined in (8) is greatly simplified
starting with the sufficient statistic instead of the whole sample. However, when both
the alternatives are feasible, they seem to lead to the same result. In particular, the
following proposition states that the sufficiency principle is always satisfied by Ht when
there exists a complete sufficient statistic for the parameter.
Proposition 5. Consider the fiducial distribution Ht (φ), defined in (8) and (9), with
T a one-to-one transformation of the data X = (X1 , . . . , Xn ). If S = S(T) is a complete
and sufficient statistic of dimension d for φ, such that S = g(T[d] , T−[d] ) is a one-to-one lower triangular transformation of T[d] for fixed T−[d] , then the fiducial distribution
Hs (φ) for φ, obtained using S instead of T in (8) and (9), coincides with Ht (φ).
Notice that the completeness of S is not necessary to satisfy the sufficiency principle,
as the following example shows.
Example 3. Given an i.i.d. sample X of size n from a uniform distribution on (θ, θ + 1),
it is immediate to verify that the sufficient statistic S = (X(1) , X(n) ) is not complete.
Because θ is a location parameter, Z = X(n) − X(1) is an ancillary statistic and the
fiducial distribution for θ can be obtained starting from the distribution function of X(n)
given Z, which is Fθ (x(n) |z) = (x(n) − z − θ)/(1 − z), x(n) − 1 < θ < x(n) − z = x(1) . Thus
hs(θ) = −(∂/∂θ) Fθ(x(n)|z) = −(∂/∂θ) (x(n) − z − θ)/(1 − z) = 1/(1 − z),  x(n) − 1 < θ < x(n) − z = x(1).   (12)
If we start directly with X, we can consider the distribution function of Xn given Z =
(Z1 , . . . , Zn−1 ), where Zi = Xn − Xi . Omitting tedious calculations, we have
Fθ(xn | z) = ( xn − θ − max(zi, 0) ) / ( 1 + min(zi, 0) − max(zi, 0) ),  θ + max(zi, 0) < xn < θ + 1 + min(zi, 0),
and thus, for xn − 1 − min(zi, 0) < θ < xn − max(zi, 0),
hx(θ) = −(∂/∂θ) Fθ(xn|z) = 1 / ( 1 + min(zi, 0) − max(zi, 0) ).
Observing that min(zi , 0) = zi unless xn = x(n) and recalling that zi = xn − xi , i =
1, . . . , n−1, it follows that xn −1−min(zi , 0) = x(n) −1 and similarly xn −max(zi , 0) = x(1) ,
so that hx (θ) coincides with hs (θ) given in (12).
3.4 Conditionally reducible natural exponential families
Consider a multivariate natural exponential family whose density, with respect to a fixed
σ-finite positive measure ν, is given by
pθ(x) = exp{ Σ_{k=1}^d θk xk − M(θ) },  θ = (θ1, . . . , θd) ∈ Θ,  x = (x1, . . . , xd) ∈ Rd.   (13)
A NEF is d-conditionally reducible (in the sequel cr-NEF) if its joint density (13) can
be factorized as a product of d conditional densities each belonging to a real exponential
family. More precisely if
pφ(x|φ(θ)) = Π_{k=1}^d pφk(xk|x[k−1]; φk(θ)) = Π_{k=1}^d exp{ φk(θ) xk − Mk(φk(θ); x[k−1]) },   (14)
where φ = (φ1 , . . . , φd ) is a one-to-one function from Θ onto φ(Θ) = Φ. Furthermore,
it can be shown that Φ = Φ1 × · · · × Φd , with φk ∈ Φk , k = 1, . . . , d, so that the φk ’s are
variation independent. Notice that φk is the natural parameter of the k-th conditional
distribution. For details on these families, with emphasis on enriched conjugate priors
and on reference Bayesian analysis, see Consonni & Veronese (2001) and Consonni et al.
(2004), respectively. Both these papers deal in particular with the families having simple quadratic variance function, named NEF-SQVFs, which include, as most interesting
cases, the multinomial and negative-multinomial models, see Casalis (1996) and Appendix A1.
Example 4 (Multinomial model). Consider a random vector X distributed according to a multinomial distribution and denote by pk the probability of the k-th outcome Xk, k = 1, . . . , d, with Σ_{k=1}^d xk ≤ N and Σ_{k=1}^d pk ≤ 1. It is well known that the conditional distribution of Xk given X[k−1] = x[k−1], k = 2, . . . , d, is Bi(N − Σ_{j=1}^{k−1} xj, pk/(1 − Σ_{j=1}^{k−1} pj)), whereas the marginal distribution of X1 is Bi(N, p1). Since a binomial distribution is a real NEF, one can factorize the multinomial distribution as in (14) with
φk = log( pk / (1 − Σ_{j=1}^k pj) ),  Mk(φk; x[k−1]) = ( N − Σ_{j=1}^{k−1} xj ) log(1 + e^{φk}),  φk ∈ R.   (15)
For models belonging to a cr-NEF, the construction of the fiducial distribution proposed in Section 3 drastically simplifies. The existence of a sufficient statistic of the
same dimension of the parameter makes the ancillary statistic not necessary, while the
φ-parameterization, indexing each conditional distribution with a real parameter, implies
the independence of the φk ’s under the fiducial distribution.
Proposition 6. Let S be the sufficient statistic distributed according to a regular cr-NEF on Rd , parameterized by φ ∈ Φ = Φ1 × · · · × Φd , with Φk coinciding with the
natural parameter space of the k-th conditional distribution. Then, for S = s, with sk ,
k = 1, . . . , d, satisfying conditions similar to those given before (4),
Hs(φ) = Π_{k=1}^d Hs[k](φk) = Π_{k=1}^d ( 1 − Fφk(sk|s[k−1]) )   (16)
is a fiducial distribution function on φ with density
hs(φ) = Π_{k=1}^d hs[k](φk),  where hs[k](φk) = (∂/∂φk) Hs[k](φk) = −(∂/∂φk) Fφk(sk|s[k−1]).   (17)
The φk ’s are independent under Hs (φ) and thus their importance ordering is irrelevant. This fact also justifies the simplification in the index notation adopted in (16).
Notice, however, that the definition and the interpretation of the φk ’s depend on the
particular ordering considered for the Xk ’s, as seen in Example 4.
As recalled in Section 2, a general definition of multi-dimensional confidence distribution does not exist. However, in our context, since Hs (φ) is constructed as a product
of marginal confidence distributions, it can be considered as a multivariate (possibly
asymptotic) confidence distribution for φ.
Some of the examples of Section 5 can be reconnected with this framework, but here we consider specifically the NEF-SQVFs, whose variance function is given in (35). For
this class, with the exclusion of the negative-multinomial/hyperbolic secant distribution,
it is possible to give a simple explicit expression of the fiducial density of φ, recalling the
definition of Bk (φk ) given in (31) and setting zk = zkk in (35). The specifications of zkk
and q, appearing in (35), and of Bk (φk ) can be found in Appendix A1.
Proposition 7. Consider a sample of size n from a Poisson/normal, a multinomial, or
a negative-multinomial family on Rd . If S denotes the sufficient statistic, then the (right)
fiducial distribution for φ has density
hs(φ) = Π_{k=1}^d hs[k](φk) ∝ Π_{k=1}^d exp{ φk(sk + zk) − [ n + q( Σ_{j=1}^{k−1} sj − 1 ) ] Bk(φk) },   (18)
while for the negative-multinomial/gamma/normal family, with an m dimensional nega-
tive multinomial component, the (right) fiducial distribution is given by
hs(φ) = Π_{k=1}^d hs[k](φk) ∝ Π_{k=1}^m exp{ φk(sk + 1) } (1 − exp(φk))^{ n/q + Σ_{j=1}^{k−1} sj − 1 }
× exp{ φm+1 sm+1 } (−φm+1)^{ n/q + Σ_{j=1}^m sj − 1 }
× Π_{k=m+2}^d exp{ φk sk − n sm+1 φk²/2 }.   (19)
Notice that the discrete components of a basic NEF-SQVF are integer-valued with
zk = 1, so that the left fiducial distribution is obtained by the previous formulas replacing
the term (sk + 1) by sk in (18) and (19). Thus it follows that the geometric mean hsG has the same structure in (18) and (19) with (sk + 1/2) instead of (sk + 1).
Example 4 (ctd.). For the multinomial family, because q = −1/N, zk = 1 and Bk(φk) = N log(1 + e^{φk}), k = 1, . . . , d, it easily follows from formula (18) that
hsG(φ) = Π_{k=1}^d hsG[k](φk) = Π_{k=1}^d exp{ φk(sk + 1/2) } / [ B( sk + 1/2, nN − Σ_{j=1}^k sj + 1/2 ) (1 + e^{φk})^{ nN − Σ_{j=1}^{k−1} sj + 1 } ],   (20)
where B(·, ·) denotes the beta function.
The fiducial distribution for φ is not always of particular interest in itself, but it can
be used as a starting point for the construction of the fiducial distribution for alternative
and more relevant parameters. We consider here the mean-parameter µ, which is a lower
triangular transformation of φ, see (32), so that its fiducial distribution can be directly
obtained from that of φ thanks to Proposition 1.
Corollary 2. The (right) fiducial distribution for the mean parameter µ, relative to the
ordering µ1 , . . . , µd , for the following NEF-SQVFs on Rd , has density:
• Poisson/normal family (with m Poisson components)
hs(µ) ∝ Π_{k=1}^m µk^{sk−1} exp(−nµk) · Π_{k=m+1}^d exp{ −(n/(2σ²)) ( µk² − 2µk sk/n ) },   (21)
which corresponds to the product of m densities Ga(sk, n), k = 1, . . . , m, and (d − m) densities N(sk/n, σ²/n), k = m + 1, . . . , d.
• Multinomial family
hs(µ) ∝ Π_{k=1}^d µk^{sk} ( N − Σ_{j=1}^k µj )^{γk},   (22)
where γk = −1 for k = 1, . . . , d − 1 and γd = Nn − 1 − Σ_{j=1}^d sj.
• Negative-multinomial family, with R occurrences in the (d + 1)-th cell
hs(µ) ∝ Π_{k=1}^d µk^{sk} ( R + Σ_{j=1}^k µj )^{γk},   (23)
where γk = −1 for k = 1, . . . , d − 1 and γd = −Rn − 1 − Σ_{j=1}^d sj.
• Negative-multinomial/gamma/normal family (with an m-dimensional negative-multinomial component with R occurrences in the (m + 1)-th cell)
hs(µ) ∝ Π_{k=1}^m µk^{sk} ( R + Σ_{j=1}^k µj )^{γk}
× ( R + Σ_{j=1}^m µj )^{ Rn + Σ_{j=1}^m sj } µm+1^{−(d−m−1)} µm+1^{ −(Rn + Σ_{j=1}^m sj) − 1 } exp{ − sm+1 ( R + Σ_{j=1}^m µj ) / µm+1 }
× Π_{k=m+2}^d exp{ − ( n sm+1 / (2µm+1²) ) ( µk² − 2 µk µm+1 sk/n ) },   (24)
where γk = −1 for k = 1, . . . , m − 1 and γm = −Rn − 1 − Σ_{j=1}^m sj. Notice that the density of µm+1 given µ[m] is an In-Ga( Rn + Σ_{j=1}^m sj, sm+1 ( R + Σ_{j=1}^m µj ) ), while the density of µk given µ[k−1], k = m + 2, . . . , d, is a N( µm+1 sk/n, µm+1²/(n sm+1) ) depending only on µm+1.
Example 4 (ctd.). Inference for the multinomial distribution is usually performed for the
cell-probabilities parameter p = (p1 . . . , pd ). Since pk = µk /N , the fiducial distribution
hG for p is easily derived from (22), noting that the left fiducial density can be obtained
replacing (sk + 1) by sk in (18), (and not in (22), which is derived aggregating the
hyperparameters). It follows that the geometric mean hsG(p) is given by
hsG(p) ∝ Π_{k=1}^d pk^{sk−1/2} ( 1 − Σ_{j=1}^k pj )^{γk},  0 < pk < 1, Σ_{k=1}^d pk < 1,   (25)
with γk = −1/2 for k = 1, . . . , d − 1 and γd = Nn − 1/2 − Σ_{j=1}^d sj. This is a generalized Dirichlet distribution. Clearly, hsG(p) in (25) refers to the specific order of importance p1, p2, . . . , pd. If we change this order, the fiducial distribution will change accordingly.
Similarly, for the negative-multinomial model, with R occurrences in the (d + 1)-th cell, hsG(φ) can be easily computed from (18) observing that zk = 1, q = 1/R and Bk(φk) = −R log(1 − exp(φk)).
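Sampling from the generalized Dirichlet fiducial distribution (25) is straightforward through the step-by-step representation (our own sketch, assuming numpy/scipy and illustrative counts): by (20) the conditional probabilities p*k = e^{φk}/(1 + e^{φk}) are independent Be(sk + 1/2, nN − Σ_{j≤k} sj + 1/2), and pk = p*k (1 − Σ_{j<k} pj).

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(4)
n, N = 1, 20
s = np.array([5, 8, 4])                    # illustrative multinomial counts, d = 3

def draw_p(size=1):
    out = np.zeros((size, len(s)))
    for i in range(size):
        rest, cum = 1.0, 0                 # rest = 1 - sum of the p_j already drawn
        for k, sk in enumerate(s):
            cum += sk
            pstar = beta.rvs(sk + 0.5, n * N - cum + 0.5, random_state=rng)
            out[i, k] = pstar * rest
            rest -= out[i, k]
    return out

print(draw_p(3))                           # three draws from hG_s(p)
```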
4 Connections with objective Bayesian inference
As mentioned in Section 1, if we look at fiducial inference as a way to obtain a distribution
on the parameter space of the model without any prior information, it appears natural
to compare it with objective Bayesian inference. Recall that when a fiducial distribution
coincides with a posterior, the corresponding prior is called fiducial prior.
The step-by-step construction of the fiducial distribution ht (φ) defined in (8) is
based on the inferential importance ordering of the parameter components φ1 , . . . , φd .
This aspect is also crucial in the procedure adopted to construct reference priors, see
Bernardo & Smith (1994, Sec. 5.4.5). The reference prior π R for a parameter φ is generated by successive conditioning, established by the importance ordering of its compoQ
nents, as π R (φ) = dk=1 π R (φd−k+1 |φ[d−k] ). It is widely recognized that the dependence
of the reference prior on the choice of the parameter of interest is necessary to obtain good
frequentist properties such as coverage and consistency. For a one-dimensional parameter φ the reference prior coincides with the Jeffreys prior π J (φ) ∝ I(φ)1/2 , where I(φ)
denotes the Fisher information. While the Jeffreys prior is invariant under a reparameterization of the model, the reference prior (and thus the reference posterior) is generally
not invariant unless the transformation from φ to λ = (λ1 , . . . , λd ) is lower triangular, see
Datta & Ghosh (1996). Thus the reference posterior has the same invariance property
of the fiducial distribution proved in Proposition 1.
Recently Berger et al. (2015) recognize the existence of situations in which one is
interested simultaneously in all the parameter components of the model, or in none
of them but a prior (and thus a posterior) distribution is necessary to perform other
inferences such as predictions. In these cases an overall prior is needed. Its determination
is an open problem but, as they highlight, when there exists a “common reference prior
for all parameters”, this is the natural choice for the overall prior. A similar problem
occurs in our context and we will comment on this aspect in the following sections. Notice
that here the fiducial distribution (2) suggested by Hannig can be a good choice.
4.1 Location-scale parameter models
For location-scale parameter models the fiducial prior exists and coincides with the reference prior. Assume first that only one parameter, θ, is unknown. In this case the model
admits an ancillary statistic Z and, in particular, we take Zi = Xi − X1 or Zi = Xi /X1 ,
i = 2, . . . , n, if θ is a location or a scale parameter, respectively.
Proposition 8. Let X = (X1 , . . . , Xn ) be an i.i.d. sample from a density pθ , θ ∈
Θ ⊆ R. If θ is a location or a scale parameter, then the fiducial distribution coincides
with the Bayesian posterior obtained with the Jeffreys prior π J (θ) ∝ 1 or π J (θ) ∝ 1/θ,
respectively.
Example 5. Let X be an i.i.d. sample from the uniform distribution on (0, θ), θ > 0, so
that θ is a scale parameter. First notice that S = X(n) is a sufficient statistic for θ and
thus we can obtain directly the fiducial distribution
hs(θ) = (∂/∂θ) Hs(θ) = −(∂/∂θ) Fθ(s) = −(∂/∂θ) (s/θ)^n = n s^n / θ^{n+1},  θ > s.   (26)
However the same result can be obtained without resorting to the sufficient statistic.
Set w = max(z2, . . . , zn) and consider the distribution function of X1 given the ancillary statistic Z = (X2/X1, . . . , Xn/X1):
Fθ(x1|z) = (x1/θ)^n for 0 < x1 < θ, 0 < w ≤ 1;  Fθ(x1|z) = (x1 w/θ)^n for 0 < x1 < θ/w, w > 1.   (27)
Now, because w ≤ 1 means x1 = max(x1 , . . . , xn ), while for w > 1 we have x1 w =
max(x2 , . . . , xn ), expression (27), as a function of θ, is equivalent to Fθ (s) appearing in
(26) and thus provides the same fiducial distribution. It is immediate to verify that it
coincides with the Jeffreys posterior.
A case in which the sufficient statistic is not one-dimensional and thus it is necessary
to use an ancillary statistic can be found in the previous Example 3. Trivially hs (θ) given
in (12) coincides with the Bayesian posterior obtained by π J (θ) ∝ 1.
Consider now a model with a location parameter θ and a scale parameter σ, both
unknown. Given an i.i.d. sample of size n, an ancillary statistic is, for example, Z =
(Z3 , . . . , Zn ), with Zj = (Xj − X1 )/Z2 , j = 3, . . . , n, where Z2 = X2 − X1 is marginally
ancillary for θ. Then, the one-to-one transformation from X to (X1 , Z2 , Z) allows to write
the sampling distribution as pσ (z2 |z)pθ (x1 |z2 , z; σ)p(z). Note that in specific contexts
other transformations could be more appropriate. For example, in a normal model one
could use (X̄ = Σ_{i=1}^n Xi/n, S² = Σ_{i=1}^n (Xi − X̄)², Z) with Zj = (Xj − X̄)/S, j = 3, . . . , n,
so that the factorization becomes pσ (s2 |z)pθ (x̄|s2 , z; σ)p(z).
Proposition 9. Let X = (X1 , . . . , Xn ) be an i.i.d. sample from a density pθ,σ , where θ
and σ are a location and a scale parameter, respectively. Then the fiducial distribution
hx (σ, θ) for (σ, θ) coincides with the Bayesian posterior obtained with the reference prior
πRσ,θ(σ, θ) ∝ 1/σ.
Notice that π R (σ, θ) ∝ 1/σ is different from π J (σ, θ) ∝ 1/σ 2 obtained by the Jeffreys
rule which, as already recalled, is not suitable for multidimensional parameters. Furthermore, while π R does not depend on the ordering of θ and σ, the step-by-step fiducial
distribution is in general not allowable if the ordering is reversed. However, hx (σ, θ)
coincides with the fiducial distribution obtained through other “symmetric” approaches,
see Hannig (2009) and Fraser (1961). Thus the inferential ordering of importance seems
irrelevant for this model and hx (σ, θ) can be assumed as an overall fiducial distribution.
4.2 Exponential families
Lindley (1958) was the first to study the existence of a fiducial prior, analyzing in particular the case of continuous real NEFs and proving that it exists only for gaussian
(with known variance) and gamma (with known shape) models. A full characterization
of the real NEFs which admit a fiducial prior is given in V&M (2015). The following
proposition summarizes their results.
Proposition 10. Let F be a real NEF with natural parameter θ.
i) A fiducial prior exists if and only if F is an affine transformation of one of the
following families: normal with known variance, gamma with known shape param-
eter, binomial, Poisson and negative-binomial. For the three discrete families, the
fiducial prior exists for all Hs , Hsℓ and HsG .
ii) When a fiducial prior exists, it belongs to the family of conjugate distributions.
Moreover, it coincides with the Jeffreys prior for continuous NEFs and for discrete
NEFs too if we choose HsG as the fiducial distribution.
iii) The fiducial distribution Hs (or HsA in the discrete case) and the Bayesian posterior distribution corresponding to the Jeffreys prior have the same Edgeworth’s
expansion up to the term of order n−1 .
The previous results establish a strong connections between Jeffreys posteriors and
fiducial distributions for real NEFs, and thus the two different approaches lead, in some
sense, to the same objective inference. A discussion about the coverage of the fiducial and
the Jeffreys intervals and their good frequentist properties, in particular when compared
with the standard Wald intervals, is given in V&M (2015, Section 5).
Consider now a cr-NEF. It is easy to verify that the fiducial distribution hs (φ) in (18)
belongs to the enriched conjugate family defined in Consonni & Veronese (2001, Section
4.3). This fact is the key-point to prove the following proposition.
Proposition 11. Let S be a sufficient statistic distributed according to a cr-NEF on Rd ,
parameterized by φ = (φ1 , . . . , φd ). Then a fiducial prior for φ exists if and only if the
conditional distribution of Sk given S[k−1] = s[k−1] is an affine transformation of one of
the following families: normal with known variance, gamma with known shape parameter,
binomial, Poisson and negative-binomial.
In particular, all basic NEF-SQVFs, with the exclusion of the Negative-multinomial/
hyperbolic secant, admit a fiducial prior, which belongs to the enriched conjugate family.
Moreover, if for the discrete components of these models we consider the geometric mean
hsG[k], then the product of the Jeffreys priors computed from the conditional distribution
of Sk given S[k−1] = s[k−1] , the reference prior and the fiducial prior are all equal.
Example 4 (ctd.). The multinomial distribution is a basic NEF-SQVF and thus from
Proposition 11, setting sk = n = 0 in hsG(φ) given in (20), we obtain the fiducial prior
π(φ) ∝ Π_{k=1}^d e^{φk/2} / (1 + e^{φk}),   (28)
which coincides with the reference prior and with the product of the Jeffreys priors for
φk , k = 1, . . . , d, computed on the distribution of Xk given X[k−1] = x[k−1] .
Finally, we observe that the fiducial distribution (17) is always an overall fiducial
distribution for φ. However, the φ-parameterization is often not interesting in itself
even if in some cases it is strictly related with a more relevant one. For example, following Berger et al. (2015), consider a multinomial model applied to directional data,
as it happens for outcomes from an attitude survey. In this case the cells are naturally
ordered, so that it is meaningful to reparameterize the model in terms of the conditional
probabilities p∗k = exp(φk )/(1 + exp(φk )), k = 1, . . . , d. Then π(φ) in (28) induces on
p∗ = (p∗1 , . . . , p∗d ) an overall fiducial prior which is a product of independent Be(1/2,1/2)
distributions coinciding with the overall reference prior.
5 Further examples
5.1 Examples concerning normal models
i) Difference of means. Consider two independent normal i.i.d. samples, each of size
n, with known common variance σ 2 and means µ1 and µ2 , respectively. The sufficient
statistics are the sample sums S1 and S2 , with Si ∼ N(nµi , nσ 2 ), i = 1, 2. If the parameter
of interest is φ1 = µ2 − µ1 , we can reparameterize the joint density of (S1 , S2 ) in (φ1 =
µ2 −µ1 , φ2 = µ1 ), so that the conditional distribution of S2 given S1 +S2 , being N((nφ1 +
s1 + s2 )/2, nσ 2 /2), depends only on φ1 . From Table 1, the fiducial distribution of φ1 /2 +
(s1 + s2 )/(2n) is N(s2 /n, σ 2 /(2n)), and thus φ1 is N(x̄2 − x̄1 , 2σ 2 /n), where x̄i = si /n.
Because S1 + S2 is N(n(φ1 + 2φ2), 2nσ²), arguing as before, the fiducial distribution of φ2
given φ1 is N((x̄1 + x̄2 −φ1 )/2, σ 2 /(2n)), so that hS1 ,S2 (φ1 , φ2 ) = hS1 ,S2 (φ1 )hS1 +S2 (φ2 |φ1 ).
Notice that the same joint fiducial distribution is obtained if we consider the ordering
(φ2 , φ1 ) or even if we compute the (marginal) fiducial distributions of µ1 and µ2 and
obtain that of (φ1 , φ2 ) through the change-of-variable rule. Thus the ordering of the
parameter is irrelevant and hS1 ,S2 (φ1 , φ2 ) is an overall fiducial distribution. Furthermore,
it coincides with the reference posterior obtained with a constant prior and the marginal
distribution of φ1 and φ2 are both confidence distributions.
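A short simulation sketch of point i) (ours, assuming numpy/scipy): the fiducial/confidence distribution of φ1 = µ2 − µ1 is N(x̄2 − x̄1, 2σ²/n), so its central 95% fiducial interval should cover the true difference close to 95% of the time.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, sigma, mu1, mu2, reps = 20, 1.0, 0.3, 1.1, 4000   # illustrative values
cover = 0
for _ in range(reps):
    x1 = rng.normal(mu1, sigma, n).mean()
    x2 = rng.normal(mu2, sigma, n).mean()
    half = norm.ppf(0.975) * np.sqrt(2 * sigma**2 / n)
    cover += (x2 - x1 - half <= mu2 - mu1 <= x2 - x1 + half)
print(cover / reps)   # close to 0.95
```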
ii) Many normal means (Neyman Scott Problem). Consider n samples of size two
(Xi1 , Xi2 ), with each Xij independently distributed according to a N(µi , σ 2 ), i = 1, . . . , n
and let X̄i = (Xi1 + Xi2)/2 and W = Σ_{i=1}^n (Xi1 − Xi2)². The aim is to make inference
on the common variance σ 2 , with nuisance parameter µ = (µ1 , . . . , µn ). This well known
example is used to show that the maximum likelihood estimator σ̂ 2 = W/(4n) of σ 2 is
inconsistent, because W/(4n) → σ 2 /2, n → ∞. To obtain the fiducial distribution of
σ 2 , first notice that the joint distribution of the sufficient statistics X̄ = (X̄1 , . . . , X̄n )
and W can be factorized as Π_{i=1}^n pµi,σ²(x̄i) · pσ²(w), for the independence of X̄ and W, with W ∼ Ga(n/2, 1/(4σ²)). Using Table 1 one can easily obtain from pσ²(w) the fiducial distribution for 1/(4σ²), and hence that for σ², which is In-Ga(n/2, w/4), while that of each µi given σ², derived from pµi,σ²(x̄i), is N(x̄i, σ²/2). As a consequence hx̄,w(σ², µ) = [ Π_{i=1}^n hx̄i(µi|σ²) ] hw(σ²). This distribution coincides with the posterior
obtained from the order invariant reference prior π R (σ 2 , µ1 , . . . , µn ) ∝ 1/σ 2 and does
not present the inconsistency of the likelihood estimator, which instead occurs for the
posterior distribution obtained from the Jeffreys prior π J (σ 2 , µ1 , . . . , µn ) ∝ 1/σ n+2 .
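A simulation sketch of the Neyman-Scott example (ours, assuming numpy/scipy; µi = 0 is taken without loss of generality since W does not depend on the means): the fiducial In-Ga(n/2, w/4) distribution for σ² concentrates around the true value as n grows, while the maximum likelihood estimator w/(4n) converges to σ²/2.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(2)
sigma2 = 2.0
for n in (50, 500, 5000):
    x1 = rng.normal(0.0, np.sqrt(sigma2), n)
    x2 = rng.normal(0.0, np.sqrt(sigma2), n)
    w = np.sum((x1 - x2) ** 2)
    fid_mean = invgamma.mean(a=n / 2, scale=w / 4)      # fiducial mean of sigma^2
    print(n, round(w / (4 * n), 3), round(fid_mean, 3)) # MLE vs fiducial mean
```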
5.2 Comparison of two Poisson rates
The comparison of Poisson rates µ1 and µ2 is a classical problem arising in many contexts,
see for example Lehmann & Romano (2005) for a discussion on an unbiased uniformly
most powerful test for the ratio φ1 = µ2 /µ1 . Given two i.i.d. samples of size n from
two independent Poisson distributions, the sufficient statistics are the sample sums S1
and S2 , with Si ∼ Po(nµi ), i = 1, 2. Reparameterizing the joint density of (S1 , S2 ) in
(φ1 = µ2 /µ1 , φ2 = µ1 + µ2 ), we have that the conditional distribution of S2 given S1 + S2
is Bi(s1 + s2 , φ1 /(1 + φ1 )) and the marginal distribution of S1 + S2 is Po(nφ2 ). Thus the
sampling distribution is a cr-NEF and we can apply (16). Using Table 1, the fiducial
density for φ1 /(1 + φ1 ), derived from the conditional distribution of S2 given S1 + S2 is
Be(s2 + 1/2, s1 + 1/2) which implies
hGs1,s2(φ1) = [ 1 / B(s2 + 1/2, s1 + 1/2) ] φ1^{s2−1/2} (1 + φ1)^{−s1−s2−1},  φ1 > 0.   (29)
From the marginal distribution of S1 + S2 and using again Table 1, it follows that hGs1+s2(φ2) is Ga(s1 + s2 + 1/2, n) and thus hGs1,s2(φ1, φ2) = hGs1,s2(φ1) hGs1+s2(φ2). This joint fiducial distribution is order-invariant, coincides with the reference posterior according to Proposition 11, and is an overall distribution for (φ1, φ2). Notice that hGs1,s2(φ1) is a confidence distribution and that it differs from the fiducial distribution induced on φ1 by the two independent marginal fiducial densities for µ1 and µ2.
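In practice (29) is easy to use by simulation (our own sketch, assuming scipy; the counts are illustrative): draw B ∼ Be(s2 + 1/2, s1 + 1/2), which is the geometric-mean fiducial distribution of φ1/(1 + φ1), and set φ1 = B/(1 − B); quantiles then give fiducial (confidence) limits for the ratio of rates.

```python
import numpy as np
from scipy.stats import beta

s1, s2 = 14, 9                      # observed Poisson sums (illustrative)
b = beta.rvs(s2 + 0.5, s1 + 0.5, size=100000, random_state=3)
phi1 = b / (1 - b)
print(np.percentile(phi1, [2.5, 50, 97.5]))   # fiducial median and 95% limits for mu2/mu1
```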
5.3 Bivariate binomial
A Bayesian analysis for the bivariate binomial model has been discussed by Crowder & Sweeting
(1989) in connection with a microbiological application. Consider m spores, each with a
probability p to germinate, and denote by R the random number of germinating spores, so
that R is Bi(m, p). If q is the probability that one of the latter spores bends in a particular direction and S is the random number of them, the probability distribution of S given
R = r is Bi(r, q). The joint distribution of R and S is called bivariate binomial. Crowder
and Sweeting observe that the Jeffreys prior π J (p, q) ∝ p−1 (1 − p)−1/2 q −1/2 (1 − q)−1/2
is not satisfactory for its asymmetry in p and 1 − p, while Polson & Wasserman (1990)
show that this fact does not occur using the order-invariant reference prior π R (p, q) ∝
p−1/2 (1 − p)−1/2 q −1/2 (1 − q)−1/2 which is the product of the two independent Jeffreys
priors.
The joint fiducial density hGr,s(q, p) can be obtained as the product of hGr,s(q), derived from the conditional model Bi(r, q) of S given R = r, and hGr(p|q), derived from the marginal model Bi(m, p) of R, which does not depend on q. Thus p and q are independent under hGr,s so that it is an overall fiducial distribution. Because for the binomial model the fiducial prior is equal to the Jeffreys prior, see Proposition 10, it follows immediately that hGr,s(q, p) coincides with the reference posterior. All previous conclusions hold even if we consider the alternative parametrization (η = pq, λ = p(1 − q)/(1 − pq)).
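Concretely (our own sketch, assuming scipy and illustrative counts), the overall geometric-mean fiducial distribution factorizes into two independent Beta margins, from which marginal fiducial intervals for p and q follow directly.

```python
from scipy.stats import beta

m, r, s = 30, 18, 7                        # spores, germinating spores, bending spores (illustrative)
p_fid = beta(r + 0.5, m - r + 0.5)         # geometric-mean fiducial for p, from Bi(m, p)
q_fid = beta(s + 0.5, r - s + 0.5)         # geometric-mean fiducial for q, from Bi(r, q)
print(p_fid.interval(0.95), q_fid.interval(0.95))   # 95% fiducial intervals
```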
5.4 Ratio of parameters of a trinomial distribution
Bernardo & Ramon (1998) perform the Bayesian reference analysis for the ratio of two
multinomial parameters presenting some applications. In particular they discuss the
case of (X1 , X2 ) distributed according to a trinomial distribution with parameters n and
p = (p1 , p2 ), and provide the joint reference prior for (φ1 = p1 /p2 , φ2 = p2 ), with φ1 the
parameter of interest. Then they derive the marginal reference posterior for φ1 which is
πR(φ1|x1, x2) ∝ φ1^{x1−1/2} (1 + φ1)^{−x1−x2−1}.   (30)
To find the fiducial distribution of φ1 , we reparameterize the trinomial model in (φ1 , φ2 ).
The conditional distribution of X1 given T = X1 + X2 = t is Bi(t; φ1 /(1 + φ1 )), so that,
by Table 1, the fiducial density for φ1/(1 + φ1) is Be(x1 + 1/2, t − x1 + 1/2) and hGx1,t(φ1) coincides with (30). From the marginal distribution of T, which is Bi(n; φ2(1 + φ1)), it
is possible to derive the fiducial density
hGt(φ2|φ1) = [ Γ(n + 1) / ( Γ(t + 1/2) Γ(n − t + 1/2) ) ] (1 + φ1)^{t+1/2} φ2^{t−1/2} ( 1 − (1 + φ1)φ2 )^{n−t−1/2},
so that the joint fiducial density is hGx1,x2(φ1, φ2) = hGx1,t(φ1) hGt(φ2|φ1), which coincides with the joint reference posterior.
6 Conclusions and final remarks
We have suggested a way to construct a fiducial distribution which depends on the
inferential importance ordering of the parameter components. Our proposal appears
to be quite simple to apply and, even if it is not so general as the theory suggested
by Hannig, has some advantages in connection with the modern confidence distribution
theory and it is strictly related to objective Bayesian analysis.
26
In complex models an exact analysis is generally not possible, but approximate results
can be derived working with asymptotic distributions. In V&M (2015), starting from the
sufficient statistic, an expansion up to the first order of the fiducial distribution for the
mean parameter of a real NEF is provided. This result can be extended to arbitrary
regular models starting from the maximum likelihood estimator of the parameter. When
the maximum likelihood estimator is not sufficient a better fiducial distribution can
be obtained using an ancillary statistic, as suggested in Section 3. To this aim the
magic formula p∗ given by Barndorff-Nielsen (1983), which provides an approximation
of the conditional distribution of the maximum likelihood estimator given an ancillary
statistic, can be fruitfully adopted. Furthermore, these asymptotic results appear to be
strictly connected with the theory of matching priors, i.e. priors that ensure approximate
frequentist validity of posterior credible set. Notice that also these priors crucially depend
on the inferential ordering of the parameters, see Tibshirani (1989) and Datta & Mukerjee
(2004). However, a normal approximation of the fiducial distribution, when it can be
established and is enough for the analysis, can be proved to be order-invariant. These
types of results will be discussed in a forthcoming paper.
Acknowledgements
This research was supported by grants from Bocconi University.
Appendix
A1: Useful results on cr-NEFs
Some technical aspects related to cr-NEFs are the following.
1. A NEF is a cr-NEF if and only if the principal k × k matrix of the variance function
does not depend on µk+1 , . . . µd , for k = 1, . . . , d − 1.
2. The Fisher information matrix relative to the φ-parametrization is diagonal with
the kk-th element depending only on φ[k] .
3. The cumulant transform Mk (φk ; x[k−1] ) of the k-th conditional density is given by
Mk(φk; x[k−1]) = Σ_{j=1}^{k−1} Akj(φk) xj + Bk(φk),   (31)
for some functions Akj and Bk .
4. The conditional expectation of Xk given X[k−1] = x[k−1] is linear in x[k−1] , because
it is the gradient of (31).
5. The parameter µk depends on φ only through φ[k], because from (31)
µk = Σ_{j=1}^{k−1} ( ∂Akj(φk)/∂φk ) µj + ∂Bk(φk)/∂φk.   (32)
6. Using (14) and (31), it can be checked that
θk = φk − Σ_{u=k+1}^d Auk(φu),  and  M(θ) = Σ_{k=1}^d Bk(φk(θ)).   (33)
As a consequence of the first part of (33), there exists a function gk such that
φk = θk + gk(θk+1, . . . , θd).   (34)
Of course all the previous formulas hold for k = 1, . . . , d, with the understanding that
components that lose meaning for a specific k are set to zero.
A NEF has a simple quadratic variance function (SQVF) if the ij-th element of its
variance-covariance matrix, seen as a function of the mean parameter µ = (µ1, . . . , µd), can be written as Vij(µ) = q µi µj + Σ_{k=1}^d µk L(k)ij + Cij, where q is a real constant and
L(k) , k = 1, . . . , d and C are constant d × d symmetric matrices. Any NEF-SQVF can be
obtained, via a nonsingular affine transformation, from one of the basic families: Poisson/normal (q = 0), multinomial (q = −1/N , N positive integer), negative-multinomial
(q = 1/R, R positive integer), negative-multinomial/gamma/normal (q = 1/R) and
negative-multinomial/hyperbolic-secant (q = 1/R), see Casalis (1996) for a detailed description of these distributions. The ij-th element of the variance function V (µ) of a
basic NEF-SQVF is
Vij(µ) = q µi µj + Σ_{k=1}^d zik µk + Cij,   (35)
where zij = zji , i, j ∈ {1, . . . , d}, are constants. The values of zii for the basic NEF-SQVFs, together with other technical details, are given in the proof of Corollary 2.
A2: Proofs
Proof of Proposition 1.
By the standard change-of-variable rule applied to the first integral in (10), it is enough
to show that
hφt(φ(λ)) |Jφ(λ)| = hλt(λ),   (36)
where Jφ (λ) is the Jacobian of the transformation from φ to λ. Now, from (8) we have
hφt(φ(λ)) = Π_{k=1}^d ht[k],t−[d]( φd−k+1(λ[d−k+1]) | φ[d−k](λ[d−k]) )
= Π_{k=1}^d | (∂/∂φd−k+1) Fφd−k+1(tk|t[k−1], t−[d]; φ[d−k]) |  evaluated at φ[d−k+1] = φ[d−k+1](λ[d−k+1]),
while
Jφ(λ) = Π_{k=1}^d ∂φd−k+1(λ[d−k+1]) / ∂λd−k+1,
because the transformation from φ to λ is lower triangular. It follows from the last two formulas and the chain rule that
hφt(φ(λ)) |Jφ(λ)| = Π_{k=1}^d | (∂/∂λd−k+1) Fλd−k+1(tk|t[k−1], t−[d]; λ[d−k]) |,
where Fλd−k+1 is the distribution function of Tk given (T[k−1] = t[k−1], T[−d] = t[−d] )
in the λ parameterization. The equality (36) follows by applying (9) to the model
parameterized by λ.
⋄
Proof of Proposition 2.
Let pG(x) = c^{−1} (p1(x)p2(x))^{1/2}, where c = ∫ (p1(x)p2(x))^{1/2} dν(x) is the normalizing constant. Then
KL(q|p1) + KL(q|p2) = ∫ log( q(x)/p1(x) ) q(x) dν(x) + ∫ log( q(x)/p2(x) ) q(x) dν(x)
= ∫ log( q²(x)/(p1(x)p2(x)) ) q(x) dν(x) = 2 ∫ log( q(x)/(c pG(x)) ) q(x) dν(x)
= 2 ∫ log( q(x)/pG(x) ) q(x) dν(x) − 2 log c = 2 KL(q|pG) − 2 log c.   (37)
p (x)
Because c does not depend on q, it follows that the functional in (37) achieves its minimum
(equal to −2 log c) if and only if KL(q|pG ) = 0, i.e. q = pG .
⋄
Proof of Proposition 3.
We only prove that Hs (θ) < HsG (θ); the other inequality can be shown in the same way.
Using (5) and (6), we can write
hsG(θ)/hs(θ) = (1/c) √( hs(θ)hsℓ(θ) ) / hs(θ) = (1/c) √( hs(θ)( hs(θ) + ∂pθ(s)/∂θ ) ) / hs(θ)
= (1/c) √( 1 + (∂pθ(s)/∂θ)/hs(θ) ) = (1/c) √( 1 + γs(θ) ).   (38)
By hypothesis γs(θ) is decreasing and thus, from (38), hsG(θ)/hs(θ) is also decreasing on
Θ. This is a sufficient condition for Hs (θ) < HsG (θ), see Shaked & Shanthikumar (2007,
Theorem 1.C.1).
⋄
Proof of Corollary 1.
Let pθ(s) = exp(θs − nM(θ)) be the probability mass function (with respect to a measure ν) of a real NEF, with θ the natural parameter. Fixing s ∈ S0, we can write
γs(θ) = (∂pθ(s)/∂θ) / ( (∂/∂θ)(1 − Fθ(s)) ) = (s − nM′(θ)) exp(θs − nM(θ)) / Σ_{t=s+1}^{+∞} (t − nM′(θ)) exp(θt − nM(θ))
= ( Σ_{t=s+1}^{+∞} exp{(t − s)θ} (t − nM′(θ))/(s − nM′(θ)) )^{−1}.   (39)
The elements in the sum (39) are continuous and increasing functions of θ in both intervals for which θ < θ̂s and θ > θ̂s, where θ̂s = (M′)^{−1}(s/n). Thus γs(θ) is decreasing in these intervals. Moreover, γs(θ) is equal to zero for θ = θ̂s, positive for θ < θ̂s and negative for θ > θ̂s, because the denominator of γs(θ) is hs(θ), which is positive. Then γs(θ) is decreasing over all Θ and from Proposition 3 the result follows.
⋄
Proof of Proposition 4.
In order to prove the proposition, it is sufficient to show that there exist θ1 and θ2 in Θ, θ1 < θ2, such that hsA(θi) = hsG(θi), i = 1, 2, with hsA(θ) < hsG(θ) for θ1 < θ < θ2 and hsA(θ) > hsG(θ) otherwise, see Shaked & Shanthikumar (2007, proof of Theorem 3.A.44). Thus we analyze the sign of hsA(θ) − hsG(θ). We can write
hsA(θ) − hsG(θ) = (1/2)( hs(θ) + hsℓ(θ) ) − (1/c) √( hs(θ)hsℓ(θ) ) = hs(θ) [ (1/2)(2 + γs(θ)) − (1/c) √(1 + γs(θ)) ],
so that the sign of the difference hsA(θ) − hsG(θ) is a function of γs(θ) only. First notice that, by a standard property of the arithmetic and geometric means, c = ∫ √( hs(θ)hsℓ(θ) ) dθ < ∫ ( hs(θ) + hsℓ(θ) )/2 dθ = 1. After some straightforward algebra, it can be seen that hsA(θ) − hsG(θ) = 0 when (and only when) γs(θ) = 2c^{−2}( (1 − c²) − √(1 − c²) ) = k1 or γs(θ) = 2c^{−2}( (1 − c²) + √(1 − c²) ) = k2, with k1 ∈ (−1, 0) and k2 > 0. Moreover we have hsA(θ) < hsG(θ) for k1 < γs(θ) < k2 and hsA(θ) > hsG(θ) for γs(θ) < k1 or γs(θ) > k2. By assumption, γs(θ) is decreasing on Θ from +∞ to −1, so that there exist θ1 and θ2, with γs(θ1) = k2 and γs(θ2) = k1, satisfying the sufficient condition stated at the beginning of the proof.
⋄
Proof of Proposition 5.
First notice that if we use the sufficient statistic S, which has the same dimension as the parameter, to construct the fiducial distribution, then we do not need an ancillary statistic.
Furthermore, T is a one-to-one transformation of X, and thus S is a function of T =
(T [d] , T −[d] ) but, since S is complete, it is stochastically independent of T −[d] by Basu’s
Theorem. As a consequence, S[k] is also independent of T −[d] and thus
Prφd−k+1 (Sk ≤ sk | S[k−1] = s[k−1] ; φ[d−k] ) =
Prφd−k+1 (Sk ≤ sk | S[k−1] = s[k−1] , T−[d] = t−[d] ; φ[d−k] ).
(40)
From the one-to-one lower triangular transformation s = g(t[d] , t−[d] ), we have that
sk = gk (tk , t[k−1] , t−[d] ), with gk invertible with respect to tk , so that (assuming gk
increasing) (40) becomes
Prφd−k+1 (gk (Tk , T[k−1] , T −[d] ) ≤ sk | T[k−1] = t[k−1], T−[d] = t−[d] ; φ[d−k] ) =
Prφd−k+1 (Tk ≤ gk−1 (sk , T[k−1] , T −[d] ) | T[k−1] = t[k−1], T−[d] = t−[d] ; φ[d−k] ) =
Prφd−k+1 (Tk ≤ tk | T[k−1] = t[k−1], T−[d] = t−[d] ; φ[d−k] ),
which proves the proposition.
⋄
Proof of Proposition 6.
Because each conditional distribution of Xk given X[k−1] = x[k−1] belongs to a NEF with
natural parameter φk , using (4) we have that Hs[k] (φk ) is a distribution function for φk .
The result follows from the postulated independence among the φk ’s.
⋄
Proof of Proposition 7.
Formulas (18) and (19) derive by a direct application of (17) to the conditional distributions of the different families. For a detailed description of the cr-NEFs involved, see
Consonni & Veronese (2001, proof of Theorem 3).
⋄
Proof of Corollary 2.
First notice that the fiducial distribution hs (µ) can be more easily obtained via a double
transformation, namely
hµs(µ) = hθs(θ(µ)) |Jφ(θ(µ))| |Jθ(µ)|,
where the Jacobian |Jφ (θ)| = 1 for (34), and
|Jθ(µ)| ∝ det{V(µ)}^{−1} = exp{ − Σ_{k=1}^d θk(µ) zk − q(d + 1) M(θ(µ)) },   (41)
see Gutiérrez-Peña & Smith (1997, pag. 34) for the proportionality relationship and
Consonni et al. (2004, Prop. 1) for the equality. We consider now each family.
Poisson/normal family with m Poisson components. We have q = 0, φk = θk ; zk = 1,
θk = log(µk ) and Bk (φk ) = exp(φk ) for k = 1, . . . , m, while zk = 0, θk = µk /σ 2 and
Bk (φk ) = σ 2 φ2k /2 for k = m + 1, . . . , d, where σ 2 is the known variance of the normal
components. Then from (41), it follows that |Jθ(µ)| = Π_{k=1}^m µk^{−1}, and thus using (18), the result (21) follows.
Multinomial family. Using the relationships in Example 4, (41) gives |Jθ(µ)| = ( N − Σ_{k=1}^d µk )^{−1} Π_{k=1}^d µk^{−1} and using (18) we obtain (22).
Negative-multinomial family. We have q = 1/R, R > 0, zk = 1, φk(θ) = θk − log( 1 − Σ_{u=k+1}^d e^{θu} ), θk = log(µk) − log( R + Σ_{j=1}^d µj ), and Bk(φk) = −R log(1 − e^{φk}) for k = 1, . . . , d. Then from (41), it follows that |Jθ(µ)| = ( R + Σ_{k=1}^d µk )^{−1} Π_{k=1}^d µk^{−1}, and thus using (18), the result (23) follows.
Negative-multinomial/gamma/normal family (with an m-dimensional negative-multinomial component). We have q = 1/R, R > 0; zk = 1 and φk = log(µk/(R + Σ_{j=1}^{k} µj)), k = 1, . . . , m; z_{m+1} = 0 and φ_{m+1} = −(R + Σ_{j=1}^{m} µj)/µ_{m+1}; zk = 0 and φk = µk/µ_{m+1}, k = m + 2, . . . , d. In this case it is convenient to compute the fiducial density of µ directly from (19). Observing that the Jacobian of the transformation from φ to µ is

|Jφ(µ)| = (R + Σ_{k=1}^{m} µk)^{−1} (R + Σ_{j=1}^{m} µj) µ_{m+1}^{−2} µ_{m+1}^{−d+m+1} Π_{k=1}^{m} µk^{−1},

and using the previous expression of φk, the density (24) follows.
⋄
Proof of Proposition 8.
Let X be an i.i.d. sample of size n, with Xi ∼ pθ (xi ) = f (xi − θ), i.e. θ is a location
parameter, and consider the transformation Z1 = X1 , Zi = Xi − X1 , i = 2, . . . , n, whose
Jacobian is one. Then, setting z = (z1 , . . . , zn )
Hx(θ) = Hz(θ) = 1 − Fθ(z1 | z2, . . . , zn) = ∫_{z1}^{+∞} f(t − θ) Π_{i=2}^{n} f(t + zi − θ) dt / ∫_{−∞}^{+∞} f(t − θ) Π_{i=2}^{n} f(t + zi − θ) dt.

Using now the substitution m = −t + θ + z1 in the previous two integrals, and recalling that π^J(θ) ∝ 1, we obtain

Hx(θ) = ∫_{−∞}^{θ} f(z1 − m) Π_{i=2}^{n} f(z1 + zi − m) dm / ∫_{−∞}^{+∞} f(z1 − m) Π_{i=2}^{n} f(z1 + zi − m) dm
      = ∫_{−∞}^{θ} Π_{i=1}^{n} f(xi − m) π^J(m) dm / ∫_{−∞}^{+∞} Π_{i=1}^{n} f(xi − m) π^J(m) dm
      = ∫_{−∞}^{θ} π^J(m | x) dm.

The result relative to the scale parameter follows recalling that the model pθ(x) = f(x/θ)/θ can be transformed into a model with location parameter µ by setting y = log(x) and µ = log(θ). In this case a constant prior on µ is equivalent to a prior on θ proportional to 1/θ.
⋄
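As an illustrative numerical check (ours, not part of the original argument), the coincidence of Hx(θ) with the flat-prior Bayesian posterior can be verified for a concrete location model; the sketch below takes f to be the standard normal density and uses simple grid quadrature, so all names and grid choices are our own assumptions.

import numpy as np

# Location model: X_i = theta + eps_i, eps_i ~ f; here f is the standard normal density.
def f(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
x = 1.3 + rng.standard_normal(5)              # observed sample, true theta = 1.3
z1, z = x[0], x[1:] - x[0]                    # Z_1 = X_1, Z_i = X_i - X_1

grid = np.linspace(-10, 10, 8001)             # integration grid for the dummy variable
theta_grid = np.linspace(-2, 4, 121)

# Flat-prior posterior CDF: integral over (-inf, theta] of prod_i f(x_i - m) dm, normalized.
lik = np.prod(f(x[:, None] - grid[None, :]), axis=0)
def posterior_cdf(theta):
    mask = grid <= theta
    return np.trapz(lik[mask], grid[mask]) / np.trapz(lik, grid)

# Fiducial CDF: Hx(theta) = 1 - F_theta(z1 | z2,...,zn), i.e. the upper-tail integral over
# t >= z1 of f(t - theta) * prod_{i>=2} f(t + z_i - theta), normalized.
def fiducial_cdf(theta):
    dens = f(grid - theta) * np.prod(f(grid[None, :] + z[:, None] - theta), axis=0)
    mask = grid >= z1
    return np.trapz(dens[mask], grid[mask]) / np.trapz(dens, grid)

err = max(abs(fiducial_cdf(t) - posterior_cdf(t)) for t in theta_grid)
print(err)   # close to zero, up to quadrature error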
Proof of Proposition 9.
Let X be an i.i.d. sample of size n, with Xi ∼ pθ,σ (xi ) = f ((xi − θ)/σ)/σ, i = 1, . . . , n
and notice that the absolute value of the Jacobian of the transformation from x to
(x1 , z2 , z), with z2 = x2 − x1 , z = (z3 , . . . , zn ) and zj = (xj − x1 )/z2 , j = 3, . . . , n, is
|z2 |n−2 . Furthermore, the reference prior π R (θ, σ) is order-invariant and can be written
as π R (θ|σ)π R (σ), where π R (θ|σ) ∝ 1 and π(σ) ∝ 1/σ, see Fernández & Steel (1999).
Working conditionally on σ we can thus apply Proposition 8 to conclude that the reference
posterior and the fiducial distribution for θ given σ coincide. It remains to show that
π^R(σ | x) = π^R(σ | x1, z2, z) ∝ ∫_{−∞}^{∞} pθ,σ(x1, z2, z) π^R(θ | σ) π^R(σ) dθ
∝ ∫_{−∞}^{∞} (1/σ^{n+1}) f((x1 − θ)/σ) f((z2 + x1 − θ)/σ) Π_{i=3}^{n} f((z2 zi + x1 − θ)/σ) dθ   (42)

corresponds to the fiducial density h_{z2,z}(σ).
We have

H_{z2,z}(σ) = 1 − Fσ(z2 | z) = ∫_{z2}^{+∞} ∫_{−∞}^{+∞} p^{X1,Z2|Z}_{θ,σ}(t, w | z) / p^Z(z) dt dw
= (1/p^Z(z)) ∫_{z2}^{+∞} ∫_{−∞}^{+∞} (|w|^{n−2}/σ^n) f((t − θ)/σ) f((w + t − θ)/σ) Π_{i=3}^{n} f((w zi + t − θ)/σ) dt dw,   (43)

where the density p^Z(z) does not depend on the parameters because z is ancillary.
Assuming z2 > 0, and using the transformation m = x1 − v(t − θ)/σ, v = z2 σ/w, which implies t = σ(x1 − m)/v + θ, w = z2 σ/v, with Jacobian z2 σ^2/v^3, the fiducial distribution H_{z2,z}(σ) in (43) becomes

∫_{0}^{σ} ∫_{−∞}^{+∞} (z2^{n−1}/v^{n+1}) f((x1 − m)/v) f((z2 + x1 − m)/v) Π_{i=3}^{n} f((z2 zi + x1 − m)/v) dm dv / p^Z(z).

Taking the derivative with respect to σ, it is immediate to see that the fiducial density for σ coincides with the posterior distribution given in (42).
If z2 < 0, applying to the integral the same transformation used in the previous case, we have

Fσ(z2 | z) = ∫_{−∞}^{z2} ∫_{−∞}^{+∞} (|w|^{n−2}/σ^n) f((t − θ)/σ) f((w + t − θ)/σ) Π_{i=3}^{n} f((w zi + t − θ)/σ) dt dw / p^Z(z)
= − ∫_{0}^{σ} ∫_{−∞}^{+∞} ((−z2)^{n−1}/v^{n+1}) f((x1 − m)/v) f((z2 + x1 − m)/v) Π_{i=3}^{n} f((z2 zi + x1 − m)/v) dm dv / p^Z(z),
so that again the derivative with respect to σ of Hz2 ,z (σ) = 1 − Fσ (z2 |z) leads to (42). ⋄
The following lemma will be used in the proof of Proposition 11.
Lemma 1. Consider a cr-NEF on Rd , with the k-th diagonal element in the Fisher information matrix given by Ikk (φ) = ak (φk )bk (φ[k−1] ). Then the d-group (order-invariant)
reference prior π R for φ = (φ1 , . . . , φd ) is
π^R(φ) = Π_{k=1}^{d} π^J_k(φk) ∝ Π_{k=1}^{d} (ak(φk))^{1/2},   (44)
where πkJ (φk ) is the Jeffreys prior obtained from the conditional distribution of Xk given
X[k−1] = x[k−1] .
Proof of Lemma 1
First observe that µ[k] is a one-to-one transformation of φ[k] and that the information
matrix I(φ) of a cr-NEF is diagonal, see Appendix A1 (points 2 and 5). From (14) and
(31), the kk-th element of I is

Ikk(φ[k]) = −E^X_φ [ ∂^2/∂φk^2 log pφk(xk | x[k−1]; φk) ] = −E^X_φ (Mk''(φk; x[k−1])) = Σ_{j=1}^{k−1} Akj''(φk) µj(φ[j]) + Bk''(φk).
Under the assumption in the proposition, we can write Ikk (φ[k] ) = ak (φk )b∗k (µ[k−1] (φ[k−1] )).
From Datta & Ghosh (1995), it follows that the reference prior on φ is order-invariant
and is given by the last product in (44).
Consider now the Jeffreys prior on φk obtained from pφk(xk | x[k−1]). This is proportional to the square root of

−E^{Xk | x[k−1]} (Mk''(φk; x[k−1])) = Σ_{j=1}^{k−1} Akj''(φk) xj + Bk''(φk) = ak(φk) b*k(x[k−1]),

where again the last equality holds by the assumption in the proposition. Thus the product of the d Jeffreys priors is equal to (44) and the result holds.
⋄
Proof of Proposition 11.
Due to the independence of the φk ’s, a fiducial prior for φ exists if and only if there
exists a fiducial prior for each φk . Because the conditional distribution of Sk given
S[k−1] = s[k−1] belongs to a real NEF with natural parameter φk , the result of the first
part of the proposition follows from Proposition 10.
The first statement of the second part of the proposition follows by checking directly the form of the conditional distributions of the basic NEF-SQVFs and using again Proposition 10. The second statement follows from the remark stated before the proposition and from Lemma 1.
⋄
References
Barndorff-Nielsen, O. (1983). On a formula for the distribution of the maximum likelihood
estimator. Biometrika 70, 343–365.
Berger, J. O. (2006). The case for objective Bayesian analysis. Bayesian Analysis 1,
385–402.
Berger, J. O. & Bernardo, J. M. (1992). Ordered group reference priors with application
to a multinomial problem. Biometrika 79, 25–37.
Berger, J. O. Bernardo, J. M. & Sun, D. (2015). Overall objective priors. Bayesian
Analysis 10, 189–221.
Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference. J. R.
Stat. Soc. Ser. B 41, 113–147.
Bernardo, J.M. & Ramon, J.M. (1998). An introduction to Bayesian reference analysis:
inference on the ratio of multinomial parameters. The Statistician 28, 101–135.
Bernardo, J. M. & Smith, A. F. M. (1994). Bayesian Theory. Wiley: Chichester.
Casalis, M. (1996). The 2d+4 simple quadratic natural exponential families on Rd . Ann.
Statist. 24, 1828–1854.
Consonni, G. & Veronese, P. (2001). Conditionally reducible natural exponential families
and enriched conjugate priors. Scand. J. Stat. 28, 377–406.
Consonni, G., Veronese, P. & Gutiérrez-Peña, E. (2004). Reference priors for exponential
families with simple quadratic variance function. J. Multivariate Anal. 88, 335–364.
Crowder, M., & Sweeting, T. (1989). Bayesian inference for a bivariate binomial distribution. Biometrika 76, 599–603.
Datta, G. S. & Ghosh, M. (1995). Some Remarks on Noninformative Priors. J. Amer.
Statist. Assoc. 90, 1357–1363.
Datta, G. S. & Ghosh, M. (1996). On the Invariance of Noninformative Priors. Ann.
Statist. 24, 141–159.
Datta, G. S. & Mukerjee, R. (2004). Probability matching priors: higher order asymptotics (Lecture Notes in Statistics). Springer: New York.
Dawid, A. P. & Stone, M. (1982). The functional-model basis of fiducial inference. Ann.
Statist. 10, 1054–1074.
Dempster, A. P. (1963). Further examples of inconsistencies in the fiducial argument.
Ann. Statist. 34, 884–891.
Fernández, C. & Steel, F. J. (1999). Reference priors for the general location-scale model.
Statist. Prob. Lett. 43, 377–384.
Fisher, R. A. (1930). Inverse probability. Proceedings of the Cambridge Philosophical
Society 26, 4, 528–535.
Fisher, R. A. (1935). The fiducial argument in statistical inference. Ann. Eugenics VI,
91–98.
Fisher, R. A. (1973). Statistical methods and scientific inference. Hafner Press: New
York.
Fraser, D. A. S. (1961). On fiducial inference. Ann. Math. Statist. 32, 661–676.
Gutiérrez-Peña, E. & Smith, A. F. M. (1997). Exponential and Bayesian conjugate
families: review and extensions (with discussion). Test 6, 1–90.
Hannig, J. (2009). On generalized fiducial inference. Statist. Sinica 19, 491–544.
Hannig, J. (2013). Generalized fiducial inference via discretization. Statist. Sinica 23,
489–514.
Hannig, J. & Iyer, H. (2008). Fiducial intervals for variance components in an unbalanced
two-component normal mixed linear model. J. Amer. Statist. Assoc. 103, 854–865.
Hannig, J., Iyer, H. K. & Wang, C. M. (2007). Fiducial approach to uncertainty assessment accounting for error due to instrument resolution. Metrologia 44, 476–483.
Hannig, J., Iyer, H. K., Lai, R. C. S. & Lee T. C. M. (2016). Generalized Fiducial
Inference: A Review and New Results. J. American Statist. Assoc. 44, 476–483.
Johnson, N. L., Kemp, W. A. & Kotz, J. P. (2005). Univariate discrete distributions.
Wiley: New York.
Krishnamoorthy, K. & Lee, M. (2010). Inference for functions of parameters in discrete
distributions based on fiducial approach: binomial and Poisson cases. J. Statist. Plann.
Inference. 140, 1182–1192.
Lehmann, E. L. & Romano, J. P. (2005). Testing statistical hypotheses. Springer: New
York.
Lindley, D. V. (1958). Fiducial distributions and Bayes theorem. J. R. Stat. Soc. Ser. B
20, 102–107.
Martin, R. & Liu, C. (2013). Inferential models: a framework for prior-free posterior
probabilistic inference. J. Amer. Statist. Assoc. 108, 301–313.
Petrone, S. & Veronese, P. (2010). Feller operators and mixture priors in Bayesian
nonparametrics. Statist. Sinica 20, 379–404.
Polson, N., & Wasserman, L. (1990). Prior distributions for the bivariate binomial.
Biometrika 77, 901–904.
Schweder, T. & Hjort, N. L. (2002). Confidence and likelihood. Scand. J. Stat. 29,
309–332.
Schweder, T. & Hjort, N. L. (2016). Confidence, likelihood and probability. London:
Cambridge University Press.
Shaked, M. & Shanthikumar, J. G. (2007). Stochastic orders. Springer: New York.
Singh, K., Xie, M. & Strawderman, M. (2005). Combining information through confidence distribution. Ann. Statist. 33, 159–183.
Stein, C. (1959). An example of wide discrepancy between fiducial and confidence intervals. Ann. Math. Statist. 30, 877–880.
Taraldsen, G. & Lindqvist, B. H. (2013). Fiducial theory and optimal inference. Ann.
Statist. 41, 323–341.
Tibshirani, R. (1989). Noninformative priors for one parameter of many. Biometrika 76,
604–608.
Veronese, P. & Melilli, E. (2015). Fiducial and confidence distributions for real exponential families. Scand. J. Stat. 42, 471–484.
Wandler, D. & Hannig, J. (2012). A fiducial approach to multiple comparisons. J. Statist.
Plann. Inference 142, 878–895.
Wilkinson, G. N. (1977). On resolving the controversy in statistical inference. J. R. Stat.
Soc. Ser. B 39, 119–171.
Source Forager: A Search Engine for Similar Source Code

Vineeth Kashyap∗, David Bingham Brown†, Ben Liblit†, David Melski∗, and Thomas Reps∗†
∗ GrammaTech, Inc., Ithaca, New York, USA
Email: {vkashyap,melski}@grammatech.com
† University of Wisconsin–Madison, USA
Email: {bingham,liblit,reps}@cs.wisc.edu
Abstract—Developers spend a significant amount of time
searching for code—e.g., to understand how to complete, correct,
or adapt their own code for a new context. Unfortunately, the
state of the art in code search has not evolved much beyond text
search over tokenized source. Code has much richer structure and
semantics than normal text, and this property can be exploited to
specialize the code-search process for better querying, searching,
and ranking of code-search results.
We present a new code-search engine named Source Forager.
Given a query in the form of a C/C++ function, Source Forager searches a pre-populated code database for similar C/C++
functions. Source Forager preprocesses the database to extract a
variety of simple code features that capture different aspects of
code. A search returns the k functions in the database that are
most similar to the query, based on the various extracted code
features.
We tested the usefulness of Source Forager using a variety of
code-search queries from two domains. Our experiments show
that the ranked results returned by Source Forager are accurate,
and that query-relevant functions can be reliably retrieved even
when searching through a large code database that contains very
few query-relevant functions.
We believe that Source Forager is a first step towards much-needed tools that provide a better code-search experience.
Index Terms—code search, similar code, program features.
I. Introduction
In this age of software proliferation, it is useful to be able
to search large source-code corpora effectively for code with
desired properties.1 Developers routinely use code search as
a learning and debugging tool for tasks such as looking for
existing functionality in a code base, determining how to use
an API or library, gathering information about what code is
intended to do, etc. [1].
Supported, in part, by a gift from Rajiv and Ritu Batra; by AFRL under DARPA MUSE award FA8750-14-2-0270, and by the UW–Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect the views of the sponsoring agencies. T. Reps has an ownership interest in GrammaTech, Inc., which has licensed elements of the technology reported in this publication.
1 In this paper, the term “search” is used in the sense of Google search—namely, to retrieve documents that are related to a specified query. “Search” is not used in the sense of finding an occurrence of a user-specified string or pattern in a given document.
Text-based search techniques are not always precise enough for code because they focus purely on strings in the code: comments, complete or partial names of functions and variables, and so on. Text search largely ignores code structure and semantics (i.e., what the code does and how it does it).
A text-based approach can cause searching to be imprecise:
relevant code fragments may be missed, while many spurious
matches may be returned. Recent search techniques allow users
to specify certain aspects of code semantics in addition to
the textual query [2]–[8]. Some techniques allow users to
specify structural requirements, such as that the search target
should have nested loops. Others specify context, such as that
the search target should implement a particular interface. Yet
others specify sets of input/output pairs.
Additional semantic information can improve search accuracy. However, existing techniques share the following shortcomings:
• The techniques do not provide a unified way of specifying
semantics for the search query. Each technique has its own
ad-hoc specification of the semantic aspects of the code that
it uses.
• Each technique is closely married to its chosen semantic
aspect, which is deeply ingrained into the implementation
of the search technique. This tight coupling makes it hard
to extend these techniques to model additional semantic
aspects.
We propose a search technique for finding similar source
code that addresses these shortcomings:
• Unified Query Specification. Our code-search mechanism
takes code fragments as queries. Various kinds of semantic
information can be extracted from the query and used by the
search. This approach provides a unified mechanism for code
search: searching code using code fragments. Moreover, the
same techniques for extracting semantic information are used
on both queries and elements of the corpus being searched,
leading to greater consistency.
• Extensibility. Our code-search technique uses a vector of
feature-observations extracted from elements in the corpus.
Feature-observations capture various aspects of the syntax
and semantics of a program (each such aspect is called a
feature-class), and provide a unified interface for querying.
This approach also makes our search technique extensible:
it is easy to introduce more feature-classes that model
additional aspects of the code.
int binsearch(int x, int v[], int n) {
    int low, high, mid;
    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (x < v[mid]) {
            high = mid - 1;
        } else if (x > v[mid]) {
            low = mid + 1;
        } else {            /* found match */
            return mid;
        }
    }
    return -1;              /* no match */
}
Fig. 2. Example program that implements a binary search over a sorted integer array
Fig. 1. Overview of the Source Forager architecture. (Diagram: the query and the program elements of the corpus pass through the feature extraction engine, producing feature-observations of various feature-classes; together with the code database, feature-class weight determination (weights), and similarity-based neighbor search, these yield the similar-code results.)
In addition to being useful in its own right as a developer tool, similar-code search can serve as an important building block for automated program repair and program synthesis. The ability to find other code similar to a query can help automated tools learn from the similar code, and fix bugs or perform code completion tasks on the query.
The main contributions of Source Forager are:
• The ability to perform C/C++ code searches using code fragments as queries. The searches and answers of Source Forager are both based on a query formalism that is close to the concepts that developers are already familiar with.
• A code-search architecture that uses multiple code feature-classes simultaneously. The architecture is extensible, allowing easy addition of new code feature-classes, which enhances the dimensions along which code is searched.
• A mechanism for automatically selecting useful code feature-classes to be employed in code search of a given query, given no a priori domain information about the query.
• A supervised-learning technique to pre-compute the relative importance of different feature-classes, when it is known that a query belongs to a specific domain for which suitable training data is available.
Organization: The remainder of the paper is organized into four sections: §II gives an overview of our approach and algorithms. §III describes the methods in detail. §IV presents our experimental results. §V discusses related work.

II. Overview

Source Forager is a search engine for finding similar source code. It takes an input query as C/C++ source text, then searches a pre-populated database for similar C/C++ code, returning a ranked list of results. The units of code about which Source Forager can reason are called program elements. In its current incarnation, program elements are C/C++ functions; that is, both queries and results are C/C++ functions.
Fig. 1 provides an architectural overview of Source Forager. Source Forager has two stages: an offline phase to populate its code database, and an online query-search phase.

A. Offline Phase: Population of the Source Forager Database

In this phase, Source Forager analyzes a given code corpus, and populates a code database with rich information about each of the functions in the code corpus. Source Forager extracts several different kinds of information about each function; we refer to each of the different kinds of information as a feature-class. §III describes our different feature-classes in detail. A feature-observation is some specific value observed for a given feature-class. Thus, each function has one feature-observation for each feature-class. For example, one of our feature-classes is Numeric Literals. The corresponding feature-observation is the set of all the numeric constants used in the function. For the binary-search implementation code given in Fig. 2, the Numeric Literals feature-observation is the set {−1, 0, 1, 2}.
A feature extraction engine consists of several feature extractors, which collect a given function's feature-observations into a feature-vector. Note that the elements of the feature-vector can be non-numeric, such as sets, multisets, trees, maps, etc. The number of feature-classes determines the length of the feature-vector.
The feature extractors operate on a code corpus, and populate a code database. Each element of the code database consists of a C/C++ function from the corpus along with its extracted feature-vector. If Numeric Literals is employed as one of the feature-classes, then one element of a function's feature-vector is the set of numeric constants.
The code database also has access to several similarity functions, one for each feature-class. The similarity function for a given feature-class takes any two feature-observations belonging to that feature-class and returns a value between 0.0 and 1.0. A higher value indicates greater similarity between two feature-observations. For example, the similarity function for Numeric Literals is the Jaccard index. Given two sets S1 and S2, the Jaccard index is given by:

simJacc(S1, S2) = |S1 ∩ S2| / |S1 ∪ S2|.   (1)
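To make Eq. (1) concrete, here is a minimal Python sketch of the Jaccard similarity (Python matches the prototype implementation mentioned in §II-B; the function name and the convention for two empty sets are ours, not taken from the Source Forager code base, and the candidate set is invented).

def sim_jacc(s1, s2):
    """Jaccard index of two sets of feature-observations (Eq. (1))."""
    if not s1 and not s2:
        return 1.0   # convention chosen here for two empty observations
    return len(s1 & s2) / len(s1 | s2)

# Numeric Literals observation of the Fig. 2 query vs. a hypothetical candidate.
print(sim_jacc({-1, 0, 1, 2}, {-1, 0, 1}))   # 0.75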
int bins(int key, int array[], int min, int max) {
    if (max < min) {
        return KEY_NOT_FOUND;
    } else {
        int midpoint = (int)floor((min+max)/2);
        if (array[midpoint] < key) {
            return bins(key, array, midpoint+1, max);
        } else if (array[midpoint] > key) {
            return bins(key, array, min, midpoint-1);
        } else {
            return midpoint;
        }
    }
}
Fig. 3. Example Source Forager code-search result for the query in Fig. 2. This result is a recursive implementation of binary search.

B. Online Phase: Search for Similar Code

In the online search phase, Source Forager takes a query and uses the same feature-extraction infrastructure to obtain the feature-vector that corresponds to the query. This infrastructure reuse creates a consistent representation and view of code throughout the code-search infrastructure. For each feature-class in the feature-vector, a weight is assigned to determine the importance of that feature-class. This feature-class weight determination is based on which configuration Source Forager is run with; sections III-B, III-C and IV-B provide an overview of the different configurations.
A combined similarity function is defined on any two feature-vectors by combining the per-feature-class similarity functions with the per-feature-class weight assignment using a weighted average. That is,

simcombined(A, B) = ( Σ_{c=1}^{ncl} simc(Ac, Bc) · wc ) / ( Σ_{c=1}^{ncl} wc ),   (2)

where A and B are two feature-vectors; ncl is the total number of feature-classes (i.e., the length of each feature-vector); simc is the similarity function for feature-class c; Ac and Bc are the feature-observations for feature-class c in A and B, respectively; and wc is the weight assigned to feature-class c.
The feature-vector of the query is compared with each of the feature-vectors in the code database using this combined similarity function, and the k most-similar functions (that is, with the highest similarity scores to the query) are returned as results (for some configurable limit k). Fig. 3 shows an example Source Forager code-search result when the code in Fig. 2 is used as query.
We have two implementations of Source Forager. The first one is a slower-performing version, in which the code database is implemented as a large, in-memory JSON [9] object, and the various similarity functions and the algorithm for k-most-similar function-search are implemented in Python. This implementation allows for easier and quicker experimentation with new ideas. We use this version for the experiments reported in §IV.
The second implementation integrates our infrastructure with Pliny-DB [10], which is an in-memory object-store database implemented in C++. The feature-observations in feature-vectors are serialized into efficient in-memory data structures by Pliny-DB. Pliny-DB has access to similarity functions implemented in C++ for all feature-classes. It implements the search for the k functions most similar to the query by (1) scanning all the feature-vectors in the database, (2) comparing each of them to the query feature-vector, and (3) maintaining a priority queue of size k that keeps track of the k most-similar feature-vectors. Given a query feature-vector and relative weights for different feature-classes, Pliny-DB can find the 10 most-similar functions in a code database containing 500,000 functions in under 2 seconds on a single machine with 8 Intel i7 3.6 GHz cores and 16 GB RAM. Effort is underway by the developers of Pliny-DB to make a distributed version, which would allow Source Forager to search large code databases without taking a big performance hit: a large code database can be split into p smaller units that can each be searched in parallel, and the sorted k most-similar results from each of the p units can be merged using a multi-way merge algorithm.

C. Extensible Architecture

Source Forager's architecture allows for easy extension. To add a new feature-class, one implements (1) a feature extractor that determines the feature-observation for any given function, and (2) a corresponding similarity function. We currently implement our feature extractors using CodeSonar®. However, Source Forager is not tightly coupled with CodeSonar: any C/C++ processing tool can be used to implement a feature extractor. The feature-observations for all existing feature-classes are represented with well-known container data structures, such as lists, maps, and trees; all similarity functions work at the level of container data structures, and thus are available to be reused with any additional user-supplied feature extractors. Furthermore, Source Forager is not tied to having functions as the only kind of program element. The underlying architecture is also not limited to C/C++, and thus Source Forager can be re-targeted to perform code searches of programs written in other languages.
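Before moving on, here is a minimal Python sketch of the weighted combination in Eq. (2) and of the k-most-similar scan described in §II-B; the data-structure choices (dicts keyed by feature-class name) and all names are our illustrative assumptions, not the Source Forager implementation.

def sim_combined(fv_a, fv_b, sims, weights):
    """Weighted average of per-feature-class similarities (Eq. (2)).

    fv_a, fv_b : dict mapping feature-class name -> feature-observation
    sims       : dict mapping feature-class name -> similarity function
    weights    : dict mapping feature-class name -> non-negative weight
    """
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(sims[c](fv_a[c], fv_b[c]) * weights[c] for c in weights) / total

def k_most_similar(query_fv, database, sims, weights, k=10):
    """Scan the database and keep the k entries most similar to the query."""
    scored = ((sim_combined(query_fv, fv, sims, weights), name)
              for name, fv in database.items())
    return sorted(scored, reverse=True)[:k]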
III. Code Search
In this section, we first describe the different feature-classes
and the accompanying similarity functions that are employed
in Source Forager. We then describe two configurations of
Source Forager. The first configuration (dyn-select) selects
a subset of the feature-classes on a per-query basis for performing code search: this configuration is useful when no
additional information is available regarding a code query. The
second configuration (svm-weights) pre-computes the relative
importance of feature-classes for a specific domain ahead of
time using supervised-learning techniques. This configuration
is useful when the domain of the code query is known.
TABLE I
A brief overview of the different feature-classes employed in Source Forager. The marked* feature-classes all use the Jaccard index (Eq. (1)) as the similarity function. The similarity functions used for the remaining feature-classes accompany their descriptions in §III-A.

Feature-Class: Brief Description
Type–Operation Coupling*: types used and operations performed on the types
Skeleton Tree: structure of loops and conditionals
Decorated Skeleton Tree: structure of loops, conditionals, and operations
Weighted NL Terms: processed natural language terms in code
3 Graph CFG BFS: CFG subgraphs of size 3, BFS used for generating subgraphs
4 Graph CFG BFS: CFG subgraphs of size 4, BFS used for generating subgraphs
3 Graph CFG DFS: CFG subgraphs of size 3, DFS used for generating subgraphs
4 Graph CFG DFS: CFG subgraphs of size 4, DFS used for generating subgraphs
Modeled Library Calls*: calls made to modeled libraries
Unmodeled Library Calls*: calls made to unmodeled libraries
User-Defined Library Calls*: calls made to user-defined libraries
Type Signature: input types and the return type
Local Types*: types of local variables
Numeric Literals*: numeric data constants used
String Literals*: string data constants used
Comments*: associated comment words

Fig. 4. Tree-structured feature-observations for the example program in Fig. 2: (a) Skeleton Tree; (b) Decorated Skeleton Tree. (Diagram: nested Seq, Loop, and Cond nodes; in (b) the nodes are additionally decorated with operations such as <=, /, +, <, −, and >.)
A. Feature-Classes and Similarity Functions

Table I summarizes Source Forager's feature-classes. Below, we further describe these feature-classes and their associated similarity functions.
Type–Operation Coupling: The feature-observation for this feature-class consists of the types of variables operated on in the function, coupled with the operations performed on those types. The feature-observation is a set of (type, operation) pairs. Primitive types are paired with the built-in arithmetic, logical, and relational operations, for example, (int, >=). User-defined types such as C++ classes are paired with the user-defined operations on them, including direct and indirect field accesses and method calls. For example, the pair (Bar, .foo) indicates that the field foo of an aggregate data type Bar is accessed. The intuition behind including this feature-class is that similar functions tend to use similar type–operation pairs. For the example in Fig. 2, the Type–Operation Coupling feature-observation extracted is the set {(int, unary-), (int, /), (int*, +), (int, >), (int, +), (int, <=), (int, -), (int, <)}.
Skeleton Tree: The feature-observation for this feature-class is based on the abstract syntax tree (AST) of a function. The AST is further abstracted by retaining only the loops (for, while, do...while) and conditionals (if...else, switch). Operationally, the feature extractor can be realized as a tree transducer that drops all AST nodes that are not loops or conditionals. Sequences of loops or conditionals are encapsulated within a sequence node, and empty sequences are dropped from the feature-observation. The intuition behind using this feature-class for code search is that similar functions tend to have similar loop and conditional structures.
Fig. 4a shows the Skeleton Tree feature-observation for the example code in Fig. 2.
The similarity function used for Skeleton Tree feature-observations is based on tree edit distances. Let dr be a rough approximation of the distance between two trees, only based on their sizes:

dr(T1, T2) = |size(T1) − size(T2)| / max(size(T1), size(T2))

Further, let DT be a fixed distance threshold (which we set to 0.5). We obtain an approximate distance between two trees, dt, as follows:

dt(T1, T2) = dr(T1, T2)                                                          if dr(T1, T2) ≥ DT
dt(T1, T2) = max( ed(pre(T1), pre(T2)), ed(post(T1), post(T2)) ) / max(size(T1), size(T2))   otherwise

Here pre(T) is the sequence obtained by performing a pre-order traversal of the tree T, post(T) is the sequence obtained by performing a post-order traversal of the tree T, and ed(S1, S2) is the word edit distance between the sequences S1 and S2. The similarity function used for Skeleton Tree feature-observations is then computed as:

simtree(T1, T2) = 1 − dt(T1, T2)   (3)
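The following Python sketch is our reading of Eq. (3) and the dr/dt approximation above; the tree encoding, node labels, and the plain dynamic-programming edit distance are illustrative choices of ours, not the paper's implementation.

DT = 0.5  # distance threshold used in the paper

def size(tree):
    label, children = tree
    return 1 + sum(size(c) for c in children)

def pre(tree):
    label, children = tree
    return [label] + [x for c in children for x in pre(c)]

def post(tree):
    label, children = tree
    return [x for c in children for x in post(c)] + [label]

def edit_distance(a, b):
    """Word-level Levenshtein distance between two label sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def sim_tree(t1, t2):
    n1, n2 = size(t1), size(t2)
    dr = abs(n1 - n2) / max(n1, n2)
    if dr >= DT:     # trees differ too much in size: use the rough distance
        dt = dr
    else:            # otherwise compare pre/post-order traversals
        dt = max(edit_distance(pre(t1), pre(t2)),
                 edit_distance(post(t1), post(t2))) / max(n1, n2)
    return 1 - dt

# Trees encoded as (label, [children]); a rough encoding of Fig. 4a and a trivial skeleton.
skeleton_a = ("Seq", [("Loop", [("Seq", [("Cond", [("Seq", [("Cond", [])])])])])])
skeleton_b = ("Seq", [("Cond", [])])
print(sim_tree(skeleton_a, skeleton_a), sim_tree(skeleton_a, skeleton_b))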
An exact tree-edit-distance computation [11] has quartictime complexity in the size of the trees being compared. We
instead use a fast under-approximation of edit distance [12]
that gives our similarity function quadratic-time complexity
overall. Note that we also use a further rough approximation
based on just the size of the trees, if one of the two trees being
compared is at least twice as large as the other. We found that
using these approximations as opposed to the exact tree-editdistance based similarity made no discernible difference in
the quality of the final search results obtained, but made a big
difference in performance: more than 6× faster in our tests.
Decorated Skeleton Tree: This feature-class is similar to
the Skeleton Tree, except that instead of retaining just the
loop and conditional structure in the feature-observations, most
operations (e.g., +, -, and <) are also retained from the AST.
We discard some common operations, such as assignment (=)
and address-of (&), because they cause excessive bloat. The
intuition behind including this feature-class is that similar functions use similar operations in structurally similar locations.
Fig. 4b shows the Decorated Skeleton Tree featureobservation for the example code in Fig. 2. The similarity
function used is simtree from Eq. (3).
Weighted NL Terms: The feature-observations for this
feature-class consist of various natural-language (NL) terms in
source code, such as function name, comments, local variable
names, and parameter names of a function. Such NL terms,
after extraction, are subjected to a series of standard NL preprocessing steps, such as splitting words with under_scores or
CamelCase, stemming, lemmatization, and removing single-character strings and stop-words. Stop-word removal discards
both typical English stop words such as “the”, “and”, and
“is” [13], as well as stop words specialized for code, such
as “fixme”, “todo”, and “xxx”. Additionally, we use a greedy
algorithm [14] for splitting terms into multiple words based on
dictionary lookup. This splitting is to handle the case where
programmers choose identifiers that combine multiple words
without under_scores or CamelCase.
After NL pre-processing, we compute a term frequency-inverse document frequency (TF-IDF) score for each NL term.
We consider each function as a document, and compute the
TF-IDF per C/C++ project. We give function-name terms an
inflated score (5× more than other terms) because these often
provide significant information about functions’ purposes. The
intuition behind including this feature-class is that similar
functions tend to have similar natural-language vocabulary.
The feature-observation for the example in Fig. 2 is {“bin”:
0.65, “search”: 0.65, “high”: 0.13, “low”: 0.13, “found”:
0.13, “mid”: 0.13, “match”: 0.13}.
The similarity function for two observations of Weighted
NL Terms uses cosine similarity:
simnl(A, B) = Σ_{i=1}^{n} Ai Bi / ( √(Σ_{i=1}^{n} Ai^2) · √(Σ_{i=1}^{n} Bi^2) )

Here n is the total number of words in the universe, A and B are vectors with TF-IDF scores, and the i-th index Ai is the TF-IDF value for the i-th word.
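A minimal sketch of this cosine similarity over sparse term-to-score maps; the dict representation is ours, the query map reuses part of the Fig. 2 example scores from above, and the candidate map is invented.

from math import sqrt

def sim_nl(a, b):
    """Cosine similarity between two {term: tf-idf score} maps."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = sqrt(sum(w * w for w in a.values()))
    nb = sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = {"bin": 0.65, "search": 0.65, "high": 0.13, "low": 0.13, "mid": 0.13}
candidate = {"binary": 0.7, "search": 0.6, "low": 0.2, "high": 0.2, "key": 0.1}
print(sim_nl(query, candidate))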
Fig. 5. An example 4-graph and its corresponding adjacency matrix (rows A–D, in traversal order: A: 0 1 0 0; B: 0 0 1 1; C: 0 0 0 0; D: 0 0 0 0). Serializing the adjacency matrix entries yields binary digits “0100 0011 0000 0000”, or 17,152 in decimal. Node ordering in the adjacency matrix is the traversal order.
K-Subgraphs of CFG: We implement multiple featureclasses based on k-sized subgraphs of the control flow graph
(CFG) of a function. Given the CFG of a function, we begin
either a breadth-first-search (BFS) traversal or a depth-first
search (DFS) traversal at a node until k nodes are traversed; a
subgraph of the CFG involving these k nodes is extracted. If
fewer than k nodes are reachable from a node (including itself),
then such a sub-graph is thrown away. We repeat this process
for every node in the CFG, extracting at most n subgraphs of
size k, where n is the size of the CFG. We represent a graph
of size k as a k 2 -bit integer, which is a 1-D representation of
a 2-D adjacency-matrix representation of the graph, obtained
by concatenating each of the matrix rows in order. Thus, from
each function’s CFG, we extract a multiset of k-graph shapes.
Fig. 5 shows an example of converting a 4-graph into a 16-bit
integer in this manner.
We implement the following four feature-classes based on
the value of k and the traversal strategy chosen:
3 Graph CFG BFS: k = 3, traversal strategy is BFS.
4 Graph CFG BFS: k = 4, traversal strategy is BFS.
3 Graph CFG DFS: k = 3, traversal strategy is DFS.
4 Graph CFG DFS: k = 4, traversal strategy is DFS.
For the example in Fig. 2, the feature-observation extracted
for the feature-class “4 Graph CFG BFS” is the multiset {134,
134, 134, 194, 194, 194, 194, 194, 2114, 2114, 2114}. The
intuition behind including these feature-classes is that similar
functions tend to have similar control-flow structures [15].
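The k²-bit encoding described above is easy to state in code; the sketch below is ours and assumes the adjacency-matrix rows are given in traversal order, as in Fig. 5.

def encode_subgraph(adj_rows):
    """Pack a k x k 0/1 adjacency matrix (rows in traversal order) into one integer."""
    bits = "".join(str(b) for row in adj_rows for b in row)
    return int(bits, 2)

# The 4-graph of Fig. 5: edges A->B, B->C, B->D.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(encode_subgraph(adj))   # 17152, i.e. binary 0100 0011 0000 0000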
The similarity function used for these feature-classes is based on the generalized Jaccard index between two multisets O1 and O2:

simGen-Jacc(O1, O2) = Σ_i min(O1i, O2i) / Σ_i max(O1i, O2i)   (4)

Here, i iterates over all the unique elements in O1 ∪ O2, and O1i is the number of times i appears in the multiset O1.
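Eq. (4) maps directly onto multisets represented as Python Counters; the sketch below, including the convention for two empty multisets and the second example multiset, is ours.

from collections import Counter

def sim_gen_jacc(o1, o2):
    """Generalized Jaccard index between two multisets given as Counters (Eq. (4))."""
    keys = set(o1) | set(o2)
    denom = sum(max(o1[i], o2[i]) for i in keys)
    if denom == 0:
        return 1.0   # our convention for two empty multisets
    return sum(min(o1[i], o2[i]) for i in keys) / denom

a = Counter({134: 3, 194: 5, 2114: 3})   # "4 Graph CFG BFS" observation from Fig. 2
b = Counter({134: 1, 194: 5, 66: 2})     # hypothetical candidate observation
print(sim_gen_jacc(a, b))                # 6 / 13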
Calls to Library Functions: We implement three featureclasses that extract calls to various kinds of library functions:
Modeled Library Calls: CodeSonar models a large range of
library functions for performing static analysis on C/C++
code. For this feature-class, calls made to any of these
modeled library functions are extracted.
Unmodeled Library Calls: Calls made to any unmodeled
library functions are extracted for this feature-class—that is,
calls to a function not modeled by CodeSonar, and whose
definition is not available in the source code.
User-Defined Library Calls: For this feature-class, calls to functions whose definitions are available in a directory different from the caller function are extracted. We use such functions as a heuristic for identifying user-defined libraries.
The intuition behind including the above three feature-classes is that similar code tends to call the same library functions. For each of these three feature-classes, the feature-values are sets of library functions called. A library function is represented as a tuple: it includes the name of the function together with the file name containing the function's declaration. For example, if a function calls strcpy and strncpy, then the feature-observation corresponding to Modeled Library Calls for that function is {(strcpy, string.h), (strncpy, string.h)}.
Type Signature: For this feature-class, the feature-observations consist of the type signature of the function: i.e., the argument types and the return type of a function. Together, the argument types and the return type form a multiset of types. For the example code in Fig. 2, the feature-observation corresponding to Type Signatures is {int, int, int*, int}. Type signatures define a function's interface for interaction with the rest of the code. Similar code tends to have similar interfaces, and therefore type signatures could help with code search.
The generalized Jaccard index (Eq. (4)) is used as the similarity function for this feature-class.
Local Types: For this feature-class, the feature-observations consist of the set of types of all the local variables. The intuition behind using local variable types in code search is that similar code creates and operates on variables of similar types. For the example code in Fig. 2, the Local Types feature-observation is {int}.
Constants: We implement two feature-classes that extract constants from a function:
Numeric Literals: This feature-class is described in §II-A.
String Literals: For this feature-class, a feature-observation is the set of all the literal strings used in a function.
The intuition behind using sets of constants in code search is that similar code typically uses similar constants.
Comments: For this feature-class, the feature-observations consist of the comments associated with a function. The comments are represented as a set of words. The intuition behind using comments in code search is that the comments in similar pieces of code are likely to use a similar vocabulary. For the example code in Fig. 2, the Comments feature-observation is {“found”, “match”, “no”}.
Combining Feature-Classes: Using several feature-classes in combination allows Source Forager to obtain good code-search results in a fairly robust manner by using different dimensions of the code. For example, consider the binary-search implementation in Fig. 2. We see that variables named mid, low, high are used; that there are two conditionals nested inside a single loop; and that an integer division and integer less-than-or-equal-to operation is performed. When put together, these observations are hallmarks of a binary-search implementation.

B. Dynamic Feature-Class Selection

Combining feature-classes can be beneficial for code search; however, the feature-classes that are useful for performing a code search may vary from one query to another. For example, consider a query function consisting of just straight-line code. A significant number of functions in our code-database are devoid of loops and conditionals,2 and all such functions look identical to the query function with respect to the Skeleton Tree feature-class. Thus, performing a code search with this query by including the Skeleton Tree feature-class can lead to lower-quality results. On the other hand, if a query function has an unusual loop and conditional structure that is idiomatic to the computation being performed, then the Skeleton Tree feature-class would be useful in code search: other instances of the same distinctive structure from the code database would have high similarity scores to the query function.
2 We did a brief study of feature-observation distributions for the Skeleton Tree feature-class over our corpus, which revealed this data point.
Thus, it is useful to select feature-classes automatically on a per-query basis for code search. This configuration of Source Forager is called dyn-select. Intuitively, a feature-class for a given query is selected for code search if the corresponding feature-observation is sufficiently discriminatory/unique with respect to the overall feature-observation distribution for that feature-class.
To prepare for the dynamic feature-class selection on a per-query basis, we take the following steps offline:
• From the code database, we retrieve a random sample S of feature-vectors. Random sampling gives an inexpensive estimate of feature-observation distributions across the entire code database.
• We calculate a similarity threshold for each feature-class c by (1) computing pairwise similarity scores on the feature-observations for c in S and (2) taking the sum of means and standard deviations of the similarity scores. Two feature-observations for c are considered similar if their similarity score is above the similarity threshold for c.
Online, when a query is posed, we take the following steps for each feature-class c (which can be performed in parallel):
• We compare the query's feature-observation for c with all other feature-observations for c in sample S (of size nsamp), and count the number of similar feature-observations nsim-c.
• We select the feature-class c for code search if it is not too common, that is, if nsim-c/nsamp < tuniq. Here tuniq is a threshold that indicates a feature-observation is sufficiently unique in the sample. For example, tuniq = 0.15 indicates that any feature-observation that is similar to less than 15% of the sample feature-observations is considered distinctive enough to warrant inclusion.
Each feature-class is assigned a weight of exactly 1.0 or exactly 0.0, based on whether the feature-class is selected in the above process. These weights are used for combining feature-class similarities for code search (Eq. (2)), and the k-most-similar-function search is carried out between the query
function and the functions in the code database as described
in §II-B, to obtain the k functions most similar to the query.
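A compact sketch of the offline and online steps of dyn-select described above; this is our paraphrase, and the sample sizes, the random-pairing shortcut for the pairwise scores, and all names are illustrative assumptions rather than the paper's code.

import random
from statistics import mean, stdev

def offline_thresholds(db_fvs, sims, sample_size=200, pair_count=2000, seed=0):
    """Per-feature-class thresholds: mean + stdev of similarity scores on sampled pairs.

    db_fvs is a list of feature-vectors (dicts keyed by feature-class name)."""
    rng = random.Random(seed)
    sample = rng.sample(db_fvs, min(sample_size, len(db_fvs)))
    thresholds = {}
    for c, sim in sims.items():
        scores = [sim(rng.choice(sample)[c], rng.choice(sample)[c])
                  for _ in range(pair_count)]
        thresholds[c] = mean(scores) + stdev(scores)
    return sample, thresholds

def dyn_select(query_fv, sample, thresholds, sims, t_uniq=0.15):
    """Weight 1.0 for feature-classes whose query observation is rare in the sample."""
    weights = {}
    for c, sim in sims.items():
        n_sim = sum(sim(query_fv[c], fv[c]) > thresholds[c] for fv in sample)
        weights[c] = 1.0 if n_sim / len(sample) < t_uniq else 0.0
    return weights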
TABLE II
Task categories used for code-search queries in algo-qs. “#Similar” gives the number of similar functions that were manually found for a given task category. “Partial” reports how many function pairs MOSS considered to be potential clones. “Significant” reports the % of function pairs with at least 50% code overlap.

Task Category | #Similar | MOSS Partial | MOSS Significant
Binary Search | 5 | 20% | 10%
Edit Distance | 5 | 40% | 0%
Insertion Sort | 5 | 30% | 30%
Knapsack | 5 | 10% | 0%
Modular Exponentiation | 6 | 0% | 0%
Non Recursive Depth First Search | 5 | 0% | 0%
Red Black Tree Left Rotate | 6 | 40% | 13%
C. SVM-Guided Feature-Class Weight Generation
Note that dyn-select does not need any additional knowledge
about the query. However, if we know ahead of time that a
query belongs to a specific domain, and we have ground-truth
information available regarding what constitutes similar code
in that domain, then we can use supervised-learning techniques
to learn good feature-class weights (for Eq. (2)) for that domain
ahead of time, and use these weights for code search with all
future queries in that domain.
Given a particular ground-truth data set with labeled similar
code, we generate fine-tuned weights by training a binaryclassification support vector machine (SVM). We do not train
using raw code text, or even raw sets of feature-observations.
Because we use the SVM training process to generate relative weights for feature-class similarity scores in Eq. (2),
we train the SVM on these similarity scores directly. The
similarity scores for all feature-classes between two functions
are assembled into a similarity vector. The SVM is then
trained on examples of similarity vectors for both similar and
dissimilar functions, each labeled accordingly. This technique
allows us to optimize ahead of time how these feature-classes
are relatively weighted in a code search, by using the same
similarity functions that are employed in code search of a
query.
Our SVM uses a linear classifier, which allows a convenient
interpretation of internal weights [16]. The final pre-processing
step is to extract these internal weights and normalize them
relative to the sum of their magnitudes, truncating negative
weights. These normalized weights are then used directly as
feature-class weights in Eq. (2).
§IV-B provides more details about the corpus and training
process. Of course, it is not obvious that weights obtained by
training for classification purposes are useful in ranking results
for code-search queries. §IV-C measures the effectiveness of
this strategy in practice.
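The paper trains the classifier with LIBLINEAR; the sketch below substitutes scikit-learn's LinearSVC (our assumption, a different wrapper over the same family of linear solvers) to show how internal weights could be extracted, truncated, and normalized into the wc of Eq. (2).

import numpy as np
from sklearn.svm import LinearSVC

def learn_feature_class_weights(sim_vectors, labels):
    """Train a linear classifier on per-pair similarity vectors and return
    normalized, non-negative weights, one per feature-class.

    sim_vectors[i][c] is the similarity of pair i under feature-class c;
    labels[i] is 1 for a similar pair and 0 for a dissimilar pair."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(np.asarray(sim_vectors), np.asarray(labels))
    coef = clf.coef_.ravel()
    w = np.clip(coef, 0.0, None)          # truncate negative weights
    total = np.abs(coef).sum()            # normalize by the sum of magnitudes
    return w / total if total > 0 else w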
IV. Experimental Evaluation

This section outlines the research questions we seek to answer through experiments (§IV-A); describes the setup and methodology used in the experiments (§IV-B); and presents the results of the experiments (§IV-C).

A. Research Questions

Our experiments were designed to answer the following research questions:
RQ1 How do the individual feature-classes described in §III perform in code-search tasks relative to each other?
RQ2 Does combining feature-classes using per-query dynamic feature-class selection (§III-B) improve Source Forager's performance?
RQ3 Does combining feature-classes using supervised learning (§III-C) further improve Source Forager's performance, when the query domain is known?
A code-search task involves searching for relevant documents from a group of documents that include both relevant and non-relevant documents. (In the case of Source Forager, “documents” are C/C++ functions.) Non-relevant documents are also known as distractors, which leads naturally to the following question:
RQ4 How much does Source Forager's code-search performance degrade as we increase the number of distractors in the code base being searched?

B. Experimental Setup and Methodology

Source Forager uses CodeSonar, an industrial-strength C/C++ static-analysis engine, to analyze C/C++ corpora and implement feature extractors. CodeSonar handles real-world C/C++ projects with tens of millions of lines of code. CodeSonar also exposes a wealth of information about a program through well-defined APIs. Source Forager's feature extractors are implemented as CodeSonar plugins that use these APIs. Consequently, Source Forager inherits CodeSonar's requirement that programs must be compilable to be analyzable.
Code-Search Tasks: Our experiments assess Source Forager's performance under various configurations. Code-search tasks are set up as follows. For each query function, there is a set of known relevant functions that are similar to the query. The relevant functions are treated as ground truth. The relevant functions are then mixed with many non-relevant functions as distractors, and together they form the code database used in the experiment. Source Forager then searches the code database for similar functions. We compute information-retrieval statistics based on the ranking of the known-relevant functions in the returned results.
Queries: We use two query ground-truth sets for the code-search tasks, representing two domains. One, called algo-qs, represents “algorithmic” code queries. For algo-qs, we created seven tasks, outlined in Table II, and manually curated a total of thirty-eight functions that each accomplish one of the seven tasks. The functions were mostly obtained from GitHub, and were written by a variety of programmers, none of whom are authors of this paper. The functions that accomplish a specific task have been manually vetted to be
similar to each other. We thus have a total of thirty-eight base
queries.
We use these sets of real-world functions as queries (and the
desired search results), and consider them to be an appropriate
proxy for the code-search queries performed (and search
results expected) by users in the algorithm domain. We have
made the labeled queries available for inspection.3
To make sure that the similar functions we found were not
all clones of each other, we ran them through the MOSS
software-plagiarism detector [17]. Given a group of programs,
MOSS reports program pairs that may be clones, along with
an overlap percentage. Table II reports MOSS’s findings, run
using default settings. In this table, partial overlap represents
any pair that MOSS reports as possible clones, while significant overlap counts only possible clones with at least 50%
overlap. Observe that many function pairs marked manually
as being similar are not just MOSS-detectable clones of each
other. Thus, recognizing similar function pairs in this corpus
is a nontrivial challenge.
The second query ground-truth set we use is called
libc-qs, and represents code queries from systems programming. We looked at three implementations of the standard
C library: musl libc [18], diet libc [19], and uClibc [20].
From these we define 88 function categories corresponding
to 88 functions that all three implementations provide. We
assume that within the same function category, the three libc
implementations are “similar.” For this domain, we have 88×3
queries. For example, musl libc’s sprintf is labeled to be
similar to diet libc’s sprintf and uClibc’s sprintf, and
dissimilar to everything else.
Distractor Functions: The distractor functions have been
taken from the openly available MUSE corpus [21], and mainly
consist of code from Fedora source packages (SRPMs). Our
feature extractors currently require compilable code, which
Fedora SRPMs provide. Due to the large size of the distractorfunction corpus (over 200,000), we have not manually vetted
all of the distractor functions to be sure that they are irrelevant
to the queries issued. It is possible that some distractor
functions are indeed relevant to some queries, so our retrieval
statistics are under-approximations. With the exception of the
experiments reported in Fig. 7, all experiments use 10,000
distractors.
Retrieval Statistics: We compute Mean Average Precision
(MAP) as the retrieval statistic, as is common in information
retrieval. MAP is typically used to measure the quality of
ranked retrieval results, because MAP takes into account
the rank of the relevant documents in the retrieved results.
MAP provides a measure of quality across all recall levels.
MAP is the mean of the average precision computed for
each query. The average precision (AP) for each query is
given by ( Σ_{k=1}^{n} P(k) · r(k) ) / R, where n is the total number of
documents searched; R is the number of documents marked
relevant to the query; P(k) is the precision when k documents
are requested; and r(k) is 1 when the k th retrieved document is
relevant, and 0 otherwise. That is, AP is the average precision
at all the points when a new relevant document is retrieved in
a ranked result list. The best MAP score that can be achieved
is 1.0, when for each query, the R relevant documents appear
as the top R search results.
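The AP and MAP computations described above fit in a few lines; the sketch below is ours.

def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean precision at each rank where a relevant item appears."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results_per_query):
    """MAP over (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in results_per_query) / len(results_per_query)

# A query whose 3 relevant functions are returned at ranks 1, 3, and 4: AP = (1 + 2/3 + 3/4)/3.
print(average_precision(["a", "x", "b", "c", "y"], {"a", "b", "c"}))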
SVM-Guided Weights: We applied the techniques discussed in §III-C on algo-qs and libc-qs to provide labeled
data-sets on which to train an SVM. Each instance in our
training set is generated by comparing two functions a and
b, yielding a single similarity vector that consists of similarity
scores for each feature-class. The binary classification for each
training instance is 1.0 if a and b are implementations of the
same function, 0.0 otherwise.
We use LIBLINEAR [22] to train the SVM to classify
these function comparisons; this process takes roughly twenty
milliseconds. Using this technique, we are able to achieve over
98% accuracy under ten-fold cross-validation.
Once the SVM is trained, we extract and normalize its
internal weights for use in code search. For the svm-weights
configuration described below, within each domain, the dataset is divided into multiple folds of training-set and test-set
pairs. The weights extracted from the training set are used
to obtain MAP scores on the test set. That is, weights are
trained on a subset of a given domain (algo-qs or libc-qs)
and tested using queries from a different subset of the same
domain. For the cross-weights configuration described below,
algo-qs is used to train weights for queries from libc-qs,
and vice-versa.
Source Forager Configurations: Our experiments run
Source Forager under many configurations. Each configuration
is defined by the weight wc assigned to each of the featureclasses c given in Table I. These weights are used in Eq. (2)
for performing the code search.
solo-c: For each query, the weight wc corresponding to
feature-class c is 1.0. Weights corresponding to all other
feature-classes are set to 0.0.
equal-all: For each query, for all feature-classes c, wc = 1.0,
giving equal importance to all feature-classes for all queries.
dyn-select: For each query, a subset of feature-classes are
selected and given equal weights, as described in §III-B. The
dynamic selection of feature-classes adds a small run-time
overhead to each query.4
rand-select: For each query, a new random configuration is
used as follows: a random subset of the feature-classes is
selected, and the selected feature-classes are given equal
weights. Repeat this process 10 times with different random
selections, and report mean results over these 10 trials.
svm-weights: For each query, use weights learned for the
domain that the query belongs to, as described in §III-C and
above.
cross-weights: For each query, use weights learned for the
domain that the query does not belong to.
4In our naive Python implementation, dyn-select adds an average run-time
overhead of 2.1 seconds per query for dynamic selection of feature-classes.
Currently, the selection decision on each feature-class is done sequentially,
instead of in parallel as suggested in §III-B.
3Available at the URL: http://tinyurl.com/source-forager-algo-benchmarks.
Note that, unlike the other configurations, the svm-weights and
cross-weights configurations permit weights to give different
(non-zero) importance levels to different feature-classes.
C. Results and Discussion
The left side of Fig. 6 shows how each individual feature-class performs on the code-search tasks in isolation. This experiment addresses RQ1. The solo feature-class Weighted NL Terms (0.70, 0.86)5 performs the best individually on both algo-qs and libc-qs. Thus:
RQ1 Finding: If we were to drive Source Forager using only one feature-class, Weighted NL Terms is the best option. However, Fig. 6 shows that the performance of the different feature-classes varies considerably depending on the query ground-truth set. This variance suggests that different feature-classes are important for different kinds of queries.
RQ2 asks whether multiple feature-classes can be usefully combined, and whether dyn-select is a good way to do such a combination. A straight-forward manner in which feature-classes can be combined is the equal-all configuration, which represents a baseline to compare against other configurations. The dyn-select configuration selects different subsets of the feature-classes on a per-query basis (§III-B). As a sanity check for the selections performed by dyn-select, we also compare it with the rand-select configuration, which randomly selects feature-class subsets for every query. The right side of Fig. 6 shows that dyn-select (0.84, 0.89) performs better on both algo-qs and libc-qs when compared to equal-all (0.67, 0.73) and rand-select (0.57, 0.63). dyn-select also outperforms each of the solo configurations from the left side of Fig. 6.
RQ2 Finding: In the absence of any additional information about the query, combining multiple feature-classes and dynamically selecting feature-classes on a per-query basis (§III-B) is the most effective strategy for code search.
RQ3 addresses the scenario where the domain of a query is known, and additional information is available regarding that domain (as described in §III-C). The svm-weights configuration tests Source Forager under this scenario. Pre-learning the relative importance of feature-classes for a given domain (in the form of weights wc for each feature-class) also makes code search more efficient by eliminating any run-time overhead in feature-class selection. The right side of Fig. 6 shows that svm-weights (0.86, 0.95) outperforms all other configurations.
The cross-weights (0.74, 0.85) configuration tests whether the weights learned from one domain are useful in a different domain. The rightmost two bars in Fig. 6 show that it is hard to derive a single set of relative feature-class weights that work well for queries in both domains. Thus, in the absence of domain information about the query, dyn-select is preferred.
RQ3 Finding: When the domain of a query is known, and training data is available, combining multiple feature-classes using weights derived from supervised learning (§III-C) is the most effective strategy for code search.
Fig. 7 shows how Source Forager's result quality scales with increasing distractor-set sizes. This experiment addresses RQ4. Source Forager is used in the dyn-select and svm-weights configurations for this experiment. As one would expect, MAP scores decline as distractors proliferate. However, consider that relevant sets contain just 2 to 6 items competing against distractor sets that are up to five orders of magnitude larger.
RQ4 Finding: Resilient MAP scores indicate that Source Forager returns high-quality results even when distractors outnumber relevant items by several orders of magnitude.
D. Threats to Validity
The issue of whether evaluation benchmarks are appropriate is a potential threat to the validity of any information retrieval system. We mitigate this threat for Source Forager in several ways. First, we use benchmark queries from two different domains, algo-qs and libc-qs. Second, we use the MOSS plagiarism detector to show that our manually labeled set of relevant functions in algo-qs are not trivial clones of each other. Third, we draw the algo-qs and libc-qs data sets from real-world code written by arbitrary programmers, not artificial programs written by us.
Feature-classes can be combined in various ways to perform code searches. We have explored part of the vast space of all such combinations, and our results speak only to those we have tried. We find that the MAP scores of the configuration dyn-select on both algo-qs and libc-qs are good. We designed the experiments with the equal-all and rand-select configurations to test whether the selections made by dyn-select are indeed necessary and useful, and find that they are.
V. Related Work
Code-search engines: Several popular text-based code-search tools “grep” over tokenized source code: GitHub,
SearchCode, Open HUB, etc. While these tools are useful,
they fall short in many use cases, as they do not exploit the
rich semantics of code. For example, the top search results
for the term “dfs” on C code projects in GitHub yields
function declarations, macro names, and #include directives
that mention “dfs”, but that are not actually useful.
The Sourcerer code-search engine [2] combines text-based
search techniques with information about relations among programming “entities” like packages, classes, methods, and fields.
5Pairs of numbers following a configuration in this section indicate the
MAP scores of that configuration on algo-qs and libc-qs, respectively.
[Fig. 6 appears here: a bar chart of MAP scores on algo-qs and libc-qs for each solo feature-class (Type–Operation Coupling, Skeleton Tree, Decorated Skeleton Tree, Weighted NL Terms, 3- and 4-graph CFG BFS/DFS, Modeled Library Calls, Type Signatures, Local Types, Numeric Literals, String Literals, Comments, User-Defined Library Calls, Unmodeled Library Calls) and for the combined configurations (equal-all, rand-select, dyn-select, svm-weights, cross-weights); individual data-point labels omitted.]
Fig. 6. Information retrieval performance with 10,000 distractors. The left side of the plot, from “Type–Operation Coupling” through “Comments”, uses the solo-c configuration with the given feature-class as the only non-zero-weighted feature. The right side of the plot, from “equal-all” through “cross-weights”, uses the various other Source Forager configurations that leverage multiple feature-classes simultaneously. Although no MAP score is exactly zero, several are below 0.005 and therefore round to “0.00” in the data-point labels.
[Fig. 7 appears here: MAP score versus number of distractor functions (100 to 204,800) for the svm-weights and dyn-select configurations on libc-qs and algo-qs; individual data-point labels omitted.]
Fig. 7. Impact of the number of distractor functions on MAP scores using dyn-select and svm-weights for all algo-qs and libc-qs queries. The horizontal axis is on a log scale.
Sourcerer also uses fingerprints that capture some light-weight structural information about the code, such as depth of loop nesting and presence or absence of certain language constructs. Queries in Sourcerer are text-based and are powered by Lucene (http://lucene.apache.org), as opposed to the code-based search by Source Forager.
Strathcona [8] returns relevant Java code examples to developers learning to use complex object-oriented frameworks. It uses several heuristics based on class-inheritance hierarchies, method calls, and type uses. Source Forager could also use the applicable heuristics from Sourcerer and Strathcona as feature-classes, but additionally demonstrates how to search using more complex structures, such as decorated skeleton trees and CFG-subgraphs.
CodeGenie [7], [23] proposes test-driven code search, in which the user supplies a set of unit tests for the code component they want to find. CodeGenie leverages Sourcerer [2] to perform keyword-based search; test cases refine these results. Source Forager could be used as a replacement for Sourcerer to perform similar code search in CodeGenie.
Stollee et al. [4], [24] perform code search based on logical characterizations of programs’ I/O behaviors, obtained via symbolic execution. A query consists of concrete I/O pairs for the desired code fragment. While this approach precisely captures the semantics of the corpus elements, it does not immediately handle some common programming constructs, such as loops and global variables. It also restricts the size of the program elements in the corpus, because symbolic execution of larger elements may lead to path explosion. Source Forager can easily be extended to use I/O pairs as an additional feature-class in scenarios where the above restrictions are acceptable.
XSnippet [5] and ParseWeb [25] are specialized code-search engines: XSnippet looks specifically for code that instantiates objects of given type in a given context; ParseWeb has a similar focus on code sequences that instantiate objects. Codify [6] extracts and stores a large amount of metadata for each symbol in a program, and provides a user interface for querying that metadata. Codify aids in understanding and browsing code.
The goal of Source Forager’s code search is different from the above, i.e., to find source code similar to a query.
Code-clone detection: Source Forager’s code searches
differ from the typical clone detection problem in that we are
interested in finding code that has both semantic and syntactic
similarity. Therefore, we use a range of feature-classes that
span from syntactic to semantic. Source Forager’s notion of
similarity does not neatly fall into any of the definitions of
standard clone types 1–4 [26].
Similar-machine-code search: Finding similar machine
code [15], [27]–[29] is useful in finding known vulnerabilities
in third-party code for which source code is not available.
The primary difference in code search at the source-level
and machine-level is that machine code has poorer syntactic,
semantic, and structural information available compared to
source code. As a result, while there is some overlap between
techniques, research on machine-code search is focused on
tackling different problems, such as how to do similar-machine-code search across different CPU architectures, compiler optimizations, compilers, operating systems, etc.
SVM-based code-classification: Rosenblum et al. [30]
train SVMs with features extracted from source code in the
attempt to classify programs by author. Source Forager builds
on this idea by training an SVM with similarity scores derived
from feature-observations, and then extracting internal weights
from the trained SVM to strengthen the combined similarity
function used for code search.
References
[1] C. Sadowski, K. T. Stollee, and S. G. Elbaum, “How developers search
for code: a case study,” in Found. of Softw. Eng., 2015.
[2] E. Linstead, S. K. Bajracharya, T. C. Ngo, P. Rigor, C. V. Lopes,
and P. Baldi, “Sourcerer: Mining and searching internet-scale software
repositories.” Data Mining and Knowledge Discovery, vol. 18, no. 2, pp.
300–336, Apr. 2009.
[3] S. P. Reiss, “Semantics-based code search,” in Int. Conf. on Softw. Eng.,
2009, pp. 243–253.
[4] K. T. Stollee, S. G. Elbaum, and D. Dobos, “Solving the search for
source code,” Trans. on Softw. Engineering and Methodology, vol. 23,
no. 3, May 2014.
[5] N. Sahavechaphan and K. T. Claypool, “XSnippet: mining for sample
code,” in Conf. on Object-Oriented Prog. Systems, Languages, and
Applications, 2006, pp. 413–430.
[6] A. Begel, “Codifier: A programmer-centric search user interface,” in
Workshop on Human-Computer Interaction and Inf. Retrieval, 2007.
[7] O. Lemos, B. K. Bajracharya, J. Ossher, R. S. Morla, P. C. Masiero,
P. Baldi, and C. V. Lopes, “Codegenie: using test-cases to search and
reuse source code,” in Int. Conf. on Automated Softw. Eng., 2007.
[8] R. Holmes and G. C. Murphy, “Using structural context to recommend
source code examples,” in Int. Conf. on Softw. Eng., 2005.
[9] D. Crockford, “Introducing JSON,” Apr. 2016. [Online]. Available:
http://json.org/
[10] C. Jermaine, “The pliny database,” Aug. 2016. [Online]. Available:
http://cmj4.web.rice.edu/PlinyDBSlides.pdf
[11] K. Zhang and D. Shasha, “Simple fast algorithms for the editing distance
between trees and related problems,” SIAM J. Comput., vol. 18, no. 6,
Dec. 1989.
[12] S. Guha, H. Jagadish, N. Koudas, D. Srivastava, and T. Yu, “Approximate
XML joins,” in Int. Conf. on Management of Data. ACM, 2002, pp.
287–298.
[13] NLTK Project, “Stopwords corpus,” Mar. 2016. [Online]. Available:
http://www.nltk.org/nltk_data
[14] H. Feild, D. Binkley, and D. Lawrie, “An empirical comparison of
techniques for extracting concept abbreviations from identifiers,” in Proc.
IASTED Int. Conf. on Software Engineering and Applications (SEA).
Citeseer, 2006.
[15] W. M. Khoo, A. Mycroft, and R. Anderson, “Rendezvous: A search
engine for binary code,” in Proceedings of the 10th Working Conference
on Mining Software Repositories, 2013, pp. 329–338.
[16] I. Guyon and A. Elisseeff, “An introduction to variable and feature
selection,” J. Mach. Learn. Res., vol. 3, pp. 1157–1182, Mar. 2003.
[17] S. Schleimer, D. S. Wilkerson, and A. Aiken, “Winnowing: Local
algorithms for document fingerprinting,” in Int. Conf. on Management
of Data, 2003, pp. 76–85.
[18] Eta Labs, “musl libc,” Aug. 2016. [Online]. Available:
https://www.musl-libc.org/
[19] diet libc contributors, “diet libc,” Aug. 2016. [Online]. Available:
https://www.fefe.de/dietlibc/
[20] E. Andersen, “uClibc,” Aug. 2016. [Online]. Available:
https://github.com/klee/klee-uclibc
[21] Leidos Holdings, Inc., “MUSE corpus,” Apr. 2016. [Online]. Available:
http://corpus.museprogram.org/
[22] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin,
“LIBLINEAR: A library for large linear classification,” Journal of
Machine Learning Research, vol. 9, pp. 1871–1874, 2008.
[23] O. Lemos, B. K. Bajracharya, J. Ossher, P. C. Masiero, and C. V. Lopes,
“A test-driven approach to code search and its application to the reuse of
auxiliary functionality,” J. Information and Software Technology, vol. 53,
no. 4, Apr. 2011.
[24] Y. Ke, K. T. Stollee, C. L. Gouse, and Y. Brun, “Repairing programs
with semantic code search,” in Int. Conf. on Automated Softw. Eng.,
2015, pp. 295–306.
[25] S. Thummalapenta and T. Xie, “Parseweb: a programmer assistant for
reusing open source code on the web,” in Int. Conf. on Automated Softw.
Eng., 2007.
[26] C. K. Roy, J. R. Cordy, and R. Koschke, “Comparison and evaluation
of code clone detection techniques and tools: A qualitative approach,”
Sci. Comput. Program., May 2009.
[27] Y. David and E. Yahav, “Tracelet-based code search in executables,” in
Proceedings of the 35th ACM SIGPLAN Conference on Programming
Language Design and Implementation, ser. PLDI ’14. New York, NY,
USA: ACM, 2014, pp. 349–360.
[28] S. Eschweiler, K. Yakdan, and E. Gerhards-Padilla, “discovRE: Efficient
cross-architecture identification of bugs in binary code,” in Network and
Dist. Syst. Security, 2016.
[29] J. Pewny, B. Garmany, R. Gawlik, C. Rossow, and T. Holz, “Crossarchitecture bug search in binary executables,” in Security and Privacy
(SP), 2015 IEEE Symposium on. IEEE, 2015, pp. 709–724.
[30] N. Rosenblum, X. Zhu, and B. P. Miller, “Who wrote this code?
identifying the authors of program binaries,” in Proceedings of the 16th
European Conference on Research in Computer Security, 2011, pp. 172–
189.
| 6 |
This is a pre-print of the conference paper accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2018.
Towards Robust Deep Neural Networks with BANG
Andras Rozsa, Manuel Günther, and Terrance E. Boult
Vision and Security Technology (VAST) Lab
University of Colorado, Colorado Springs, USA
arXiv:1612.00138v3 [cs.CV] 30 Jan 2018
{arozsa,mgunther,tboult}@vast.uccs.edu
Abstract
Machine learning models, including state-of-the-art
deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. This unexpected lack of robustness raises fundamental questions
about their generalization properties and poses a serious
concern for practical deployments. As such perturbations
can remain imperceptible – the formed adversarial examples demonstrate an inherent inconsistency between vulnerable machine learning models and human perception –
some prior work casts this problem as a security issue. Despite the significance of the discovered instabilities and ensuing research, their cause is not well understood and no
effective method has been developed to address the problem. In this paper, we present a novel theory to explain why
this unpleasant phenomenon exists in deep neural networks.
Based on that theory, we introduce a simple, efficient, and
effective training approach, Batch Adjusted Network Gradients (BANG), which significantly improves the robustness of
machine learning models. While the BANG technique does
not rely on any form of data augmentation or the utilization
of adversarial images for training, the resultant classifiers
are more resistant to adversarial perturbations while maintaining or even enhancing the overall classification performance.
(a) MNIST Samples and Their Distortions Yielding Misclassifications
(b) CIFAR-10 Samples and Their Distortions Yielding Misclassifications
Figure 1: Improving Robustness via BANG. This figure demonstrates the enhanced robustness against perturbations
generated via the non-gradient-based hot/cold adversarial generation method on MNIST digits and CIFAR-10 samples displayed
in top rows of (a) and (b). Underneath the raw test images, we
show their distorted versions formed by the smallest perturbations
that change the correctly classified class labels of the test samples. The second rows of (a) and (b) present perturbations that
we obtained on regularly trained learning models, while the last
rows show examples that we generated on networks trained via
our Batch Adjusted Network Gradients (BANG) approach. As indicated by most of the perturbations being highly perceptible, the
learning models trained with BANG have become more robust to
adversarial perturbations.
1. Introduction
Machine learning is broadly used in various real-world
vision applications and recent advances in deep learning
have made deep neural networks the most powerful learning
models that can be successfully applied to different vision
problems [27, 25, 7, 28, 20, 14, 18, 17, 29]. The recent
performance gain is mainly the result of improvements in
two fields, namely, building more powerful learning models
[25, 7] and designing better strategies to avoid overfitting
[24]. These advancements are then leveraged by the use of
larger datasets and massive GPU-enhanced computing.
Although deep neural networks (DNNs) achieve state-
of-the-art performance in a wide range of tasks, the generalization properties of these learning models were questioned by Szegedy et al. [26] when the existence of adversarial examples was revealed. DNNs are capable of learning high-level feature embeddings that enable them to be
successfully adapted to different problems. They were generally considered to generalize well and, hence, expected to
be robust to moderate distortions to their inputs. Surprisingly, adversarial examples formed by applying imperceptible perturbations to otherwise correctly recognized inputs
can lead machine learning models – including state-of-the-
art DNNs – to misclassify those samples, often with high
confidence. This highly unexpected and intriguing property
of machine learning models highlights a fundamental problem that researchers have been trying to solve.
To explain why adversarial examples exist, several controversial explanations were proposed. As hypothesized in
[4, 2], adversarial instability exists due to DNNs acting as
high-dimensional linear classifiers that allow even imperceptibly small, well-aligned perturbations applied to inputs
to spread among higher dimensions and radically change
the outputs. This belief was challenged in [19], where –
by analyzing and experimenting with DNNs trained to recognize objects in more unconstrained conditions – it was
demonstrated that those classifiers are only locally linear to
changes on the recognized object, otherwise DNNs act nonlinearly. After performing various experiments, Gu et al. [6]
concluded that adversarial instability is rather related to “intrinsic deficiencies in the training procedure and objective
function than to model topology.”
The problem addressed in this paper is not only about
preventing attacks via adversarial examples, the focus is on
the overall robustness and generalizability of DNNs. This
fundamental problem of deep learning has recently received
increasing attention by researchers [3, 8, 21]. Considering
state-of-the-art learning models applied to computer vision
tasks, the classification of many incorrectly or uncertainly
recognized inputs can be corrected and improved by small
perturbations [30, 22], so this is a naturally occurring problem for learning-based vision systems.
In this paper, we introduce our theory on the instability
of machine learning models and the existence of adversarial
examples: evolutionary stalling. During training, network
weights are adjusted using the gradient of loss, evolving to
eventually classify examples correctly. Ideally, we prefer
broad flat regions around samples to achieve good generalization [11] and adversarial robustness [2]. However, after a training sample is correctly classified, its contribution
to the loss and, thus, on forming the weight updates is reduced. As the evolution of the local decision surface stalls,
the correctly classified samples cannot further flatten and
extend their surroundings to improve generalization. Therefore, as the contributions of those correctly classified training samples to boundary adjustments are highly decreased
compared to other batch elements, samples can end up being stuck close to decision boundaries and, hence, susceptible to small perturbations flipping their classifications.
To mitigate evolutionary stalling, we propose our Batch
Adjusted Network Gradients (BANG) training algorithm.
We experimentally evaluate robustness using a combination of gradient- and non-gradient-based adversarial perturbations, and random distortions. The paper explores the
impact of BANG parameters and architectural variations,
such as Dropout [24], on instability and adversarial robust-
ness. In conclusion, we validate our theory by experimentally demonstrating that BANG significantly improves the
robustness of deep neural networks optimized on two small
datasets while the trained learning models maintain or even
improve their overall classification performance.
2. Related Work
Deep neural networks (DNNs) achieve high performance
on various tasks as they are able to learn non-local generalization priors from training data. Counter-intuitively,
Szegedy et al. [26] showed that machine learning models
can misclassify samples that are formed by slightly perturbing correctly recognized inputs. These so-called adversarial
examples are indistinguishable from their originating counterparts to human observers, and their unexpected existence
itself presents a problem. The authors introduced the first
technique that is capable of reliably finding adversarial perturbations and claimed that some adversarial examples generalize across different learning models.
A computationally cheaper adversarial example generation algorithm, the Fast Gradient Sign (FGS) method, was
presented by Goodfellow et al. [4]. While this approach
also uses the inner state of DNNs, it is more efficient as
FGS requires the gradient of loss to be calculated only once.
The authors demonstrated that by using adversarial examples generated with FGS implicitly in an enhanced objective
function, both accuracy and robustness of the trained classifiers can be improved. In their paper focusing on adversarial
machine learning, Kurakin et al. [13] proposed new algorithms extending the FGS method to target a specific class
and to calculate and apply gradients iteratively instead of a
single gradient calculation via FGS. The authors compared
the effect of different types of adversarial examples used for
implicit adversarial training and found that the results vary
based upon the type of the applied adversarial examples.
Rozsa et al. [23] introduced the non-gradient-based
hot/cold approach, which is capable of efficiently producing
multiple adversarial examples for each input. They demonstrated that using samples explicitly with higher magnitudes
of adversarial perturbations than the sufficient minimal can
outperform regular adversarial training. The authors also
presented a new metric – the Perceptual Adversarial Similarity Score (PASS) – to better measure the distinguishability of original and adversarial image pairs in terms of human perception. As the commonly used L2 or L∞ norms
are very sensitive to small geometric distortions that can remain unnoticeable to us, PASS is more applicable to quantify similarity and the quality of adversarial examples.
Although adversarial training, both implicit and explicit,
was demonstrated to decrease the instability of learning
models, forming those examples is still computationally expensive, which limits the application of such techniques.
Furthermore, considering the various adversarial generation
techniques, utilizing certain types of those samples might
not lead to improved robustness to adversarial examples
of other techniques. Alternatively, Zheng et al. [30] proposed their stability training as a lightweight and still effective method to stabilize DNNs against naturally occurring distortions in the visual input. The introduced training
procedure uses an additional stability objective that makes
DNNs learn weights that minimize the prediction difference
of original and perturbed images. In order to obtain general
robustness and not rely on any class of perturbations, the authors applied Gaussian noise to distort the training images.
Gu et al. [6] conducted experiments with different network topologies, pre-processing, and training procedures to
improve the robustness of DNNs. The authors proposed the
Deep Contractive Network (DCN), which imposes a layerwise contractive penalty in a feed-forward DNN. The formulated penalty aims to minimize output variances with
respect to perturbations in inputs, and enable the network
to explicitly learn flat, invariant regions around the training
data. Based on positive initial results, they concluded that
adversarial instability is rather the result of the intrinsic deficiencies in the training procedure and objective function
than of model topologies.
Luo et al. [19] proposed a foveation-based technique that
selects and uses only a sub-region of the image during classification. As the authors demonstrated, the negative effect
of foveated perturbations to the classification scores can
be significantly reduced compared to entire perturbations.
Graese et al. [5] showed that transformations of the normal
image acquisition process can also negate the effect of the
carefully crafted adversarial perturbations. While these preprocessing techniques can alleviate the problem posed by
adversarial images, they do not solve the inherent instability
of DNNs. In other words, these methods treat the symptoms
and not the disease.
In summary, a wide variety of more or less efficient
approaches were proposed in the literature that all aim at
improving the robustness and generalization properties of
DNNs, but none of those proved to be effective enough.
3. Approach
In this section, we first briefly describe our intuition
about why the unexpected adversarial instability exists in
machine learning models. Afterwards, we present our simple and straightforward modification in the training procedure that aims to optimize weights in a way that the resulting DNNs become more robust to distortions of their inputs.
3.1. Intuition
During training, some inputs in the batch are correctly
and others are incorrectly classified. In general, the calculated loss and, thus, the gradient of loss for the misclassified
ones are larger than for the correctly classified inputs of the
same batch. Therefore, in each training iteration most of the
weight updates go into learning those inputs that are badly
predicted. On the other hand, the correctly classified samples do not have a significant impact on advancing decision
boundaries and can remain in the positions close to what
they obtained when becoming correctly classified. Due to
this evolutionary stalling, samples with low gradients cannot form a flatter, more invariant region around themselves.
Consequently, samples of those regions remain more susceptible to adversarial perturbations – even a small perturbation can push them back into an incorrect class. By increasing the contribution of the correctly classified examples in the batch on the weight updates, and forcing them
to continue improving decision boundaries, it is reasonable
to think that we can flatten the decision space around those
training samples and train more robust DNNs.
3.2. Implementation
The core concept of our Batch Adjusted Network Gradients (BANG) approach is a variation of batch normalization
[9]. However, rather than trying to balance the inputs of
the layers, we seek to ensure that the contributions on the
weight updates are more balanced among batch elements
by scaling their gradients.
Let us dive into the details and introduce our notations
we use to formulate BANG. In short, we scale the gradients
of batch elements that will be used to compute the weight
updates in each training iteration. Let us consider a network $f_w$ with weights $w$ in a layered structure having layers $y^{(l)}$ where $l \in [1, L]$, with their respective weights $w^{(l)}$:
$$f_w(x_i) = y^{(L)}\big(y^{(L-1)}\big(\dots y^{(1)}(x_i)\dots\big)\big). \quad (1)$$
For a given input $x_i$, the partial derivatives of the loss $E(f_w, x_i)$ with respect to the output of layer $y^{(l)}$ are:
$$\kappa_i^{(l)} = \kappa^{(l)}(x_i) = \frac{\partial E_i}{\partial y_i^{(l)}}. \quad (2)$$
For simplicity, we leave out the structure of the weights $w^{(l)}$ in layers and the structure of the layer outputs, which can be either one-dimensional for fully connected layers or three-dimensional for convolutional layers.
With BANG, our goal is to balance gradients in the batch by scaling up those that have lower magnitudes. In order to do so, we determine the highest gradient for the batch having $N$ inputs $x_i$, $i \in \{1, \dots, N\}$ at given layer $y^{(l)}$ in terms of $L_2$ norm. We use that as the basis for balancing the magnitudes of gradients in the batch. Weight updates are calculated after scaling each derivative $\kappa_i$ in the batch with the element-wise learning rate:
$$\eta_i^{(l)} = \left(\frac{\max_{i' \in [1,N]} \|\kappa_{i'}^{(l)}\|}{\|\kappa_i^{(l)}\|}\right)^{\rho_i^{(l)}} \quad (3)$$
where:
$$\rho_i^{(l)} = \epsilon^{(l)}\left(1 - \frac{\|\kappa_i^{(l)}\|}{\max_{i' \in [1,N]} \|\kappa_{i'}^{(l)}\|}\right). \quad (4)$$
As a key parameter for our approach, $\epsilon^{(l)}$ specifies the degree of gradient balancing among batch elements. While the exponent $\rho_i^{(l)}$ might appear a little complex and ambiguous, its sole purpose is to scale up gradients with small magnitudes more than others having larger $L_2$ norms.
Assuming that the regular backward pass combines the gradients of the batch elements by calculating:
$$\nabla f_w^{(l)} = \frac{1}{N}\sum_{i=1}^{N} \frac{\partial E_i}{\partial w^{(l)}} = \frac{1}{N}\sum_{i=1}^{N} \kappa_i^{(l)}\,\frac{\partial y^{(l)}}{\partial w^{(l)}} \quad (5)$$
which is normally scaled with the learning rate and then used to update weights (after combining with the previous weight update scaled with momentum), BANG produces:
$$\nabla f_w^{(l)} = \beta^{(l)}\,\frac{1}{N}\sum_{i=1}^{N} \eta_i^{(l)}\,\frac{\partial E_i}{\partial w^{(l)}}, \quad (6)$$
where $\beta^{(l)}$ is the second (set of) parameter(s) of our approach used for scaling. In general, $\beta^{(l)}$ acts as a local learning rate that can play a more important role in future work. Throughout our experiments, we keep BANG parameters fixed for all layers: $\epsilon^{(l)} = \epsilon$ and $\beta^{(l)} = \beta$ (which will actually just modify the original learning rate $\eta$).
Note that although our approach changes the actual calculation of weight updates for the layers, there is no impact on the backpropagation of the original gradient down
the network. Finally, we implemented BANG by applying
small modifications to the regular training procedure with
negligible computational overhead.
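To make Equations (2)–(6) concrete, the following is a minimal NumPy sketch of the per-layer gradient balancing as we read it from the equations above; the function name, the explicit per-example gradient tensors, and the small constants added for numerical stability are our own illustrative choices, not the authors' Caffe implementation.

```python
import numpy as np

def bang_layer_gradient(kappa, weight_grads, eps, beta):
    """Illustrative sketch of BANG's balanced gradient for one layer (Eqs. (3)-(6)).

    kappa:        array (N, ...) of per-example dE_i/dy^(l) (Eq. (2)).
    weight_grads: array (N, ...) of per-example dE_i/dw^(l).
    eps, beta:    the per-layer BANG parameters (kept fixed across layers in the paper).
    """
    N = kappa.shape[0]
    norms = np.linalg.norm(kappa.reshape(N, -1), axis=1)       # ||kappa_i^(l)||_2
    max_norm = norms.max()
    rho = eps * (1.0 - norms / (max_norm + 1e-12))             # Eq. (4)
    eta = (max_norm / (norms + 1e-12)) ** rho                  # Eq. (3): scale small gradients up
    eta = eta.reshape((N,) + (1,) * (weight_grads.ndim - 1))   # broadcast over weight dimensions
    return beta * (eta * weight_grads).mean(axis=0)            # Eq. (6)
```

Applying such a scheme in practice requires access to per-example gradient information at each layer, which standard batch training loops do not usually expose directly; this is where the training procedure has to be modified.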
4. Experiments
To evaluate our approach, we conducted experiments on
the slightly modified versions of LeNet [16] and “CIFAR-10 quick” models distributed with Caffe [10]. Namely, after
running preliminary experiments with BANG, we added a
Dropout layer [24] to both model architectures that serves
multiple purposes. We observed that BANG tends to cause
overfitting on the trained LeNet networks, and the resultant models made very confident classifications – even when
they misclassified the test images. While the additional
Dropout layer alleviates both problems, the adjusted network architectures also result in improved classification performances with both regular and BANG training.
After obtaining learning models with regular and BANG
training, we assess and compare the robustness of those
classifiers in two ways. It is important to note that we do
not select the best training models based on their performance on the validation set for these evaluations, but we
simply use the models obtained at the last training iteration. As our primary goal is to measure the evolving robustness, we believe that this decision leads to a fairer comparison, however, the classification performance of the selected
models are not optimal. Finally, we would like to mention
that we conducted experiments to discover the effectiveness
of BANG used for fine-tuning regularly trained models, and
found that the robustness of the resultant networks are not
even comparable to those that we trained from scratch.
First, we evaluate the adversarial vulnerability against
two adversarial example generation methods: the gradient-based Fast Gradient Sign (FGS) method [4] and the non-gradient-based hot/cold approach [23]. Although the latter
is capable of forming multiple adversarial perturbations for
each input, we only target the most similar class with the
hot/cold approach, referred to as HC1.
We aim to form adversarial perturbations for every correctly classified image from the MNIST [15] or CIFAR10 [12] test set, respectively. We consider an adversarial example generation attempt successful, if the direction
specified by either FGS or HC1 leads to a misclassification,
where the only constraint is that the discrete pixel values are
in [0, 255] range. Of course, this limitation means that the
formed perturbations may or may not be adversarial in nature as they can be highly perceptible to human observers.
We compare the adversarial robustness of classifiers by collecting measures to quantify the quality of the produced adversarial examples. For this purpose, we calculate the Perceptual Adversarial Similarity Score (PASS) [23] of original and adversarial image pairs, and we also determine the
L∞ norms of adversarial perturbations. Although the L∞
norm is not a good metric to quantify adversarial quality in
terms of human perception, it can demonstrate how far the
actual perturbed image is from the original sample.
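As a reference for how the FGS direction is used in this evaluation, the following is a rough sketch of the single-gradient attack described above (our illustrative code, not the authors' implementation; the step-size scan is an assumption about how the smallest label-flipping perturbation could be found, and `predict` / `grad_wrt_input` stand in for one forward pass and one backward pass of the trained network):

```python
import numpy as np

def fgs_perturb(image, grad_wrt_input, step):
    """One FGS step: move the 0-255 image along sign(dLoss/dInput), round, and clip."""
    adv = image + step * np.sign(grad_wrt_input)
    return np.clip(np.rint(adv), 0, 255)

def smallest_fgs_adversarial(image, label, predict, grad_wrt_input, max_step=255):
    """Scan increasing step sizes for the smallest FGS perturbation that flips the label."""
    if not np.any(grad_wrt_input):
        return None  # blank gradient: a gradient-based attempt cannot succeed
    for step in range(1, max_step + 1):
        adv = fgs_perturb(image, grad_wrt_input, step)
        if predict(adv) != label:
            return adv
    return None  # no label flip within the [0, 255] pixel-range constraint
```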
Second, we quantify how the robustness of the learning
models evolve during training by applying a more general
approach. For a given pair of classifiers where one was
regularly trained while the other was obtained by BANG
training, we add a certain level of random noise to 100 test
images from each class that are correctly classified by both
networks at all tested stages and compute the proportion of
perturbed images that are classified differently than the originating one. While the previously described test assessing
the adversarial vulnerability explores only two directions –
specified by the FGS method and the HC1 approach – applying 1000 random distortions to each inspected image for
every noise level gives us a more general evaluation.
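A compact sketch of this random-distortion protocol (our paraphrase of the description above; the 1000 trials per image and the clipping to the 0-255 pixel range follow the text, while `predict` is a placeholder for the trained network):

```python
import numpy as np

def noise_flip_rate(images, labels, predict, sigma, trials=1000, seed=0):
    """Fraction of Gaussian distortions that change an originally correct prediction."""
    rng = np.random.default_rng(seed)
    flipped = total = 0
    for img, lbl in zip(images, labels):
        for _ in range(trials):
            noisy = np.clip(img + rng.normal(0.0, sigma, size=img.shape), 0, 255)
            flipped += int(predict(noisy) != lbl)
            total += 1
    return flipped / total
```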
Although experimenting with random noise is more universal as it does not rely on any specific adversarial generation technique, small random perturbations that cause misclassifications are hard to find [23] and, hence, the collected
Table 1: LeNet Training. This table highlights the difference between LeNet models obtained by using regular (R0-R1) and BANG training (B0-B5). Accuracy on the MNIST test set, the achieved success rates of FGS and HC1 adversarial example generation methods with PASS scores and L∞ norms of the produced examples on the MNIST test set are listed.
ID | β    | ε     | Accuracy | FGS-Rate | FGS-PASS        | FGS-L∞        | HC1-Rate | HC1-PASS        | HC1-L∞
R0 | -    | -     | 99.16%   | 90.33%   | 0.4072 ± 0.1081 | 40.51 ± 15.72 | 99.53%   | 0.7535 ± 0.1143 | 122.60 ± 49.46
R1 | -    | -     | 99.15%   | 91.41%   | 0.4072 ± 0.1065 | 40.70 ± 15.88 | 99.77%   | 0.7517 ± 0.1160 | 122.16 ± 49.07
B0 | 1.00 | 0.785 | 99.16%   | 3.51%    | 0.6806 ± 0.1457 | 8.34 ± 04.32  | 95.13%   | 0.5359 ± 0.2023 | 187.86 ± 63.66
B1 | 1.00 | 0.815 | 99.22%   | 1.68%    | 0.7638 ± 0.1367 | 5.52 ± 02.86  | 94.19%   | 0.4880 ± 0.2110 | 201.84 ± 62.51
B2 | 1.20 | 0.810 | 99.31%   | 2.13%    | 0.7579 ± 0.1452 | 5.57 ± 03.05  | 94.56%   | 0.5129 ± 0.2015 | 186.10 ± 63.77
B3 | 1.35 | 0.780 | 99.25%   | 3.86%    | 0.6763 ± 0.1471 | 8.28 ± 04.26  | 94.73%   | 0.5709 ± 0.2127 | 178.11 ± 65.76
B4 | 1.50 | 0.840 | 99.11%   | 1.52%    | 0.8220 ± 0.1310 | 4.19 ± 03.01  | 97.68%   | 0.4669 ± 0.1881 | 203.50 ± 58.97
B5 | 1.60 | 0.780 | 99.32%   | 4.45%    | 0.6771 ± 0.1487 | 8.20 ± 04.40  | 98.95%   | 0.6376 ± 0.1829 | 146.96 ± 61.97
(a) Accuracy
(b) FGS Success Rate
(c) HC1 PASS
Figure 2: LeNet Models Trained with BANG. These plots summarize our results on LeNet models trained with BANG using
combinations of β and ε. We tested a grid of those two parameters where β ∈ [1.0, 1.6] with step size 0.05, and ε ∈ [0.78, 0.84] with step
size 0.005. We trained a single model with each combination and show (a) the obtained accuracy on the MNIST test set, (b) the achieved
success rates by using FGS and (c) the mean PASS score of HC1 adversarial examples on the MNIST test images. Each solid green line
represents the level of regularly trained learning models. For better visual representation we applied interpolation.
results are qualitatively not as good as explicitly forming
adversarial perturbations. Furthermore, in order to evaluate
the stability of the trained classifiers, we distorted the images with Gaussian noise far beyond the noise level that can
be considered imperceptible or adversarial.
4.1. LeNet on MNIST
We commenced our experiments by evaluating BANG
on the LeNet model optimized on the MNIST dataset.
MNIST contains 70k images overall: 50k used for training,
10k for validation, and the remaining 10k for testing. The
tested network originally has four layers (two convolutional
and two fully connected) – extended with one additional
Dropout layer – that we optimize without changing the hyperparameters distributed with Caffe. The learning model
is trained with a batch size of 64 for 10k iterations using the
inverse decay learning rate policy with an initial learning
rate of 0.01.
Since our training procedure has two parameters, β defined in Equation (6) and ε introduced in Equation (4), we
trained LeNet models with parameter combinations from a
grid, and evaluated the accuracy and adversarial vulnerability of the trained classifiers. The results of the conducted
experiments are visualized in Figure 2; we also show accuracies and metrics indicating adversarial robustness in Table 1 for some models obtained with regular training (R0-R1) and optimized with BANG training (B0-B5).
As we can see in Table 1, FGS success rates achieved by
regular training can be dramatically decreased by BANG:
the rate drops from above 90% to below 2%. Almost every single failed adversarial example generation attempt is
due to blank gradients – the gradient of loss with respect to
the original image and its ground-truth label contains only
zeros – which means that methods utilizing that gradient of
loss cannot succeed. As we increase ε, or in other words,
as we balance the contributions of batch elements more by
scaling up gradients with lower magnitudes, the resultant
classifiers become more resistant to gradient-based adversarial generation methods. Although the success rates obtained by the HC1 method remain relatively high, the qual-
(a) Regular Training
(b) BANG Training
(c) Absolute Improvement
Figure 3: LeNet: Robustness to Random Distortions. These plots show the evolving robustness of LeNet models: (a)
obtained with regular training (R0 from Table 1), (b) trained with BANG (B1 from Table 1), and (c) displays the improvement. After
identifying 100 test images per class that are correctly classified by both networks at every 500 iterations, we perturb each 1000 times by
adding the level of Gaussian noise specified by the standard deviation, and test the networks at several stages of training. The plots show
the percentage of distortions yielding misclassifications. For better visual representation we applied interpolation.
ities of HC1 examples degrade significantly on LeNet models trained with BANG compared to the regular training as
displayed in Figure 2(c). This degradation is highlighted
by both decreasing PASS scores and by the significantly increased L∞ norms of perturbations listed in Table 1.
With respect to the achieved classification performances,
we find that there can be a level of degradation depending
on the selected values for β and . This phenomenon can
be seen in Figure 2(a); it is partially due to random initializations and can be the result of overfitting or our decision
to evaluate all networks at 10k training iterations. Still, we
can observe that BANG can yield improved classification
performance over regular training paired with improved robustness as listed in Table 1.
Additionally, we conducted experiments to quantify
and compare how the robustness to random perturbations
evolves during training. For this general approach, we selected to test two classifiers from Table 1: R0 optimized
with regular training and B1 trained with BANG. We can
see in Figure 3(a) that the regularly trained model is initially
highly susceptible to larger distortions, but as the training
progresses it becomes more stable, and settles at approximately 20% with respect to the strongest class of Gaussian
noise that we formed by using standard deviation of 100
pixels. Contrarily, the classifier trained with BANG maintains significantly lower rates throughout the whole training
as shown in Figure 3(b), and after 10k iterations only 3% of
the strongest distortions can alter the original classification.
The absolute improvements are displayed in Figure 3(c).
4.2. CIFAR-10
We also evaluated training with BANG on the so-called
“CIFAR-10 quick” model of Caffe trained on the CIFAR-10 dataset. CIFAR-10 consists of 60k images, 50k training
images, and 10k images used for both validation and testing
purposes. The network architecture originally has five layers (three convolutional and two fully connected) that we
extended with one Dropout layer, and the learning model
is trained with a batch size of 100 for 20k iterations (40
epochs). We use a fixed learning rate of 0.001 that we decrease by a factor of 10 after 36 epochs, and once again after
another 2 epochs.
Due to the different nature of CIFAR-10 training,
we slightly adjusted BANG parameters. Specifically, as
the classification performance is significantly worse than
achieved by LeNet on MNIST yielding proportionately
more incorrectly classified samples in each mini-batch, we
applied lower local learning rates (β) and higher values for
scaling (ε). Furthermore, we found that scaling incorrectly
classified inputs less than correct ones has beneficial effects
on robustness, hence, we applied 50% of the specified ε
values on the incorrectly classified batch elements. Similarly to our conducted experiments on LeNet, we trained
classifiers on CIFAR-10 with all possible combinations of
β and ε parameters of a grid and then measured the accuracy and adversarial vulnerability of each of those networks.
The results are visualized in Figure 4, and for some models
obtained with regular training (R0-R1) and optimized with
BANG training (B0-B5), we show accuracies and metrics
indicating adversarial robustness in Table 2.
As we can see in Table 2, FGS success rates achieved by
regular training are significantly decreased by BANG: the
rate drops from approximately 96% to 34% where, again,
the majority of the failed adversarial example generation attempts are due to blank gradients. Figure 4(b) shows that
as we increase ε, the classifiers become more resistant to
gradient-based adversarial generation methods. The higher
levels of success rates in comparison to LeNet might sim-
Table 2: CIFAR-10 Training. This table shows the difference between classifiers obtained using regular (R0-R1) and BANG training (B0-B5). The accuracy on the CIFAR-10 test set, the achieved success rates of FGS and HC1 adversarial example generation methods with PASS scores and L∞ norms of the formed examples on the CIFAR-10 test images are listed.
ID | β    | ε     | Accuracy | FGS-Rate | FGS-PASS        | FGS-L∞        | HC1-Rate | HC1-PASS        | HC1-L∞
R0 | -    | -     | 79.59%   | 96.52%   | 0.9553 ± 0.0969 | 4.08 ± 06.40  | 98.97%   | 0.9669 ± 0.1005 | 18.15 ± 29.80
R1 | -    | -     | 79.55%   | 96.71%   | 0.9513 ± 0.1057 | 4.43 ± 07.05  | 98.91%   | 0.9557 ± 0.1332 | 22.16 ± 39.77
B0 | 0.40 | 0.855 | 79.26%   | 34.27%   | 0.9511 ± 0.1302 | 4.11 ± 10.31  | 95.94%   | 0.8712 ± 0.1649 | 55.52 ± 49.98
B1 | 0.45 | 0.805 | 80.43%   | 45.94%   | 0.9818 ± 0.0548 | 2.04 ± 02.49  | 96.20%   | 0.7966 ± 0.2438 | 77.34 ± 71.20
B2 | 0.75 | 0.800 | 79.74%   | 41.71%   | 0.9828 ± 0.0586 | 1.94 ± 03.03  | 98.34%   | 0.8362 ± 0.2195 | 64.26 ± 63.57
B3 | 0.75 | 0.845 | 79.41%   | 35.00%   | 0.9526 ± 0.1266 | 3.94 ± 08.71  | 96.54%   | 0.8603 ± 0.1981 | 59.83 ± 58.28
B4 | 0.95 | 0.840 | 79.30%   | 34.88%   | 0.9575 ± 0.1236 | 3.61 ± 09.60  | 96.87%   | 0.8994 ± 0.1487 | 48.44 ± 47.35
B5 | 1.00 | 0.800 | 79.22%   | 41.34%   | 0.9803 ± 0.0722 | 2.03 ± 03.64  | 98.17%   | 0.8586 ± 0.1948 | 61.14 ± 61.23
(a) Accuracy
(b) FGS Success Rate
(c) HC1 PASS
Figure 4: BANG CIFAR-10 Models. These plots summarize our results on CIFAR-10 models trained with BANG using combinations of β and ε. We tested a grid of those two parameters where β ∈ [0.4, 1.0] with step size 0.05, and ε ∈ [0.80, 0.86] with step size
0.005. We trained a single model with each combination and show (a) the obtained accuracy on the CIFAR-10 test set, (b) the achieved
success rates by FGS, and the (c) mean PASS score of HC1 adversarial examples on the CIFAR-10 test images. Each solid green line
represents the level of regularly trained learning models. For better visual representation we applied interpolation.
ply be due to the fact that the classifiers trained on CIFAR10 are less accurate, therefore, learning the incorrect samples of the batch still has a large contribution on weight
updates. While the success rates achieved by HC1 remain
high, the quality of HC1 adversarial examples degrades significantly compared to regular training. This degradation
is highlighted by both decreasing PASS scores shown in
Figure 4(c) and by the significantly increased L∞ norms
of adversarial perturbations listed in Table 2. Finally, as
shown in Table 2, we can train classifiers with BANG that
slightly outperform models of regular training in terms of
classification accuracy. Of course, the achieved overall performance depends on the chosen parameters as depicted in
Figure 4(a).
Finally, we ran experiments to better quantify and compare how the robustness of the trained classifiers to random
perturbations evolves during training. Similarly to our experiments on LeNet, we selected two classifiers from Table 2 for testing: R0 trained regularly and B0 optimized
with BANG. We can see in Figure 5(a) that the regularly
trained R0 model is highly susceptible to larger distortions,
its robustness does not improve during training, and finally
achieves 46.0% with respect to the strongest class of Gaussian noise that we formed by using standard deviation of
40 pixels. Contrarily, the B0 model trained with BANG remains more robust throughout training epochs as shown in
Figure 5(b) and at the end 39.1% of the strongest distortions
change the original classification. The absolute improvements are visualized in Figure 5(c). We can conclude that
although BANG enhanced robustness to random perturbations, the results are less impressive in comparison to LeNet
– at least, with respect to the strongest distortions.
5. Conclusion
In this paper, we introduced our theory to explain an intriguing property of machine learning models. Namely, the
regular training procedure can prevent samples from forming flatter and broader regions around themselves. This
evolutionary stalling yields samples remaining close to de-
(a) Regular Training
(b) BANG Training
(c) Absolute Improvement
Figure 5: CIFAR-10: Robustness to Random Distortions. These plots show the evolving robustness of CIFAR-10 models:
(a) obtained with regular training (R0 from Table 2), (b) trained with BANG (B0 from Table 2), and (c) displays the improvement. After
identifying 100 test images per class that are correctly classified by both networks at every second epoch, we perturb each 1000 times with
the level of Gaussian noise specified by the standard deviation, and test the networks at different stages of training. The plots show the
percentage of distortions yielding misclassifications. For better visual representation we applied interpolation.
cision boundaries and, hence, being susceptible to imperceptibly small perturbations causing misclassifications. To
address this problem, we proposed a novel approach to improve the robustness of Deep Neural Networks (DNNs) by
slightly modifying the regular training procedure. Our approach does not require additional training data – neither
adversarial examples nor any sort of data augmentation – to
achieve improved robustness, while the overall performance
of the trained network is maintained or even enhanced.
We experimentally demonstrated that optimizing DNNs
with our Batch Adjusted Network Gradient (BANG) technique leads to significantly enhanced stability in general.
By balancing the contributions of batch elements on forming the weight updates, BANG allows training samples to
form flatter, more invariant regions around themselves. The
trained classifiers become more robust to random distortions, and as we demonstrated with the gradient-based Fast
Gradient Sign (FGS) method and the non-gradient-based
hot/cold approach where we targeted the closest scoring
class (HC1), they are also less vulnerable to adversarial example generation methods. To visualize the advancement
achieved by BANG training in terms of improved adversarial robustness, in Figure 1 correctly classified MNIST and
CIFAR-10 test images are presented along with adversarial
examples formed via the HC1 approach on DNNs trained
regularly and with BANG. While BANG helps to mitigate
adversarial instability, learning models can maintain or even
improve their overall classification performance. Our proposed approach achieves these results with negligible computational overhead over the regular training procedure.
Although we managed to achieve good results on two
DNNs trained on different datasets, we found that BANG
parameters needed to be adjusted to these problems. To
obtain better results, exploring the effect of different pa-
rameters on different layers, and changing the contributions
of correctly and incorrectly classified batch elements can
be considered. Future work will focus on having a better understanding of BANG, enhancing the algorithm to be
more self-adaptive, and exploring its application for training
DNNs on real-world datasets. While some might argue that
a similar balancing effect can be achieved by distillation,
Carlini et al. [1] demonstrated that defensive distillation is
not effective to improve adversarial robustness. The effectiveness of BANG to adversarial perturbations obtained via
various adversarial example generation techniques likely
varies – as Kurakin et al. [13] observed for adversarial training – and further research needs to explore that.
In summary, we can conclude that the adversarial instability of DNNs is closely related to the applied training procedures – as was claimed by Gu et al. [6] – and there is a
huge potential in this research area to further advance the
generalization properties of machine learning models and
their overall performances as well.
Acknowledgments
This research is based upon work funded in part by NSF
IIS-1320956 and in part by the Office of the Director of
National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions
contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies
or endorsements, either expressed or implied, of the ODNI,
IARPA, or the U.S. Government. The U.S. Government is
authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation
thereon.
References
[1] N. Carlini and D. Wagner. Defensive distillation is not robust
to adversarial examples. arXiv preprint arxiv:1607.04311,
2016. 8
[2] A. Fawzi, O. Fawzi, and P. Frossard. Fundamental limits on
adversarial robustness. In International Conference on Machine Learning (ICML), Workshop on Deep Learning, 2015.
2
[3] A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, 2016. 2
[4] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and
harnessing adversarial examples. In International Conference on Learning Representation (ICLR), 2015. 2, 4
[5] A. Graese, A. Rozsa, and T. E. Boult. Assessing threat of adversarial examples on deep neural networks. In IEEE International Conference on Machine Learning and Applications
(ICMLA), 2016. 3
[6] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. In International
Conference on Learning Representation (ICLR) Workshops,
2015. 2, 3, 8
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning
for image recognition. In IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2016. 1
[8] M. Hein and M. Andriushchenko. Formal guarantees on the
robustness of a classifier against adversarial manipulation. In
Advances in Neural Information Processing Systems, 2017.
2
[9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift.
In International Conference on Machine Learning (ICML),
2015. 3
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional
architecture for fast feature embedding. In International
Conference on Multimedia. ACM, 2014. 4
[11] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy,
and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint
arXiv:1609.04836, 2016. 2
[12] A. Krizhevsky and G. Hinton. Learning multiple layers of
features from tiny images. 2009. 4
[13] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial
machine learning at scale. In International Conference on
Learning Representation (ICLR), 2017. 2, 8
[14] H. Lai, Y. Pan, Y. Liu, and S. Yan. Simultaneous feature
learning and hash coding with deep neural networks. In
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 1
[15] Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database
of handwritten digits, 1998. 4
[16] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker,
H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard,
et al. Learning algorithms for classification: A comparison
on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 261:276, 1995. 4
[17] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen. Deep
learning of binary hash codes for fast image retrieval. In
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015. 1
[18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional
networks for semantic segmentation. In IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2015. 1
[19] Y. Luo, X. Boix, G. Roig, T. Poggio, and Q. Zhao. Foveationbased mechanisms alleviate adversarial examples. arXiv
preprint arXiv:1511.06292, 2015. 2, 3
[20] W. Ouyang, X. Wang, X. Zeng, S. Qiu, P. Luo, Y. Tian, H. Li,
S. Yang, Z. Wang, C.-C. Loy, et al. DeepID-Net: Deformable
deep convolutional neural networks for object detection. In
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 1
[21] J. Peck, Y. Saeys, B. Goossens, and J. Roels. Lower bounds
on the robustness to adversarial perturbations. In Advances
in Neural Information Processing Systems, 2017. 2
[22] A. Rozsa, M. Günther, E. M. Rudd, and T. E. Boult. Are
facial attributes adversarially robust? In International Conference on Pattern Recognition (ICPR), 2016. 2
[23] A. Rozsa, E. M. Rudd, and T. E. Boult. Adversarial diversity and hard positive generation. In IEEE Conference
on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016. 2, 4, 5
[24] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and
R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning
Research (JMLR), 15(1):1929–1958, 2014. 1, 2, 4
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2015. 1
[26] C. J. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of
neural networks. In International Conference on Learning
Representation (ICLR), 2014. 1, 2
[27] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and
tell: A neural image caption generator. In IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2015.
1
[28] B. Yang, J. Yan, Z. Lei, and S. Z. Li. Convolutional channel features. In IEEE International Conference on Computer
Vision (ICCV), 2015. 1
[29] Z. Zhang, Y. Chen, and V. Saligrama. Efficient training
of very deep neural networks for supervised hashing. In
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 1
[30] S. Zheng, Y. Song, T. Leung, and I. Goodfellow. Improving
the robustness of deep neural networks via stability training.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 2, 3
| 1 |
Neural Domain Adaptation for Biomedical Question Answering
Georg Wiese1,2 , Dirk Weissenborn2 and Mariana Neves1
1
Hasso Plattner Institute, August Bebel Strasse 88, Potsdam 14482 Germany
2
Language Technology Lab, DFKI, Alt-Moabit 91c, Berlin, Germany
[email protected],
[email protected], [email protected]
Abstract
arXiv:1706.03610v2 [cs.CL] 15 Jun 2017
Factoid question answering (QA) has recently benefited from the development of
deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets
exist, such as SQuAD (≈ 100, 000 questions) for Wikipedia articles. However,
these systems have not yet been applied
to QA in more specific domains, such
as biomedicine, because datasets are generally too small to train a DL system
from scratch. For example, the BioASQ
dataset for biomedical QA comprises fewer than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system
trained on a large open-domain dataset
(SQuAD, source) to a biomedical dataset
(BioASQ, target) by employing various
transfer learning techniques. Our network
architecture is based on a state-of-the-art QA system, extended with biomedical
word embeddings and a novel mechanism
to answer list questions. In contrast to existing biomedical QA systems, our system
does not rely on domain-specific ontologies, parsers or entity taggers, which are
expensive to create. Despite this fact, our
systems achieve state-of-the-art results on
factoid questions and competitive results
on list questions.
1 Introduction
Question answering (QA) is the task of retrieving answers to a question given one or more contexts. It has been explored both in the open-domain setting (Voorhees et al., 1999) as well as
domain-specific settings, such as BioASQ for the
biomedical domain (Tsatsaronis et al., 2015). The
BioASQ challenge provides ≈ 900 factoid and list
questions, i.e., questions with one and several answers, respectively. This work focuses on answering these questions, for example: Which drugs are
included in the FEC-75 regimen? → fluorouracil,
epirubicin, and cyclophosphamide.
We further restrict our focus to extractive QA,
i.e., QA instances where the correct answers can
be represented as spans in the contexts. Contexts
are relevant documents which are provided by an
information retrieval (IR) system.
Traditionally, a QA pipeline consists of named-entity recognition, question classification, and answer processing steps (Jurafsky, 2000). These
methods have been applied to biomedical datasets,
with moderate success (Zi et al., 2016). The creation of large-scale, open-domain datasets such as
SQuAD (Rajpurkar et al., 2016) have recently enabled the development of neural QA systems, e.g.,
Wang and Jiang (2016), Xiong et al. (2016), Seo
et al. (2016), Weissenborn et al. (2017), leading
to impressive performance gains over more traditional systems.
However, creating large-scale QA datasets for
more specific domains, such as the biomedical,
would be very expensive because of the need
for domain experts, and therefore not desirable.
The recent success of deep learning based methods on open-domain QA datasets raises the question whether the capabilities of trained models
are transferable to another domain via domain
adaptation techniques. Although domain adaptation has been studied for traditional QA systems
(Blitzer et al., 2007) and deep learning systems
(Chen et al., 2012; Ganin et al., 2016; Bousmalis
et al., 2016; Riemer et al., 2017; Kirkpatrick et al.,
2017), it has to our knowledge not yet been applied
for end-to-end neural QA systems.
To bridge this gap we employ various do-
main adaptation techniques to transfer knowledge from a trained, state-of-the-art neural QA
system (FastQA, Weissenborn et al. (2017)) to
the biomedical domain using the much smaller
BioASQ dataset. In order to answer list questions
in addition to factoid questions, we extend FastQA
with a novel answering mechanism. We evaluate
various transfer learning techniques comprehensively. For factoid questions, we show that mere
fine-tuning reaches state-of-the-art results, which
can further be improved by a forgetting cost regularization (Riemer et al., 2017). On list questions, the results are competitive to existing systems. Our manual analysis of a subset of the factoid questions suggests that the results are even
better than the automatic evaluation states, revealing that many of the "incorrect" answers are in fact
synonyms to the gold-standard answer.
2 Related Work
Traditional Question Answering Traditional
factoid and list question answering pipelines
can be subdivided into named-entity recognition,
question classification, and answer processing
components (Jurafsky, 2000). Such systems have
also been applied to biomedical QA such as the
OAQA system by Zi et al. (2016). Besides a number of domain-independent features, they incorporate a rich amount of biomedical resources, including a domain-specific parser, entity tagger and thesaurus to retrieve concepts and synonyms. A logistic regression classifier is used both for question
classification and candidate answer scoring. For
candidate answer generation, OAQA employs different strategies for general factoid/list questions,
choice questions and quantity questions.
Neural Question Answering Neural QA systems differ from traditional approaches in that the
algorithm is not subdivided into discrete steps. Instead, a single model is trained end-to-end to compute an answer directly for a given question and
context. The typical architecture of such systems
(Wang and Jiang, 2016; Xiong et al., 2016; Seo
et al., 2016) can be summarized as follows:
1. Embedding Layer: Question and context tokens are mapped to a high-dimensional vector space, for example via GloVe embeddings (Pennington et al., 2014) and (optionally) character embeddings (Seo et al., 2016).
2. Encoding Layer: The token vectors are processed independently for question and context, usually by a recurrent neural network
(RNN).
3. Interaction Layer: This layer allows for interaction between question and context representations. Examples are Match-LSTM
(Wang and Jiang, 2016) and Coattention
(Xiong et al., 2016).
4. Answer Layer: This layer assigns start and
end scores to all of the context tokens, which
can be done either statically (Wang and Jiang,
2016; Seo et al., 2016) or by a dynamic decoding process (Xiong et al., 2016).
FastQA FastQA fits into this schema, but reduces the complexity of the architecture by removing the interaction layer, while maintaining
state-of-the-art performance (Weissenborn et al.,
2017). Instead of one or several interaction layers of RNNs, FastQA computes two simple word-in-question features for each token, which are appended to the embedding vectors before the encoding layer. We chose to base our work on this
architecture because of its state-of-the-art performance, faster training time and reduced number of
parameters.
Unsupervised Domain Adaptation Unsupervised domain adaptation describes the task of
learning a predictor in a target domain while labeled training data only exists in a different source
domain. In the context of deep learning, a common method is to first train an autoencoder on
a large unlabeled corpus from both domains and
then use the learned input representations as input features to a network trained on the actual task
using the labeled source domain dataset (Glorot
et al., 2011; Chen et al., 2012). Another approach
is to learn the hidden representations directly on
the target task. For example, domain-adversarial
training optimizes the network such that it computes hidden representations that both help predictions on the source domain dataset and are
indistinguishable from hidden representations of
the unlabeled target domain dataset (Ganin et al.,
2016). These techniques cannot be straightforwardly applied to the question answering task, because they require a large corpus of biomedical
question-context pairs (albeit no answers are required).
Supervised Domain Adaptation In contrast to
the unsupervised case, supervised domain adaptation assumes access to a small amount of labeled
training data in the target domain. The simplest
approach to supervised domain adaptation for neural models is to pre-train the network on data from
the source domain and then fine-tune its parameters on data from the target domain. The main
drawback of this approach is catastrophic forgetting, which describes the phenomenon that neural networks tend to "forget" knowledge, i.e., their performance in the source domain drops significantly when they are trained on the new dataset.
Even though we do not directly aim for good performance in the source domain, measures against
catastrophic forgetting can serve as a useful regularizer to prevent over-fitting.
Progressive neural networks combat this issue by keeping the original parameters fixed
and adding new units that can access previously
learned features (Rusu et al., 2016). Because this
method adds a significant amount of new parameters which have to be trained from scratch, it is not
well-suited if the target domain dataset is small.
Riemer et al. (2017) use fine-tuning, but add an
additional forgetting cost term that punishes deviations from predictions with the original parameters. Another approach is to add an L2 loss which
punishes deviation from the original parameters.
Kirkpatrick et al. (2017) apply this loss selectively
on parameters which are important in the source
domain.
3 Model
Our network architecture is based on FastQA
(Weissenborn et al., 2017), a state-of-the-art neural QA system. Because the network architecture
itself is exchangeable, we treat it as a black box,
with subtle changes at the input and output layer
as well as to the decoding and training procedure.
These changes are described in the following. See
Figure 3 for an overview of the system.
3.1 Input Layer
In a first step, words are embedded into a highdimensional vector space. We use three sources
of embeddings, which are concatenated to form a
single embedding vector:
• GloVe embeddings: 300-dimensional GloVe
vectors (Pennington et al., 2014). These are
Figure 1: Network architecture of our system
for biomedical question answering. At its core,
it uses an extractive neural QA system as a black
box (we use FastQA (Weissenborn et al., 2017)).
The embedding layer is modified in order to include biomedical word embeddings and question
type features. The output layer is adjusted to add
the ability to answer list questions in addition to
factoid questions.
open-domain word vectors trained on 840 billion tokens from web documents. The vectors are not updated during training.
• Character embeddings: As used in FastQA
(Weissenborn et al., 2017) and proposed originally by Seo et al. (2016), we employ a
1-dimensional convolutional neural network
which computes word embeddings from the
characters of the word.
• Biomedical Word2Vec embeddings: 200-dimensional vectors trained using Word2Vec
(Mikolov et al., 2013) on about 10 million
PubMed abstracts (Pavlopoulos et al., 2014).
These vectors are specific to the biomedical domain and we expect them to help on
biomedical QA.
As an optional step, we add entity tag features
to the token embeddings via concatenation. Entity tags are provided by a dictionary-based entity
tagger based on the UMLS Metathesaurus. The
entity tag feature vector is a 127-dimensional bit
vector that for each of the UMLS semantic types
states whether the current token is part of an entity
of that type. This step is only applied if explicitly
noted.
Finally, a one-hot encoding of the question type
(factoid or list) is appended to all the input vectors. With these embedding vectors as input, we
invoke FastQA to produce start and end scores for
each of the n context tokens. We denote the start scores by y^i_start and the end scores conditioned on a predicted start at position i by y^{i,j}_end, with start index i ∈ [1, n] and end index j ∈ [i, n].
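To make the input construction concrete, here is a minimal Python sketch of the embedding concatenation described in this section; the function and argument names are illustrative assumptions, not the authors' implementation, and the individual vectors are assumed to be precomputed.

```python
import numpy as np

def build_token_vector(glove_vec, char_vec, bio_vec, entity_bits=None, is_list_question=False):
    """Concatenate the per-token embedding sources (illustrative sketch).

    glove_vec:   300-d GloVe vector (kept fixed during training)
    char_vec:    output of the character-CNN for this token
    bio_vec:     200-d biomedical word2vec vector
    entity_bits: optional 127-d binary UMLS semantic-type indicator
    """
    parts = [glove_vec, char_vec, bio_vec]
    if entity_bits is not None:          # only when entity features are enabled
        parts.append(entity_bits)
    # one-hot question-type feature appended to every token: [factoid, list]
    qtype = np.array([0.0, 1.0]) if is_list_question else np.array([1.0, 0.0])
    parts.append(qtype)
    return np.concatenate(parts)
```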
3.2 Output Layer
In our adapted output layer, we convert the start
and end scores to span probabilities. The computation of these probabilities is independent of
the question type. The interpretation, however,
depends on the question type: While for factoid
questions, the list of answer spans is interpreted as
a ranked list of answer candidates, for list questions, answers above a certain probability threshold are interpreted as the set of answers to the
question.
Given the start scores y^1_start, ..., y^n_start and end scores y^{i,1}_end, ..., y^{i,n}_end, we compute the start and end probabilities as follows:

p^i_start = σ(y^i_start)      (1)
p^{i,·}_end = softmax(y^{i,·}_end)      (2)

where σ(x) is the sigmoid function. As a consequence, multiple tokens can be chosen as likely start tokens, but the network is expected to select a single end token for a given start token, hence the softmax function. Finally, the probability that a given span (i, j) answers the question is p^{i,j}_span = p^i_start · p^{i,j}_end. This extension generalizes the FastQA output layer such that multiple answer spans with different start positions can have a high probability, allowing us to retrieve multiple answers for list questions.
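The following short NumPy sketch illustrates Equations 1 and 2 and the resulting span probabilities; it assumes the raw scores are already available and, for brevity, omits masking end positions with j < i.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def span_probabilities(y_start, y_end):
    """y_start: (n,) start scores; y_end: (n, n) end scores, y_end[i, j] is the
    end score for position j given start i. Returns the (n, n) matrix with
    p_span[i, j] = p_start[i] * p_end[i, j]."""
    p_start = sigmoid(y_start)                          # Eq. (1): independent starts
    p_end = np.stack([softmax(row) for row in y_end])   # Eq. (2): row-wise softmax
    return p_start[:, None] * p_end
```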
3.3 Decoding
Given a trained model, start probabilities can be
obtained by running a forward pass and computing the start probability as in Equation 1. For the
top 20 starts, we compute the end probabilities as
given by Eq. 2. From the start and end probabilities, we extract the top 20 answer spans ranked
by p^{i,j}_span. As a simple post-processing step, we remove duplicate strings and retain only those with
the highest probability.
For factoid questions, we output the 5 most
likely answer spans as our ranked list of answers.
For list questions, we learn a probability cutoff
threshold t that defines the set of list answers
A = {(i, j) | p^{i,j}_span ≥ t}. We choose t to be the
threshold that optimizes the list F1 score on the
respective development set.
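A simplified decoding sketch of the procedure above; for clarity it ranks all spans instead of only those of the top-20 starts, so it is an approximation of the described pipeline rather than a faithful reimplementation.

```python
def decode(p_span, tokens, question_type, list_threshold, top_k=20):
    """Rank spans, de-duplicate answer strings, and answer by question type."""
    n = len(tokens)
    spans = [(i, j, p_span[i][j]) for i in range(n) for j in range(i, n)]
    spans.sort(key=lambda s: s[2], reverse=True)
    answers, seen = [], set()
    for i, j, p in spans:
        text = " ".join(tokens[i:j + 1])
        if text in seen:                 # keep only the highest-probability duplicate
            continue
        seen.add(text)
        answers.append((text, p))
        if len(answers) == top_k:
            break
    if question_type == "factoid":
        return [t for t, _ in answers[:5]]                 # ranked list of 5 answers
    return [t for t, p in answers if p >= list_threshold]  # list questions
```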
3.4 Domain Adaptation
Fine-tuning Our training procedure consists of
two phases: In the pre-training phase, we train the
model on SQuAD, using a token F1 score as the
training objective as by Weissenborn et al. (2017).
We will refer to the resulting parameters as the
base model. In the fine-tuning phase, we initialize the model parameters with the base model and
then continue our optimization on the BioASQ
dataset with a smaller learning rate.
Forgetting Cost Regularization To avoid
catastrophic forgetting during fine-tuning as a
means to regularize our model, we optionally
add an additional forgetting cost term L_fc, as
proposed by Riemer et al. (2017). It is defined
as the cross-entropy loss between the current
predictions and the base model’s predictions.
L2 Weight Regularization We also add an L2
loss term L_l2 which penalizes deviations from the
base model’s parameters. Note that a more advanced approach would be to apply this loss selectively on weights which are particularly important in the source domain (Kirkpatrick et al.,
2017). The final loss is computed as Lf inal =
Loriginal + Cf c · Lf c + Cl2 · Ll2 where Cf c and
Cl2 are hyperparameters which are set to 0 unless
otherwise noted.
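The sketch below shows how the regularizers of this section could be combined; the helper functions are illustrative assumptions (e.g. the forgetting cost is written for per-token start probabilities) and not the authors' code.

```python
import numpy as np

def forgetting_cost(p_current, p_base, eps=1e-12):
    """Cross-entropy between the current predictions and the frozen base
    model's predictions (binary case, averaged over tokens)."""
    return float(-np.mean(p_base * np.log(p_current + eps)
                          + (1.0 - p_base) * np.log(1.0 - p_current + eps)))

def l2_cost(params_current, params_base):
    """Squared L2 distance of the current parameters from the base model."""
    return float(sum(np.sum((w - w0) ** 2) for w, w0 in zip(params_current, params_base)))

def final_loss(loss_original, loss_fc, loss_l2, c_fc=0.0, c_l2=0.0):
    """L_final = L_original + C_fc * L_fc + C_l2 * L_l2."""
    return loss_original + c_fc * loss_fc + c_l2 * loss_l2
```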
4 Experimental Setup
4.1 Datasets
SQuAD SQuAD (Rajpurkar et al., 2016) is a
dataset of ≈ 100, 000 questions with relevant contexts and answers that sparked research interest
into the development of neural QA systems recently. The contexts are excerpts of Wikipedia
articles for which crowd-source workers generated questions-answer pairs. Because of the large
amount of training examples in SQuAD, it lends
itself perfectly as our source dataset.
BioASQ The BioASQ challenge provides a
biomedical QA dataset (Tsatsaronis et al., 2015)
consisting of questions, relevant contexts (called
snippets) from PubMed abstracts and possible answers to the question. It was carefully created with
the help of biomedical experts.
In this work, we focus on Task B, Phase B of the
BioASQ challenge, in which systems must answer
questions from gold-standard snippets. These
questions can be either yes/no questions, summary
questions, factoid questions, or list questions. Because we employ an extractive QA system, we restrict this study to answering factoid and list questions by extracting answer spans from the provided contexts.
The 2017 BioASQ training dataset contains
1,799 questions, of which 413 are factoid and
486 are list questions. The questions have ≈ 20
snippets on average, each of which are on average ≈ 34 tokens long. We found that around 65%
of the factoid questions and around 92% of the list
questions have at least one extractable answer. For
questions with extractable answers, answer spans
are computed via a simple substring search in the
provided snippets. All other questions are ignored
during training and treated as answered incorrectly
during evaluation.
4.2 Training
We minimize the cross-entropy loss for the gold
standard answer spans. However, for multiple answer spans that refer to the same answer
(e.g. synonyms), we only minimize the loss
for the span with the lowest loss. We use
ADAM (Kingma and Ba, 2014) for optimization
on SQuAD with a learning rate starting at 10−3
which is halved whenever performance drops between checkpoints. During the fine-tuning phase,
we continue optimization on the BioASQ dataset
with a smaller learning rate starting at 10−4 . During both phases, the model is regularized by variational dropout of rate 0.5 (Gal and Ghahramani,
2015).
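As an illustration of the span-loss handling described above, the sketch below penalizes only the lowest-loss span among the gold spans that refer to the same answer (e.g. its synonyms); it assumes the span probabilities have already been computed.

```python
import numpy as np

def answer_loss(p_span, synonym_spans, eps=1e-12):
    """Cross-entropy loss for one gold answer: take the minimum over the
    spans that refer to that answer, as described above."""
    return min(-np.log(p_span[i][j] + eps) for (i, j) in synonym_spans)
```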
4.3 Evaluation
The official evaluation measures from BioASQ are
mean reciprocal rank (MRR) for factoid questions
and F1 score for list questions 1 . For factoid questions, the list of ranked answers can be at most
five entries long. The F1 score is measured on the
gold standard list elements. For both measures,
1
The details can be found at http://participants-area.bioasq.org/Tasks/b/eval_meas/
case-insensitive string matches are used to check
the correctness of a given answer. A list of synonyms is provided for all gold-standard answers.
If the system’s response matches one of them, the
answer counts as correct.
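For illustration, a rough Python sketch of the two measures under the string-matching convention just described; it approximates, but does not reproduce, the official BioASQ evaluation tool.

```python
def factoid_reciprocal_rank(ranked_answers, gold_synonyms):
    """1/rank of the first correct answer among at most five responses,
    using case-insensitive exact string matching."""
    gold = {g.lower() for g in gold_synonyms}
    for rank, answer in enumerate(ranked_answers[:5], start=1):
        if answer.lower() in gold:
            return 1.0 / rank
    return 0.0

def list_f1(predicted, gold_entries):
    """Approximate list F1; gold_entries is a list of synonym sets."""
    pred = {p.lower() for p in predicted}
    covered = sum(any(s.lower() in pred for s in syns) for syns in gold_entries)
    precision = covered / len(pred) if pred else 0.0   # rough: assumes one hit per prediction
    recall = covered / len(gold_entries) if gold_entries else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```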
For evaluation, we use two different fine-tuning datasets, depending on the experiment:
BioASQ3B, which contains all questions of the
first three BioASQ challenges, and BioASQ4B
which additionally contains the test questions of
the fourth challenge. BioASQ4B is used as the
training dataset for the fifth BioASQ challenge
whereas BioASQ3B was used for training during
the fourth challenge.
Because the datasets are small, we perform 5-fold cross-validation and report the average performance across the five folds. We use the larger
BioASQ4B dataset except when evaluating the ensemble and when comparing to participating systems of previous BioASQ challenges.
All models were implemented using TensorFlow (Abadi et al., 2016) with a hidden size of
100. Because the context in BioASQ usually comprises multiple snippets, they are processed independently in parallel for each question. Answers from all snippets belonging to a question are
merged and ranked according to their individual
probabilities.
5 Results
5.1 Domain Adaptation
In this section, we evaluate various domain adaptation techniques. The results of the experiments
are summarized in Table 1.
Baseline As a baseline without transfer learning, Experiment 1 trains the model on BioASQ
only. Because the BioASQ dataset by itself is
very small, a dropout rate of 0.7 was used, because it worked best in preliminary experiments.
We observe a rather low performance, which is
expected when applying deep learning to such a
small dataset.
Fine-tuning Experiments 2 and 3 evaluate the
pure fine-tuning approach: Our base model is
a system trained on SQuAD only and tested on
BioASQ (Experiment 2). For Experiment 3, we
fine-tuned the base model on the BioASQ4B training set. We observe that performance increases
significantly, especially on list questions. This increase is expected, because the network is trained
Table 1: Comparison of various transfer learning techniques. In Experiment 1, the model was trained on BioASQ only. In Experiment 2, the model was trained on SQuAD and tested on BioASQ. We refer to it as the base model. In Experiment 3, the base model parameters were fine-tuned on the BioASQ training set. Experiments 4-5 evaluate the utility of domain dependent word vectors and features. Experiments 6-8 address the problem of catastrophic forgetting. All experiments have been conducted with the BioASQ4B dataset and 5-fold cross-validation.

Experiment | Factoid MRR | List F1
(1) Training on BioASQ only | 17.9% | 19.1%
(2) Training on SQuAD only | 20.0% | 8.1%
(3) Fine-tuning on BioASQ | 24.6% | 23.6%
(4) Fine-tuning on BioASQ w/o biomedical embeddings | 21.3% | 22.4%
(5) Fine-tuning on BioASQ w/ entity features | 23.3% | 23.8%
(6) Fine-tuning on BioASQ + SQuAD | 23.9% | 23.8%
(7) Fine-tuning on BioASQ w/ forgetting cost (C_fc = 100.0) | 26.2% | 21.1%
(8) Fine-tuning on BioASQ w/ L2 loss on original parameters (C_l2 = 0.3) | 22.6% | 20.4%
on biomedical- and list questions, which are not
part of the SQuAD dataset, for the first time. Overall, the performance of the fine-tuned model on
both question types is much higher than the baseline system without transfer learning.
Features In order to evaluate the impact of using biomedical word embeddings, we repeat Experiment 3 without them (Experiment 4). We see
a factoid and list performance drop of 3.3 and
1.2 percentage points, respectively, showing that
biomedical word embeddings help increase performance.
In Experiment 5, we append entity features to
the word vector, as described in Section 3.1. Even
though these features provide the network with
domain-specific knowledge, we found that it actually harms performance on factoid questions. Because most of the entity features are only active
during fine-tuning with the small dataset, we conjecture that the performance decrease is due to
over-fitting.
Catastrophic Forgetting We continue our
study with techniques to combat catastrophic
forgetting as a means to regularize training during
fine-tuning. In Experiment 6 of Table 1 we
fine-tune the base model on a half-half mixture
of BioASQ and SQuAD questions (BioASQ
questions have been upsampled accordingly).
This form of joint training yielded no significant
performance gains. Experiment 7 regularizes the
model via an additional forgetting cost term, as
proposed by Riemer et al. (2017) and explained
in Section 3.4. We generally found that this
technique only increases performance for factoid
questions where the performance boost was
largest for Cf c = 100.0. The fact that the forgetting loss decreases performance on list questions
is not surprising, as predictions are pushed more
towards the predictions of the base model, which
has very poor performance on list questions.
Experiment 8 adds an L2 loss which penalizes
deviations from the base model’s parameters. We
found that performance decreases as we increase
the value of Cl2 which shows that this technique
does not help at all. For the sake of completeness
we report results for Cl2 = 0.3, the lowest value
that yielded a significant drop in performance.
5.2 Ensemble
Model ensembles are a common method to tweak
the performance of a machine learning system.
Ensembles combine multiple model predictions,
for example by averaging, in order to improve generalization and prevent over-fitting. We evaluate
the utility of an ensemble by training five models on the BioASQ3B dataset using 5-fold crossvalidation. Each of the models is evaluated on
the 4B test data, i.e., data which is not included
in BioASQ3B.
During application, we run an ensemble by averaging the start and end scores of individual models before they are passed to the sigmoid/softmax functions as defined in Eq. 1 and 2. In Table 2 we summarize the average performance of the five models, the best performance across the five models, and the performance of the ensemble. We observe performance gains of 3 percentage points on factoid questions and of less than 1 percentage point on list questions, relative to the best single model. This demonstrates a small performance gain that is consistent with the literature.

Table 2: Performance of a model ensemble. Five models have been trained on the BioASQ3B dataset and tested on the 4B test questions. We report the average and best single model performances, as well as the ensemble performance.

Experiment | Factoid MRR | List F1
Average | 23.4% | 24.0%
Best | 24.3% | 27.7%
Ensemble | 27.3% | 28.6%
5.3 Comparison to competing BioASQ systems
Because the final results of the fifth BioASQ challenge are not available at the time of writing, we
compare our system to the best systems in last
year’s challenge 2 . For comparison, we use the
best single model and the model ensemble trained
on BioASQ3B (see Section 5.2). We then evaluate
the model on the 5 batches of last year’s challenge
using the official BioASQ evaluation tool. Each
batch contains 100 questions of which only some
are factoid and list questions. Note that the results underestimate our system’s performance, because our competing system’s responses have been
manually evaluated by humans while our system’s
responses are evaluated automatically using string
matching against a potentially incomplete list of
synonyms. In fact, our qualitative analysis in Section 5.4 shows that many answers are counted as
incorrect, but are synonyms of the gold-standard
answer. The results are summarized in Table 3 and
compared to the best systems in the challenge in
each of the batches and question type categories.
With our system winning four out of five
batches on factoid questions, we consider it stateof-the-art in biomedical factoid question answering, especially when considering that our results
might be higher on manual evaluation. The results
on list questions are slightly worse, but still very
2
Last year's results are available at http://participants-area.bioasq.org/results/4b/phaseB/
competitive. This is surprising, given that the network never saw a list question prior to the finetuning phase. Due to small test set sizes, the sampling error in each batch is large, causing the single model to outperform the model ensemble on
some batches.
5.4 Qualitative Analysis
In order to get a better insight into the quality of
the predictions, we manually validated the predictions for the factoid questions of batch 5 of the
fourth BioASQ challenge as given by the best single model (see Table 3). There are in total 33 factoid questions, of which 23 have as the gold standard answer a span in one of the contexts. According to the official BioASQ evaluation, only
4 questions are predicted correctly (i.e., the gold
standard answer is ranked highest). However,
we identified 10 rank-1 answers which are not
counted as correct but are synonyms to the gold
standard answer. Examples include ”CMT4D disease” instead of ”Charcot-Marie-Tooth (CMT) 4D
disease”, ”tafazzin” instead of ”Tafazzin (TAZ)
gene”, and ”β-glucocerebrosidase” instead of
”Beta glucocerebrosidase”. In total, we labeled
14 questions as correct and 24 questions as having their correct answer in the top 5 predictions.
In the following, we give examples of mistakes
made by the system. Questions are presented in
italics. In the context, we underline predicted answers and present correct answers in boldface.
We identified eight questions for which the semantic type of the top answer differs from the
question answer type. Some of these cases are
completely wrong predictions. However, this category also includes subtle mistakes like the following:
In which yeast chromosome does
the rDNA cluster reside?
The rDNA cluster in
Saccharomyces cerevisiae is
located 450 kb from the left end
and 610 kb from the right end of
chromosome XII...
Here, it predicted a yeast species the rDNA
cluster is located in, but ignored that the question
is asking for a chromosome.
Another type of mistakes is that the top answer
is somewhat correct, but is missing essential information. We labeled four predictions with this
category, like the following example:
Table 3: Comparison to systems on last year's (fourth) BioASQ challenge for factoid and list questions. For each batch and question type, we list the performance of the best competing system, our single model and ensemble. Note that our qualitative analysis (Section 5.4) suggests that our factoid performance on batch 5 would be about twice as high if all synonyms were contained in the gold standard answers.

Batch | Factoid MRR: Best Participant | Factoid MRR: Single | Factoid MRR: Ensemble | List F1: Best Participant | List F1: Single | List F1: Ensemble
1 | 12.2% (fa1) | 25.2% | 29.2% | 16.8% (fa1) | 29.1% | 27.9%
2 | 22.6% (LabZhu-FDU) | 16.4% | 24.2% | 15.5% (LabZhu-FDU) | 25.8% | 20.8%
3 | 24.4% (oaqa-3b-3) | 24.7% | 20.6% | 48.3% (oaqa-3b-3) | 31.8% | 33.3%
4 | 32.5% (oaqa-3b-4) | 34.0% | 40.3% | 31.2% (oaqa-3b-4) | 29.0% | 24.1%
5 | 28.5% (oaqa-3b-5) | 23.7% | 23.2% | 29.0% (oaqa-3b-5) | 23.5% | 26.1%
Avg. | 24.0% | 24.8% | 27.5% | 28.1% | 27.8% | 26.5%
How early during pregnancy does
non-invasive cffDNA testing allow
sex determination of the fetus?
Gold Standard Answer: "6th to
10th week of gestation" or "first
trimester of pregnancy"
Given Top Answer: "6th-10th"
In summary, to our judgment, 14 of 33 questions (42.4%) are answered correctly, and 24 of 33
questions (72.7%) are answered correctly in one
of the top 5 answers. These are surprisingly high
numbers considering low MRR score of 23.7% of
the automatic evaluation (Table 3).
6 Discussion and future work
The most significant result of this work is that state-of-the-art results in biomedical question answering can be achieved even in the absence of domain-specific feature engineering. Most competing systems require structured domain-specific resources, such as biomedical ontologies, parsers, and entity taggers. While these resources are available in the biomedical domain, they are not available in most domains.
Our system, on the other hand, requires a large open-domain QA dataset, biomedical word embeddings (which are trained in an unsupervised fashion), and a small biomedical QA dataset. This suggests that our methodology is easily transferable to other domains as well.
Furthermore, we explored several supervised domain adaptation techniques. In particular, we demonstrated the usefulness of the forgetting cost for factoid questions. The decreased performance on list questions is not surprising, because the model's performance on those questions is very poor prior to fine-tuning, which is due to the lack of list questions in SQuAD. We believe that large-scale open-domain corpora for list questions would enhance performance further.
Unsupervised domain adaptation could be an interesting direction for future work, because the biomedical domain offers large amounts of textual data, some of which might even contain questions and their corresponding answers. We believe that leveraging these resources holds potential to further improve biomedical QA.
7 Conclusion
In this paper, we described a deep learning approach to address the task of biomedical question answering by using domain adaptation techniques. Our experiments reveal that mere fine-tuning in combination with biomedical word embeddings yields state-of-the-art performance on biomedical QA, despite the small amount of in-domain training data and the lack of domain-dependent feature engineering. Techniques to overcome catastrophic forgetting, such as a forgetting cost, can further boost performance for factoid questions. Overall, we show that employing domain adaptation on neural QA systems trained on large-scale, open-domain datasets can yield good performance in domains where large datasets are not available.
Acknowledgments
This research was supported by the German
Federal Ministry of Education and Research
(BMBF) through Software Campus project GeNIE
(01IS12050).
References
Martı́n Abadi, Ashish Agarwal, Paul Barham, Eugene
Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, et al.
2016. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint
arXiv:1603.04467 .
John Blitzer, Mark Dredze, Fernando Pereira, et al.
2007. Biographies, bollywood, boom-boxes and
blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440–447.
Konstantinos Bousmalis, George Trigeorgis, Nathan
Silberman, Dilip Krishnan, and Dumitru Erhan.
2016. Domain separation networks. In Advances in
Neural Information Processing Systems. pages 343–
351.
Minmin Chen, Zhixiang Xu, Kilian Weinberger, and
Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation.
arXiv preprint
arXiv:1206.4683 .
Yarin Gal and Zoubin Ghahramani. 2015. Dropout
as a bayesian approximation: Representing model
uncertainty in deep learning.
arXiv preprint
arXiv:1506.02142 2.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan,
Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky.
2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research
17(59):1–35.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio.
2011. Domain adaptation for large-scale sentiment
classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 513–520.
Dan Jurafsky. 2000. Speech & language processing.
Pearson Education India.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980 .
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz,
Joel Veness, Guillaume Desjardins, Andrei A Rusu,
Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences page
201611835.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing
systems. pages 3111–3119.
Ioannis Pavlopoulos, Aris Kosmopoulos, and
Ion Androutsopoulos. 2014.
Continuous
space word vectors obtained by applying
word2vec to abstracts of biomedical articles
http://bioasq.lip6.fr/info/BioASQword2vec/.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for
word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–
1543. http://www.aclweb.org/anthology/D14-1162.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
Percy Liang. 2016. Squad: 100,000+ questions
for machine comprehension of text. arXiv preprint
arXiv:1606.05250 .
Matthew Riemer, Elham Khabiri, and Richard Goodwin. 2017. Representation stability as a regularizer for improved text analytics transfer learning
https://openreview.net/pdf?id=HyenWc5gx.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray
Kavukcuoglu, Razvan Pascanu, and Raia Hadsell.
2016. Progressive neural networks. arXiv preprint
arXiv:1606.04671 .
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and
Hannaneh Hajishirzi. 2016. Bidirectional attention
flow for machine comprehension. arXiv preprint
arXiv:1611.01603 .
George Tsatsaronis, Georgios Balikas, Prodromos
Malakasiotis, Ioannis Partalas, Matthias Zschunke,
Michael R Alvers, Dirk Weissenborn, Anastasia
Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the bioasq largescale biomedical semantic indexing and question answering competition. BMC bioinformatics 16(1):1.
Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec. volume 99, pages 77–
82.
Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer.
arXiv preprint arXiv:1608.07905 .
Dirk Weissenborn, Georg Wiese, and Laura Seiffe.
2017. Making neural qa as simple as possible but
not simpler. arXiv preprint arXiv:1703.04816 .
Caiming Xiong, Victor Zhong, and Richard Socher.
2016. Dynamic coattention networks for question
answering. arXiv preprint arXiv:1611.01604 .
Yang Zi, Zhou Yue, and Eric Nyberg. 2016. Learning
to answer biomedical questions: Oaqa at bioasq 4b.
ACL 2016 page 23.
Uncertainty Marginal Price, Transmission Reserve,
and Day-ahead Market Clearing with Robust Unit
Commitment
arXiv:1507.01540v3 [math.OC] 2 Aug 2016
Hongxing Ye, Member, IEEE, Yinyin Ge, Mohammad Shahidehpour, Fellow, IEEE, Zuyi Li, Senior Member, IEEE
Abstract—The increasing penetration of renewable energy in
recent years has led to more uncertainties in power systems.
These uncertainties have to be accommodated by flexible resources (i.e. upward and downward generation reserves). In this
paper, a novel concept, Uncertainty Marginal Price (UMP), is
proposed to price both the uncertainty and reserve. At the same
time, the energy is priced at Locational Marginal Price (LMP). A
novel market clearing mechanism is proposed to credit the generation and reserve and to charge the load and uncertainty within
the Robust Unit Commitment (RUC) in the Day-ahead market.
We derive the UMPs and LMPs in the robust optimization
framework. UMP helps allocate the cost of generation reserves
to uncertainty sources. We prove that the proposed market
clearing mechanism leads to partial market equilibrium. We find
that transmission reserves must be kept explicitly in addition to
generation reserves for uncertainty accommodation. We prove
that transmission reserves for ramping delivery may lead to
Financial Transmission Right (FTR) underfunding in existing
markets. The FTR underfunding can be covered by congestion
fund collected from uncertainty payment in the proposed market
clearing mechanism. Simulations on a six-bus system and the
IEEE 118-bus system are performed to illustrate the new concepts
and the market clearing mechanism.
Index Terms—Uncertainty Marginal Price, Cost Causation,
Robust Unit Commitment, Financial Transmission Right, Generation Reserve, Transmission Reserve
NOMENCLATURE
Indices
i, l, t
m, n
mi
k
indices for generators, lines, and time intervals
index for buses
index of bus where unit i is located
index of the worst point for uncertainty
Functions and sets
ˆ
F
U
CiP (·), CiI (·)
L(·)
G(m)
K
symbol for the optimal value of a variable
feasible set for UC and dispatch
uncertainty set
cost related to dispatch and UC for unit i
Lagrangian function
set of units located at bus m
set of the indices for ˆk
up
down
Km,t
, Km,t
set of indices k for upward and downward
UMPs at bus m time t
Constants
ND , NT number of buses and time intervals
dm,t
aggregated equivalent load
Fl
transmission line flow limit
Γl,m
shift factor for line l with respect to bus m
Pimin , Pimax minimum and maximum generation outputs
riu , rid
ramping-up/down limits between sequential intervals
Riu , Rid
ramping-up/down limits for uncertainty accommodation
um,t
bound for uncertainty
ˆ
ˆk is the k th worst uncertainty vector in K, ˆk ∈
RND NT , ˆkm,t ∈ R
FTRm→n FTR amount from bus m to n
Variables
Ii,t
unit on/off status indicators
yi,t , zi,t unit start-up and shut-down indicators
Pi,t
generation dispatch
inj
Pm,t
net power injection
m,t
uncertainty at bus m time t
Z
optimal value of problem (SP) given
(x, y, z, I, P )
∆Pi,t
generation re-dispatch
pos
∆fl,t
transmission capacity reserve in positive direction
neg
∆fl,t
transmission capacity reserve in negative direction
inj
∆Pm,t
net power injection change
λt , λkt
Lagrangian multipliers
α, β, η
non-negative Lagrangian multipliers
u,k
e
πm,t
marginal prices. πm,t
for energy price; πm,t
is the
u,up
UMP for kth uncertainty point; πm,t for upward
u,down
UMP; πm,t
for downward UMP
up
Qi,t , Qdown
i,t upward and downward generation reserves
Ψm,t
charge for uncertainty source
T
ΘG
i,t , Θl,t credits to generation reserve for unit i and transmission reserve for line l at time t
This work is supported by the U.S. National Science Foundation Grant ECCS-1549937. The early version of this work was available on arXiv July 06, 2015, titled "Market Clearing for Uncertainty, Generation Reserve, and Transmission Reserve". The authors are with the Galvin Center for Electricity Innovation at Illinois Institute of Technology, Chicago, IL 60616, USA (email: [email protected]; [email protected]; [email protected]; [email protected]).
I. INTRODUCTION
IN modern power systems, uncertainties grow significantly with the increasing penetration of Renewable Energy Source (RES), such as wind power generation. They pose
2
new challenges for the operation of electricity markets. In the
Day-ahead market (DAM), the Unit Commitment (UC) and
Economic Dispatch (ED) problems considering uncertainties
have become a focus of research in recent years. The objective
of the UC problem is to find the least cost UC solution
for the second day while respecting both system-wide and
unit-wise constraints. By fixing the UC variables, the ED
problem is established. The Locational Marginal Price (LMP)
and reserve price are then obtained as byproducts of the
ED problem [1], [2]. When considering the uncertainties, the
generation from uncontrollable RES are uncertain parameters
in the optimization problem.
Recently, Robust UC (RUC) is proposed to address the
issues of uncertainty [3]–[7]. The largest merit is that the
UC solution can be immunized against all the uncertainties
in predefined set. The key idea of the two-stage RUC is to
determine the optimal UC in the first stage which leads to
the least cost for the worst scenario in the second stage.
However, this approach is conservative and the Robust ED
(RED) is absent. Authors in [8] combined the stochastic and
robust approach using a weight factor in the objective function
to address the conservativeness issue. [9], [10] employed the
Affine Policy (AP) to formulate and solve the RED problem.
A Multi-stage RUC is proposed to incorporate the latest
information in each stage [11], where AP is also used to
overcome the computational challenge. Recently, we reported
a new approach which tries to bridge the gap of RUC and
RED [7], [12].
In DAM, the main difficulty for pricing is that RED is
absent in the traditional RUC [4], [5], [8]. On the other hand,
a large number of works on pricing reserves exists within
the UC framework considering contingencies [2], [13], [14]
and stochastic security [15]. They are normally modeled as
co-optimization problem. In [2], the reserve is cleared on
zonal levels. Instead of countable contingency scenarios [15]
or single additional scenario for reserve [16], the infinite
continuous uncertainties are considered in the RUC, and the
reserves are fully deliverable in infinite scenarios. In this
paper, we propose a novel mechanism to price the energy,
uncertainty, and flexibility simultaneously based on the RUC
in [7]. An explicit price signal is derived for pricing the
uncertainty. As the ED solution obtained is robust [7], both
marginal impacts of the uncertainty and flexibility are reflected
in these prices. In the proposed mechanism, reserve costs
are allocated to uncertainty sources. Generation reserves, also
called flexibilities in this paper, are the key factor for the
robust optimization approaches. They are entitled to proper
credits based on their contribution to uncertainty management.
According to the market equilibrium analysis, market participants (price takers) can get the maximal profit by following
the ISO/RTO’s dispatch instruction.
The generation reserve and its deliverability are the main
focus in [7]. The definition of LMP in [17] are employed
to derive the energy price. The new concept, Uncertainty
Marginal Price (UMP), is proposed to define the marginal cost
of immunizing the next increment of uncertainty at a specific
location. Load and generation are a pair, and they are priced
at LMP. Uncertainty and flexibility (i.e., generation reserve)
are another pair, and they are priced at UMP. Both LMPs and
UMPs may vary with the locations due to transmission congestions. Limited by the transmission capacity and power flow
equations, sometimes the uncertainties at certain buses cannot
be mitigated by the system-wide cheapest generation reserve,
and expensive generation reserve, which is deliverable, has
to be kept in the system. Therefore, uncertainty sources are
charged and generation reserves are credited based on UMPs
at the corresponding buses.
As the transmission reserve is kept within the RUC framework, the congestion component may exist in both the energy
price and reserve price even if the physical limit of the line
is not reached yet in the base case scenario. LMP congestion
costs are distributed to Financial Transmission Right (FTR)
holders in the existing market according to the LMP difference and the FTR amount. The revenue inadequacy occurs
when the LMP congestion cost collected is smaller than the
credit distributed to FTR holders, which is also called FTR
underfunding. This has been a serious issue in recent years
in the industry [18], [19]. We reveal that transmission reserve
will be another reason for FTR underfunding when physical
transmission limit is adopted in Simultaneous Feasibility Test
(SFT) for FTR market [18], [20], [21]. This conclusion is
applicable to any robust UC framework for DAM.
The main contributions of this paper are listed as follows.
1) The novel UMP for uncertainties and generation reserves, as well as LMP for energy, are derived within a
robust UC framework. The derivation is for uncertainties
set with interval and budget constraints. The general
concepts still apply when other uncertainty sets are
modeled.
2) It is revealed that transmission capacities have to be
reserved for uncertainty accommodation and the transmission reserves may cause FTR underfunding because
of the deficiency of energy congestion revenues based
on existing market rules.
3) A new market clearing mechanism is proposed to credit
the generation and reserve and to charge the load and
uncertainty. The payment collected from uncertainty
sources can exactly cover the credits to generation
reserves and transmission reserves, effectively resolving
the FTR underfunding issue.
The rest of this paper is organized as follows. Derivation of
the LMP and UMP is presented in Section II, so is the market
clearing mechanism for charge and credit based on LMP and
UMP. Case studies are presented in Section III. Section IV
concludes this paper.
II. RUC AND MARKET CLEARING
One motivation of this work is to price the uncertainty,
and allocate the cost of uncertainty accommodation to the
uncertainty source. As the uncertainty source is charged the
uncertainty payment, it has the incentive to reduce the uncertainty. With UMP, we can follow the cost causation principle,
which is normally required in the market design, to charge the
uncertainty sources. Cost causation principle is described as
“require that all approved rates reflect to some degree the costs
3
actually caused by the customer who must pay them” in KN
Energy, Inc. V. FERC, 968 F.2d 1295, 1300 (D.C. Cir. 1992).
Another important motivation is to provide a theory that
supports the application of the RUC in the DAM clearing.
Although the RUC/RED are studied extensively, the only
application of the RUC now is for the Reliability Assessment
Commitment (RAC) in the DAM. There are several reasons
why they are not applied in the DAM clearing. First, the
computation burden of RUC is much larger than the standard
UC. Second, as the objective is the cost of the worst-case scenario [3], [5], the solution is criticized on over conservatism.
Third, no economic dispatch and prices are available within
the RUC framework. Recently, with the new achievements
in the algorithms, models, and high-performance computing
application [6]–[8], [22]–[24], the first two obstacles are being
addressed with great promises. This paper tries to clear the last
obstacle with the new model [7]. Adopting RUC in the market
clearing can give clear price signals for the uncertainties and
reserves. On the other side, it is also easier for the solution to
pass the robustness test, which is a RUC, in RAC. To our best
knowledge, this is the first work on pricing energy, uncertainties, and reserves within the robust optimization framework in
DAM. Hence, we focus on illustrating the concept with the
following assumptions.
• Network loss is ignored. Shift factor matrix is constant.
• Uncertainty is from load and RES. Contingency is ignored.
• The uncertainty budget set can be truly formulated by the
ISO/RTO.
A. RUC and RED
ISOs/RTOs desire to get the optimal UC and ED solution
in the base-case scenario. They can re-dispatch the flexible resources, such as adjustable load demands and generators with
fast ramping capabilities, to follow the load when deviation
occurs (or uncertainty is revealed). Consistent with the robust
literature [4], [5], the uncertainty set is modeled as
U := {ε ∈ R^{N_D·N_T} : −u_{m,t} ≤ ε_{m,t} ≤ u_{m,t}, ∀m, t;  Σ_m |ε_{m,t}|/u_{m,t} ≤ Λ^Δ_t, ∀t}

where Λ^Δ_t is the budget parameter and is assumed to be an integer [3]. It is noted that all the flexible resources are modeled as generators. In this paper, the RUC is formulated according to the model in [7]:

(RUC)  min_{(x,p)∈F}  C^I(x) + C^P(p)                                      (1)
       s.t.  Ax + Bp ≤ b
F := {(x, p) : ∀ε ∈ U, ∃∆p such that Cx + Dp + G∆p ≤ d + Eε}.              (2)

The basic idea of the above model is to find a robust UC and ED for the base-case scenario. The UC x and dispatch p are immunized against any uncertainty ε ∈ U. When uncertainty occurs, it is accommodated by the generation adjustment ∆p. Please refer to Appendix A for the detailed formulation.

A Column-and-Constraint Generation (CCG) based method is used to solve the above model [6]. Problems (MP) and (SP) are established as follows:

(MP)  min_{(x,p)}  C^I(x) + C^P(p)
      s.t.  Ax + Bp ≤ b
            Cx + Dp + G∆p^k ≤ d + Eε̂^k,  ∀k ∈ K                            (3a)
and
(SP)  Z := max_{ε∈U}  min_{(s,∆p)∈R(ε)}  1^T s                             (4a)
      R(ε) := {(s, ∆p) : s ≥ 0,                                            (4b)
               G∆p − s ≤ d − Cx − Dp + Eε}                                 (4c)
where K is the index set for the uncertainty points ε̂ which are dynamically generated in (SP) over the iterations. Please refer to Appendix B for the detailed formulation. It should be noted that ε̂^k is an extreme point of U. Variable ∆p^k is associated with ε̂^k. The objective function in (SP) is to find the worst point in U given (x, p). The procedure is:
1: K ← ∅, k ← 1, Z ← +∞, define feasibility tolerance δ
2: while Z ≥ δ do
3:   Solve (MP), obtain optimal (x̂, p̂).
4:   Solve (SP) with x = x̂, p = p̂, get solution (Z, ε̂^k).
5:   K ← K ∪ {k}, k ← k + 1
6: end while
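A minimal Python sketch of the procedure above; solve_mp and solve_sp are placeholders for the MILP/LP solver calls that build and solve (MP) and (SP), so only the loop logic is shown.

```python
def ccg_robust_uc(solve_mp, solve_sp, tol=1e-4, max_iter=50):
    """Column-and-constraint generation loop.

    solve_mp(worst_points) -> (x_hat, p_hat): solve (MP) against the
        uncertainty points generated so far.
    solve_sp(x_hat, p_hat) -> (Z, eps_hat): worst-case infeasibility Z and
        the corresponding extreme point of the uncertainty set.
    """
    worst_points = []                      # the set K of generated extreme points
    for _ in range(max_iter):
        x_hat, p_hat = solve_mp(worst_points)
        Z, eps_hat = solve_sp(x_hat, p_hat)
        if Z < tol:                        # (x_hat, p_hat) is robust: done
            return x_hat, p_hat, worst_points
        worst_points.append(eps_hat)       # add the new worst point and iterate
    raise RuntimeError("CCG did not converge within max_iter iterations")
```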
Once the procedure has converged, we also get the optimal UC and ED solution by solving (MP). Similar to the traditional LMP calculation, we fix the binary variables at x̂. Then a convex linear programming problem (RED) can be formed as

(RED)  min_{P,∆P}  Σ_t Σ_i C^P_i(P_{i,t})                                        (5)
s.t.
(λ_t)       Σ_i P_{i,t} = Σ_m d_{m,t},  ∀t                                       (6a)
(β̄_{i,t})   P_{i,t} ≤ Î_{i,t} P_i^max,  ∀i, t                                    (6b)
(β_{i,t})   −P_{i,t} ≤ −Î_{i,t} P_i^min,  ∀i, t                                  (6c)
(ᾱ_{i,t})   P_{i,t} − P_{i,t−1} ≤ r_i^u (1 − ŷ_{i,t}) + P_i^min ŷ_{i,t},  ∀i, t   (6d)
(α_{i,t})   −P_{i,t} + P_{i,t−1} ≤ r_i^d (1 − ẑ_{i,t}) + P_i^min ẑ_{i,t},  ∀i, t  (6e)
(η̄_{l,t})   Σ_m Γ_{l,m} P^inj_{m,t} ≤ F_l,  ∀l, t                                (6f)
(η_{l,t})   −Σ_m Γ_{l,m} P^inj_{m,t} ≤ F_l,  ∀l, t                               (6g)
(λ^k_t)     Σ_i ∆P^k_{i,t} = Σ_m ε̂^k_{m,t},  ∀t, ∀k ∈ K                          (7a)
(β̄^k_{i,t}) P_{i,t} + ∆P^k_{i,t} ≤ Î_{i,t} P_i^max,  ∀i, t, ∀k ∈ K               (7b)
(β^k_{i,t}) −P_{i,t} − ∆P^k_{i,t} ≤ −Î_{i,t} P_i^min,  ∀i, t, ∀k ∈ K             (7c)
(ᾱ^k_{i,t}) ∆P^k_{i,t} ≤ R_i^u (1 − ŷ_{i,t}),  ∀i, t, ∀k ∈ K                     (7d)
(α^k_{i,t}) −∆P^k_{i,t} ≤ R_i^d (1 − ẑ_{i,t+1}),  ∀i, t, ∀k ∈ K                  (7e)
(η̄^k_{l,t}) Σ_m Γ_{l,m} (P^inj_{m,t} + ∆P^{inj,k}_{m,t}) ≤ F_l,  ∀l, t, ∀k ∈ K   (7f)
(η^k_{l,t}) −Σ_m Γ_{l,m} (P^inj_{m,t} + ∆P^{inj,k}_{m,t}) ≤ F_l,  ∀l, t, ∀k ∈ K  (7g)
4
where (6a)-(6g) are the constraints for the base ED, and (7a)(7g) are constraints for different extreme points in U. (7a)
denotes the load balance after re-dispatch. The generation
adjustments respect the capacity limits (7b)(7c) and ramping limits (7d)(7e). Network constraints are denoted by (7f)(7g). P^inj_{m,t} and ∆P^{inj,k}_{m,t} are defined as

P^inj_{m,t} := Σ_{i∈G(m)} P_{i,t} − d_{m,t},  ∀m, t,

and

∆P^{inj,k}_{m,t} := Σ_{i∈G(m)} ∆P^k_{i,t} − ε̂^k_{m,t},  ∀m, t, k,
respectively.
B. Marginal Prices
In this section, marginal prices for the energy, uncertainty, and generation reserve are derived based on the Lagrangian function. Denote the Lagrangian function for (RED)
as L(P, ∆P, λ, α, β, η), which is shown in Appendix C. According to the definition of marginal price [17], the LMP for
energy at bus m is
π^e_{m,t} = ∂L(P, ∆P, λ, α, β, η)/∂d_{m,t}
         = λ_t − Σ_l Γ_{l,m} (η̄_{l,t} − η_{l,t}) − Σ_l Σ_{k∈K} Γ_{l,m} (η̄^k_{l,t} − η^k_{l,t})        (8)
It is observed that the impact of the uncertainty is also reflected
in the LMP.
The new concept, UMP for DAM, is defined as the marginal
cost of immunizing the next unit increment of uncertainty. For
ε̂^k, an extreme point of U, the UMP is

π^{u,k}_{m,t} = ∂L(P, ∆P, λ, α, β, η)/∂ε̂^k_{m,t} = λ^k_t − Σ_l Γ_{l,m} (η̄^k_{l,t} − η^k_{l,t})        (9)

Both the uncertainty and the generation reserve are priced at π^{u,k}_{m,t}. In the derivation of π^{u,k}_{m,t}, the worst point ε̂^k is the only concern. Therefore, the general principles in this paper still work when U is replaced with other sets.
It should be noted that (9) gives intermediate price signals. In order to obtain the aggregated UMPs, the following new sets are defined based on the sign of π^{u,k}_{m,t}:

K^up_{m,t} := {k : π^{u,k}_{m,t} ≥ 0};    K^down_{m,t} := {k : π^{u,k}_{m,t} < 0}                      (10)

The aggregated upward and downward UMPs are defined as

π^{u,up}_{m,t} := Σ_{k∈K^up_{m,t}} π^{u,k}_{m,t};    π^{u,down}_{m,t} := Σ_{k∈K^down_{m,t}} π^{u,k}_{m,t}   (11)
respectively. In the following context, we will show how the
aggregated UMPs are used.
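To illustrate how (8)-(11) could be evaluated once (RED) is solved, the following NumPy sketch assembles LMPs and UMPs from the dual variables; the array names and shapes are assumptions made for this example only.

```python
import numpy as np

def energy_and_uncertainty_prices(lam, lam_k, eta_up, eta_dn, eta_up_k, eta_dn_k, gamma):
    """lam: (T,) duals of (6a); lam_k: (K, T) duals of (7a);
    eta_up/eta_dn: (L, T) duals of (6f)/(6g); eta_up_k/eta_dn_k: (K, L, T)
    duals of (7f)/(7g); gamma: (L, M) shift factors."""
    # LMP (8): system lambda minus congestion terms of the base case and all worst points
    lmp = (lam[None, :]
           - np.einsum('lm,lt->mt', gamma, eta_up - eta_dn)
           - np.einsum('lm,klt->mt', gamma, eta_up_k - eta_dn_k))
    # UMP (9) for every worst point k, then aggregation (10)-(11) by sign
    ump_k = lam_k[:, None, :] - np.einsum('lm,klt->kmt', gamma, eta_up_k - eta_dn_k)
    ump_up = np.where(ump_k >= 0.0, ump_k, 0.0).sum(axis=0)
    ump_down = np.where(ump_k < 0.0, ump_k, 0.0).sum(axis=0)
    return lmp, ump_k, ump_up, ump_down
```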
C. Market Clearing Mechanism
With LMP and UMP, the charges and credits for the
market participants become clear and fair in the DAM. Energy
clearing is straightforward. The basic principle related to
uncertainty and flexibility is that those who cause uncertainties
(uncertainty sources), such as RES, pay based on UMP and
those who contribute to the management of uncertainties
(uncertainty mitigators), such as generators or storage with
ramping capabilities, get paid.
1) Energy Payment and Credit: LSEs pay based on the amount of the load and the LMP. The energy payment from the LSE at Bus m at time t is π^e_{m,t} d_{m,t}. It should be noted that RES is entitled to a credit due to the negative load modeled in RUC. Generator i, located at Bus m_i, is entitled to the credit π^e_{m_i,t} P_{i,t} for energy production.
2) Charge to Uncertainty Source: The uncertainty source can be charged as

Ψ_{m,t} = Σ_{k∈K} π^{u,k}_{m,t} ε̂^k_{m,t}                                   (12)

The uncertainty source pays based on the marginal price and the worst point ε̂^k. The uncertainty source is charged only when π^{u,k}_{m,t} is non-zero, and it may have to pay more when the uncertainty becomes larger. The uncertainty point ε̂^k_{m,t} can be upward (i.e., ε̂^k_{m,t} ≥ 0) or downward (i.e., ε̂^k_{m,t} ≤ 0). We have the following lemma regarding the relation between the signs of π^{u,k}_{m,t} and ε̂^k_{m,t}.

Lemma 1. If ε̂^k_{m,t} > 0, then π^{u,k}_{m,t} ≥ 0. If ε̂^k_{m,t} < 0, then π^{u,k}_{m,t} ≤ 0.

Please check Appendix D-A for the proof. When the budget set is adopted, the extreme point satisfies ε̂^k_{m,t} ∈ {−u_{m,t}, 0, u_{m,t}} [4], [7], so the uncertainty charge in (12) can also be written as (13) according to Lemma 1 and (11):

Ψ_{m,t} = π^{u,up}_{m,t} u_{m,t} + π^{u,down}_{m,t} (−u_{m,t})               (13)
Thus, upward and downward uncertainties are charged separately. It should be noted that we still need to use (12) when
other uncertainty sets are used.
3) Credit to Generation Reserve: Only resources that can provide deliverable generation reserve are entitled to credits. If i ∈ G(m), then the credits can be formulated as

Θ^G_{i,t} = Σ_{k∈K} π^{u,k}_{m,t} ∆P^k_{i,t}.                                (14)

In other words, generation reserve is paid the UMP at the bus where it is located. If π^{u,k}_{m,t} = 0, then the associated credit is zero no matter what the value of ∆P^k_{i,t} is. Similar to the uncertainties, the generation reserves can be in either the upward or the downward direction. Denote the upward generation reserve as Q^up_{i,t} and the downward generation reserve as Q^down_{i,t}:

Q^up_{i,t} := min{ Î_{i,t} P_i^max − P_{i,t},  R_i^u (1 − ŷ_{i,t}) },        (15)
Q^down_{i,t} := max{ Î_{i,t} P_i^min − P_{i,t},  −R_i^d (1 − ẑ_{i,t+1}) }.   (16)

We also have the following lemma regarding the relation between Q^up_{i,t}, Q^down_{i,t} and ∆P^k_{i,t}.
Lemma 2. If i ∈ G(m), then the optimal solution ∆P^k_{i,t} to problem (RED) is

∆P^k_{i,t} = Q^up_{i,t}  if π^{u,k}_{m,t} > 0;    ∆P^k_{i,t} = Q^down_{i,t}  if π^{u,k}_{m,t} < 0,

and

π^{u,k}_{m,t} = β̄^k_{i,t} − β^k_{i,t} + ᾱ^k_{i,t} − α^k_{i,t},               (17)

Please check Appendix D-B for the proof. The credit to generation reserve i located at bus m (14) can be rewritten as (18) according to Lemma 2 and (11):

Θ^G_{i,t} = π^{u,up}_{m,t} Q^up_{i,t} + π^{u,down}_{m,t} Q^down_{i,t}        (18)
(18) shows that the upward and downward generation reserves
are credited separately. Flexible resources may receive credits
for both the upward and downward generation reserves simultaneously. (18) always holds even if other uncertainty sets are
modeled in RUC.
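A scalar sketch of the settlement rules (13) and (18) for one bus, one flexible unit, and one interval; the inputs are assumed to be taken from the (RED) solution and its duals.

```python
def uncertainty_charge_and_reserve_credit(ump_up, ump_down, u_bound, q_up, q_down):
    """ump_up/ump_down: aggregated upward/downward UMPs at the bus;
    u_bound: uncertainty bound u_{m,t}; q_up/q_down: deliverable reserves."""
    charge = ump_up * u_bound + ump_down * (-u_bound)   # Eq. (13), paid by the uncertainty source
    credit = ump_up * q_up + ump_down * q_down          # Eq. (18), paid to the flexible resource
    return charge, credit
```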
D. Transmission Reserve and Revenue Adequacy
Some transmission capacities are reserved according to the solution to (RED). These transmission reserves are used to ensure the ramping deliverability when the uncertainty is revealed, as shown in (7f) and (7g). It is noted that they are determined automatically in (RED) and kept implicitly, without explicit transmission reserve requirement constraints. Just like the "scheduled" generation reserve, the "scheduled" transmission reserves in the positive and negative directions are
∆f^pos_{l,t} := F_l − Σ_m Γ_{l,m} P^inj_{m,t},   (19)
∆f^neg_{l,t} := F_l + Σ_m Γ_{l,m} P^inj_{m,t},   (20)
respectively. They are always non-negative.
An important issue related to the transmission reserve is the credit entitled to the Financial Transmission Right (FTR) holders. FTR is a financial instrument used to hedge congestion cost in the electricity market, where participants are charged or credited due to the transmission congestion [21], [25]. Within the robust framework, the effective transmission capacity for the base-case scenario is different from the physical limit, which is used in the Simultaneous Feasibility Test (SFT) for the FTR market [18], [20], [21]. In the existing market, the FTR credit is funded by the energy congestion cost, which is the net payment of energy. However, the energy congestion cost may not be sufficient to fund the FTR credit [18], [19]. We argue that the transmission reserve becomes a new reason for FTR underfunding in any framework that guarantees ramping deliverability.

Theorem 1. If transmission reserves ∆f^pos_{l,t} and ∆f^neg_{l,t} are kept for line l at time t in the DAM, then the maximum FTR underfunding associated with line l at time t is
Σ_{k∈K} ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} )   (21)
due to the deficiency of energy congestion cost.
Fig. 1. Money flow of the proposed market clearing mechanism, where uncertainty sources make the uncertainty payment, and LSEs make the energy payment. (Blocks shown: Uncertainty Payment, Energy Payment, Gen. Res. Credit, Trans. Res. Credit, LMP Cong. Cost, Energy Credit, FTR Credit.)
Please check Appendix D-C for the proof. From the FTR holder's point of view, (21) is the credit due to the transmission reserve. Therefore, we also call (21) the transmission reserve credit and denote it as
Θ^T_{l,t} := Σ_{k∈K} ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} ).   (22)
At most one of η̄^k_{l,t} and η̲^k_{l,t} is non-zero for line l. The credit to the positive transmission reserve is zero for line l at time t when either Σ_{k∈K} η̄^k_{l,t} = 0 or ∆f^pos_{l,t} = 0.
Theorem 2. If (RED) is feasible, then uncertainty payment
can exactly cover generation reserve credit and transmission
reserve credit, and the revenue adequacy is always guaranteed
in the proposed market clearing mechanism.
Please check Appendix D-D for the proof. Theorem 1 reveals that the FTR underfunding issue can occur within the existing market structures as long as the transmission reserve is non-zero, even if the LMPs are calculated based on other approaches. Theorem 2 shows that the new market clearing mechanism overcomes the FTR underfunding issue. The money flow of the proposed market clearing mechanism is depicted in Fig. 1. Energy payment collected based on the LMP
is distributed to FTR holders as LMP congestion cost and
generators as energy credit. On the other hand, the payment
collected based on UMP is distributed to FTR holders as
transmission reserve credit and flexible resources as generation
reserve credit. The LMP congestion cost and transmission
reserve credit can exactly cover the FTR credit, which is
calculated based on the LMP difference and FTR amount.
E. Market Equilibrium
In this section, we characterize the competitive market equilibrium model. In the electricity industry, the partial market equilibrium model is often employed [1], [13], [26], where market participants are price takers [27].
The energy is cleared according to (6a). Uncertainty and generation reserve are cleared according to (7a). Without loss of generality, consider unit i located at Bus m. Its profit maximization problem can be formulated as
(PMP_i)  max_{P_{i,t}}  Σ_t ( π^e_{m,t} P_{i,t} + π^{u,up}_{m,t} Q^up_{i,t} + π^{u,down}_{m,t} Q^down_{i,t} − C^P_i(P_{i,t}) )
s.t. (6b)–(6e), (15)–(16),
where the decision variable is P_{i,t} given the price signal (π^e_{m,t}, π^{u,up}_{m,t}, π^{u,down}_{m,t}). As proved in Appendix D-E, unit i is not inclined to change its power output level, as it can obtain the maximum profit by following the ISO's dispatch instruction P̂_{i,t}. The price signal π^e_{m,t} provides the incentive for unit i to dispatch its power output to P̂_{i,t}, and the price π^{u,k}_{m,t} gives the incentive for unit i to maintain the generation reserve for uncertainty. Hence, the dispatch instruction P̂_{i,t} and the price signal (π^e_{m,t}, π^{u,up}_{m,t}, π^{u,down}_{m,t}) constitute a competitive partial equilibrium [27].
F. Discussions
As P_{i,t} and ∆P^k_{i,t} (or (Q^up_{i,t}, Q^down_{i,t})) are coupled by (7b) and (7c), the opportunity cost (β̄^k_{i,t} − β̲^k_{i,t}) is enough to provide the incentive for unit i to keep its generation level at P̂_{i,t}. Including ᾱ^k_{i,t} − α̲^k_{i,t} in the generation reserve price has several benefits. First, generation reserves provided by different units are priced fairly: generation reserve prices are the same for units at the same bus, and they may vary with location if line congestion exists. Second, a higher generation reserve price attracts long-term investment in flexible resources. Third, it is consistent with the existing reserve pricing practice [2], [28]. In fact, the generation reserve price is consistent with the UMP. Therefore, the uncertainties and flexibilities at the same bus are also treated fairly.
The upward and downward UMPs are obtained according to (11). The uncertainty sources are charged according to (13). The generation reserves are credited according to (18). The price signal π^{u,k}_{m,t} defined in (9) and the re-dispatch ∆P^k_{i,t} are intermediate variables for market clearing. The proposed UMP may be non-zero even if the uncertainty at a bus is zero. This is similar to the LMP, which may also be non-zero at a bus without load.
The market clearing mechanism proposed in this paper
follows the cost causation principle for the cost allocation.
In reality, it may be controversial to allocate the reserve cost
to uncertainty sources. However, we argue that it is fair and necessary when the RES penetration level is high. An extreme case is when the loads are supplied entirely by RES. Studies have shown that increasing RES penetration can raise the system operation cost. This issue cannot be handled by the existing market clearing mechanism, in which loads pay for the additional system reserve required to accommodate the uncertainty from RES. In other words, loads are actually subsidizing RES. When the RES penetration level is low, these subsidies can help the growth of RES; when the penetration level is high, the growing subsidies cause a serious fairness issue. On the other hand, with the UMP as a stimulating price signal, RES will have an incentive to improve its forecasting techniques and reduce its uncertainty. In the ideal case where its uncertainty approaches zero, RES no longer pays.
Following the existing practice, the UC variables are fixed
during the marginal price derivation. Hence, the uplift issue,
which exists in the real market, still remains in the proposed
market clearing mechanism. Although the UC variables are
fixed, the LMP and reserve price in the real market can provide
effective signals for the long-term investment of generation
and transmission as well as consumption strategy of electricity.
Similarly, the uncertainty impact is not only reflected in UC,
but also in the ED within the RUC model in this paper.
Hence, the proposed LMP and UMP can also provide signals
for the long-term investment of flexibilities (i.e. generation,
transmission, and demand).
The pricing for uncertainties proposed in this paper is not
in conflict with the pricing for traditional reserves, which
are mainly prepared for the contingencies. The traditional
reserve prices can be derived in the framework by adding extra
traditional reserve constraints, and the corresponding reserve
costs can still be allocated to LSEs.
It is observed that the credit in (14) is the sum of credits for
all extreme points. That is because the related constraints may
be binding for multiple extreme points, and the dual variables
(shadow prices) for these constraints work together in the dual
problem. The traditional prices for energy and reserve also have a similar form when multiple contingencies are modeled.
Although only one scenario will happen in reality, we still need to consider the worst scenario defined in the uncertainty set and keep enough reserves in the DAM. That is because the DAM is a financial market, and the LMP and UMP are the financially binding prices. This is similar to the existing market model considering contingencies. Even though contingencies seldom occur, they are still modeled for market clearing, and the contingencies are reflected in the LMP and reserve price.
The issue of price multiplicity still exists in the proposed
model [29] because problem (RED) is a linear programming
(LP) problem. However, the price is unique under the non-degeneracy assumption. For simplicity, we have considered a
single-sided auction in the proposed model. By introducing
the demand bids, we can formulate a double-sided auction
and the general principles in this paper will still apply.
III. CASE STUDY
A six-bus system and the IEEE 118-bus system are simulated to illustrate the proposed market clearing mechanism. In the six-bus system, the basic ideas of UMP are presented within the robust optimization framework. The FTR underfunding issue is illustrated, and a comparison between the UMP and the traditional reserve price is presented. In the IEEE 118-Bus
system, the UMP related products are presented for different
uncertainty levels. The behaviors and impacts of flexible
sources are analyzed by an energy storage example.
A. Six-bus System
A six-bus system is studied in this section. The one-line
diagram is shown in Fig. 2. The unit data and line data
are shown in Table I and Table II, respectively. Table III
presents the load and uncertainty information. Column “Base
Load” shows the hourly forecasted load. Assume that the load
distributions are 20%, 40%, and 40% for Bus 3, Bus 4, and
Bus 5, respectively. ū1,t and ū3,t in Table III are the bounds
of the uncertainties at Bus 1 and Bus 3, respectively. The
uncertainty bounds at other buses are 0.
Fig. 2. One-line diagram for the 6-bus system (units G1, G2, G3 at Buses 1, 2, 6; loads L1, L2, L3 at Buses 3, 4, 5).

TABLE I
UNIT DATA FOR THE 6-BUS SYSTEM
#  Pmin  Pmax  P0   a      b     c      Ru  Rd  Cu   Cd  Ton  Toff  T0
1  100   220   120  0.004  13.5  176.9  24  24  180  50  4    4     4
2  10    100   50   0.001  32.6  129.9  12  12  360  40  3    2     3
6  10    20    0    0.005  17.6  137.4  5   5   60   0   1    1     −2
Pmin, Pmax, P0: min/max/initial generation level (MW); fuel cost ($): aP^2 + bP + c; Ru, Rd: ramping up/down rate (MW/h); Cu, Cd: startup/shutdown cost ($); Ton, Toff, T0: min on/min off/initial time (h).

TABLE II
LINE DATA FOR THE 6-BUS SYSTEM
from–to        1–2   1–4    2–4    5–6   3–6    2–3    4–5
x (p.u.)       0.17  0.258  0.197  0.14  0.018  0.037  0.037
capacity (MW)  200   100    100    100   100    200    200

TABLE III
LOAD AND UNCERTAINTY DATA FOR THE 6-BUS SYSTEM (MW)
Time (h)  Base Load  ū1,t   ū3,t  |  Time (h)  Base Load  ū1,t   ū3,t
1         175.19     1.09   0.29  |  13        242.18     19.68  5.25
2         165.15     2.06   0.55  |  14        243.6      21.32  5.68
3         158.67     2.98   0.79  |  15        248.86     23.33  6.22
4         154.73     3.87   1.03  |  16        255.79     25.58  6.82
5         155.06     4.85   1.29  |  17        256        27.2   7.25
6         160.48     6.02   1.6   |  18        246.74     27.76  7.4
7         173.39     7.59   2.02  |  19        245.97     29.21  7.79
8         177.6      8.88   2.37  |  20        237.35     29.67  7.91
9         186.81     10.51  2.8   |  21        237.31     31.15  8.31
10        206.96     12.94  3.45  |  22        232.67     31.99  8.53
11        228.61     15.72  4.19  |  23        195.93     28.16  7.51
12        236.1      17.71  4.72  |  24        195.6      29.34  7.82

TABLE IV
MARGINAL COSTS AT DIFFERENT GENERATION LEVELS ($/MWH)
Gen. 1 — segment (MW) / mar. cost: 100–124 / 14.396, 124–148 / 14.588, 148–172 / 14.78, 172–196 / 14.972, 196–220 / 15.164
Gen. 2 — segment (MW) / mar. cost: 10–28 / 32.638, 28–46 / 32.674, 46–64 / 32.71, 64–82 / 32.746, 82–100 / 32.782
Gen. 3 — segment (MW) / mar. cost: 10–12 / 17.71, 12–14 / 17.73, 14–16 / 17.75, 16–18 / 17.77, 18–20 / 17.79

TABLE V
GENERATION AND RESERVE (Λ = 1, Λ∆ = 2, MW)
t   P1      P2     P3     Q1up  Q1down  Q2up  Q2down  Q3up  Q3down
21  195.19  25.58  16.54  24    −24     12    −12     3.46  −5

It is assumed that the relative forecasting errors increase with hours. Uncertainties ε_{1,t} and ε_{3,t} also respect
−Λ · ū_{m,t} ≤ ε_{m,t} ≤ Λ · ū_{m,t}, ∀t, m,   (23a)
Σ_m |ε_{m,t}| / ū_{m,t} ≤ Λ^∆, ∀t,   (23b)
where (23a) denotes the uncertainty interval at a single bus, and (23b) represents the system-wide uncertainty budget [5]. Λ and Λ^∆ are the budget parameters for the single bus and the system, respectively.
1) LMP and UMP: Consider the case where Λ = 1, Λ^∆ = 2. The CCG-based approach converges after 2 iterations.
Hence, K = {1, 2}. Given the UC solutions, the problem (RED) can be solved by a commercial LP solver. The marginal prices are then obtained as byproducts.
The generation outputs at Hour 21 are presented in Table V. It can be observed that G1 supplies most of the load at Hour 21, which is 195.19 MW. According to the bid information in Table IV, G2 is much more expensive than G1 and G3. Hence, the output of G2 is relatively small and at the low end of its capacity. The upward and downward generation reserves provided by the three units are also listed in Table V. These data can be obtained directly from Eqs. (15) and (16) given the generation output P_{i,t}. Although the remaining generation capacity of G1 is 220 − 195.19 = 24.81 MW, its upward reserve is limited by its upward ramping rate of 24 MW. In the meantime, the upward reserve provided by G3 is limited by its generation capacity although it has more remaining ramping capacity (i.e., min{20 − 16.54, 5} = 3.46 MW).
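The Table V reserve figures can be reproduced directly from (15)–(16). The sketch below does this under the assumption that all three units are committed with no start-up or shut-down at Hour 21 (Î = 1, ŷ = ẑ = 0); the data structure and helper names are ours.

```python
# Reserve capability per Eqs. (15)-(16) for committed units without start-up/shut-down.
def q_up(p, p_max, r_up):
    return min(p_max - p, r_up)

def q_down(p, p_min, r_down):
    return max(p_min - p, -r_down)

units = {  # name: (P at Hour 21, Pmin, Pmax, ramp rate) from Tables I and V
    "G1": (195.19, 100, 220, 24),
    "G2": (25.58, 10, 100, 12),
    "G3": (16.54, 10, 20, 5),
}
for name, (p, pmin, pmax, r) in units.items():
    print(name, round(q_up(p, pmax, r), 2), round(q_down(p, pmin, r), 2))
# Expected output matches Table V: G1 24 -24, G2 12 -12, G3 3.46 -5
```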
Table VI shows the extreme points obtained in the CCG-based approach. The intermediate price signals π^{u,k}_{m,t} for these points are also presented. It can be observed that the worst point is always obtained at an extreme point of the uncertainty set. For example, at Hour 21 the worst point ε̂^1_{1,t} is 31.15 MW, which is exactly the upper bound of the uncertainty at Hour 21 at Bus 1. The data in Table VI also verify Lemma 1: the intermediate UMPs π^{u,k}_{m,t} have the same sign as the uncertainties ε̂^k_{m,t} at the same bus.
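A quick way to see that the reported worst case sits at an extreme point of the budget set (23) is to check both budgets numerically; the sketch below does so for Hour 21 (the helper name and tolerance are ours).

```python
# Membership test for the budget uncertainty set (23) at a single hour.
def in_budget_set(eps, u_bar, lam, lam_delta, tol=1e-6):
    per_bus = all(abs(eps[m]) <= lam * u_bar[m] + tol for m in u_bar)    # (23a)
    system = sum(abs(eps[m]) / u_bar[m] for m in u_bar if u_bar[m] > 0)  # (23b)
    return per_bus and system <= lam_delta + tol

u_bar = {1: 31.15, 3: 8.31}       # Hour-21 bounds from Table III
eps_k1 = {1: 31.15, 3: 8.31}      # worst point k = 1 reported in Table VI
print(in_budget_set(eps_k1, u_bar, lam=1, lam_delta=2))   # True, with both budgets binding
```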
The LMPs, aggregated upward UMPs, and aggregated
downward UMPs at Hour 21 are shown in Table VII. It is
noted that UMPs still exist at buses without uncertainties (i.e.,
Buses 2,4,5,6). This is similar to LMPs, which also exist
at buses where net power injections are 0. The LMPs vary
with locations, which indicates that the line congestion exists.
TABLE VI
EXTREME POINTS OF UNCERTAINTY SET
t   ε̂^1_{1,t}  ε̂^1_{3,t}  π^{u,1}_{1,t}  π^{u,1}_{3,t}  ε̂^2_{1,t}  ε̂^2_{3,t}  π^{u,2}_{1,t}  π^{u,2}_{3,t}
21  31.15      8.31       14.87          14.87          −31.15     8.31       −17.67         1.77
TABLE VII
LMP AND UMP AT HOUR 21 (Λ = 1, Λ∆ = 2)
            Bus 1   Bus 2  Bus 3  Bus 4  Bus 5  Bus 6
π^e         14.97   32.64  34.4   43.71  41.94  35.26
π^{u,up}    14.87   14.87  16.63  25.94  24.17  17.49
π^{u,down}  −17.67  0      0      0      0      0
The load at Bus 4 has to pay the highest LMP $43.71/MWh.
The UMPs are also different at various locations. The highest
upward UMP at Hour 21 is also located at Bus 4. With these
prices, the market participants can be paid and credited.
The LMP paid to G3 is $35.26/MWh at Bus 6, which is $17.49/MWh larger than its marginal cost. At the same time, the upward UMP at Bus 6 is $17.49/MWh, which is exactly the difference between the LMP and G3's marginal cost. Hence, G3 is the UMP setter at Bus 6. The UMPs
provide important price signals on the planning of renewable
energy sources and storages. For example, the UMP at Bus 2 is
relatively small, so it is an ideal location for renewable energy
sources in terms of payment for uncertainties. In contrast,
the UMP at Bus 4 is large, which may attract the long-term
investment for storages or generation plants with large ramping
rates.
2) Comparison with Existing LMPs and Reserve Prices:
The motivation of this part is to compare the proposed clearing
scheme with the existing one. However, as the reserve is not
robust in the traditional scheme, we cannot compare them
fairly. With the observation that the transmission constraints
are the most challenging one in the robust UC framework,
we drop these constraints in this subsection and add reserve
constraints as follows.
Ii,t Pimin ≤ Qdown
i,t + Pi,t ,
max
Qup
, ∀i, t (24a)
i,t + Pi,t ≤ Ii,t Pi
u
−Rid Ii,t ∆T ≤ Qdown
Qup
i,t ,
i,t ≤ Ri Ii,t ∆T, ∀i, t
X
X up
Qdown
Qi,t ≥ R̄t , ∀t,
i,t ≤ Rt ,
¯
i
i
(24b)
(24c)
down
where Qup
are the largest upward and downward
i,t and Qi,t
reserves, respectively. Rt and R̄t are system-wide reserve
¯
requirements. Refer to [2], [30] for more details on the
reserve formulations. In the experiment, ∆T is set to 1 and
Λ = 0.8, Λ∆ = 2. The reserve requirements Rt and R̄t are set
¯
to the lower and upper bounds of the system-wide uncertainty
in (23b), respectively.
The results are as expected. The optimal solutions of the
RUC and the standard UC with explicit reserve constraints are
the same. LMPs calculated in the RUC and UC also have the
same values. The UMPs calculated in the proposed mechanism
are also exactly the same as the reserve prices in the standard UC. Two things are verified with these results. First, without transmission constraints, the solution to the standard UC can easily be made robust by adding reserve constraints. Second, the proposed LMPs and UMPs are consistent with the LMPs and reserve prices in the existing market when the transmission constraints are dropped.
When considering transmission constraints, the generation
reserve cannot be guaranteed at bus levels in the traditional
Fig. 3. Upward UMP (blue bar) and reserve price (red bar) with network constraint: (a) Hour 20; (b) Hour 21.
UC model. For simplicity, we assume that the 6 buses are in one zone. Consider the case where Λ = 0.8, Λ^∆ = 2. The upward UMP and reserve price at Hour 20 are depicted in Fig. 3a. It is observed that the UMPs at Bus 1 and Bus 2 are lower than the traditional reserve prices. At the same time, the UMPs at Bus 4 and Bus 5 are higher than the traditional reserve prices. The differences are caused by the congestion of Line 1-4 for reserve delivery. It is worth mentioning that the LMP differences between the two models are within 1% at Hour 20. The prices illustrated in Fig. 3b reveal another trend: the UMP may be higher than the traditional reserve price. At Hour 21, the zonal reserve price is 0 while the UMPs are non-zero at Buses 4, 5, and 6, because the constraint related to reserves in the RUC is stronger than the one in the traditional UC model. Consequently, more expensive resources are used in RUC, which also generally leads to higher UMPs.
3) FTR Underfunding: When Λ∆ = 2, Λ = 1, the generation schedules at Hour 21 are 195.193MW, 25.577MW,
and 16.54MW. The power flow of Line 2 is 97.63MW,
which is 2.47MW smaller than its physical limit of 100MW.
The transmission reserve 2.47 MW is kept to guarantee
the delivery of the generation reserve. The binding constraint for Line 2 causes LMP differences. Hence, the
FTR holder gets credits. Consider a set of FTR amounts
[202.3429, 23.2771, −55.772, −94.924, −94.924, 20]. It can
be verified that the FTR amounts satisfy the SFT in the FTR
market. Then the total credit for the FTR holders is $5,554.77.
However, the congestion cost in the DAM is $5,422.87. It
means that the LMP congestion cost collected is not enough
to cover the FTR credit. The FTR underfunding value is
$5554.77 − $5422.87 = $131.90.
The revenue residue after the UMP settlement is $131.90, which exactly covers the FTR underfunding in this scenario. Therefore, the revenue is adequate at Hour 21.
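The bookkeeping behind this paragraph is a one-line calculation; the sketch below simply restates it (variable names are ours).

```python
ftr_credit = 5554.77        # total FTR credit at Hour 21
congestion_cost = 5422.87   # LMP congestion cost collected in the DAM
underfunding = ftr_credit - congestion_cost     # 131.90, the FTR underfunding
ump_residue = 131.90        # uncertainty payment minus generation-reserve credit
assert abs(underfunding - ump_residue) < 1e-6   # the residue exactly fills the gap
```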
B. IEEE 118-Bus System
The simulations are performed for the IEEE 118-bus system
with 54 thermal units and 186 branches in this section. The
peak load is 6600MW. The detailed data including generator
parameters, line reactance and ratings, and load profiles can be
found at http://motor.ece.iit.edu/Data/RUC118UMP.xls. Two
cases are studied in this section.
1) The uncertainty levels and load levels are changed to
analyze the simulation results in the system level. The
impact of transmission line capacity on prices is also
studied.
2) An energy storage is installed at a specified bus with high UMP to show the potential application of UMPs.

TABLE VIII
OPERATION COST AND UMP PAYMENT ($, Λ∆ = 10)
Λ     Op. Cost   Un. Payment  Gen. Res. Credit  Rev. Res.
0.2   1,866,023  11,043       10,560            483
0.25  1,871,364  20,044       19,209            835
0.3   1,877,471  30,879       28,658            2,221

Fig. 5. Uncertainty payment (UP), generation reserve credit (GRC), and operation cost (OC) with different load levels (Λ = 0.25).
1) Case 1: We assume that the uncertainty sources are located at Buses 11, 15, 49, 54, 56, 59, 60, 62, 80, and 90. The budget parameter Λ^∆ is set to 10 in this section. The bus-level uncertainty budget parameter Λ changes from 0.2 to 0.3, and the bound of the uncertainty is the base load. The
simulation results are shown in Table VIII. It can be observed
that the total operation cost increases with increasing Λ. It
indicates that a larger uncertainty level may increase the
operation cost. The columns “Un. Payment” and “Gen. Res.
Credit” denote the total payment from uncertainty sources and
credit to generation reserves, respectively. The lowest payment
is $11,043 and the highest one is $30,879. On the other
hand, the credit entitled to the generation reserves is also a
monotonically increasing function of Λ. When Λ = 0.3, the
generation reserves have the highest credit. The last column
“Rev. Res.” shows that the revenue residues related to UMPs.
It can be observed that the residue is always positive.
Fig. 4a on the next page depicts the heat map for the upward UMPs from Bus 80 to Bus 100 over 24 hours. The x-axis represents time intervals and the y-axis represents bus
numbers. The color bar on the right shows different colors for
various UMP values. For example, the $0/MWh is denoted by
the blue color at the bottom, and the $18/MWh is represented
by the dark red color on the top of the color bar. It can be
observed that the uncertainty sources have system-wide unique
UMPs at some intervals, such as Hours 8, 13, 15, and so on. It
indicates that there is no transmission reserve in these hours.
On the other hand, the UMPs at Hour 11 vary dramatically
with different locations. The highest upward UMP is around
$18/MWh, and the lowest one is around $2/MWh. According
to the data shown in Fig. 4a, the high UMP at Bus 94
may attract investment of flexible resources, such as energy
storages, in terms of generation reserve credit, and Bus 100 is
an attractive location for the investment of renewable energy
sources in terms of uncertainty payments.
Fig. 5 shows the uncertainty payment and generation reserve
credit with respect to load levels. The base load level is set
at 100%. Higher loads in general lead to more uncertainty
payments and generation reserve credits. It is also consistent
with the heat map of UMPs in Fig. 4a, where UMPs at peak
load hours are high. It suggests that the generation reserves
also become scarce resources when load levels are high.
The transmission line capacity plays an important role in the
price calculation. Fig. 6 shows the LMPs and upward UMPs
at Hour 11 with respect to the increasing capacity of Line 94-100. The prices at Buses 88, 94, and 100 are depicted.
Fig. 6. LMP (left) and upward UMP (right) at Hour 11 with respect to increasing capacity of Line 94-100.
When the line capacity increases from 165 MW to 175 MW, the LMP at
Bus 94 decreases from $47.92/MWh to $35.84/MWh and that
at Bus 88 also drops to $30.58/MWh from $38.78/MWh. The
upward UMPs at Bus 94 and Bus 88 also drop by $8.20/MWh
and $12.08/MWh, respectively. In contrast, the LMP and
upward UMP at Bus 100, which is connected to Line 94-100,
remain at $19.42/MWh and $1.64/MWh, respectively. It shows
that the change of line capacity may only have impacts on the
prices at some buses. When the line capacity further increases
to 185MW from 175MW, the changes of LMPs and UMPs at
Bus 94 and Bus 88 are within $0.1/MWh, and there is still no
change at Bus 100. It means that the additional 10MW cannot
help deliver cheaper energy and reserves to Bus 94 and Bus
88. These results are also consistent with the analysis of the
traditional LMPs [31].
2) Case 2: As discussed in Case 1, the upward UMP at Bus 94 is high at Hour 11. Assume that an energy storage (8 MW / 30 MWh) is installed at Bus 94. A simple model for the energy storage is formulated as follows:
E_t = E_{t−1} + ρ^d P^D_t + ρ^c P^C_t, ∀t
0 ≤ E_t ≤ E^max, ∀t
0 ≤ −P^D_t ≤ I^D_t R^D, ∀t
0 ≤ P^C_t ≤ I^C_t R^C, ∀t
I^D_t + I^C_t ≤ 1, ∀t
E_{NT} = E_0,
where Et denotes the energy level, PtD and PtC represent
the discharging and charging rates, and ItD and ItC are the
indicators of discharging and charging. As the UMP is the
major concern in this section, we use simplified parameters for
Fig. 4. Heat map for upward UMPs (Λ = 0.3); the color scale runs from $0/MWh (blue) to $18/MWh (dark red). Different colors represent various UMPs. Figure (a) depicts the UMPs from Bus 80 to Bus 100 in 24 hours without the energy storage at Bus 94. Figure (b) depicts the new UMPs after the energy storage is sited at Bus 94.
storage. The discharging efficiency ρ^d and the charging efficiency ρ^c are set to 100%. The capacity E^max and the initial energy level E_0 are set to 30 MWh and 15 MWh, respectively. The maximum discharging rate R^D and charging rate R^C are set to 8 MW/h.
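For concreteness, a minimal feasibility check of the storage constraints above (with the stated parameters; the candidate charge/discharge schedule is hypothetical and not taken from the paper) might look as follows.

```python
# Feasibility check for the simple storage model (discharge P_D <= 0, charge P_C >= 0).
RHO_D = RHO_C = 1.0                  # efficiencies (100%)
E_MAX, E0, RATE = 30.0, 15.0, 8.0    # MWh, MWh, MW/h

def feasible(p_dis, p_chg):
    e = E0
    for pd, pc in zip(p_dis, p_chg):
        if not (-RATE <= pd <= 0 and 0 <= pc <= RATE and (pd == 0 or pc == 0)):
            return False             # rate limits; no simultaneous charging and discharging
        e += RHO_D * pd + RHO_C * pc
        if not (0.0 <= e <= E_MAX):
            return False             # energy-level limits
    return abs(e - E0) < 1e-9        # terminal condition E_NT = E0

# Hypothetical two-hour schedule: charge 8 MW, then discharge it back at a high-UMP hour.
print(feasible(p_dis=[0, -8], p_chg=[8, 0]))   # True
```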
By siting the energy storage, we can lower the new operation
cost to $1,875,211 from $1,877,471. The payment collected
from the uncertainty sources becomes $27,473, and the credit
to generation reserves decreases to $24,289. Compared to the
data in Table VIII, the energy storage also helps to reduce the
payment related to UMPs. The storage is entitled to $1326
generation reserve credit. Fig. 4b depicts the new upward
UMPs after the installation of the energy storage. Compared
to that in Fig. 4a, the upward UMP for Hour 11 at Bus 94
decreases a lot. The UMPs for Hour 10 and 12 are also lower.
It suggests that sitting the energy storage at Bus 94 effectively
lower the generation reserve price.
The simulation results demonstrate that flexible resources
can lower the UMPs, and UMPs provide the investment signal
at locations where generation reserves are scarce resources.
IV. CONCLUSIONS
A novel market model in this paper clears uncertainty,
energy, and generation reserve simultaneously within the RUC
framework in DAM. The uncertainty sources are charged
and the generator reserve providers are credited based on
the proposed UMP. The UMP formulation is derived within
a robust optimization framework. We also characterize the
market equilibrium for the new market clearing mechanism.
As the market clearing mechanism is established within the
robust optimization framework, the robustness of the dispatch
is guaranteed. The optimal reserves for uncertainty accommodation are obtained in the model. The UMP proposed in this
paper can effectively address the issue on how to charge and
credit the uncertainties and generation reserve fairly in the
market with RES.
Our study also shows that traditional pricing mechanisms within the RUC framework may lead to FTR underfunding. The proposed market clearing mechanism can address this issue. Our study further shows that load serving entities can have lower energy prices within the new market scheme, as the reserve fees are paid by uncertainty sources.
Many potential applications of UMPs are open. As UMPs are unified prices of uncertainties and reserves, it is interesting to investigate the optimal strategy for a participant that is both an uncertainty source and a reserve provider in the market (e.g., a wind generation company with energy storage). The UMPs derived in this paper also provide an important price signal for the long-term investment of flexible resources. When the upward or downward UMP at a bus is high, an investor can obtain more return in terms of generation reserves. Another potential future research direction on UMP is to study how to determine the budget uncertainty set in the market. Modeling the traditional spinning reserve for contingencies [13], [30] is also our future work. In this paper, demand bids are not considered. We have forecasted load, forecasted RES, and uncertainty of load and RES for market clearing with a single-sided model. In an extended double-sided model, we can have demand bids, forecasted RES, and uncertainty of load and RES for market clearing. The forecasted load, forecasted RES, and uncertainty of load and RES can be used in RAC.
APPENDIX A
DETAILED FORMULATION FOR PROBLEM (RUC)

(RUC)  min_{(x,y,z,I,P)∈F}  Σ_t Σ_i [ C^P_i(P_{i,t}) + C^I_i(I_{i,t}) ]   (25a)
s.t.
Σ_i P_{i,t} = Σ_m d_{m,t}, ∀t,   (25b)
−F_l ≤ Σ_m Γ_{l,m} ( Σ_{i∈G(m)} P_{i,t} − d_{m,t} ) ≤ F_l, ∀l, t,   (25c)
I_{i,t} P^min_i ≤ P_{i,t} ≤ I_{i,t} P^max_i, ∀i, t,   (26a)
P_{i,t} − P_{i,t−1} ≤ r^u_i (1 − y_{i,t}) + P^min_i y_{i,t}, ∀i, t,   (26b)
−P_{i,t} + P_{i,t−1} ≤ r^d_i (1 − z_{i,t}) + P^min_i z_{i,t}, ∀i, t,   (26c)
minimum on/off time limits,
and
F := { (x, y, z, I, P) : ∀ε ∈ U, ∃∆P such that
Σ_i ∆P_{i,t} = Σ_m ε_{m,t}, ∀t,   (27a)
I_{i,t} P^min_i ≤ P_{i,t} + ∆P_{i,t} ≤ I_{i,t} P^max_i, ∀i, t,   (27b)
−R^d_i (1 − z_{i,t+1}) ≤ ∆P_{i,t} ≤ R^u_i (1 − y_{i,t}), ∀i, t,   (27c)
∆P^inj_{m,t} = Σ_{i∈G(m)} ∆P_{i,t} − ε_{m,t}, ∀m, t,   (27d)
−F_l ≤ Σ_m Γ_{l,m} ( P^inj_{m,t} + ∆P^inj_{m,t} ) ≤ F_l, ∀l, t }.   (27e)
The basic idea of the above model is to find a robust UC and
dispatch for the base-case scenario. In the base-case scenario,
(25b) denotes the load balance constraint; (25c) represents the
transmission line constraint; (26a) denotes the unit capacity
limit constraint; (26b)-(26c) denote the unit ramping up/down
limits. Ii,t , yi,t , and zi,t are the indicators of the unit being
on, started-up, and shutdown, respectively. Units also respect
the minimum on/off time constraints which are related to
these binary variables [1]. The UC and dispatch solutions are immunized against any uncertainty ε ∈ U. When an uncertainty
occurs, it is accommodated by the generation adjustment ∆Pi,t
(27a). Generation dispatch is also enforced by the capacity
limits (27b). (27c) models the ramping rate limits of generation
adjustment ∆Pi,t . In fact, the right and left hand sides of (27c)
can correspond to a response time ∆T , which is similar to the
10-min or 30-min reserves in the literatures [30]. (27e) stands
for the network constraint after uncertainty accommodation.
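To make the two-stage structure of F concrete, here is a toy brute-force check (entirely ours: one unit, one bus, no network) that a candidate base-case dispatch admits a feasible adjustment ∆P for every uncertainty sample, in the spirit of (27a)–(27c).

```python
# Toy robustness check in the spirit of the set F: one unit, one bus, one period.
P_MIN, P_MAX, R_U, R_D = 10.0, 100.0, 20.0, 20.0

def immunized(p_base, eps_samples):
    """True if every eps can be absorbed by a ΔP within ramp and capacity limits."""
    for eps in eps_samples:
        dp = eps                                  # (27a): the single unit absorbs the imbalance
        if not (-R_D <= dp <= R_U):               # (27c) ramp limits
            return False
        if not (P_MIN <= p_base + dp <= P_MAX):   # (27b) capacity limits
            return False
    return True

print(immunized(80.0, [-15, 0, 15]))   # True
print(immunized(95.0, [-15, 0, 15]))   # False: +15 exceeds the capacity limit
```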
APPENDIX B
DETAILED FORMULATION FOR PROBLEMS (MP) AND (SP)

(MP)  min_{(x,y,z,I,P,∆P)}  Σ_t Σ_i [ C^P_i(P_{i,t}) + C^I_i(I_{i,t}) ]
s.t.  (25b), (25c), (26a)–(26c), minimum on/off time limits,
Σ_i ∆P^k_{i,t} = Σ_m ε̂^k_{m,t}, ∀t, ∀k ∈ K,   (28a)
I_{i,t} P^min_i ≤ P_{i,t} + ∆P^k_{i,t} ≤ I_{i,t} P^max_i, ∀i, t, ∀k ∈ K,   (28b)
∆P^k_{i,t} ≤ R^u_i (1 − y_{i,t}), ∀i, t, ∀k ∈ K,   (28c)
−∆P^k_{i,t} ≤ R^d_i (1 − z_{i,t+1}), ∀i, t, ∀k ∈ K,   (28d)
−F_l ≤ Σ_m Γ_{l,m} ( P^inj_{m,t} + ∆P^{inj,k}_{m,t} ) ≤ F_l, ∀k ∈ K, ∀l, t,   (28e)
∆P^{inj,k}_{m,t} = Σ_{i∈G(m)} ∆P^k_{i,t} − ε̂^k_{m,t}, ∀m, t, ∀k ∈ K,   (28f)
and
(SP)  max_{ε∈U}  min_{(s⁺,s⁻,∆P)∈R(ε)}  Σ_m Σ_t ( s⁺_{m,t} + s⁻_{m,t} )   (29a)
R(ε) := { (s⁺, s⁻, ∆P) :
Σ_i ∆P_{i,t} = Σ_m ( ε_{m,t} + s⁺_{m,t} − s⁻_{m,t} ), ∀t,   (29b)
−F_l ≤ Σ_m Γ_{l,m} ( P^inj_{m,t} + ∆P^inj_{m,t} ) ≤ F_l, ∀l, t,   (29c)
∆P^inj_{m,t} = Σ_{i∈G(m)} ∆P_{i,t} − ( ε_{m,t} + s⁺_{m,t} − s⁻_{m,t} ), ∀m, t,   (29d)
s⁺_{m,t}, s⁻_{m,t} ≥ 0, ∀m, t,   (29e)
(27b), (27c) },   (29f)
where K is the index set of uncertainty points ε̂, which are dynamically generated in (SP) over the iterations. It should be noted that ε̂^k is an extreme point of U. The variable ∆P^k_{i,t} is associated with ε̂^k. The objective function in (SP) is the summation of the non-negative slack variables s⁺_{m,t} and s⁻_{m,t}, which evaluates the violation associated with the solution (x, y, z, I, P) from (MP). s⁺_{m,t} and s⁻_{m,t} can also be interpreted as un-followed uncertainties (generation or load shedding) due to system limitations.
APPENDIX C
LAGRANGIAN FUNCTION FOR PROBLEM (RED)
Please check equation (30) on the next page.
APPENDIX D
PROOFS FOR LEMMAS AND THEOREMS
A. Proof of Lemma 1
Proof. Consider ε̂^k_{m,t} > 0 and π^{u,k}_{m,t} < 0. With a small perturbation δ > 0 to ε̂^k_{m,t}, we replace ε̂^k_{m,t} with ε̂^k_{m,t} − δ in (RED). As π^{u,k}_{m,t} < 0, the optimal value of problem (RED) increases. This means that there are violations for the original optimal solution P_{i,t} to problem (RED) with ε̂^k_{m,t} − δ. Hence, the optimal solution P_{i,t} to problem (RED) cannot be immunized against the uncertainty ε̂^k_{m,t} − δ, which contradicts the robustness of the solution P_{i,t}. Therefore, if ε̂^k_{m,t} > 0, then π^{u,k}_{m,t} ≥ 0. Similarly, if ε̂^k_{m,t} < 0, then π^{u,k}_{m,t} ≤ 0.
B. Proof of Lemma 2
Proof. Assume i ∈ G(m). According to the KKT condition
∂L(P, ∆P, λ, α, β, η) / ∂∆P^k_{i,t} = 0   (31)
at the optimal point, we have
β̄^k_{i,t} − β̲^k_{i,t} + ᾱ^k_{i,t} − α̲^k_{i,t} − λ^k_t + Σ_l ( η̄^k_{l,t} − η̲^k_{l,t} ) Γ_{l,m} = 0.   (32)
Then (17) holds. If π^{u,k}_{m,t} > 0, then β̄^k_{i,t} + ᾱ^k_{i,t} > 0, as β̄^k_{i,t}, β̲^k_{i,t}, ᾱ^k_{i,t}, and α̲^k_{i,t} are non-negative. According to the complementary slackness conditions for (7b) and (7d), at least one of (7b) and (7d) is binding. Hence,
∆P^k_{i,t} = min{ Î_{i,t} P^max_i − P_{i,t}, R^u_i (1 − ŷ_{i,t}) }
holds. Similarly, the other equation holds when π^{u,k}_{m,t} < 0.
L(P, ∆P, λ, α, β, η)
= Σ_t Σ_i C^P_i(P_{i,t}) + Σ_t λ_t ( Σ_m d_{m,t} − Σ_i P_{i,t} ) + Σ_t Σ_i [ β̄_{i,t} ( P_{i,t} − Î_{i,t} P^max_i ) + β̲_{i,t} ( Î_{i,t} P^min_i − P_{i,t} ) ]
+ Σ_t Σ_i [ ᾱ_{i,t} ( P_{i,t} − P_{i,t−1} − r^u_i (1 − ŷ_{i,t}) − P^min_i ŷ_{i,t} ) + α̲_{i,t} ( P_{i,t−1} − P_{i,t} − r^d_i (1 − ẑ_{i,t}) − P^min_i ẑ_{i,t} ) ]
+ Σ_t Σ_l [ η̄_{l,t} ( Σ_m Γ_{l,m} P^inj_{m,t} − F_l ) − η̲_{l,t} ( Σ_m Γ_{l,m} P^inj_{m,t} + F_l ) ]
+ Σ_{k∈K} Σ_t λ^k_t ( Σ_m ε̂^k_{m,t} − Σ_i ∆P^k_{i,t} )
+ Σ_{k∈K} Σ_t Σ_i [ β̄^k_{i,t} ( P_{i,t} + ∆P^k_{i,t} − Î_{i,t} P^max_i ) + β̲^k_{i,t} ( Î_{i,t} P^min_i − P_{i,t} − ∆P^k_{i,t} ) ]
+ Σ_{k∈K} Σ_t Σ_i [ ᾱ^k_{i,t} ( ∆P^k_{i,t} − R^u_i (1 − ŷ_{i,t}) ) − α̲^k_{i,t} ( ∆P^k_{i,t} + R^d_i (1 − ẑ_{i,t}) ) ]
+ Σ_{k∈K} Σ_t Σ_l [ η̄^k_{l,t} ( Σ_m Γ_{l,m} ( P^inj_{m,t} + ∆P^{inj,k}_{m,t} ) − F_l ) − η̲^k_{l,t} ( Σ_m Γ_{l,m} ( P^inj_{m,t} + ∆P^{inj,k}_{m,t} ) + F_l ) ]   (30)
C. Proof of Theorem 1
Proof. The energy congestion cost at time t is
Σ_m π^e_{m,t} d_{m,t} − Σ_m π^e_{m,t} Σ_{i∈G(m)} P_{i,t} = −Σ_m π^e_{m,t} P^inj_{m,t}
= Σ_l ( η̄_{l,t} + Σ_{k∈K} η̄^k_{l,t} ) ( F_l − ∆f^pos_{l,t} ) − Σ_l ( η̲_{l,t} + Σ_{k∈K} η̲^k_{l,t} ) ( ∆f^neg_{l,t} − F_l )
= Σ_l ( η̄_{l,t} + η̲_{l,t} + Σ_{k∈K} η̄^k_{l,t} + Σ_{k∈K} η̲^k_{l,t} ) F_l − Σ_l Σ_{k∈K} ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} ) − Σ_l ( η̄_{l,t} ∆f^pos_{l,t} + η̲_{l,t} ∆f^neg_{l,t} )
= Σ_l ( η̄_{l,t} + η̲_{l,t} + Σ_{k∈K} η̄^k_{l,t} + Σ_{k∈K} η̲^k_{l,t} ) F_l − Σ_l Σ_{k∈K} ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} ).   (33)
The first equality follows from the definition of the net power injection. The second equality holds according to (8), Σ_m P^inj_{m,t} = 0, and (19)–(20); the sign change of η̲_{l,t} and Σ_l η̲^k_{l,t} is due to the definition of the power flow direction. According to the complementary slackness conditions, the third term in the third equality must be zero based on the following three cases:
1) If η̄_{l,t} ≠ 0, then ∆f^pos_{l,t} = 0 and η̲_{l,t} = 0.
2) If η̄_{l,t} = 0 and η̲_{l,t} ≠ 0, then ∆f^neg_{l,t} = 0.
3) η̄_{l,t} = 0 and η̲_{l,t} = 0.
The second term in the last equality corresponds to (21).
The credits to the FTR holders can be written as
Σ_{(m→n)} ( π^e_{m,t} − π^e_{n,t} ) FTR_{m→n}   (34)
= Σ_{(m→n)} Σ_l FTR_{m→n} [ ( Γ_{l,n} − Γ_{l,m} ) ( η̄_{l,t} + Σ_{k∈K} η̄^k_{l,t} ) − ( Γ_{l,n} − Γ_{l,m} ) ( η̲_{l,t} + Σ_{k∈K} η̲^k_{l,t} ) ]
≤ Σ_l ( η̄_{l,t} + Σ_{k∈K} η̄^k_{l,t} + η̲_{l,t} + Σ_{k∈K} η̲^k_{l,t} ) F_l.
The first equality holds according to (8). The inequality is true because the FTR amounts respect
−F_l ≤ Σ_{m→n} ( Γ_{l,m} − Γ_{l,n} ) FTR_{m→n} ≤ F_l
according to the SFT for the FTR market [18], [20], [21]. The right-hand side of the inequality is the first term in the last equality of (33). Based on (33) and (34), the maximum difference between the FTR credit and the energy congestion cost equals the transmission reserve credit; that is, the maximum FTR underfunding is (21).
D. Proof of Theorem 2
Proof. According to Theorem 1, the FTR underfunding value is (21), due to the deficiency of the energy congestion cost. Therefore, we need to prove that the money collected from uncertainty sources can cover the FTR underfunding and the credits to generation reserve.
Without loss of generality, consider the payment collected from uncertainty sources at time t for ε̂^k,
Σ_m π^{u,k}_{m,t} ε̂^k_{m,t} = Σ_m [ λ^k_t − Σ_l Γ_{l,m} ( η̄^k_{l,t} − η̲^k_{l,t} ) ] ε̂^k_{m,t}.
The first equality holds according to (9). According to (7a), (7f), and (7g), the term Σ_m Σ_l Γ_{l,m} η̄^k_{l,t} ε̂^k_{m,t} can be rewritten as
Σ_l Σ_m Γ_{l,m} η̄^k_{l,t} ( Σ_{i∈G(m)} ( ∆P^k_{i,t} + P_{i,t} ) − d_{m,t} ) − Σ_l η̄^k_{l,t} F_l
= Σ_l η̄^k_{l,t} Σ_m Γ_{l,m} Σ_{i∈G(m)} ∆P^k_{i,t} − Σ_l η̄^k_{l,t} ( F_l − Σ_m Γ_{l,m} P^inj_{m,t} )
= Σ_l η̄^k_{l,t} Σ_m Γ_{l,m} Σ_{i∈G(m)} ∆P^k_{i,t} − Σ_l η̄^k_{l,t} ∆f^pos_{l,t},
and Σ_m Σ_l Γ_{l,m} η̲^k_{l,t} ε̂^k_{m,t} can be reformulated similarly. Hence,
Σ_m π^{u,k}_{m,t} ε̂^k_{m,t} = Σ_m Σ_{i∈G(m)} π^{u,k}_{m,t} ∆P^k_{i,t} + Σ_l ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} )
= Σ_i π^{u,k}_{m_i,t} ∆P^k_{i,t} + Σ_l ( η̄^k_{l,t} ∆f^pos_{l,t} + η̲^k_{l,t} ∆f^neg_{l,t} ),
where the last two equalities again follow from (9). Therefore,
Σ_m Σ_t Ψ_{m,t} = Σ_i Σ_t Θ^G_{i,t} + Σ_l Σ_t Θ^T_{l,t}
holds; that is, the uncertainty payment covers the generation reserve credit and the transmission reserve credit. Then, following the energy congestion cost shown in (33),
Σ_m Σ_t π^e_{m,t} d_{m,t} + Σ_m Σ_t Ψ_{m,t} ≥ Σ_i Σ_t π^e_{m_i,t} P_{i,t} + Σ_i Σ_t Θ^G_{i,t} + Σ_t Σ_{(m→n)} ( π^e_{m,t} − π^e_{n,t} ) FTR_{m→n}
holds. That is, the total payments collected from loads and uncertainty sources can cover the total credits to energy, generation reserve, and FTR holders. So, the revenue adequacy of the proposed market clearing mechanism is guaranteed.
E. Proof of Competitive Equilibrium
Proof. P_{i,t} and (Q^up_{i,t}, Q^down_{i,t}) are coupled by constraints (15) and (16). According to (17), we can rewrite the generation reserve credit as
π^{u,up}_{m,t} Q^up_{i,t} + π^{u,down}_{m,t} Q^down_{i,t} = Σ_{k∈K} π^{u,k}_{m,t} ∆P^k_{i,t} = Σ_{k∈K} ( β̄^k_{i,t} − β̲^k_{i,t} + ᾱ^k_{i,t} − α̲^k_{i,t} ) ∆P^k_{i,t}
= Σ_{k∈K} [ β̄^k_{i,t} ( Î_{i,t} P^max_i − P_{i,t} ) + β̲^k_{i,t} ( P_{i,t} − Î_{i,t} P^min_i ) + ᾱ^k_{i,t} R^u_i (1 − ŷ_{i,t}) + α̲^k_{i,t} R^d_i (1 − ẑ_{i,t+1}) ].   (35)
Substituting (35) into problem (PMP_i), we can decouple P_{i,t} and (Q^up_{i,t}, Q^down_{i,t}). In fact, we then obtain exactly the terms related to P_{i,t} in the Lagrangian L(P, λ, α, β, η) for problem (RED). Since problem (RED) is a linear programming problem, the saddle point P̂_{i,t}, which is the optimal solution to (RED), is also the optimal solution to (PMP_i). Consequently, unit i is not inclined to deviate from its output level, as it can obtain the maximum profit by following the ISO's dispatch instruction P̂_{i,t}. Therefore, the dispatch P̂_{i,t} and the price signal (π^e_{m,t}, π^{u,k}_{m,t}) constitute a competitive partial equilibrium [27].
REFERENCES
[1] M. Shahidehpour, H. Yamin, and Z. Li, Market Operations in Electric
Power Systems: Forecasting, Scheduling, and Risk Management, 1st ed.
Wiley-IEEE Press, 2002.
[2] T. Zheng and E. Litvinov, “Contingency-Based Zonal Reserve Modeling
and Pricing in a Co-Optimized Energy and Reserve Market,” IEEE
Trans. Power Syst., vol. 23, no. 2, pp. 277–286, May 2008.
[3] R. Jiang, J. Wang, and Y. Guan, “Robust unit commitment with wind
power and pumped storage hydro,” IEEE Trans. Power Syst., vol. 27,
no. 2, pp. 800 – 810, 2012.
[4] R. Jiang, M. Zhang, G. Li, and Y. Guan, “Two-stage network constrained
robust unit commitment problem,” J. Eur. Oper. Res., vol. 234, no. 3,
pp. 751 – 762, 2014.
[5] D. Bertsimas, E. Litvinov, X. Sun, J. Zhao, and T. Zheng, “Adaptive
robust optimization for the security constrained unit commitment problem,” IEEE Trans. Power Syst., vol. 28, no. 1, pp. 52–63, 2013.
[6] B. Zeng and L. Zhao, “Solving two-stage robust optimization problems
using a column-and-constraint generation method,” Operations Research
Letters, vol. 41, no. 5, pp. 457–461, sep 2013.
[7] H. Ye and Z. Li, “Robust security-constrained unit commitment and
dispatch with recourse cost requirement,” IEEE Trans. Power Syst., DOI:
10.1109/TPWRS.2015.2493162 (early access).
[8] C. Zhao and Y. Guan, “Unified stochastic and robust unit commitment,”
IEEE Trans. Power Syst., vol. 28, no. 3, pp. 3353–3361, 2013.
[9] J. Warrington, P. Goulart, S. Mariethoz, and M. Morari, “Policy-based
reserves for power systems,” IEEE Trans. Power Syst., vol. 28, no. 4,
pp. 4427–4437, 2013.
[10] R. A. Jabr, “Adjustable robust OPF with renewable energy sources,”
IEEE Trans. Power Syst., vol. 28, no. 4, pp. 4742–4751, 2013.
[11] A. Lorca, A. Sun, E. Litvinov, and T. Zheng, “Multistage adaptive robust
optimization for the unit commitment problem,” Operations Research,
vol. 64, no. 1, pp. 32–51, 2016.
[12] H. Ye and Z. Li, “Robust security-constrained unit commitment with
recourse cost requirement,” in Proc. IEEE Power & Energy Soc. General
Meeting, July 2015, pp. 1–5.
[13] J. Wang, M. Shahidehpour, and Z. Li, “Contingency-constrained reserve
requirements in joint energy and ancillary services auction,” IEEE Trans.
Power Syst., vol. 24, no. 3, pp. 1457–1468, 2009.
[14] J. M. Arroyo and F. D. Galiana, “Energy and reserve pricing in security
and network-constrained electricity markets,” IEEE Trans. Power Syst.,
vol. 20, no. 2, pp. 634–643, 2005.
[15] F. Bouffard, F. D. Galiana, and A. J. Conejo, “Market-clearing with
stochastic security-part ii: Case studies,” IEEE Trans. Power Syst.,
vol. 20, no. 4, pp. 1827–1835, 2005.
[16] M. Aganagic, K. H. Abdul-Rahman, and J. G. Waight, “Spot pricing
of capacities for generation and transmission of reserve in an extended
poolco model,” IEEE Trans. Power Syst., vol. 13, no. 3, pp. 1128–1135,
Aug 1998.
[17] F. C. Schweppe, R. D. Tabors, M. Caraminis, and R. E. Bohn, Spot
pricing of electricity. Kluwer Academic Publishers, Norwell, MA,
1988.
[18] “PJM manual on financial transmission rights,” PJM, Tech. Rep.,
access:March 16, 2016. [Online]. Available: http://www.pjm.com/∼/
media/documents/manuals/m06.ashx
[19] "PJM options to address FTR underfunding," PJM, Tech. Rep., 2012, accessed: May 19, 2015. [Online]. Available: https://www.pjm.com/∼/media/documents/reports/20120430-pjm-options-to-address-ftr-underfunding.ashx
[20] W. W. Hogan, “Financial transmission right formulations,” Tech.
Rep. [Online]. Available: http://www.hks.harvard.edu/fs/whogan/FTR
Formulations 033102.pdf
[21] W. W. Hogan, “Contract networks for electric power transmission,”
Journal of Regulatory Economics, vol. 4, no. 3, pp. 211–242, 1992.
[22] H. Ye, J. Wang, and Z. Li, “MIP reformulation for max-min
problems in two-stage robust SCUC,” IEEE Trans. Power Syst.,
DOI:10.1109/TPWRS.2016.2569609 (early access).
[23] C. Wang and Y. Fu, “Fully parallel stochastic securityconstrained unit commitment,” IEEE Trans. Power Syst.,
DOI:10.1109/TPWRS.2015.2494590 (early access).
[24] A. Papavasiliou, S. S. Oren, and B. Rountree, “Applying high performance computing to transmission-constrained stochastic unit commitment for renewable energy integration,” IEEE Trans. Power Syst.,
vol. 30, no. 3, pp. 1109–1120, May 2015.
[25] H.-p. Chao, S. Peck, S. Oren, and R. Wilson, “Flow-based transmission
rights and congestion management,” The Electricity Journal, vol. 13,
no. 8, pp. 38–58, 2000.
[26] T. Zheng and E. Litvinov, “On ex post pricing in the real-time electricity
market,” IEEE Trans. Power Syst., vol. 26, no. 1, pp. 153–164, 2011.
[27] A. Mas-Colell, M. D. Whinston, J. R. Green et al., Microeconomic
theory. Oxford university press New York, 1995, vol. 1.
[28] J. F. Ellison, L. S. Tesfatsion, V. W. Loose, and R. H. Byrne, “Project
report: A survey of operating reserve markets in us iso/rto-managed
electric energy regions,” Sandia Natl Labs Publications, 2012.
[29] W. W. Hogan, “Multiple market-clearing prices, electricity market design
and price manipulation,” The Electricity Journal, vol. 25, no. 4, pp. 18–
32, 2012.
[30] Z. Li and M. Shahidehpour, “Security-constrained unit commitment for
simultaneous clearing of energy and ancillary services markets,” IEEE
Trans. Power Syst., vol. 20, no. 2, pp. 1079–1088, 2005.
[31] H. Ye, Y. Ge, X. Liu, and Z. Li, “Transmission line rating attack in twosettlement electricity markets,” IEEE Trans. Smart Grid, vol. 7, no. 3,
pp. 1346–1355, May 2016.
Hongxing Ye (S’14-m’16) received his B.S. degree
in Information Engineering, in 2007, and M.S. degree in Systems Engineering, in 2011, both from
Xi’an Jiaotong University, China, and the Ph.D.
degree in Electrical Engineering from the Illinois Institute of Technology, Chicago in 2016. His research
interests include large-scale optimization in power
systems, electricity market, renewable integration,
and cyber-physical system security in smart grid. He
is “Outstanding Reviewer” for IEEE Transactions on
Power Systems and IEEE Transactions on Sustainable Energy in 2015. He received Sigma Xi Research Excellence Award at
Illinois Institute of Technology in 2016.
Yinyin Ge (S’14) received the B.S. degree (2008)
in Automation and M.S. degree (2011) in Systems
Engineering from Xian Jiaotong University, China.
She also received Ph.D. degree (2016) in Electrical Engineering at Illinois Institute of Technology,
Chicago. Her research interests are power system optimization and modeling; PMU applications in Smart
Grid; monitoring, visualization, and state estimation
for distribution systems.
Mohammad Shahidehpour (F’01) received his
Ph.D. degree from the University of Missouri in
1981 in electrical engineering. He is currently the
Bodine Chair Professor and Director of the Robert
W. Galvin Center for Electricity Innovation at the
Illinois Institute of Technology, Chicago. He is
the founding Editor-in-Chief of IEEE Transactions
on Smart Grid. He is a member of US National
Academy of Engineering (NAE).
Zuyi Li (SM’09) received the B.S. degree from
Shanghai Jiaotong University, Shanghai, China, in
1995, the M.S. degree from Tsinghua University,
Beijing, China, in 1998, and the Ph.D. degree from
the Illinois Institute of Technology (IIT), Chicago, in
2002, all in electrical engineering. Presently, he is a
Professor in the Electrical and Computer Engineering Department at IIT. His research interests include
economic and secure operation of electric power
systems, cyber security in smart grid, renewable
energy integration, electric demand management of
data centers, and power system protection.
QUIVER MUTATIONS AND BOOLEAN REFLECTION MONOIDS
arXiv:1711.09995v3 [math.RA] 21 Feb 2018
BING DUAN, JIAN-RONG LI, AND YAN-FENG LUO
Abstract. In 2010, Everitt and Fountain introduced the concept of reflection monoids.
The Boolean reflection monoids form a family of reflection monoids (symmetric inverse
semigroups are Boolean reflection monoids of type A). In this paper, we give a family
of presentations of Boolean reflection monoids and show how these presentations are
compatible with mutations of certain quivers. A feature of the quivers in this paper
corresponding to presentations of Boolean reflection monoids is that the quivers have
frozen vertices. Our results recover the presentations of Boolean reflection monoids
given by Everitt and Fountain and the presentations of symmetric inverse semigroups
given by Popova. Surprisingly, inner by diagram automorphisms of irreducible Weyl
groups or Boolean reflection monoids can be constructed by sequences of mutations
preserving the same underlying diagrams. As an application, we study the cellularity of semigroup algebras of Boolean reflection monoids and construct new cellular
bases of such cellular algebras using presentations we obtained and inner by diagram
automorphisms of Boolean reflection monoids.
Key words: Boolean reflection monoids; presentations; mutations of quivers; inner
by diagram automorphisms; cellular semigroups; cellular basis
2010 Mathematics Subject Classification: 13F60; 20M18; 16G20; 20F55; 51F15
1. Introduction
In their influential work on cluster algebras, Fomin and Zelevinsky associated mutations of skew-symmetrizable matrices [20, Definition 4.2] with mutations of quivers
[21, Proposition 8.1]. The quivers whose underlying graphs are Dynkin diagrams play
an important role in the cluster algebra theory, as they appear in the finite type classification [21].
It is well known that a finite irreducible crystallographic reflection group W or a finite
irreducible Weyl group W can be classified by Dynkin diagrams, whose vertex set is in
one-to-one correspondence with a family S of simple reflections and for which there is
an edge labeled 1 (respectively, 2, 3) between vertices i and j if and only if (si sj )3 = e
(respectively, (si sj )4 = e, (si sj )6 = e) where si , sj ∈ S, e is the identity element of W ,
see [2, 4, 6, 28].
Let Γ be a Dynkin diagram and a Γ quiver be a quiver whose underlying diagram is
Γ. In [2], Barot and Marsh gave presentations of the reflection group WΓ determined by
Γ and showed that these presentations are compatible with mutation of Γ quivers. More
precisely, Barot and Marsh introduced some additional relations (cycle relations) corresponding to chordless cycles arising in quivers of finite type. For each quiver Q mutation
equivalent to a Γ quiver, they first defined an abstract group W (Q) by generators (corresponding to vertices of Q) and relations, and then proved that W (Q) ∼
= WΓ . Motivated
1
by Barot and Marsh’s work, the similar presentations of affine Coxeter groups, braid
groups, Artin groups, and Weyl groups of Kac-Moody algebras, have been considered in
[17, 18, 25, 30, 38], respectively.
Let V be a Euclidean space with standard orthonormal basis {v1 , v2 , . . . , vn } and
Φ ⊆ V an irreducible crystallographic root system which is in turn classified by Dynkin
diagrams. In [11], Everitt and Fountain introduced the concept of reflection monoids.
The Boolean reflection monoid M (Φ, B) of type Φ, formed from the Weyl group W (Φ)
for classical root system Φ and the Boolean system B, is a family of reflection monoids.
Symmetric inverse semigroups are Boolean reflection monoids of type A. Note that the
root systems of types Bn and Cn give rise to the same Weyl group. So we only concern
the classical Weyl group W (Bn ). In [12], Everitt and Fountain provided a presentation
of the Boolean reflection monoid M (Φ, B) for Φ = An−1 , Bn , or Dn .
One of the aims in present paper is to obtain new presentations of Boolean reflection
monoids and show how these presentations are compatible with mutation of certain
quivers. Let Aεn−1 (respectively, Bnε , Dnε ) be the Dynkin diagram with n (respectively,
n + 1, n + 1) vertices, where the first n − 1 (respectively, n, n) vertices are mutable
vertices and the n-th (respectively, n + 1-th, n + 1-th) vertex ε is a frozen vertex, which
is shown in the 4-th column of Table 1. In practice the label is left on an edge only if
its weight is greater than 2, and the edge is left unlabelled if its weight is 1.
Let ∆ ∈ {Aεn−1 , Bnε , Dnε } and Q any quiver mutation equivalent to a ∆ quiver.
We define an inverse monoid M(Q) from Q, see Section 4.2, and then we show that M(Q) ≅ M(Φ, B), see Theorem 4.7 and Proposition 4.8. This implies that Boolean reflection monoids can also be classified by ∆, see Table 1. In [2, 17, 25, 30], the diagrams corresponding to generators of irreducible Weyl groups, affine Coxeter groups, braid groups, and Artin groups have no frozen vertices. In the present paper, the diagrams corresponding to generators of Boolean reflection monoids have frozen vertices.
Type of Φ         Boolean reflection monoid   Generators                     ∆ = Φ^ε
A_{n−1} (n ≥ 2)   M(A_{n−1}, B)               {s_1, . . . , s_{n−1}, s_ε}    path 1 — 2 — · · · — (n−1) — ε
B_n (n ≥ 2)       M(B_n, B)                   {s_0, . . . , s_{n−1}, s_ε}    path 0 —(2)— 1 — · · · — (n−1) — ε, with the edge 0 — 1 of weight 2
D_n (n ≥ 4)       M(D_n, B)                   {s_0, . . . , s_{n−1}, s_ε}    vertices 0 and 1 both joined to 2, then 2 — · · · — (n−1) — ε
Table 1. Boolean reflection monoids and Dynkin diagrams A^ε_{n−1}, B^ε_n, D^ε_n.
In Proposition 3.1 of [11], Everitt and Fountain proved that the symmetric inverse
semigroup In is isomorphic to the Boolean reflection monoid of type An−1 . So we recover the presentation of the symmetric inverse semigroup In defined in [10, 34]. The
presentation corresponds exactly to the presentation determined by Dynkin diagram
Aεn−1 . Moreover, we also recover Everitt and Fountain’s presentations of Boolean reflection monoids defined in Section 3 of [12]. These presentations can be obtained from any
∆ quiver by a finite sequence of mutations.
We show in Theorem 4.10 that the inner automorphism group of Boolean reflection
monoid M (Φ, B) is naturally isomorphic to W (Φ)/Z(W (Φ)). We further study the
actions of (inward) mutations. Surprisingly, inner by diagram automorphisms of finite
irreducible Weyl groups and Boolean reflection monoids can be constructed by a sequence
of mutations preserving the same underlying diagrams, see Theorem 3.7 and Theorem
4.11 respectively.
As an application, we study the cellularity of semigroup algebras of Boolean reflection
monoids. It is well known that Hecke algebras of finite type, q-Schur algebras, the
Brauer algebra, the Temperley-Lieb algebras and partition algebras are cellular, see
[22–24, 42, 43]. Recently, the cellularity of semigroup algebras is investigated by East
[8], Wilox [41], Guo and Xi [26], and Ji and Luo [32] respectively. By applying Geck’s
and East’s results, we show that semigroup algebras of Boolean reflection monoids are
cellular algebras, see Proposition 4.12. Moreover, we construct new cellular bases of such
cellular algebras by presentations we obtained and inner by diagram automorphisms of
Boolean reflection monoids.
The results and methods of this paper have applications in several lines of research
which will be studied in future work, including automorphisms of Boolean reflection
monoids [39], Hecke algebras of Boolean reflection monoids [27, 31, 36], Coxeter arrangement monoids [7, 11, 12, 14, 15], Braid inverse monoids [9, 13, 25], and algebraic monoids.
The paper is organized as follows. In Section 2, we recall some notations and background knowledge which will be useful to us. In Section 3, building on Barot and
Marsh’s work, we further study inner by diagram automorphisms of irreducible Weyl
groups (Theorem 3.7) and cellular basis of group algebras of irreducible Weyl groups. In
Section 4, we state our main results, Theorem 4.7 and Proposition 4.8, which show that
presentations of Boolean reflection monoids are compatible with mutations of ∆ quivers.
We recover the presentations of Boolean reflection monoids given by Everitt and Fountain and the presentations of symmetric inverse semigroups given by Popova. Moreover,
we characterize inner by diagram automorphisms of Boolean reflection monoids by the
method of mutations, Theorem 4.11. Furthermore, we study the cellularity of semigroup
algebras of Boolean reflection monoids and give new cellular bases of such cellular algebras. In Section 5, we consider the way of mutations of ∆ quivers and the oriented cycles
appearing in them. In Section 6, we find an efficient subset of the relations sufficient to
define the inverse monoid M (Q). The last section, Section 7, we prove our main result,
Theorem 4.7.
2. Preliminaries
2.1. Mutation of quivers. Let Q be a quiver with finitely many vertices and finitely
many arrows that have no loops or oriented 2-cycles. Given a quiver Q, let I be the
set of its vertices and Qop its opposite quiver with the same set of vertices but with the
reversed orientation for all the arrows. If there are q arrows pointing from a vertex i to
a vertex j, then we draw an arrow from i to j with a weight wij = q. We will frequently
draw an arrow with no label if wij = 1.
For each mutable vertex k of Q, one can define a mutation of Q at k, due to Fomin and
Zelevinsky [21]. This produces a new quiver denoted by µk (Q) which can be obtained
from Q in the following way:
(i) The orientations of all edges incident to k are reversed and their weights intact.
(ii) For any vertices i and j which are connected in Q via a two-edge oriented path
going through k, the quiver mutation µk affects the edge connecting i and j in
the way shown in Figure 1, where the weights c and c′ are related by
√
√
√
± c ± c′ = ab,
√
√
where the sign before c (resp., c′ ) is “+” if i, j, k form an oriented cycle in
Q (resp., in µk (Q)), and is “−” otherwise. Here either c or c′ may be equal to 0,
which means no arrows between i and j.
Figure 1. Quiver mutation: the local picture on vertices i, k, j, with arrow weights a (between i and k) and b (between k and j), and an edge of weight c between i and j in Q; in µ_k(Q) the arrows incident to k are reversed and the edge between i and j has weight c′.
(iii) The rest of the edges and their weights in Q remain unchanged.
Two quivers Q1 and Q2 are said to be mutation equivalent if there exists a finite
sequence of mutations taking one to the other. We write Q1 ∼mut Q2 to indicate that
Q1 is mutation equivalent to Q2 .
The underlying diagram of a quiver Q is the undirected diagram obtained from Q by
forgetting the orientation of all the arrows. We call a quiver Q connected if its underlying
diagram is connected (every vertex is reachable from every other). It is obvious that Dynkin quivers are
connected quivers. It was shown in [21, Theorem 1.4] that there are only finitely many
quivers in the mutation classes of Dynkin quivers. We call a cycle in the underlying
diagram of a quiver a chordless cycle if no two vertices of the cycle are connected by an
edge that does not belong to the cycle itself. As shown in [21, Proposition 9.7] (or see [2, Proposition 2.1]),
all chordless cycles are oriented in the mutation classes of Dynkin quivers.
2.2. Cellular algebras and cellular semigroups. Let us first recall the basic definition of cellular algebras introduced by Graham and Lehrer [24].
Let R be a commutative ring with identity.
Definition 2.1. An associative R-algebra A is called a cellular algebra with cell datum
(Λ, M, C, i) if the following conditions are satisfied:
(C1) Λ is a finite partially ordered set. Associated with each λ ∈ Λ there is a finite set
M (λ) of indices, and there exists an R-basis {C^λ_{S,T} | λ ∈ Λ; S, T ∈ M (λ)} of A;
(C2) i is an R-linear anti-automorphism of A with i^2 = id_A , which sends C^λ_{S,T} to C^λ_{T,S} ;
(C3) For each λ ∈ Λ, S, T ∈ M (λ), and each a ∈ A,
aC^λ_{S,T} ≡ Σ_{S′∈M (λ)} r_a (S′, S) C^λ_{S′,T} (mod A(< λ)),
where r_a (S′, S) ∈ R is independent of T and A(< λ) is the R-submodule of A
generated by {C^μ_{S′′,T′′} | μ < λ, S′′, T′′ ∈ M (μ)}.
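As a toy illustration, which is ours and not taken from [24], consider the two-dimensional algebra A = R[x]/(x^2). Take Λ = {1 < 2}, M (1) = M (2) = {∗}, C^1_{∗,∗} = x, C^2_{∗,∗} = 1, and i = id_A (an anti-automorphism since A is commutative). Then A(< 1) = 0, A(< 2) = Rx, and for a = α + βx ∈ A we have aC^1_{∗,∗} = αx ≡ αC^1_{∗,∗} (mod A(< 1)) and aC^2_{∗,∗} = α + βx ≡ αC^2_{∗,∗} (mod A(< 2)), so (C1)–(C3) hold and A is cellular with this datum.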
Cellular algebras provide a general framework for studying the representation theory
of many important classes of algebras including Hecke algebras of finite type, q-Schur
algebras, the Brauer algebra, the Temperley-Lieb algebras and partition algebras, see
[22–24, 42, 43].
Recently, the cellularity of semigroup algebras has been investigated by East [8], Wilcox [41],
Guo and Xi [26], and Ji and Luo [32]. In the following, we shall recall some
basic notions and facts from the theory of semigroups.
Let S be a semigroup. For any a, b ∈ S, define
a L b ⇔ S 1 a = S 1 b,
a R b ⇔ aS 1 = bS 1 ,
a J b ⇔ S 1 aS 1 = S 1 bS 1 ,
H = L ∩ R and D = L ∨ R = L ◦ R = R ◦ L, where S 1 is the monoid obtained from S by
adding an identity if necessary. A semigroup S is said to be inverse if for each element
s ∈ S, there exists a unique inverse s−1 ∈ S such that ss−1 s = s and s−1 ss−1 = s−1 .
If S is a finite inverse semigroup, then D = J . For any s, t ∈ S, define Ds ≤ Dt if and
only if s ∈ S 1 tS 1 .
Let S be an inverse semigroup with the set E(S) of idempotents. Let D be a D-class
of S. Suppose that e1 , . . . , ek ∈ D ∩ E(S). Choose a1 = e1 , . . . , ak ∈ Le1 such that
aj R ej for each j. Then D = Re1 ∪ Ra2 ∪ · · · ∪ Rak . Put HD = He1 and by Green’s
Lemma, for each x ∈ D we have x = ai g aj^{−1} for unique 1 ≤ i, j ≤ k and g ∈ HD . Using
East’s notation in [8], let [ei , ej , g]D denote the element x. Then x−1 = [ej , ei , g−1 ]D . For
more detailed background on semigroups, the reader is referred to [8, 29].
A semigroup S is said to be cellular if its semigroup algebra R[S] is a cellular algebra.
In [8], East proved the following theorem:
Theorem 2.2. [8, Theorem 15] Let S be a finite inverse semigroup with the set E(S)
of idempotents. If S satisfies the following conditions:
(1) For each D-class D, the subgroup HD is cellular with cell datum (ΛD , MD , CD , iD );
(2) The map i : R[S] → R[S] sending [e, f, g]D to [f, e, iD (g)]D is an R-linear anti-homomorphism.
Then S is a cellular semigroup with cell datum (Λ, M, C, i), where Λ = {(D, λ) | D ∈
S/D, λ ∈ ΛD } with partial order defined by (D, λ) ≤ (D′, λ′) if D < D′ or D =
D′ and λ ≤ λ′ , M (D, λ) = {(e, s) | e ∈ E(S) ∩ D, s ∈ MD (λ)} for (D, λ) ∈ Λ, and
C = {C^{(D,λ)}_{(e,s),(f,t)} = [e, f, C^λ_{s,t}]D | (D, λ) ∈ Λ; (e, s), (f, t) ∈ M (D, λ)}.
By the definition of cellular algebras, we have the following corollary.
Corollary 2.3. Suppose that A is a cellular algebra over R with cell datum (Λ, M, C, i)
and ϕ is an R-linear automorphism of A. Let C̄^λ_{S,T} = ϕ(C^λ_{S,T}) for any λ ∈ Λ, (S, T ) ∈
M (λ) × M (λ), and ī = ϕiϕ−1 . Then (Λ, M, C̄, ī) is a cellular basis of A.
Proof. Since {C^λ_{S,T} | λ ∈ Λ, (S, T ) ∈ M (λ) × M (λ)} is an R-basis of A and ϕ is an
R-linear automorphism of A, {C̄^λ_{S,T} | λ ∈ Λ, (S, T ) ∈ M (λ) × M (λ)} is also an R-basis
of A. It follows from the definition of ī that ī^2 = id_A and ī(C̄^λ_{S,T}) = C̄^λ_{T,S} .
For each λ ∈ Λ, S, T ∈ M (λ), and each a ∈ A,
aC^λ_{S,T} ≡ Σ_{S′∈M (λ)} r_a (S′, S) C^λ_{S′,T} (mod A(< λ)),
where r_a (S′, S) ∈ R is independent of T and A(< λ) is the R-submodule of A generated
by {C^μ_{S′′,T′′} | μ < λ, S′′, T′′ ∈ M (μ)}. For each a ∈ A, there exists b ∈ A such that
a = ϕ(b). Then
aC̄^λ_{S,T} = ϕ(b)ϕ(C^λ_{S,T}) = ϕ(bC^λ_{S,T}) ≡ Σ_{S′∈M (λ)} r_b (S′, S) ϕ(C^λ_{S′,T}) ≡ Σ_{S′∈M (λ)} r_b (S′, S) C̄^λ_{S′,T} (mod ϕ(A(< λ))),
where r_b (S′, S) ∈ R is independent of T and ϕ(A(< λ)) is the R-submodule of A generated by {C̄^μ_{S′′,T′′} | μ < λ, S′′, T′′ ∈ M (μ)}. Therefore (Λ, M, C̄, ī) is a cellular basis of A,
as required.
3. Some new results of irreducible Weyl groups
Let V be a Euclidean space with a standard orthonormal basis {v1 , v2 , . . . , vn }. Let
Φ ⊆ V be a root system and Π the set of simple roots in Φ. For each αi ∈ Π, the
associated simple reflection is si . Then the finite irreducible Weyl group W (Φ) can be
generated by S = {si | αi ∈ Π} and the number of reflections in W (Φ) is equal to the
number of positive roots in Φ. We refer the reader to [3, 19, 28, 35] for more information
about Weyl groups, root systems and reflection groups.
3.1. Barot and Marsh’s results. It is well known that the finite irreducible crystallographic reflection groups, that is, the irreducible Weyl groups, are classified by Dynkin
diagrams, see [28]. Let Γ be a Dynkin diagram and I the set of its vertices. Let WΓ be
the finite irreducible Weyl group determined by Γ. We say that a Γ quiver is a quiver
whose underlying diagram is Γ. In [2], Barot and Marsh gave presentations of WΓ . The
construction works as follows: Let Q be a quiver mutation equivalent to a Γ quiver.
Barot and Marsh defined an inward mutation at vertex k as follows:
ti = sk si sk if there is an arrow i → k in Q (possibly weighted), and ti = si otherwise. (3.1)
For two vertices i, j of Q, one defines
mij = 2 if i and j are not connected; 3 if i and j are connected by an edge with weight 1; 4 if i and j are connected by an edge with weight 2; 6 if i and j are connected by an edge with weight 3. (3.2)
Definition 3.1. Let W (Q) be the group with generators si , i ∈ I, subjecting to the
following relations:
(1) s2i = e for all i;
(2) (si sj )mij = e for all i ≠ j;
(3) For any chordless cycle C in Q:
i0 −ω1→ i1 −ω2→ · · · −ωd−1→ id−1 −ω0→ i0 ,
where either all of the weights are 1, or ω0 = 2, we have:
(si0 si1 · · · sid−2 sid−1 sid−2 · · · si1 )^2 = e;
where e is the identity element of W (Q).
One of Barot and Marsh’s main results in [2] is stated as follows.
Theorem 3.2 ([2, Theorem A]). The group W (Q) does not depend on the choice of a
quiver in the mutation class of Q. In particular, W (Q) ≅ WΓ for each quiver Q mutation
equivalent to a Γ quiver.
3.2. Inner by diagram automorphisms of irreducible Weyl groups. Let W be a
Coxeter group defined by a set S of generators and relations (1) and (2) of Definition 3.1.
We call the pair (W, S) a Coxeter system. In what follows, given any two Coxeter systems
(W, S1 ) and (W, S2 ), when we say that there exists an automorphism of W , we mean that there
is an automorphism α ∈ Aut(W ) such that α(S1 ) = S2 . If such an automorphism can
always be chosen from Inn(W ), the group of inner automorphisms of W , then W is
called strongly rigid. In case W is strongly rigid, the group Aut(W ) has a very simple
structure (see Corollary 3.2 of [5]):
Aut(W ) = Inn(W ) × Diag(W ),
where Diag(W ) consists of diagram automorphisms of the unique Coxeter diagram corresponding to W .
The following lemma is well known.
Lemma 3.3. Let W be a finite group generated by a finite set S of simple reflections.
Then the set of all reflections in W is {wsw−1 | w ∈ W, s ∈ S}.
In [1, Table I], Bannai computed the center Z(W (Φ)) of an irreducible Weyl group
W (Φ). The longest element w0 in W (Φ) is a central element of W (Φ) except for Φ =
An (n ≥ 2), D2k+1 (k ≥ 2), E6 .
The following important notion was introduced by Franzsen in [16].
Definition 3.4. [16, Definition 1.36] An inner by diagram automorphism is an automorphism generated by some inner and diagram automorphisms in Aut(W ). The subgroup
of inner automorphisms is a normal subgroup of Aut(W ), therefore any inner by diagram
automorphism can be written as the product of an inner and a diagram automorphism.
The following two lemmas collect together some facts from [16] which will be useful
later.
Lemma 3.5. [16, Proposition 2.4] If W is the Weyl group of type An , then Aut(W (An )) ≅
W (An ) if n ≠ 5, while Aut(W (A5 )) ≅ W (A5 ) ⋊ Z2 . So, for n ≠ 5, any automorphism
of the Weyl group of type An maps reflections to reflections; furthermore, any automorphism
of W (A5 ) that preserves reflections is inner.
Lemma 3.6. [16, Proposition 1.44, Propositions 2.8–2.10, Proposition 2.13] Let W (Φ)
be the Weyl group of type Φ.
(a) Any automorphism of W (Bn ) for n > 2 that preserves reflections must be
inner.
(b) For k ≥ 2, Aut(W (D2k+1 )) ≅ W (D2k+1 ), that is, all automorphisms of W (D2k+1 )
map reflections to reflections.
(c) All automorphisms of W (E6 ) or W (E7 ) are inner.
(d) All automorphisms of Weyl groups that preserve reflections are inner by diagram
automorphisms.
The following theorem reveals a connection between inner by diagram automorphisms
of irreducible Weyl groups and quiver mutations.
Theorem 3.7. Let Q be a Γ quiver and W (Q) the corresponding Weyl group generated
by a set S of simple reflections. Then α is an inner by diagram automorphism of W (Q)
if and only if there exists a sequence of mutations preserving the underlying diagram Γ
such that α(S) can be obtained from Q by mutations. In particular, all the reflections in
W (Q) can be obtained from Q by mutations.
Proof. By (3.1), every element obtained by mutations must be some reflection of the corresponding Weyl group. The sufficiency follows from the fact that all
automorphisms of Weyl groups that preserve reflections are inner by diagram automorphisms, see Lemmas 3.5 and 3.6.
To prove necessity, assume without loss of generality that the vertex set of Q is
{1, 2, . . . , n} and α is an inner by diagram automorphism of W (Q). Note that diagram
automorphisms of any Dynkin diagram keep the underlying Dynkin diagram. Then
relabelling the vertices of Q if necessary, there exists an inner automorphism α of W (Q)
such that α(S) = α(S). It is sufficient to prove that α(S) can be obtained from Q by
mutations and the sequence of mutations preserves the underlying diagram Γ.
Let g = si1 si2 · · · sir ∈ W (Q) be a reduced expression for g, where sik ∈ S, 1 ≤ k ≤ r.
We assume that α(S) = gSg −1 . In the following we shall use induction to prove that
gSg−1 can be obtained from Q by a sequence of mutations preserving the underlying
diagram Γ.
Step 1. We mutate firstly Q at the vertex i1 twice. Then we get a quiver Q1 , which
has the same underlying diagram with Q. Moreover, the set S becomes
si1 Ssi1 = {si1 s1 si1 , si1 s2 si1 , . . . , si1 sn−1 si1 , si1 sn si1 }.
We keep vertices of Q1 having the same label as vertices of Q.
Step 2. We then mutate Qt−1 , t = 2, 3, · · · , r at the vertex it twice. Note that the
variable corresponding to the vertex it of Qt−1 is si1 . . . sit−1 sit sit−1 . . . si1 . Then we get
a quiver Qt , which has the same underlying diagram with Q, Q1 , . . . , Qt−1 . Moreover,
the set si1 . . . sit−1 Ssit−1 . . . si1 of generators in Qt−1 becomes
si1 · · · sit−1 sit sit−1 · · · si1 (si1 · · · sit−1 Ssit−1 · · · si1 )si1 · · · sit−1 sit sit−1 · · · si1
= si1 · · · sit−1 sit Ssit sit−1 · · · si1 .
We keep vertices of Qt having the same label as vertices of Qt−1 .
Step 3. We repeat Step 2 until we get the quiver Qr .
By induction, it is not difficult to show that the set of generators in Qr is
si1 si2 · · · sir Ssir · · · si2 si1 = gSg −1 = α(S).
Finally, every reflection in W (Φ) is conjugate to a simple reflection by Lemma 3.3.
So we may assume that a reflection is of the form g si g−1 , where g = si1 si2 · · · sik ∈ W (Φ)
is a reduced expression with {i1 , i2 , . . . , ik } ⊆ {1, 2, . . . , n}. By the same
arguments as before, mutating the sequence i1 , i1 , i2 , i2 , . . . , ik , ik starting from Q, we
obtain the reflection g si g−1 .
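The following small computation is only an illustrative sanity check of the conjugation step used above, and is not part of the paper: it verifies for W (A2 ) ≅ S3 that conjugating the set of simple reflections by a group element again yields a set of reflections generating the whole group. Permutations are encoded as tuples on {0, 1, 2}, an encoding chosen only for this sketch.

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0, 1, 2} as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens):
    # closure of {identity} under right multiplication by the generators
    elems = {tuple(range(3))}
    changed = True
    while changed:
        changed = False
        for a in list(elems):
            for b in gens:
                c = compose(a, b)
                if c not in elems:
                    elems.add(c)
                    changed = True
    return elems

s1, s2 = (1, 0, 2), (0, 2, 1)        # simple reflections (transpositions) of S_3
g = compose(s1, s2)                  # an element g of W(A_2)
conj = [compose(compose(g, s), inverse(g)) for s in (s1, s2)]
assert all(sum(t[i] != i for i in range(3)) == 2 for t in conj)   # each conjugate is a reflection
assert len(generated(conj)) == 6                                  # and together they generate S_3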
Remark 3.8.
(1) In types An , Bn (n > 2), D2k+1 , E6 , E7 , and E8 , all inner by diagram automorphisms of the corresponding Weyl groups are inner automorphisms.
W (Φ) is strongly rigid for Φ = An (n ≠ 5), D2k+1 , E6 , E7 .
(2) If (W, S) is a Coxeter system of W , then for any inner by diagram automorphism
α of W , (W, α(S)) is also a Coxeter system of W .
(3) Suppose that Q is a quiver mutation equivalent to a Γ quiver. Let W (Q) be the
corresponding Weyl group with a set S of generators. Then Theorem 3.7 holds
for Q.
3.3. Cellular bases of group algebras of irreducible Weyl groups. In [23], Geck
proved that Hecke algebras of finite type are cellular algebras. Let ϕ be any inner by diagram
automorphism of an irreducible Weyl group, see Theorem 3.7. By Corollary 2.3, we can
obtain new cellular bases of group algebras of irreducible Weyl groups.
4. Main results of Boolean reflection monoids
Let Aεn−1 (respectively, Bnε , Dnε ) be the Dynkin diagram with n (respectively, n + 1,
n + 1) vertices, where the first n − 1 (respectively, n, n) vertices are mutable vertices and
the n-th (respectively, (n + 1)-th, (n + 1)-th) vertex ε is a frozen vertex, as shown in
Figure 2. A label is put on an edge only if its weight is greater than 1, and the edge
is left unlabelled if its weight is 1. We shall always assume that ∆ is one of Aεn−1 , Bnε
and Dnε .
A quiver is said to be a ∆ quiver if the underlying diagram of such a quiver is ∆.
[Figure 2 (diagrams omitted): Aεn−1 is the path 1 — 2 — 3 — · · · — (n − 1) — ε; Bnε is the path 0 — 1 — · · · — (n − 1) — ε whose edge between 0 and 1 has weight 2; Dnε has the two vertices 0 and 1 joined to 2, followed by the path 2 — 3 — · · · — (n − 1) — ε.]
Figure 2. Classical Dynkin diagrams Aεn−1 , Bnε and Dnε with a frozen
vertex ε.
4.1. Boolean reflection monoids. In 2010, Everitt and Fountain introduced reflection
monoids, see [11]. The Boolean reflection monoids are a family of reflection monoids
(symmetric inverse semigroups are Boolean reflection monoids of type A).
Let V be a Euclidean space with standard orthonormal basis {v1 , v2 , . . . , vn } and let
Φ ⊆ V be a root system and W (Φ) the associated Weyl group of type Φ. A partial linear
isomorphism of V is a vector space isomorphism Y → Z for two vector subspaces Y , Z of
V . Any partial linear isomorphism of V can be realized by restricting a full isomorphism
to some subspace. We will write gY for the partial isomorphism with domain Y and effect
that of restricting g to Y . We denote by M (V ) (respectively, GL(V )) the general linear
monoid (respectively, general linear group) on V consisting of partial linear isomorphisms
(respectively, linear isomorphisms) of V . In M (V ), gY = hZ if and only if Y = Z and
gh−1 is in the isotropy group GY = {g ∈ GL(V )|gv = v for all v ∈ Y }. Moreover,
gY hZ = (gh)Y ∩g−1 Z and (gY )−1 = (g −1 )gY .
Let us recall the notation “system” in V for a group G ⊆ GL(V ) introduced in [11].
Definition 4.1 ([11, Definition 2.1]). Let V be a real vector space and G ⊆ GL(V ) a
group. A collection S of subspaces of V is called a system in V for G if and only if
(1) V ∈ S,
(2) GS = S, that is, gX ∈ S for any g ∈ G and X ∈ S,
(3) if X, Y ∈ S, then X ∩ Y ∈ S.
For J ⊆ X = {1, 2, . . . , n}, let
X(J) = ⊕_{j∈J} Rvj ⊆ V,
and B = {X(J) : J ⊆ X} with X(∅) = 0. Then B is a Boolean system in V for
W (Φ), where Φ = An−1 , Bn /Cn , or Dn . For example, the Weyl group W (An−1 )-action
on the subspaces X(J) ∈ B is just g(π)X(J) = X(πJ), where g(π) ↦ π induces an
isomorphism between W (An−1 ) and the symmetric group Sn on the set X. Note that B
is not a system for any of the exceptional W (Φ).
Definition 4.2 ([11, Definition 2.2]). Let G ⊆ GL(V ) be a group and S the system in
V for G. The monoid of partial linear isomorphisms given by G and S is the submonoid
of M (V ) defined by
M (G, S) := {gX : g ∈ G, X ∈ S}.
If G is a reflection group, then M (G, S) is called a reflection monoid.
If G is the reflection group W (Φ) for Φ = An−1 , Bn /Cn , or Dn , and S is the Boolean
system in V for G, then M (G, S) is called a Boolean reflection monoid. In general, we
write M (Φ, B) instead of M (W (Φ), B), and call M (Φ, B) the Boolean reflection monoid
of type Φ.
Recall that Everitt and Fountain gave a presentation of the Boolean reflection monoid
M (Φ, B) for Φ = An−1 , Bn , or Dn in Section 4 of [12].
Lemma 4.3. Everitt and Fountain’s presentations of Boolean reflection monoids are
shown as follows:
M (An−1 , B) = ⟨s1 , s2 , . . . , sn−1 , sε | (si sj )^{mij} = e for 1 ≤ i, j ≤ n − 1,
sε^2 = sε , si sε = sε si for i ≠ 1,
sε s1 sε s1 = s1 sε s1 sε = sε s1 sε ⟩.
M (Bn , B) = ⟨s0 , s1 , . . . , sn−1 , sε | (si sj )^{mij} = e for 0 ≤ i, j ≤ n − 1,
sε^2 = sε , s0 s1 sε s1 = s1 sε s1 s0 , s0 sε = sε ,
si sε = sε si for i ≠ 1, s1 sε s1 sε = sε s1 sε s1 = sε s1 sε ⟩.
M (Dn , B) = ⟨s0 , s1 , . . . , sn−1 , sε | (si sj )^{mij} = e for 0 ≤ i, j ≤ n − 1,
sε^2 = sε , si sε = sε si for i > 1, sε s1 sε s1 = s1 sε s1 sε = sε s1 sε ,
s0 sε s0 = s1 sε s1 , s0 s2 s1 sε s1 s2 = s2 s1 sε s1 s2 s0 ⟩.
Here mij is defined in (3.2).
4.2. Inverse monoids determined by quivers. Let I ∪ {ε} be the set of vertices of
a quiver Q with a frozen vertex ε. For any i, j ∈ I and ε, define
mij = 2 if i and j are not connected; 3 if i and j are connected by an edge of weight 1; 4 if i and j are connected by an edge of weight 2; 6 if i and j are connected by an edge of weight 3;
mεj = 2 if ε and j are not connected; 3 if ε and j are connected by an edge of weight 1; 1 if ε and j are connected by an edge of weight 2;
mjε = 2 if ε and j are not connected; 4 if ε and j are connected by an edge of weight 1; 2 if ε and j are connected by an edge of weight 2.
Let mii = 1 for any i ∈ I ∪ {ε}. Then (mij )i,j∈I is a Coxeter matrix and (mij )i,j∈I∪{ε}
is a generalized Coxeter matrix, see [40]. To illustrate, generalized Coxeter matrices
corresponding to an Aεn−1 quiver and a Bnε quiver are, respectively, the n × n matrix and the (n + 1) × (n + 1) matrix with the following entries: all diagonal entries equal 1; each pair of adjacent mutable vertices contributes the entries 3, 3 (except that the weight-2 edge of Bnε contributes the entries 4, 4); the entries in positions (n − 1, ε) and (ε, n − 1) are 4 and 3, respectively; and all remaining entries equal 2.
Let (i1 , . . . , ε) be an ordered tuple such that the subquiver of Q on the vertices i1 , . . . , ε
contains only one underlying subdiagram [diagram omitted] or [diagram omitted] and does not contain an
underlying subdiagram [diagram omitted]. Such an ordered tuple (i1 , . . . , ε) is called a shortest path
tuple if it is the shortest path from i1 to ε in Q. For any shortest path tuple (i1 , . . . , ε),
we denote by P (si1 , sε ) the word si1 · · · sε .
Denote by e the identity element of an inverse monoid and denote by (aba . . .)m an
alternating product of m terms.
Definition 4.4. Let Q be any quiver mutation equivalent to a ∆ quiver. Define an
inverse monoid M (Q) with generators si , i ∈ I ∪ {ε} and relations:
(R1) s2i = e for i ∈ I, s2ε = sε ;
(R2) (si sj )mij = e for i, j ∈ I and (sε sj sε · · · )mεj = (sj sε sj · · · )mjε = (sε sj sε · · · )mεj +1
for any j ∈ I;
(R3) (i) for every chordless oriented cycle C in Q:
i0 −w1→ i1 −w2→ · · · → id−1 −w0→ i0 ,
where ij ∈ I for j = 0, 1, . . . , d − 1, and either all of the weights are 1, or w0 = 2,
we have:
(si0 si1 · · · sid−2 sid−1 sid−2 · · · si1 )^2 = e.
(ii) for every chordless oriented cycle C in Q:
ε −→ i1 −→ · · · → id−1 −→ ε,
where ij ∈ I for j = 1, . . . , d − 1, we have:
sε si1 · · · sid−2 sid−1 sid−2 · · · si1 = si1 · · · sid−2 sid−1 sid−2 · · · si1 sε .
(iii) for every chordless oriented cycle C in Q:
ε −w1→ i1 −→ i2 −w2→ ε,
where i1 , i2 ∈ I: if w1 = 1 and w2 = 2, we have sε si1 si2 si1 = si1 si2 si1 sε ; if
w1 = 2 and w2 = 1, we have si1 si2 sε si2 = si2 sε si2 si1 .
(R4) (path relations) for every underlying subdiagram of Q of the form shown in the
first column of Table 2, we take path relations listed in the second column of
Table 2.
Remark 4.5.
(1) In (R2), if mjε = 2 then mεj = 2. In this case the equation
(sε sj sε . . .)^{mεj} = (sj sε sj . . .)^{mjε} = (sε sj sε . . .)^{mεj +1} reduces to sε sj =
sj sε .
(2) For the relation (R3) (ii), though in this paper we only use the cases d = 3, 4, the
relation defined for arbitrary d is still meaningful, see our unpublished paper [7].
The following lemma is well known and easily verified.
Lemma 4.6. If two quivers Q1 and Q2 both have the same underlying diagram G and
G is a tree, then Q1 ∼mut Q2 .
It follows from the connectivity and finiteness of ∆ that any two ∆ quivers are
mutation equivalent.
[Table 2 (subdiagram pictures omitted). For each underlying subdiagram of Q of the listed forms containing the frozen vertex ε, the associated path relations are:
P (s0 , sε ) = P (si1 , sε ) and P (sε , s0 ) = P (sε , si1 ) (for the first two subdiagrams);
P (si2 , sε )P (sε , si2 ) = si3 si4 · · · sid P (si1 , sε )P (sε , si1 )sid · · · si4 si3 for d ≥ 4 (for the subdiagram containing a chordless cycle C);
P (si2 , sε )P (sε , si2 ) = P (si4 , sε )P (sε , si4 ) (for the next two subdiagrams); and
P (si2 , sε )P (sε , si2 ) = P (si3 , sε )P (sε , si3 ) (for the last subdiagram).]
Table 2. Path relations of underlying subdiagrams of Q, where C stands
for a chordless cycle.
Now we are ready for our main results in this section.
Theorem 4.7. Let ∆ ∈ {Aεn−1 , Bnε , Dnε } and Q0 be a ∆ quiver. If Q ∼mut Q0 then
M (Q) ≅ M (Q0 ).
We will prove Theorem 4.7 in Section 7. Up to the above isomorphism, we denote by
M (∆) the inverse monoid determined by any quiver appearing in the mutation class of
∆ quivers.
When we say that we mutate a sequence (n, n − 1, · · · , 2, 1) of vertices of a quiver we
mean that we first mutate the n-th vertex of the quiver, then we mutate the (n − 1)-th
vertex, and so on until the first vertex. The following proposition shows that Everitt
and Fountain’s presentations of Boolean reflection monoids can be obtained from any ∆
quiver by mutations.
Proposition 4.8. Let Φ = An−1 , Bn or Dn . Then M (Φ, B) = M (Φε ).
Proof. All ∆ quivers are mutation equivalent, so any ∆ quiver can be taken as the initial
quiver that we mutate.
We mutate the sequence (n − 1, n − 2, · · · , 2, 1) of vertices of the linearly oriented Aεn−1 quiver
1 −→ 2 −→ 3 −→ · · · −→ n − 1 −→ ε,
and we obtain a quiver Q1 with the same underlying diagram (diagram omitted).
Then by Definition 4.4,
M (Q1 ) = ⟨s1 , s2 , . . . , sn−1 , sε | (si sj )^{mij} = e for 1 ≤ i, j ≤ n − 1, sε^2 = sε ,
si sε = sε si for i ≠ 1, sε s1 sε s1 = s1 sε s1 sε = sε s1 sε ⟩,
where mij = 1 if i = j; 3 if |i − j| = 1; and 2 otherwise.
By Lemma 4.3, we deduce that M (Q1 ) = M (An−1 , B).
Mutating the sequence (n − 1, n − 2, · · · , 1, 0) of vertices of the Bnε quiver
0 −→ 1 −→ 2 −→ · · · −→ n − 1 −→ ε,
where the first arrow has weight 2, we obtain a quiver Q2 (diagram omitted).
Then by Definition 4.4,
M (Q2 ) = ⟨s0 , s1 , . . . , sn−1 , sε | (si sj )^{mij} = e for 0 ≤ i, j ≤ n − 1,
sε^2 = sε , s0 s1 sε s1 = s1 sε s1 s0 , s0 sε = sε s0 = sε ,
si sε = sε si for i ≠ 1, s1 sε s1 sε = sε s1 sε s1 = sε s1 sε ⟩,
where mij = 1 if i = j; 2 if i and j are not connected; 3 if i and j are connected by an
edge with weight 1; and 4 if i and j are connected by an edge with weight 2.
By Lemma 4.3, we have M (Q2 ) = M (Bn , B).
By mutating the sequence (n − 1, n − 2, · · · , 2, 0) of vertices of the Dnε quiver, in which
the two vertices 0 and 1 are joined to 2 and 2 −→ 3 −→ · · · −→ n − 1 −→ ε (diagram
omitted), we obtain a quiver Q3 (diagram omitted).
Then by Definition 4.4,
M (Q3 ) = ⟨s0 , s1 , . . . , sn−1 , sε | (si sj )^{mij} = e for 0 ≤ i, j ≤ n − 1, sε^2 = sε ,
si sε = sε si for i > 1, sε sj sε sj = sj sε sj sε = sε sj sε for j = 0, 1,
s0 sε s0 = s1 sε s1 , s0 s2 s1 sε s1 s2 = s2 s1 sε s1 s2 s0 ⟩,
where mij = 1 if i = j; 2 if i and j are not connected; and 3 if i and j are connected by
an edge with weight 1.
We claim that M (Q3 ) = M (Dn , B), which follows from Lemma 6.2 (3).
Suppose that a quiver Q is mutation equivalent to a ∆ quiver, then by Theorem 4.7
and Proposition 4.8, M (Q) gives a presentation of the Boolean reflection monoid M (∆).
In [11], Everitt and Fountain proved that the Boolean reflection monoid M (An−1 , B)
(respectively, M (Bn , B)) is isomorphic to the symmetric inverse semigroup In (respectively, the monoid I±n of partial signed permutations). Hence our results recover the
presentation of the symmetric inverse semigroup In defined in [10, 34]; that is, that
presentation is exactly the presentation of M (Q) for an Aεn−1 quiver Q.
The following example is given to explain Theorem 4.7.
Example 4.9. We start with an Aε3 quiver Q0 which is shown in Figure 3 (a). Let
Q1 = µ2 (Q0 ) be the quiver obtained from Q0 by a mutation at 2.
[Figure 3 (diagrams omitted): (a) the Aε3 quiver Q0 : 1 −→ 2 −→ ε; (b) the quiver Q1 = µ2 (Q0 ): 1 ←− 2 ←− ε.]
Figure 3. (a) An Aε3 quiver Q0 ; (b) the quiver Q1 = µ2 (Q0 ).
It follows from Definition 4.4 that
M (Q0 ) = ⟨s1 , s2 , sε | s1^2 = s2^2 = e, sε^2 = sε , s1 s2 s1 = s2 s1 s2 , s1 sε = sε s1 ,
s2 sε s2 sε = sε s2 sε s2 = sε s2 sε ⟩,
M (Q1 ) = ⟨t1 , t2 , tε | t1^2 = t2^2 = e, tε^2 = tε , t1 t2 t1 = t2 t1 t2 , t1 tε t1 tε = tε t1 tε t1 = tε t1 tε ,
t2 tε t2 tε = tε t2 tε t2 = tε t2 tε , tε t2 t1 t2 = t2 t1 t2 tε ⟩.
Then ϕ : M (Q0 ) → M (Q1 ) is an inverse monoid isomorphism defined by
ϕ(si ) = t2 ti t2 if i = 1, and ϕ(si ) = ti otherwise.
4.3. Inner by diagram automorphisms of Boolean reflection monoids. We first
consider inner automorphisms of Boolean reflection monoids. It is well known that for
any group G, Inn(G) ∼
= G/Z(G), where Inn(G) is the inner automorphism group of G,
Z(G) is the center of G. It has been shown in [33,39] that automorphisms of the Boolean
reflection monoid M (An−1 , B) are inner: for every automorphism α of M (An−1 , B), there
exists a uniquely determined element g ∈ W (An−1 ) of the Weyl group W (An−1 ) such
that α(t) = gtg −1 for all t ∈ M (An−1 , B). In other words, the automorphism group
of M (An−1 , B) is naturally isomorphic to W (An−1 )/Z(W (An−1 )) for n ≥ 3 and the
automorphism group of M (A1 , B) is naturally isomorphic to W (A1 ).
As a generalization of the above result, we have the following theorem.
Theorem 4.10. The inner automorphism group of M (Φ, B) is naturally isomorphic to
W (Φ)/Z(W (Φ)), where Φ = An−1 (n ≥ 3), Bn (n ≥ 2), Dn (n ≥ 4).
Proof. Let α be an inner automorphism of M (Φ, B). Since W (Φ) ⊆ M (Φ, B) is the
unique unit group of M (Φ, B), α|W (Φ) ∈ Inn(W (Φ)) = W (Φ)/Z(W (Φ)).
In the following we prove the cases of Φ = Bn (n ≥ 2), Dn (≥ 4). Let Φε be one of Bnε ,
and Dnε shown in Figure 2. Suppose that Λ = {s0 , s1 , . . . , sn−1 , sε } is a set of generators
of M (Φ, B). For any element g ∈ W (Φ), we claim that the set gΛg−1 is still a set of
generators of M (Φ, B). Firstly, it is obvious that (gsi g−1 )2 = e and (gsε g−1 )2 = gsε g−1 .
Next we will prove that gΛg−1 satisfies (R2)–(R4) in Definition 4.4.
Case 1. There is no edge between i and j. Then gsi g−1 gsj g−1 = gsi sj g−1 =
gsj si g−1 = gsj g −1 gsi g−1 .
Case 2. There is an edge labeled by 1 between i and j. Then
gsi g−1 gsj g−1 gsi g−1 = gsi sj si g−1 = gsj si sj g−1 = gsj g−1 gsi g −1 gsj g−1 .
Case 3. There is an edge labeled by 2 between i and j. Then
gsi g−1 gsj g−1 gsi g−1 gsj g−1 = gsi sj si sj g−1 = gsj si sj si g −1
= gsj g−1 gsi g−1 gsj g−1 gsi g−1 .
Case 4. There is no edge between i and ε. Then
gsi g−1 gsε g−1 = gsi sε g−1 = gsε si g−1
= gsε g−1 gsi g −1 .
Case 5. There is an edge labeled by 1 between i and ε. Then
gsi g−1 gsε g−1 gsi g−1 gsε g−1 = gsi sε si sε g−1
= gsε si sε si g−1 = gsε si sε g −1
= gsε g−1 gsi g−1 gsε g−1 gsi g −1
= gsε g−1 gsi g−1 gsε g−1 .
Case 6. In type Bn ,
gs0 g −1 gs1 g −1 · · · gsε g−1 = gs0 s1 · · · sε g−1 = gs1 · · · sε g−1
= gs1 g−1 · · · gsε g−1 ,
gsε g−1 · · · gs1 g−1 gs0 g−1 = gsε · · · s1 s0 g−1 = gsε · · · s1 g−1
= gsε g−1 · · · gs1 g−1 .
Case 7. In type Dn ,
gs0 g−1 gs2 g−1 · · · gsε g −1 · · · gs2 g−1 gs0 g −1 = gs0 s2 · · · sε · · · s2 s0 g−1
= gs1 s2 · · · sε · · · s2 s1 g−1
= gs1 g−1 gs2 g−1 · · · gsε g−1 · · · gs2 g −1 gs1 g −1 .
Finally, we shall show that g1 sε g1−1 = g2 sε g2−1 for any g1 , g2 ∈ Z(W (Φ)). It suffices to
prove that sε = w0 sε w0 , where w0 is the longest element in W (Φ) and w0 is an involution.
In Section 1.2 of [16], we have w0 = wn wn−1 · · · w1 , where wi = si−1 · · · s1 s0 s1 · · · si−1 in
type Φ = Bn , and wi = si−1 · · · s3 s2 s1 s0 s2 s3 · · · si−1 for i ≥ 3 and w1 = s0 , w2 = s1 in
type Φ = Dn . Then by (R1), (R2), and (R4) of Definition 4.4,
w0 sε w0 = wn wn−1 · · · w1 sε w1 · · · wn−1 wn = wn sε wn
= sn−1 · · · s1 (s0 s1 · · · sn−1 sε sn−1 · · · s1 s0 )s1 · · · sn−1 = sε in type Bn ,
and
= sn−1 · · · s3 s2 s1 (s0 s2 s3 · · · sn−1 sε sn−1 · · · s3 s2 s0 )s1 s2 s3 · · · sn−1 = sε in type Dn .
Therefore the inner automorphism group of M (Φ, B) is isomorphic to W (Φ)/Z(W (Φ))
for Φ = An−1 (n ≥ 3), Bn (n ≥ 2), Dn (n ≥ 4).
Let ∆ be one of Aεn−1 , Bnε , and Dnε shown in Figure 2. Let Q be a ∆ quiver. Let
I ∪ {ε} be the set of vertices of Q and Q′ = µk (Q) the quiver obtained by a mutation of
Q at a mutable vertex k. Following Barot and Marsh’s work [2], one can define variables
ti for i ∈ I, and tε in M (Q′ ) as follows:
ti = sk si sk if there is an arrow i → k in Q (possibly weighted), and ti = si otherwise;
tε = sk sε sk if there is an arrow ε → k in Q (possibly weighted), and tε = sε otherwise. (4.1)
From Lemma 3.3 and Equation (4.1), it follows that new elements ti , i ∈ I, appearing
in the procedure of mutations of quivers, must be some reflections of Weyl groups.
By our Theorem 4.7 and Proposition 4.8, up to isomorphism, Boolean reflection
monoids are encoded by their (generalized) Coxeter diagrams, see Figure 2.
In the following theorem, we show that inner by diagram automorphisms of Boolean
reflection monoids can be constructed by a sequence of mutations preserving the same
underlying diagrams.
Theorem 4.11. Let Q be a ∆ quiver and M (Q) the corresponding Boolean reflection
monoid generated by a set S consisting of simple reflections and sε . Then α is an inner
by diagram automorphism of M (Q) if and only if there exists a sequence of mutations
preserving the underlying diagram ∆ such that α(S) can be obtained from Q by mutations. In particular, all reflections in W (Q\{ε}) and gsε g −1 for g ∈ W (Q\{ε}) can be
obtained from Q by mutations.
Proof. Let ∆ be one of Aεn−1 , Bnε , and Dnε shown in Figure 2. Suppose that S =
{s1 , s2 , . . . , sn−1 , sε } for ∆ = Aεn−1 or S = {s0 , s1 , . . . , sn−1 , sε } for ∆ = Bnε , Dnε .
In the case of type An , all automorphisms of M (Φ, B) are inner, see [33, 39]. A
sequence µ of mutations preserving the underlying diagram of Q induces an inner
automorphism of M (Φ, B).
All automorphisms of Weyl groups that preserve reflections are inner by diagram automorphisms, see Lemmas 3.5 and 3.6. So we assume without loss of generality that
M (µ(Q)) = ⟨t0 , t1 , . . . , tn−1 , tε ⟩, where ti = gsi g−1 for 0 ≤ i ≤ n − 1, g ∈ W (Bn )
(respectively, g ∈ W (Dn )). We claim that tε = gsε g −1 . Firstly, if tε = gsε g−1 ,
then {t0 , t1 , . . . , tn−1 , tε } is a set of generators of M (µ(Q)) and the generalized Coxeter diagram corresponding to {t0 , t1 , . . . , tn−1 , tε } preserves the underlying diagram
∆. Since tε ti = ti tε for 0 ≤ i ≤ n − 2, we have tε ∈ Z(W (Bn−1 )) (respectively,
tε ∈ Z(W (Dn−1 ))), where W (Bn−1 ) = ⟨tε t0 , tε t1 , . . . , tε tn−2 ⟩ (respectively, W (Dn−1 ) =
⟨tε t0 , tε t1 , . . . , tε tn−2 ⟩). The variable tε must be of the form g′ sε g′−1 for some g′ ∈ W (Bn )
(respectively, g′ ∈ W (Dn )). So tε is not the longest word w0 in W (Bn−1 ) (respectively,
W (Dn−1 )). Therefore tε must be the unique identity element in W (Bn−1 ) (respectively,
W (Dn−1 )) and hence tε = gsε g−1 .
Conversely, for each inner automorphism α of M (Q), by Theorem 4.10, there exists
an element g ∈ W (Q\{ε}) of the Weyl group W (Q\{ε}) ⊆ M (Q) such that α(t) = gtg −1
for all t ∈ M (Q). The remainder of the proof of the necessity is similar to the proof of the
necessity of Theorem 3.7.
Every reflection in W (Q\{ε}) is of the form gsi g−1 , where g = si1 si2 · · · sik ∈ W (Q\{ε})
is a reduced expression for g. By the same arguments as before, mutating the sequence
i1 , i1 , i2 , i2 , . . . , ik , ik starting from Q, we get gsi g−1 and gsε g−1 .
4.4. Cellularity of semigroup algebras of Boolean reflection monoids. In this
section, we show that semigroup algebras of Boolean reflection monoids are cellular
algebras. We use the presentations we obtained to construct new cellular bases of such
cellular algebras.
Let R be a commutative ring with identity. Recall that a semigroup S is said to be
cellular if its semigroup algebra R[S] is a cellular algebra.
Proposition 4.12. The Boolean reflection monoid M (Φ, B) for Φ = An−1 , Bn , or Dn
is a cellular semigroup.
Proof. All maximal subgroups of the Boolean reflection monoid M (Φ, B) are finite reflection groups. It has been shown in [23] that any finite reflection group W (Φ) is cellular
with a cell datum whose anti-involution is inversion. Therefore, for each D-class D of
M (Φ, B), the subgroup HD ⊆ M (Φ, B) is cellular with cell datum (ΛD , MD , CD , iD ), where iD is inversion,
which satisfies East’s first assumption, see Theorem 15 in [8] or Theorem 2.2.
We define a map
i : R[M (Φ, B)] → R[M (Φ, B)], Σ_j rj gj ↦ Σ_j rj gj^{−1} ,
where rj ∈ R, gj ∈ M (Φ, B). The map i is an R-linear anti-homomorphism and
i([e, f, g]D ) = ([e, f, g]D )−1 = [f, e, g−1 ]D = [f, e, iD (g)]D for any g ∈ HD , e D f in M (Φ, B).
From Theorem 19 in [8] or Theorem 2.2, it follows that the Boolean reflection monoid
M (Φ, B) is a cellular semigroup, as required.
Remark 4.13. The case of a finite inverse semigroup whose maximal subgroups are
direct products of symmetric groups has been considered by East, see Theorem 22 of [8].
The Boolean reflection monoid of type An−1 is isomorphic to the symmetric inverse semigroup In
of degree n. Maximal subgroups of the Boolean reflection monoid of type Bn are finite
reflection groups of type Br , r ≤ n, which are isomorphic to (Z2 × Z2 × · · · × Z2 ) ⋊ Sr .
Let ∆ be one of Aεn−1 , Bnε , and Dnε shown in Figure 2. For two quivers with the
same underlying diagram appearing in the mutation class of ∆ quivers, we can use
their presentations to construct an inner by diagram automorphism of the Boolean reflection
monoid, see Theorem 4.11, and then extend it to an R-linear automorphism of the semigroup
algebra of the Boolean reflection monoid. By Corollary 2.3, we obtain new cellular bases
of semigroup algebras of Boolean reflection monoids.
4.5. An example. Let In be the symmetric inverse semigroup on [n] = {1, 2, . . . , n}.
Let w be a partial permutation on a set A ⊆ [n] and denote the image of i ∈ A under
the map w by wi ; for i ∉ A we set wi = ∅. We denote w
by the sequence (w1 w2 . . . wn ). For example, (3 − 2) is the partial permutation with
domain {1, 3} and range {2, 3} under which 1 → 3, 3 → 2.
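For concreteness, the composition and inversion of such partial permutations can be modelled by finite dictionaries keyed by the domain; this small sketch and its encoding are our own and are not part of the paper.

def compose(w, v):
    # (w o v)(i) = w(v(i)), defined exactly where v(i) lies in the domain of w
    return {i: w[v[i]] for i in v if v[i] in w}

def inverse(w):
    return {j: i for i, j in w.items()}

w = {1: 3, 3: 2}                    # the partial permutation (3 - 2): 1 -> 3, 3 -> 2
print(compose(inverse(w), w))       # {1: 1, 3: 3}, the partial identity on {1, 3}
print(compose(w, inverse(w)))       # {3: 3, 2: 2}, the partial identity on {2, 3}

In particular, compose(w, compose(inverse(w), w)) returns w itself, matching the defining property s s^{−1} s = s of an inverse semigroup.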
The following example gives new cellular bases of R[I3 ] by the method of quiver
mutations.
Example 4.14. Let Q0 be the quiver in Example 4.9 and by the results of preceding
sections, M (Q0 ) ≅ I3 , a Boolean reflection monoid. We have M (Q0 )/D = {D0 < D1 <
D2 < D3 }, where each Di is the set of all elements of M (Q0 ) of rank i, and idempotents
in each Di are the partial identity permutation on i-subsets of [3].
Let A be an i-subset of [3]. As shown in Example 23 of [8], the H-class containing the
idempotent idA is the subgroup {x ∈ I3 | im(x) = dom(x) = A} ≅ S|A| . It is well known
that the group algebra R[Sn ] has cellular bases with respect to which the anti-involution
is inversion. Indeed, the Kazhdan–Lusztig basis and the Murphy basis both have this
property (see Example (1.2) of [24], Example (2.2) of [36] or Section 4 of [37]).
Take s1 = (2 1 3), s2 = (1 3 2), and sε = (1 2 −). By mutating Q0 , we obtain the
following isomorphic quivers (Theorems 4.10 and 4.11):
[Six quivers (a)–(f ) are shown (diagrams omitted); their vertices are labelled by the generators s1 , s2 , sε and by conjugated generators such as s1 s2 s1 , s2 s1 s2 , s2 sε s2 and s1 s2 sε s2 s1 .]
From Theorem 4.7 and Proposition 4.8, it follows that the inverse monoids determined
by quivers (a)–(f ) are isomorphic to the symmetric inverse semigroup I3 , respectively.
The presentation of I3 determined by the quiver (a) admits an initial cellular basis by
Theorem 19 of [8] or Theorem 2.2. We can construct an R-linear automorphism of
R[I3 ] using these presentations corresponding to quivers (a)–(f ), and then by Corollary
2.3, we obtain new cellular bases of R[I3 ].
5. Mutations of quivers of finite type
Throughout this section, let ∆ be, as before, one of Aεn−1 , Bnε , and Dnε in Figure 2. We
consider how ∆ quivers mutate and the oriented cycles appearing in their mutation classes;
we refer to [2, 21].
A quiver with no loops and no oriented 2-cycles is said to be of finite type if it is mutation
equivalent to a Dynkin quiver. A chordless cycle is a cycle such that no two vertices of
the cycle are connected by an edge that does not belong to the cycle itself. One can show [21, Proposition
9.7] (or [2, Proposition 2.1]) that all chordless cycles are oriented in the mutation classes
of Dynkin quivers.
We extend the results in [21] and [2, Corollary 2.3] to the case of ∆ quivers.
Lemma 5.1. Let Q be a ∆ quiver and k a mutable vertex of Q. Suppose that k has
two neighbouring vertices. Then the induced subquivers of Q containing vertex k and its
neighbours are shown in Figure 4. The effect of the mutation of Q at k is shown in each
case.
[Figure 4 (diagrams omitted) lists the possible induced subquivers of Q on a mutable vertex k and its two neighbours, in cases labelled (a)–(f ) and (a′ )–(g′ ), together with the effect of the mutation µk in each case.]
Figure 4. Subquivers of mutations of Q.
In a diagram, a vertex is said to be connected to another if there is an edge between
them. Let Q be a quiver mutation equivalent to a Dynkin quiver. In Lemma 2.4 of [2],
Barot and Marsh have described the way vertices in Q can be connected to a chordless
cycle: A vertex is connected to at most two vertices of a chordless cycle, and if it is
connected to two vertices, then the two vertices must be adjacent in the cycle.
The following lemma is a generalization of Barot and Marsh’s results [2, Lemma 2.5].
Lemma 5.2. Let Q′ = µk (Q) be the mutation of Q at vertex k. We list various types
of induced subquivers in Q and corresponding cycles in Q′ . Then every chordless cycle
in Q′ arises in such a way.
[Diagrams omitted: the various induced subquivers in Q and the corresponding chordless cycles in Q′ = µk (Q), in cases labelled (a), (a′ ), (b), (b′ ), . . . , (h′ ), (i′ ).]
(j’) The vertex k does not connect to an oriented chordless cycle C in Q. Then C is
the corresponding cycle in Q′ .
(k’) The vertex k connects to one vertex of an oriented chordless cycle C in Q (via
an edge of unspecified weight). Then C is the corresponding cycle in Q′ .
By Lemmas 5.1 and 5.2, we have the following corollary.
Corollary 5.3. Let Q be a quiver in the mutation class of a ∆ quiver. Then the frozen
vertex ε in Q has one neighbour or two neighbours and if it has two neighbours, then ε
must be in an oriented cycle.
6. Cycle relations and path relations
In this section, we find an efficient subset of the relations sufficient to define Boolean
reflection monoids, which generalizes Barot and Marsh’s results, Lemmas 4.1, 4.2, 4.4
and Proposition 4.6 in [2].
Lemma 6.1 ([2, Lemmas 4.1, 4.2 and 4.4]). Let Q be a Dynkin quiver and W (Q) the
reflection group determined by Q, see Section 3.
(1) If Q contains a chordless cycle Cd , see Figure 5 (1), d ≥ 3, then the following
are equivalent:
(a) (sa sa+1 · · · sa+d−1 sa+d−2 · · · sa+1 )2 = e (with subscripts modulo d) for a single fixed value of a, 0 ≤ a ≤ d − 1;
(b) (sa sa+1 · · · sa+d−1 sa+d−2 · · · sa+1 )2 = e (with subscripts modulo d) for any
a = 0, 1, · · · , d − 1.
(2) If Q contains a chordless 3-cycle C3 , see Figure 5 (2), then the following are
equivalent:
(a) (s1 s2 s3 s2 )2 = e;
(b) (s2 s3 s1 s3 )2 = e.
Furthermore, if one of the above holds, then the following holds:
(c) (s3 s1 s2 s1 )3 = e.
(3) If Q contains a chordless 4-cycle C4 , see Figure 5 (3), then the following are
equivalent:
(a) (s1 s2 s3 s4 s3 s2 )2 = e;
(b) (s3 s4 s1 s2 s1 s4 )2 = e.
Furthermore, if one of the above holds, then the following holds:
(c) (s2 s3 s4 s1 s4 s3 )3 = e;
(d) (s4 s1 s2 s3 s2 s1 )3 = e.
[Figure 5 (diagrams omitted).]
Figure 5. (1) A chordless d-cycle Cd , (2) a chordless 3-cycle C3 , and
(3) a chordless 4-cycle C4 .
Let ∆ be one of Aεn−1 , Bnε , and Dnε in Figure 2. Suppose that Q is any quiver mutation
equivalent to a ∆ quiver. The following lemma gives an efficient subset of the relations (R3)
and (R4) in Definition 4.4, and generalizes the above lemma.
Lemma 6.2. Let M (Q) be an inverse monoid with generators subject to the relations
(R1), (R2) in Definition 4.4.
(1) If Q contains a chordless cycle C3′ , see Figure 6 (1), then the following statements
are equivalent:
(a) sε s1 s2 s1 = s1 s2 s1 sε ;
(b) s1 s2 sε s2 = s2 sε s2 s1 .
Furthermore, if one of the above holds, then the following statements are equivalent:
(c) sε s1 sε = sε s1 sε s1 = s1 sε s1 sε ;
(d) sε s2 sε = sε s2 sε s2 = s2 sε s2 sε .
(2) If Q contains a chordless cycle C3′′ , see Figure 6 (2), then s1 s2 sε s2 = s2 sε s2 s1 .
(3) If Q contains a chordless cycle C4′ , see Figure 6 (3), then the following statements
are equivalent:
(a) sε s1 s2 s3 s2 s1 = s1 s2 s3 s2 s1 sε ;
(b) s1 s2 s3 sε s3 s2 = s2 s3 sε s3 s2 s1 .
Furthermore, if one of the above holds, then the following statements are equivalent:
(c) sε s1 sε = sε s1 sε s1 = s1 sε s1 sε ;
(d) sε s3 sε = sε s3 sε s3 = s3 sε s3 sε .
(4) If Q contains a subquiver Cd′ , see Figure 6 (4), then the following statements are
equivalent:
(a) sa sa+1 · · · sd P (s1 , sε )P (sε , s1 )sd · · · sa+1 sa = sa−1 sa−2 · · · s1 s1 P (s1 , sε )
P (sε , s1 )s1 s1 · · · sa−2 sa−1 for a single fixed value of a, 2 ≤ a ≤ d;
(b) sa sa+1 · · · sd P (s1 , sε )P (sε , s1 )sd · · · sa+1 sa = sa−1 sa−2 · · · s1 s1 P (s1 , sε )
P (sε , s1 )s1 s1 · · · sa−2 sa−1 for any a = 2, · · · , d.
[Figure 6 (diagrams omitted).]
Figure 6. (1) A chordless 3-cycle C3′ , (2) a chordless 3-cycle C3′′ , (3) a
chordless 4-cycle C4′ , and (4) a subquiver Cd′ (see Lemma 6.2).
Proof. For (1), the equivalence of (a) and (b) follows from:
s1 s2 sε s2 = s2 (s1 s2 s1 sε )s2 , s2 sε s2 s1 = s2 (sε s1 s2 s1 )s2 ,
using (R2). Suppose that (a) and (b) hold. Then by (R1), (R2), (a), and (b), the
equivalence of (c) and (d) follows from:
s1 s2 s1 (sε s1 sε )s1 s2 s1 = sε s1 s2 s1 s1 s1 s2 s1 sε = sε s2 sε ,
s1 s2 s1 (sε s1 sε s1 )s1 s2 s1 = sε (s1 s2 sε s2 )s1 = sε s2 sε s2 ,
s1 s2 s1 (s1 sε s1 sε )s1 s2 s1 = s1 (s2 sε s2 s1 )sε = s2 sε s2 sε .
For (3), the equivalence of (a) and (b) follows from:
s1 s2 s3 sε s3 s2 = s2 s3 (s1 s2 s3 s2 s1 sε )s3 s2 , s2 s3 sε s3 s2 s1 = s2 s3 (sε s1 s2 s3 s2 s1 )s3 s2 ,
using first s1 s2 s3 s2 s1 = s3 s2 s1 s2 s3 and then (R1). Suppose that (a) and (b) hold. Using
first s2 sε = sε s2 and then (a), we have:
s1 s2 s3 s2 s1 s2 (sε s1 sε )s2 s1 s2 s3 s2 s1 = (s1 s2 s3 s2 s1 sε )s2 s1 s2 (sε s1 s2 s3 s2 s1 )
= (sε s1 s2 s3 s2 s1 )s2 s1 s2 (s1 s2 s3 s2 s1 sε )
= sε s1 s2 s3 s2 s3 s2 s1 sε
= sε s3 sε ,
(by (R2))
where in the last equation we used that s1 and s3 commute. Using (R1), (R2), (a), and
(b), by a similar argument, we have
s1 s2 s3 s2 s1 s2 (s1 sε s1 sε )s2 s1 s2 s3 s2 s1 = s1 s2 s3 s1 s2 sε s1 s2 s1 s2 s3 s2 s1 sε ,
= s1 s2 s1 s3 sε s1 s3 s2 s1 sε ,
= s2 (s1 s2 s3 sε s3 s2 )s1 s2 sε ,
= s2 (s2 s3 sε s3 s2 s1 )s1 s2 sε ,
= s3 sε s3 sε .
s1 s2 s3 s2 s1 s2 (sε s1 sε s1 )s2 s1 s2 s3 s2 s1 = (s1 s2 s3 s2 s1 sε )s2 s1 sε s2 s1 s3 s2 s1
= sε s1 s2 s3 s2 s1 s2 s1 s2 sε s3 s1 s2 s1
= sε s2 (s1 s2 s3 sε s3 s2 )s1 s2
= sε s2 (s2 s3 sε s3 s2 s1 )s1 s2
= sε s3 sε s3 .
Therefore (c) and (d) are equivalent.
For (4), using (R1), it is obvious.
At the end of this section, we show that M (Q) can be defined using only the underlying unoriented weighted diagram of Q, by taking the relations (R1)–(R4) corresponding to
both Q and Qop as the defining relations. Our result can be viewed as a generalization
of Proposition 4.6 of [2].
Proposition 6.3. Let M (Φ, B) be a Boolean reflection monoid with generators si , i ∈
I ∪ {ε}. Then the generators satisfy (R1)–(R4) with respect to Q if and only if they
satisfy (R1)–(R4) with respect to Qop .
Proof. We assume that generators si , i ∈ I ∪ {ε} satisfy relations (R1)–(R4) with respect
to Q, and show that these generators satisfy relations (R1)–(R4) with respect to Qop .
The converse follows by replacing Q with Qop . Since (R1) and (R2) do not depend on the
orientation of Q, the generators si , i ∈ I ∪ {ε}, satisfy relations (R1) and (R2) with respect to
Qop . The cases of chordless cycles appearing in quivers of finite type have been proved
in Proposition 4.6 of [2]. The remaining cases that need to be checked are C3′ , C3′′ , C4′ , and
Cd′ shown in Figure 6.
Case 1. In C3′ , we have
sε s2 s1 s2 = sε s1 s2 s1 = s1 s2 s1 sε = s2 s1 s2 sε ,
s2 s1 sε s1 = s1 s2 (s1 s2 sε s2 )s2 s1 = s1 s2 (s2 sε s2 s1 )s2 s1 = s1 sε s1 s2 .
Case 2. In C3′′ , we have
sε s2 s1 s2 = s2 s1 (s1 s2 sε s2 )s1 s2 = s2 s1 (s2 sε s2 s1 )s1 s2 = s2 s1 s2 sε .
Case 3. In C4′ , note that s1 s2 s3 s2 s1 = s3 s2 s1 s2 s3 . We have
sε s3 s2 s1 s2 s3 = sε s1 s2 s3 s2 s1 = s1 s2 s3 s2 s1 sε = s3 s2 s1 s2 s3 sε ,
s3 s2 s1 sε s1 s2 = s2 s1 s3 s2 (s1 s2 s3 sε s3 s2 )s2 s3 s1 s2 = s2 s1 s3 s2 (s2 s3 sε s3 s2 s1 )s2 s3 s1 s2
= s2 s1 sε s1 s2 s3 .
Case 4. In Cd′ , it follows from Lemma 6.2 (4) that (R4) does not depend on the
orientation of chordless cycles in Cd′ .
Since every chordless cycle in Qop corresponds to a chordless cycle in Q, the result
holds.
7. The proof of Theorem 4.7
In this section, we give the proof of Theorem 4.7.
Let ∆ be one of Aεn−1 , Bnε , and Dnε in Figure 2. We fix a ∆ quiver Q. Let Q′ = µk (Q)
be the mutation of Q at vertex k, k ∈ I. Throughout the section, we will write si and ri
for the generators corresponding to vertex i ∈ I ∪ {ε} of M (Q) and M (Q′ ) respectively.
Similar to [2], we define elements ti , i ∈ I, and tε in M (Q) as follows:
ti = sk si sk if there is an arrow i → k in Q (possibly weighted), and ti = si otherwise;
tε = sk sε sk if there is an arrow ε → k in Q (possibly weighted), and tε = sε otherwise. (7.1)
Then
ti^2 = (sk si sk )(sk si sk ) = e if there is an arrow i → k in Q (possibly weighted), and ti^2 = si^2 = e otherwise;
tε^2 = (sk sε sk )(sk sε sk ) = tε if there is an arrow ε → k in Q (possibly weighted), and tε^2 = sε^2 = tε otherwise. (7.2)
In order to prove Theorem 4.7, we need the following proposition, which we will prove
in Section 7.2.
Proposition 7.1. For each i ∈ I ∪ {ε}, the map
Φ : M (Q′ ) −→ M (Q)
ri ↦ ti ,
is an inverse monoid homomorphism.
7.1. Proof of Theorem 4.7. For each vertex i ∈ I ∪ {ε} of Q define the elements t′i in
M (Q′ ) as follows:
t′i = rk ri rk if there is an arrow k → i in Q′ , and t′i = ri otherwise;
t′ε = rk rε rk if there is an arrow k → ε in Q′ , and t′ε = rε otherwise.
We claim that these elements t′i , for each vertex i ∈ I ∪ {ε}, satisfy the relations (R1)–
(R4) defining M (Q). This follows from Proposition 7.1 by interchanging Q and Q′ and
using the fact that the definition of M (Q) is unchanged under reversing the orientation of
all the arrows in Q (see Proposition 6.3). Therefore there is an inverse monoid homomorphism
Θ : M (Q) → M (Q′ ) such that Θ(si ) = t′i for each i.
If there is no arrow i → k in Q, then there is also no arrow k → i in Q′ and consequently
Θ ◦ Φ(ri ) = Θ(si ) = ri . If there is an arrow i → k in Q, then there is an arrow k → i
in Q′ and therefore Θ ◦ Φ(ri ) = Θ(sk si sk ) = Θ(sk )Θ(si )Θ(sk ) = rk (rk ri rk )rk = ri . So
Θ ◦ Φ = idM (Q′ ) , and, similarly, Φ ◦ Θ = idM (Q) , and hence Θ and Φ are isomorphisms.
7.2. The proof of Proposition 7.1. We will prove Proposition 7.1 by showing that
the elements ti , i ∈ I ∪ {ε} satisfy the (R1)–(R4) relations in M (Q′ ). We denote by m′ij
the value of mij for Q′ . By Equation (7.2), (R1) is obvious. In the sequel, the proof
that the elements ti , i ∈ I ∪ {ε}, satisfy (R2) in M (Q′ ) follows from Lemma 7.2, and the
rest of the proof is completed case by case.
Lemma 7.2. The elements ti , for i a vertex of Q, satisfy the following relations.
′
(1) If i = k or j = k and i, j 6= ε, then (ti tj )mij = e.
(2) If at most one of i, j is connected to k in Q (or, equivalently, in Q′ ) and i, j 6= ε,
′
then (ti tj )mij = e.
(3) Let i be in I. Then
ti tε = tε ti if ε, i are not connected in Q′ ;
tε ti tε = tε ti tε ti = ti tε ti tε if ε, i are connected by an edge with weight 1 in Q′ ;
tε = ti tε = tε ti if ε, i are connected by an edge with weight 2 in Q′ .
Proof. In Lemma 5.1 of [2], Barot and Marsh proved the parts (1) and (2). We only
need to prove the part (3).
Suppose without loss of generality that i = k. The only nontrivial case is when there
is an arrow ε → k = i with a weight q in Q. If q = 1, then
tε tk tε tk = (sk sε sk )sk (sk sε sk )sk = sk sε sk sε = sε sk sε sk = sk (sk sε sk )sk (sk sε sk ) = tk tε tk tε ,
= sε sk sε = sk sε sk sε sk = (sk sε sk )sk (sk sε sk ) = tε tk tε .
If q = 2, note that sε sk = sk sε = sε , we have tk tε = sk (sk sε sk ) = sε sk = sk sε = sε =
tε tk = tε .
In the following, suppose that i 6= k. We divide this proof into three cases.
Case 1. There are no arrows from i, ε to k, then ti = si , tε = sε hold (3).
Case 2. There are arrows from one of i, ε to k and there are no arrows from the
other of i, ε to k in Q, then we assume that there are arrows from ε to k and there are
no arrows from i to k in Q. If ε, i are not connected in Q, we have
ti tε = si (sk sε sk ) = (sk sε sk )si = tε ti .
If ε, i are connected by an edge with weight 1 in Q, then
tε ti tε = (sk sε sk )si (sk sε sk ) = sk sε si sε sk = sk sε si sε sk si = (sk sε sk )si (sk sε sk )si = tε ti tε ti
= si sk sε si sε sk = si (sk sε sk )si (sk sε sk ) = ti tε ti tε .
The case that ε, i are connected by an edge with weight 2 and there are no arrows from i to k in
Q is impossible, because only chordless 3-cycles occur in the mutation
class of Bnε quivers, and by Corollary 5.3.
Case 3. There are arrows from i, ε to k. The possibilities for the subquivers induced
by i, ε, and k are enumerated in (a’)–(g’) of Figure 4. We show that ti and tε satisfy (3)
by checking each case. Within each case, subcase (i) is when the subquiver of Q is the
diagram on the left, and subcase (ii) is when the subquiver of Q is the diagram on the
right.
(a′ )(i) We have ti tε = (sk si sk )(sk sε sk ) = sk si sε sk = sk sε si sk = (sk sε sk )(sk si sk ) =
tε ti .
(a′ )(ii) We have ti tε = si sε = sε si = tε ti .
(b′ ) (i) We have
tε ti tε = sε (sk si sk )sε = sε si sk si sε = si (sε sk sε )si
= si sε sk sε sk si = sε si sk si sε si sk si = sε sk si sk sε sk si sk = tε ti tε ti ,
= si sk sε sk sε si = si sk si sε si sk si sε = (sk si sk )sε (sk si sk )sε = ti tε ti tε .
(b′ ) (ii) We have ti tε = si (sk sε sk ) = sk (sk si sk sε )sk = sk (sε sk si sk )sk = (sk sε sk )si =
tε ti .
(c′ ) (i) We have
tε ti tε = (sk sε sk )si (sk sε sk ) = sk sε si sk si sε sk = sk si sε sk sε si sk
= sk si sε sk sε sk si sk = sk sε si sk si sε sk si = (sk sε sk )si (sk sε sk )si = tε ti tε ti ,
= sk si sk sε sk sε si sk = si sk sε si sk si sε sk = si (sk sε sk )si (sk sε sk ) = ti tε ti tε .
(c′ ) (ii) We have ti tε = (sk si sk )sε = si sk si sε = sε si sk si = sε sk si sk = tε ti .
(d′ ) (i) We have ti tε = (sk si sk )(sk sε sk ) = sk si sε sk = sk sε si sk = (sk sε sk )(sk si sk ) =
tε ti .
(d′ )(ii) We have ti tε = si sε = sε si = tε ti .
(e′ )(i) Note that si sk sε = sk sε and sε sk si = sε sk . We have
ti tε = (sk si sk )sε = sk (sk sε ) = sε = tε ,
tε ti = sε (sk si sk ) = (sε sk )sk = sε = tε .
(e′ )(ii) We have ti tε = si (sk sε sk ) = si sk (sε sk si sk )sk si = si sk (sk si sk sε )sk si = sk sε sk si =
tε ti .
(f ′ )(i) Note that si sk sε = sk sε and sε sk si = sε sk . We have
ti tε = si (sk sε sk ) = sk sε sk = tε ,
tε ti = (sk sε sk )si = sk sε sk = tε .
(f ′ )(ii) We have ti tε = (sk si sk )sε = sk (si sk sε sk )sk = sk (sk sε sk si )sk = sε sk si sk = tε ti .
(g′ )(i) Note that sk sε = sε sk = sε . We have
tε ti tε = (sk sε sk )si (sk sε sk ) = sε si sε
= si sε si sε = ti tε ti tε ,
= sε si sε si = tε ti tε ti .
(g′ )(ii) Note that sk sε = sε sk = sε . We have
tε ti tε = sε (sk si sk )sε
= sε si sε sk = sε si sε si sk = sε (sk si sk )sε (sk si sk ) = tε ti tε ti ,
= sk sε si sε = sk si sε si sε = (sk si sk )sε (sk si sk )sε = ti tε ti tε .
The possibilities for chordless cycles in mutation classes of ∆ quivers are enumerated
in Lemma 5.2. For (R3), Barot and Marsh proved in [2] that (R3) (i) holds for (a)–(e).
We show that (R3) (ii) and (R3) (iii) hold by checking (a′ )–(k′ ). In each case, we need to
check that the corresponding cycle relations hold. Within each case, subcase (i) is when
the subquiver of Q is the diagram on the left, and subcase (ii) is when the subquiver of
Q is the diagram on the right. In the sequel, we frequently use (R1) and (R2) without
comment.
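As a side remark (ours, not the authors'), identities of the kind verified case by case below can also be machine-checked once the generators are realized concretely, for instance as partial permutations in a symmetric inverse monoid. The Python sketch below only illustrates the bookkeeping; the dictionaries are placeholder maps on {0, 1, 2} and would have to be replaced by the actual generators si, sk, sε of the relevant Boolean reflection monoid.

def compose(p, q):
    """Composite partial map p after q (apply q first); partial maps as dicts."""
    return {x: p[q[x]] for x in q if q[x] in p}

def product(word, gens):
    """Product of the named generators; the rightmost letter acts first."""
    result = gens[word[0]]
    for name in word[1:]:
        result = compose(result, gens[name])
    return result

# Toy generators on {0, 1, 2}: two permutations and one partial identity.
# These are placeholders, NOT the generators s_i, s_k, s_eps of the paper.
gens = {
    "i": {0: 1, 1: 0, 2: 2},   # the transposition (0 1)
    "k": {0: 0, 1: 2, 2: 1},   # the transposition (1 2)
    "e": {1: 1, 2: 2},         # a partial identity, undefined at 0
}
t_i = product("kik", gens)     # mutated generators have the form s_k s_i s_k
t_e = product("kek", gens)
print(compose(t_i, t_i) == {0: 0, 1: 1, 2: 2})   # t_i is an involution: True
print(compose(t_e, t_e) == t_e)                  # t_e is an idempotent: True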
(a′ )(i) We have
tε tk ti tk = sε sk (sk si sk )sk = sε si = si sε = sk (sk si sk )sk sε = tk ti tk tε .
(b′ )(i)
We have
tε ti tk ti = (sk sε sk )si sk si = sk sε si sk = sk si sε sk = si sk si (sk sε sk ) = ti tk ti tε .
(c′ )(i) We have
tε tk ti tk = sε sk (sk si sk )sk = sε si = si sε = sk (sk si sk )sk sε = tk ti tk tε .
(d′ )(i) We have
ti tk tε tk = si sk (sk sε sk )sk = si sε = sε si = sk (sk sε sk )sk si = tk tε tk ti .
(e′ )(i) Note that sk sε = sε sk = sε and sk si sε si = si sε si sk . We have
tε ti tk ti = (sk sε sk )si sk si = sε si sk si = si sk (sk si sε si )sk si = si sk (si sε si sk )sk si
= si sk si sε = si sk si (sk sε sk ) = ti tk ti tε .
(e′ )(ii) Note that sk sε = sε sk = sε and sε si sk si = si sk si sε . We have
tk ti tε ti = sk (sk si sk )sε (sk si sk ) = si sε si sk = si (sε si sk si )si = si (si sk si sε )si
= sk si sε si = (sk si sk )sε (sk si sk )sk = ti tε ti tk .
(f ′ ) (i) Note that sj sε = sε sj and sε si sj sk sj si = si sj sk sj si sε . We have
tε tk tj tk = sε sk (sk sj sk )sk = sε sj = sj sε = sk (sk sj sk )sk sε = tk tj tk tε ,
tε ti tj ti = sε si (sk sj sk )si = sε si sj sk sj si = si sj sk sj si sε = si (sk sj sk )si sε = ti tj ti tε .
(f ′ ) (ii) Note that sk si = si sk and sε si sj si = si sj si sε . We have
tε ti tj tk tj ti = (sk sε sk )si sj sk sj si = sk sε si sk sj sk sj si = sk (sε si sj si )sk
= sk (si sj si sε )sk = si sj sk sj si (sk sε sk ) = ti tj tk tj ti tε .
(g′ ) (i) Note that sj sε = sε sj and sε sk sj si sj sk = sk sj si sj sk sε . We have
tε tj tk tj = (sk sε sk )sj sk sj = sk sε sj sk = sk sj sε sk = sj sk sj (sk sε sk ) = tj tk tj tε ,
tε tj ti tj = (sk sε sk )sj si sj = sj si sj sk sε sk = tj ti tj tε .
(g′ ) (ii) Note that sk si = si sk and sε sj si sj = sj si sj sε . We have
tε tk tj ti tj tk = sε sk (sk sj sk )si (sk sj sk )sk = sε sj sk si sk sj = sε sj si sj = sj si sj sε
= sk (sk sj sk )si (sk sj sk )sk sε = tk tj ti tj tk tε .
(h′ ) (i) Note that si sj = sj si , sε sk = sk sε , and sε sj sk si sk sj = sj sk si sk sj sε . We have
ti tk tj tk = si sk (sk sj sk )sk = si sj = sj si = sk (sk sj sk )sk si = tk tj tk ti ,
tε tj ti tj = sε (sk sj sk )si (sk sj sk ) = sk (sε sj sk si sk sj )sk = sk (sj sk si sk sj sε )sk
= (sk sj sk )si (sk sj sk )sε = tj ti tj tε .
(h′ ) (ii) Note that sε sj si sj = sj si sj sε . We have
tε tj tk ti tk tj = sε sj sk (sk si sk )sk sj = sε sj si sj = sj si sj sε
= sj sk (sk si sk )sk sj sε = tj tk ti tk tj tε .
Case (i′ ) follows from either Barot and Marsh's result or (a′ ) or (b′ ) or (h′ ). Case (j′ )
is trivial and Case (k′ ) follows from the commutative property of tk and ti for each
vertex i in C.
For (R4), by Lemma 6.2, we prove the following several cases, where in each case we
number the vertices 0, 1, . . ., d, ε of these subquivers for convenience. Within each case,
subcase (i) is when the subquiver of Q is the diagram on the left, and subcase (ii) is
when the subquiver of Q is the diagram on the right. In the sequel, we frequently use
(R1) and (R2) without comment.
(1) [Quiver diagrams, shown as a figure in the original: two quivers related by the mutation µk , with vertices labelled 0, 1, 2, . . . , k−1, k and ε (the vertex ε is marked •).]
(2) [Quiver diagrams, shown as a figure in the original: two quivers related by the mutation µ2 , with vertices labelled 0, 1, 2, 3, 4, . . . , d and ε (the vertex ε is marked •).]
(3) [Quiver diagrams, shown as a figure in the original: two quivers related by the mutation µ0 , with vertices labelled 0, 1, 2, 3, 4, . . . , d and ε (the vertex ε is marked •).]
(4) [Quiver diagrams, shown as a figure in the original: two quivers related by the mutation µk , with vertices labelled 0, 1, 2, . . . , k−1, k, k+1, . . . , d−1, d and ε (the vertex ε is marked •).]
(1) (i) Note that sk sk−1 sk = sk−1 sk sk−1 and sk−1 sε = sε sk−1 . We have
P (t0 , tε ) = t0 P (t1 , tk−1 )tε = s0 P (s1 , sk−1 )sk sk−1 sε
= s0 P (s1 , sk−1 )sk sε sk−1 = P (s1 , sk−1 )sk sε sk−1
= P (s1 , sk−1 )sk sk−1 sε = P (t1 , tε ),
P (tε , t1 ) = tε P (tk−1 , t1 ) = sε sk−1 sk P (sk−1 , s1 )
= sk−1 sε sk P (sk−1 , s1 ) = sk−1 sε sk P (sk−1 , s1 )s0
= sε sk−1 sk P (sk−1 , s1 )s0 = P (tε , t0 ).
(1) (ii) We have
P (t0 , tε ) = P (t0 , tk−1 )tk tε = P (s0 , sk−1 )sk (sk sε sk )
= P (s0 , sk−1 )sε sk = P (s1 , sk−1 )sk (sk sε sk )
= P (t1 , tε ),
P (tε , t1 ) = (sk sε sk )sk P (sk−1 , s1 ) = sk sε P (sk−1 , s1 )s0
= sk sε sk sk P (sk−1 , s1 )s0 = P (tε , t0 ).
(2) (i) We have
P (t0 , tε )P (tε , t0 ) = t0 t3 P (t4 , tε )P (tε , t4 )t3 t0
= s0 (s2 s3 s2 )P (s4 , sε )P (sε , s4 )(s2 s3 s2 )s0
= s0 s2 s3 P (s4 , sε )P (sε , s4 )s3 s2 s0
= s1 s2 s3 P (s4 , sε )P (sε , s4 )s3 s2 )s1
= s1 (s2 s3 s2 )P (s4 , sε )P (sε , s4 )(s2 s3 s2 )s1
= t1 t3 P (t4 , tε )P (tε , t4 )t3 t1
= P (t1 , tε )P (tε , t1 ).
(2) (ii) We have
P (t0 , tε )P (tε , t0 ) = t0 t2 P (t3 , tε )P (tε , t3 )t2 t0
= (s2 s0 s2 )s2 P (s3 , sε )P (sε , s3 )s2 (s2 s0 s2 )
= s2 P (s0 , sε )P (sε , s0 )s2
= s2 P (s1 , sε )P (sε , s1 )s2
= (s2 s1 s2 )s2 P (s3 , sε )P (sε , s3 )s2 (s2 s1 s2 )
= t1 t2 P (t3 , tε )P (tε , t3 )t2 t1
= P (t1 , tε )P (tε , t1 ).
(3) (i) We have
P (t0 , tε )P (tε , t0 ) = P (s0 , sε )P (sε , s0 ) = P (s2 , sε )P (sε , s2 ) = P (t2 , tε )P (tε , t2 ).
(3) (ii) We have
P (t2 , tε )P (tε , t2 ) = s2 (s0 s3 s0 )P (s4 , sε )P (sε , s4 )(s0 s3 s0 )s2 ,
= s2 (s0 s3 P (s4 , sε )P (sε , s4 )s3 s0 )s2
= P (s3 , sε )P (sε , s3 )
= s3 s0 P (s4 , sε )P (sε , s4 )s0 s3
= s0 (s0 s3 s0 )P (s4 , sε )P (sε , s4 )(s0 s3 s0 )s0 ,
= P (t0 , tε )P (tε , t0 ).
(4) (i) We have
t2 · · · tk−1 tk+1 · · · td P (t0 , tε )P (tε , t0 )td · · · tk+1 tk−1 · · · t2
= s2 · · · (sk sk−1 sk )sk+1 · · · sd P (t0 , tε )P (tε , t0 )sd · · · sk+1 (sk sk−1 sk ) · · · s2
= s2 · · · (sk−1 sk sk−1 )sk+1 · · · sd P (t0 , tε )P (tε , t0 )sd · · · sk+1 (sk−1 sk sk−1 ) · · · s2
= s2 · · · sk−1 sk sk+1 · · · sd P (t0 , tε )P (tε , t0 )sd · · · sk+1 sk sk−1 · · · s2
= s1 P (t0 , tε )P (tε , t0 )s1
= t1 P (t0 , tε )P (tε , t0 )t1 .
(4) (ii) We have
t2 · · · tk tk+1 · · · td P (t0 , tε )P (tε , t0 )td · · · tk+1 tk · · · t2
= s2 · · · sk (sk sk+1 sk ) · · · sd P (t0 , tε )P (tε , t0 )sd · · · (sk sk+1 sk )sk · · · s2
= s2 · · · sk+1 sk · · · sd P (t0 , tε )P (tε , t0 )sd · · · sk sk+1 · · · s2
= s2 · · · sk+1 · · · sd P (t0 , tε )P (tε , t0 )sd · · · sk+1 · · · s2
= s1 P (t0 , tε )P (tε , t0 )s1
= t1 P (t0 , tε )P (tε , t0 )t1 .
Acknowledgements
B. Duan would like to express his gratitude to B. Everitt, W. N. Franzsen, R. Schiffler,
C. C. Xi for helpful discussions. B. Duan was supported by China Scholarship Council to
visit Uconn Department of Mathematics and he would like to thank Uconn Department
of Mathematics for hospitality during his visit. This work was partially supported by
the National Natural Science Foundation of China (no. 11371177, 11501267, 11401275).
The research of J.-R. Li on this project is supported by the Minerva foundation with
funding from the Federal German Ministry for Education and Research.
References
[1] E. Bannai, Automorphisms of irreducible Weyl groups, J. Fac. Sci. Univ. Tokyo Sect. I 16 (1969),
273–286.
[2] M. Barot and R. J. Marsh, Reflection group presentations arising from cluster algebras, Trans. Amer.
Math. Soc. 367 (2015), no. 3, 1945–1967.
[3] N. Bourbaki, Lie groups and Lie algebras, Chapters 4–6, Springer-Verlag, Berlin, 2002.
[4] H. S. M. Coxeter, The complete enumeration of finite groups of the form ri2 = (ri rj )kij = 1, J.
London Math. Soc. s1-10 (1935), no. 1, 21–25.
[5] R. Charney and M. Davis, When is a Coxeter system determined by its Coxeter group?, J. London
Math. Soc. (2) 61 (2000), no. 2, 441–461.
[6] M. W. Davis, The geometry and topology of Coxeter groups, London Mathematical Society Monographs Series, vol. 32, Princeton University Press, Princeton, NJ, 2008.
[7] B. Duan, Presentations of monoids of uniform block permutations, ready (2018).
[8] J. East, Cellular algebras and inverse semigroups, J. Algebra 296 (2006), no. 2, 505–519.
[9] J. East, Braids and partial permutations, Adv. Math. 213 (2007), no. 1, 440–461.
[10] J. East, Generators and relations for partition monoids and algebras, J. Algebra 339 (2011), 1–26.
[11] B. Everitt and J. Fountain, Partial symmetry, reflection monoids and Coxeter groups, Adv. Math.
223 (2010), no. 5, 1782–1814.
[12] B. Everitt and J. Fountain, Partial mirror symmetry, lattice presentations and algebraic monoids, Proc. Lond. Math.
Soc. (3) 107 (2013), no. 2, 414–450.
[13] D. Easdown and T. G. Lavers, The inverse braid monoid, Adv. Math. 186 (2004), no. 2, 438–455.
[14] D. G. FitzGerald, A presentation for the monoid of uniform block permutations, Bull. Austral. Math.
Soc. 68 (2003), no. 2, 317–324.
[15] D. G. FitzGerald and J. Leech, Dual symmetric inverse monoids and representation theory, J.
Austral. Math. Soc. Ser. A 64 (1998), no. 3, 345–367.
[16] W. N. Franzsen, Automorphisms of Coxeter Groups, PhD Thesis, University of Sydney, Australia
(2001), 1–92.
[17] A. Felikson and P. Tumarkin, Coxeter groups and their quotients arising from cluster algebras, Int.
Math. Res. Not. IMRN 2016, no. 17, 5135–5186.
[18] A. Felikson and P. Tumarkin, Coxeter groups, quiver mutations and geometric manifolds, J. Lond. Math. Soc. (2) 94
(2016), no. 1, 38–60.
[19] S. Fomin and N. Reading, Root systems and generalized associahedra, Geometric combinatorics,
IAS/Park City Math. Ser., vol. 13, Amer. Math. Soc., Providence, RI, 2007, pp. 63–131.
[20] S. Fomin and A. Zelevinsky, Cluster algebras I: Foundations, J. Amer. Math. Soc. 15 (2002), no. 2,
497–529.
[21] S. Fomin and A. Zelevinsky, Cluster algebras. II. Finite type classification, Invent. Math. 154 (2003), no. 1, 63–121.
[22] M. Geck, Relative Kazhdan-Lusztig cells, Represent. Theory 10 (2006), 481–524.
[23] M. Geck, Hecke algebras of finite type are cellular, Invent. Math. 169 (2007), no. 3, 501–517.
[24] J. J. Graham and G. I. Lehrer, Cellular algebras, Invent. Math. 123 (1996), no. 1, 1–34.
[25] J. Grant and R. J. Marsh, Braid groups and quiver mutation, Pacific J. Math. 290 (2017), no. 1, 77–116.
[26] X. J. Guo and C. C. Xi, Cellularity of twisted semigroup algebras, J. Pure Appl. Algebra 213 (2009), no. 1, 71–86.
[27] Tom Halverson, Representations of the q-rook monoid, J. Algebra 273 (2004), no. 1, 227–251.
[28] J. E. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics, 29, Cambridge University Press, Cambridge, 1990.
[29] J. M. Howie, Fundamental of Semigroup Theory, Oxford University Press, New York, 1995.
[30] J. Haley, D. Hemminger, A. Landesman, and H. Peck, Artin group presentations arising from cluster
algebras, Algebr. Represent. Theory 20 (2017), no. 3, 629–653.
[31] Tom Halverson and Arun Ram, q-rook monoid algebras, Hecke algebras, and Schur-Weyl duality, J.
Math. Sci. 121 (2004), no. 3, 2419–2436.
[32] Y. D. Ji and Y. F. Luo, Cellularity of some semigroup algebras, Bull. Malays. Math. Sci. Soc. 40
(2017), no. 1, 215–235.
[33] A. E. Liber, On symmetric generalized groups, (Russian) Mat. Sbornik N.S. 33 (1953), no. 75,
531–544.
[34] L. M. Popova, Defining relations in some semigroups of partial transformations of a finite set,
Uchenye Zap. Leningrad Gos. Ped. Inst. 218 (1961), 191–212.
[35] R. J. Marsh, Lecture notes on cluster algebras, Zurich Lectures in Advanced Mathematics, European
Mathematical Society (EMS), Zürich, 2013.
[36] A. Mathas, Iwahori-Hecke algebras and Schur algebras of the symmetric group, University Lecture
Series, 15, American Mathematical Society, Providence, RI, 1999.
[37] G. E. Murphy, The representations of Hecke algebras of type An , J. Algebra 173 (1995), no. 1,
97–121.
[38] A. I. Seven, Reflection group relations arising from cluster algebras, Proc. Amer. Math. Soc. 144
(2016), no. 11, 4641–4650.
[39] B. M. Schein and B. Teclezghi, Endomorphisms of finite symmetric inverse semigroups, J. Algebra
198 (1997), no. 1, 300–310.
[40] S. V. Tsaranov, Representation and classification of Coxeter monoids, European J. Combin. 11
(1990), no. 2, 189–204.
[41] S. Wilcox, Cellularity of diagram algebras as twisted semigroup algebras, J. Algebra 309 (2007),
no. 1, 10–31.
[42] C. C. Xi, Partition algebras are cellular, Compositio Math. 119 (1999), no. 1, 99–109.
[43] C. C. Xi, Cellular algebras, available at https://webusers.imj-prg.fr/~ bernhard.keller/ictp2006/lecturenotes/x
Bing Duan: School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000,
P. R. China.
E-mail address: [email protected]
Jian-Rong Li: Dept. of Mathematics, The Weizmann Institute of Science, Rehovot
7610001, Israel; school of Mathematics and Statistics, Lanzhou University, Lanzhou 730000,
P. R. China.
E-mail address: [email protected]
Yan-Feng Luo: School of Mathematics and Statistics, Lanzhou University, Lanzhou
730000, P. R. China.
E-mail address: [email protected]
| 4 |
arXiv:1709.02152v1 [math.GR] 7 Sep 2017
THE CONJUGACY RATIO OF GROUPS
LAURA CIOBANU, CHARLES GARNET COX, AND ARMANDO MARTINO
Abstract. In this paper we introduce and study the conjugacy ratio of a
finitely generated group, which is the limit at infinity of the quotient of the
conjugacy and standard growth functions. We conjecture that the conjugacy
ratio is 0 for all groups except the virtually abelian ones, and confirm this conjecture for certain residually finite groups of subexponential growth, hyperbolic
groups, right-angled Artin groups, and the lamplighter group.
1. Introduction
In this paper we introduce and study the conjugacy ratio of a group, which is the
limit of the quotient of two functions naturally associated to any finitely generated
group: conjugacy growth and standard growth. More precisely, if G is generated
by the finite set X, let BG,X (n) denote the ball of radius n with respect to X, and
let CG,X (n) denote the set of conjugacy classes of G which have a representative in
BG,X (n). Then the conjugacy ratio of G with respect to X is:
(1)   crX(G) = lim sup_{n→∞} |CG,X(n)| / |BG,X(n)| .
The motivation of this paper is twofold. On one hand, the conjugacy ratio of a
finite group H is equal to the degree of commutativity dc(H) of H, which measures
the probability that two elements of the group commute, and is defined as:
(2)   dc(H) = |{(x, y) ∈ H × H : xy = yx}| / |H|^2 .
The degree of commutativity of a group has received a lot of attention recently, as
its definition was extended to finitely generated infinite groups in [AMV17] to be
dcX(G) = lim sup_{n→∞} |{(x, y) ∈ BG,X(n)^2 : xy = yx}| / |BG,X(n)|^2 .
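As a toy illustration (ours, not taken from the paper), both quantities can be computed by brute force for a small finite group, confirming numerically that the conjugacy ratio of a finite group coincides with its degree of commutativity. The permutation group, generating set and ball radius below are arbitrary choices.

from itertools import product

def compose(p, q):                     # (p*q)(x) = p(q(x)); permutations as tuples
    return tuple(p[q[x]] for x in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def balls_up_to(gens, n):
    """Balls B(0), ..., B(n) over a generating set closed under inverses."""
    e = tuple(range(len(gens[0])))
    sphere, seen = {e}, {e}
    balls = [set(seen)]
    for _ in range(n):
        sphere = {compose(g, x) for x in sphere for g in gens} - seen
        seen |= sphere
        balls.append(set(seen))
    return balls

def conjugacy_classes(G):
    classes, left = [], set(G)
    while left:
        g = left.pop()
        cls = {compose(compose(x, g), inverse(x)) for x in G}
        left -= cls
        classes.append(cls)
    return classes

# Example: S3 generated by a transposition and a 3-cycle (and its inverse).
gens = [(1, 0, 2), (1, 2, 0), (2, 0, 1)]
balls = balls_up_to(gens, 6)
G = balls[-1]                          # for n large enough this is all of S3
classes = conjugacy_classes(G)
n = len(balls) - 1
cr = sum(1 for c in classes if c & balls[n]) / len(balls[n])
dc = sum(1 for x, y in product(G, G) if compose(x, y) == compose(y, x)) / len(G) ** 2
print(cr, dc)                          # both equal 1/2 for S3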
As raised in [Cox16], it is natural to explore whether the degree of commutativity
and the conjugacy ratio are related for infinite groups as well.
Our second motivation comes from the fact that very few quantitative results
comparing standard and conjugacy growth in groups exist in the literature. While
in any group there are fewer conjugacy classes than elements, the gap between these
two functions has not been been explored in detail, and it is worth investigating.
For example, the standard and conjugacy growth rates (i.e. taking the limit of the
nth root of the function at n) are equal in some of the most frequently encountered
families of infinite groups: hyperbolic groups [AC17], graph products [CHM17],
Date: March 19, 2018.
2010 Mathematics Subject Classification. 20P05, 20F69.
Key words and phrases. Conjugacy growth, degree of commutativity, polynomial growth,
RAAGs, hyperbolic groups, wreath products.
many wreath products [Mer17]; thus in these examples the quotient of the two
functions, as a function of n, must be at most subexponential, and if the conjugacy
ratio is 0, the convergence to 0 will not be very fast.
Our starting point is the following conjecture, inspired by [AMV17, Conj. 1.6].
Conjecture 1.1. Let G be a group generated by a finite set X. Then crX (G) > 0
if and only if G is virtually abelian.
Our results on the conjugacy ratio in several families of groups support Conjecture 1.1. In Section 3 we investigate groups of stable subexponential growth
(Definition 3.1). We first show that any virtually abelian group has crX (G) > 0
for any finite generating set X. We then show that, if N is a normal, finite index
subgroup of G, then (for any finite generating set X of G) crX (G) 6 dc(G/N ).
This allows us to apply a technique from [AMV17] to show that any residually
finite group G of stable subexponential growth which is not virtually abelian has
crX (G) = 0 for any finite generating set X. We also show in Theorem 3.9 that if
G is a finitely generated virtually abelian group, with finite generating sets X and
Y , then crX (G) = crY (G).
We say that a group, G, with generating set X has stable subexponential growth
if lim_{n→∞} |BG,X(n+1)| / |BG,X(n)| = 1 (Definition 3.1). This includes all finitely generated
virtually-nilpotent groups. Since all finitely generated virtually-nilpotent groups
are residually finite, the theorem below means that Conjecture 1.1 is true for all
groups of polynomial growth.
Theorem 3.7. The conjugacy ratio for all finitely generated, residually finite groups
of stable subexponential growth that are not virtually abelian is zero, with respect to
all finite generating sets.
The proof of Theorem 3.7 cannot be generalised to groups of exponential growth,
but we provide independent arguments for several important classes of groups of
exponential growth.
Theorem 4.1. Let G be a non-elementary hyperbolic group. Then crX (G) = 0 for
any finite generating set X.
Theorem 4.3. Let G be the lamplighter group, that is, the wreath product C2 ≀ Z.
Then crX (G) = 0 for the standard generating set X (defined in (12)).
Theorem 4.12. Let G = (GV , XV ) be a right-angled Artin group (RAAG) based
on a graph Γ = (V, E) with generating set XV . Then crXV (G) = 0 unless G is free
abelian, in which case crXV (G) = 1.
We may also consider the strict or spherical conjugacy ratio, where the counting
is done in the sphere of radius n rather than the ball of radius n, that is, we may
take the ratio of the strict conjugacy growth function over the spherical growth
function. More precisely, let SG,X(n) be the sphere of radius n in the group G
with respect to the finite generating set X, and let CsG,X(n) be the conjugacy classes
that intersect SG,X (n) but not BG,X (n − 1), that is, those conjugacy classes with
a minimal length representative in SG,X (n). The spherical conjugacy ratio is then
(3)   crsX(G) = lim sup_{n→∞} |CsG,X(n)| / |SG,X(n)| .
Remark 1.2. By the Stolz-Cesàro theorem, anytime the spherical conjugacy ratio
turns out to be a limit, the conjugacy ratio will be equal to this limit. In particular,
if the spherical conjugacy ratio is 0, then the conjugacy ratio is 0.
2. Preliminaries
Recall that for a finitely generated group, G, with generating set X, the exponential growth rate of G with respect to X is:
(4)   ExpX(G) = lim_{n→∞} |BG,X(n)|^{1/n} .
Definition 2.1. A group, G, with finite generating set X, is said to have exponential growth if ExpX (G) > 1 and subexponential growth if ExpX (G) = 1. This
does not depend on the generating set, X.
Additionally, for any ǫ > 0, if λ = ExpX (G), then for sufficiently large n,
λn ≤ |BG,X (n)| ≤ (λ + ǫ)n .
Moreover, if we replace balls with spheres, we get the same limit and inequality.
We collect below a few results on convergence of series that will be relevant later.
Theorem 2.2 (Stolz-Cesàro). Let an , bn , n ≥ 1 be two sequences with bn strictly
increasing and divergent. If the left-hand side limit below exists, then
lim_{n→∞} (an+1 − an)/(bn+1 − bn) = l   =⇒   lim_{n→∞} an/bn = l.
Proposition 2.3 is a partial converse to the Stolz-Cesàro theorem. It implies that
for groups of exponential growth, if the conjugacy ratio is a limit and the ratio of
sizes of consecutive balls has a limit, then the spherical conjugacy ratio is equal to
the conjugacy ratio.
Proposition 2.3. Let an , bn , n ≥ 1 be two sequences with bn strictly increasing
and divergent, such that the left-hand side limit below exists and lim_{n→∞} bn+1/bn ≠ 1. Then
lim_{n→∞} an/bn = l   =⇒   lim_{n→∞} (an+1 − an)/(bn+1 − bn) = l.
Proposition 2.4. Let an , bn , cn , dn , n ≥ 0 be monotonically increasing sequences
of positive integers. Define the sequences ĉn and d̂n by ĉ0 := c0 , d̂0 := d0 , and
ĉn := cn − cn−1 and d̂n := dn − dn−1 , for n ≥ 1. Suppose that
(i) an ≤ bn and ĉn ≤ d̂n for all n,
(ii) an/bn → 0 and cn/dn → 0 as n → ∞.
Then
lim_{n→∞} ( Σ_{i=0}^{n} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) = 0.
Proof. Given ǫ > 0, fix an N such that an/bn < ǫ for all n ≥ N . Next choose an
M ≥ N such that cn/dn < ǫ/aN for all n ≥ M .
Then, for n ≥ M ≥ N ,
Σ_{i=N}^{n} ai ĉn−i < ǫ Σ_{i=N}^{n} bi ĉn−i ≤ ǫ Σ_{i=0}^{n} bi d̂n−i .
Thus, for n ≥ M ,
( Σ_{i=0}^{n} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) = ( Σ_{i=0}^{N} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) + ( Σ_{i=N+1}^{n} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) < ( Σ_{i=0}^{N} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) + ǫ.
Now we obtain the result by using the fact that, for n ≥ M ,
( Σ_{i=0}^{N} ai ĉn−i ) / ( Σ_{i=0}^{n} bi d̂n−i ) ≤ aN ( Σ_{i=0}^{N} ĉn−i ) / ( Σ_{i=0}^{n} d̂n−i ) ≤ aN cn/dn < ǫ.
i=0
Proposition 2.5. Let an , bn , cn , dn , n ≥ 0, be sequences of positive integers satisfying the following properties:
(i) an , bn are monotone sequences,
(ii) an ≤ bn and cn ≤ dn for all n,
(iii) an/bn → 0 as n → ∞,
(iv) dn/bn ≤ δ^n for all sufficiently large n, and for some 0 < δ < 1.
Then,
lim_{n→∞} ( Σ_{i=0}^{n} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) = 0.
Proof. Given ǫ > 0, fix an N such that an/bn < ǫ′ < ǫ for all n ≥ N . Then, for n ≥ N ,
Σ_{i=N}^{n} ai cn−i < ǫ′ Σ_{i=N}^{n} bi cn−i ≤ ǫ′ Σ_{i=0}^{n} bi dn−i .
Thus, for n ≥ N ,
( Σ_{i=0}^{n} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) = ( Σ_{i=0}^{N} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) + ( Σ_{i=N+1}^{n} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) < ( Σ_{i=0}^{N} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) + ǫ′ ,
and so it suffices to show that
lim_{n→∞} ( Σ_{i=0}^{N} ai cn−i ) / ( Σ_{i=0}^{n} bi dn−i ) ≤ lim_{n→∞} ( Σ_{i=0}^{N} ai cn−i ) / (bn d0) = 0.
Now
( Σ_{i=0}^{N} ai cn−i ) / (bn d0) ≤ ( Σ_{i=0}^{N} ai cn−i ) / bn ≤ aN Σ_{i=0}^{N} cn−i/bn−i ≤ aN Σ_{i=0}^{N} dn−i/bn−i .
Using hypothesis (iv), there is a sufficiently large n such that
aN Σ_{i=0}^{N} dn−i/bn−i ≤ aN Σ_{i=0}^{N} δ^{n−i} = aN (δ^n/δ^N) (1 − δ^{N+1})/(1 − δ) ≤ δ^n aN /(δ^N (1 − δ)) < ǫ − ǫ′ .
i=0
3. Results for groups of stable subexponential growth
Definition 3.1. A group G, with finite generating set X, is said to be of stable
subexponential growth if lim_{n→∞} |BG,X(n+1)| / |BG,X(n)| = 1.
Note that being of stable subexponential growth implies that ExpX (G) = 1,
and hence that the group has subexponential growth.
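For a concrete sanity check (ours, not from the paper), Z^2 with its standard generators is of stable subexponential growth: |BG,X(n)| = 2n^2 + 2n + 1, so the ratio of consecutive ball sizes tends to 1.

def ball_size(n):          # closed form for the l^1 ball of radius n in Z^2
    return 2 * n * n + 2 * n + 1

for n in (1, 10, 100, 1000):
    print(n, ball_size(n + 1) / ball_size(n))   # tends to 1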
By the celebrated result of Gromov, every finitely generated group of polynomial
growth - where BG,X (n) is bounded above by a polynomial function - is virtually
nilpotent. All these groups are of stable subexponential growth since, by a result
of Bass, [BASS72], if G is a finitely generated, virtually nilpotent group, and X is
any finite generating set, then, for some exponent d, and constants, A, B:
(5)   A n^d ≤ |BG,X(n)| ≤ B n^d .
The exponent d is calculated explicitly in [BASS72]; for a virtually abelian group
it is equal to the rank of a finite index free abelian subgroup.
From (5) we get that for any positive integer, k,
(6)   lim_{n→∞} |BG,X(n + k)| / |BG,X(n)| = 1.
The main result which we require for this class is the following.
Proposition 3.2. [BV02]. Let G be a finitely generated group with stable subexponential growth, and finite generating set X. For every finite index subgroup H 6 G
and every g ∈ G, we have
lim_{n→∞} |gH ∩ BG,X(n)| / |BG,X(n)| = lim_{n→∞} |Hg ∩ BG,X(n)| / |BG,X(n)| = 1/[G : H] .
Furthermore, if H is an infinite index subgroup of G then both limits are zero
for any coset of H.
Remark 3.3. The last statement does not appear explicitly in [BV02], but follows
easily from their arguments. Alternatively, one could prove this via the construction
of an invariant mean which requires the choice of an ultrafilter. The stable subexponential condition ensures that any ultrafilter will do, and hence that all limit points
of the sequences above are equal.
From now on, whenever there is no ambiguity concerning the group and its
generating set, we will write C(n) instead of CG,X (n) and B(n) instead of BG,X (n).
Proposition 3.4. Suppose that G is a finitely generated, virtually abelian group.
Then, for any finite generating set X of G, we have that crX (G) > 0.
More precisely, if [G : A] = m where A is abelian, then crX (G) ≥ 1/m2 .
Proof. Let [G : A] = m, where A is abelian. We note that G acts by multiplication
on the right cosets of A. If g and h lie in the same right coset, then h = αg for
some α ∈ A, so for any a ∈ A, h−1 ah = (αg)−1 a(αg) = g −1 ag since A is abelian.
Thus there are at most m conjugates of each element a ∈ A and so, for all n ∈ N,
we have that |C(n) ∩ A| ≥ |B(n) ∩ A| · (1/m). Now
|C(n)|/|B(n)| ≥ |C(n) ∩ A|/|B(n)| = ( |B(n) ∩ A|/|B(n)| ) · ( |C(n) ∩ A|/|B(n) ∩ A| ) ≥ ( |B(n) ∩ A|/|B(n)| ) · (1/m),
which tends to 1/m2 by Proposition 3.2.
Lemma 3.5. Let G be a group of stable subexponential growth with finite generating
set X, let g ∈ G and let H be a finite index subgroup of G. For d ∈ N we have
lim_{n→∞} |gH ∩ BG,X(n + d)| / |BG,X(n)| = 1/[G : H] .
Proof. This follows from writing
lim
n→∞
|B(n + d)| |gH ∩ B(n + d)|
|gH ∩ B(n + d)|
= lim
n→∞
|B(n)|
|B(n)|
|B(n + d)|
together with Proposition 3.2 and (6).
Proposition 3.6. Let G be a finitely generated group of stable subexponential
growth and N a subgroup of finite index in G. Then crX (G) ≤ dc(G/N ) for any
finite generating set X of G.
Proof. Let [G : N ] = m, so that G = g1 N ⊔g2 N ⊔. . .⊔gm N for some g1 , . . . , gm ∈ G.
Let d := max{|gi |X : i = 1, . . . , m}.
Now consider if xN ∼ yN (in G/N ). Then yN = g −1 xgN = g −1 xgg −1 N g =
−1
g xN g for some g ∈ G. Moreover, since x and y are conjugate in G/N , we may
choose g from {g1 , . . . , gm } and so |g|X 6 d. Now let yk1 ∈ B(n). We know there
must exist some xk2 ∈ xN such that g −1 (xk2 )g = yk1 . But then xk2 = gyk1 g −1 ,
and so xk2 ∈ B(n + 2d). Hence, for every n ∈ N, each element in B(n) ∩ yN is
conjugate to some element in B(n + 2d) ∩ xN .
Let x1 , . . . xk ∈ {g1 , . . . , gm } be the representatives of the conjugacy classes in
G/N . For every i ∈ N and every j ∈ Zk , we will assume that there are |B(n) ∩ xj N |
conjugacy classes in B(n) ∩ xj N . Hence
|C(n)| / |B(n)| ≤ Σ_{i=1}^{k} |xi N ∩ B(n + 2d)| / |B(n)|,
which tends to k/m by the previous lemma.
Theorem 3.7. Conjecture 1.1 is true for all finitely generated, residually finite
groups of stable subexponential growth.
Proof. Proposition 3.4 states that, if a finitely generated group G is virtually
abelian, then, for any finite generating set X, crX (G) > 0. For the other direction we apply the method of [AMV17, Proof of Thm. 1.3] by using Proposition
3.6. For completeness we will describe their argument. It requires the following
result from [Gal70]: if F is a finite group and N E F , then
(7)   dc(F ) ≤ dc(F/N ) · dc(N ).
Our hypotheses are that G is: finitely generated, residually finite, of stable subexponential growth, and not virtually abelian. We wish to show that crX (G) = 0 for
any finite generating set X. We will work with finite quotients and will build a chain
of normal subgroups. Since G is finitely generated we may choose these subgroups
to be characteristic, and will do this because being characteristic is transitive.
Since G is not virtually abelian, choose g1 , g2 ∈ G that do not commute and,
using the residually finite assumption, let [g1 , g2 ] 6∈ K1 where K1 is a characteristic
and finite index subgroup of G. Hence G/K1 is non-abelian, and by Gustafson’s
result we have that dc(G/K1 ) 6 5/8. Now, since the properties of G which we have
used also apply to finite index subgroups, this argument also applies to K1 . Hence
we may construct a descending chain of characteristic finite index subgroups
. . . 6 Ki 6 Ki−1 6 . . . 6 K2 6 K1 6 K0 = G
where, for every i ∈ N, dc(Ki−1 /Ki ) 6 5/8. Moreover (G/Ki )/(Ki−1 /Ki ) =
G/Ki−1 and so, from (7),
dc(G/Ki ) 6 dc(G/Ki−1 ) · dc(Ki−1 /Ki ) 6 5/8 · dc(G/Ki−1 ).
By induction dc(G/Ki ) 6 (5/8)i and so, by Proposition 3.6, for any finite generating set X of G, we have that crX (G) 6 dc(G/Ki ) 6 (5/8)i . Since this holds for
every i ∈ N, we obtain that crX (G) = 0.
Corollary 3.8. Conjecture 1.1 is true for all finitely generated, virtually nilpotent
groups, or equivalently, all groups of polynomial growth.
3.1. Virtually Abelian groups. The goal of this section is to prove:
Theorem 3.9. Let G be a finitely generated, virtually abelian group, and X, Y be
finite generating sets for G. Then crX (G) = crY (G).
It will be useful to have the following shorthand:
Definition 3.10. Let G be generated by the finite set X. A subset, S, of G is
generic if lim sup_{n→∞} |S ∩ BG,X(n)| / |BG,X(n)| = 1, and negligible if the limit is 0.
Given a group, G, with finite generating set X, a finitely generated subgroup,
H, of G is said to be undistorted if any word metric on H is bi-Lipschitz equivalent
to any word metric on G, when restricted to H. This makes sense since any two
finite generating sets on a group induce bi-Lipschitz equivalent word metrics.
It is easy to see that a finite index subgroup is always undistorted, and that a
subgroup H is undistorted if and only if it has an undistorted subgroup of finite
index. Retracts are also undistorted (recall that a retract of G is the image of an
endomorphism ρ : G → G such that ρ2 = ρ).
We now collect the following facts:
Proposition 3.11. Suppose that G is a finitely generated virtually abelian group,
with finite generating set X, having a subgroup of finite index isomorphic to Zd .
(i) Every subgroup of G is both finitely generated and undistorted.
(ii) Let H be an infinite subgroup of G. Let T (n) = TH,X (n) (for transversal) be
the number of cosets of H that have a representative in BG,X(n). Then
lim_{n→∞} T(n) / |BG,X(n)| = 0.
Proof. (i) Let H ≤ G. It is well known that H is finitely generated, as this fact is
true in the case where G is virtually polycyclic, which includes the finitely generated
virtually nilpotent (and abelian) case.
However, the fact that H is undistorted is not true more generally, and follows
from the fact that every subgroup of a finitely generated free abelian group has
finite index in a direct summand. In our case, H has a finite index subgroup which
is a retract of a finite index subgroup of G, and is therefore undistorted in G.
(ii) From above, H is finitely generated and undistorted. Since H is infinite, it
must contain an element of infinite order, so there exists an ǫ > 0 such that
|H ∩ BG,X (n)| ≥ ǫn.
More precisely, |H ∩ BG,X (n)| will have polynomial bounds of degree e, d ≥ e ≥ 1.
Let A, B, d be the constants in (5). Then
B 2^d n^d ≥ |BG,X(2n)| ≥ T(n) |H ∩ BG,X(n)| ≥ T(n) ǫ n.
Hence,
0 ≤ lim_{n→∞} T(n) / |BG,X(n)| ≤ lim_{n→∞} B 2^d n^{d−1} / (ǫ A n^d) = 0.
From now on, we let G be an infinite, finitely generated, virtually abelian group.
Let A be a normal, finite index, free abelian subgroup, and B be the centraliser of
A in G. Note that A is a subgroup of B, which therefore has finite index.
Proposition 3.12. Let G be a finitely generated, virtually abelian group and X any
finite generating set for G. Let A be a normal, finite index, free abelian subgroup,
and B be the centraliser of A in G. Then the set of minimal length G-conjugacy
representatives in G \ B is negligible.
Proof. Let y 6∈ B be an element of G and denote by CyA (n) the number of conjugacy
classes which have a representative in BG,X(n) ∩ yA. Then we claim that
lim_{n→∞} CyA(n) / |BG,X(n)| = 0.
For each conjugacy class with a representative in BG,X (n) ∩ yA, choose a shortest
such representative, and denote this set of representatives Z = {yai : ai ∈ A}.
From these, extract the set U = {ai }, rewriting the ai as geodesics if required. Note
that, for some fixed k (the length of y), we have
CyA (n) = |Z ∩ BG,X (n)| ≤ |U ∩ BG,X (n + k)|.
Now let My denote the automorphism of A induced by conjugation with y, which
we think of as a matrix. For any a1 , a2 ∈ A we have that
a2^{−1} (y a1) a2 = y (a1 + (I − My) a2),
if we switch to an additive notation in A. Let H be the image of (I − My ) in A,
that is, H = h[a, y] : a ∈ Ai. Since y 6∈ B, we can conclude that H is a non-trivial
subgroup of A and is therefore infinite. Moreover, the elements of U are all in
distinct cosets of H. Hence, by Proposition 3.11 part (ii), we may conclude that
|Z ∩ BG,X(n)| / |BG,X(n)| ≤ |U ∩ BG,X(n+k)| / |BG,X(n)| ≤ ( TH,X(n+k) / |BG,X(n+k)| ) · ( |BG,X(n+k)| / |BG,X(n)| ) → 0.
Proposition 3.12 shows that the only elements of G that contribute to the conjugacy ratio are the elements of B. (The representative of a conjugacy class might
not have a shortest representative in our particular coset yA, but varying y we see
that we have an overcount of the number of conjugacy classes in the complement
of B, which nonetheless gives 0.)
Thus the strategy for proving Theorem 3.9 is the following. First note that each
element of B has finite conjugacy class in G. We split the elements in B into
those which centralise elements from outside of B and those whose centraliser is
completely in B. Proposition 3.14 shows the former ones form a negligible set, and
the latter ones a generic set of B (Corollary 3.15); moreover, for the latter ones the
size of the G-conjugacy class is the index of the B-centraliser, which is constant for
elements in the same A-coset. Therefore, each coset (or, rather, conjugacy class of
cosets) of A contributes a fixed amount to the conjugacy ratio, which is algebraically
determined.
We use the notation ZK (g) for the K-centraliser of g ∈ G, that is, ZK (g) = {k ∈
K : k −1 gk = g}.
Lemma 3.13. Let x ∈ G. Then ZB (x) = ZB (xa) for any a ∈ A. Moreover,
[G : ZB (x)] < ∞ if x ∈ B.
S
Proposition 3.14. The set y6∈B ZB (y) is a finite union of infinite index subgroups
of G. Hence this set is negligible with respect to any finite generating set.
Proof. Since ZB (y) = ZB (ya) for any a ∈ A, this is a finite union. So it is enough
to show that each ZB (y) has infinite index.
In fact, it is sufficient to show that ZA (y) = ZB (y) ∩ A is an infinite index
subgroup of A. However, ZA (y), is a pure subgroup of A; that is, if am ∈ ZA (y)
and m 6= 0 then a ∈ ZA (y). This implies that ZA (y) is a direct summand of A.
But since y 6∈ B, this direct summand cannot be the whole of A and is therefore
an infinite index subgroup of A as required.
Corollary 3.15. There is a generic set of elements of B (with respect to any
generating set) whose centraliser lies entirely in B.
Proof. If for some b ∈ B there exists t ∈
/ B such that [t, b] = 1, then b ∈ ZB (t) ⊂
S
y6∈B ZB (y), which is negligible by Proposition 3.14.
Proof of Theorem 3.9: For each r, let Ar be the elements b ∈ B for which ZB (b)
has index r in G (and therefore conjugacy class size r in G), and let N = {b ∈ B :
ZB (b) 6⊂ B}, that is,S
N is the set of elements of B whose centraliser does not fully
lie in B. Then N = y6∈B ZB (y) and so by Corollary 3.15 it is a negligible set.
Since A ≤ ZB (b) ≤ G for any b ∈ B and A has finite index in G, there are only
finitely many values for the index of ZB (b) in G, and thus finitely many r for which
Ar is non-empty. Moreover, since ZB (y) = ZB (ya) for any y ∈ B \ A and a ∈ A, if
y ∈ Ar , then ya ∈ Ar , so each non-empty Ar is a union of A-cosets and thus
(8)   lim_{n→∞} |Ar ∩ BG,X(n)| / |BG,X(n)| = δ,
where δ is 1/[G : A] times the number of A-cosets in Ar , so is independent of X.
It is easy to see that there is an integer, k, such that if two elements of B are
conjugate in G, then they are conjugate by an element of length at most k; the
same holds for Ar as Ar ⊂ B. Moreover, since B is normal in G, it is easy to
see that G acts on Ar by conjugation; G acts by conjugation on N , and hence on
Ar \ N , as well.
Let Cn be the number of conjugacy classes of G which meet BG,X (n) and are
contained in Ar \ N . Then,
(9)
|(Ar \ N ) ∩ BG,X (n)| ≤ rCn ≤ |(Ar \ N ) ∩ BG,X (n + 2k)|.
The first inequality comes from the fact that each element of Ar \ N has r
conjugates in G, and the second from the fact that each of the conjugates can be
obtained from a conjugator of length at most k.
Now
Cn / |BG,X(n)| ≤ |CG,X(n) ∩ Ar| / |BG,X(n)| ≤ Cn / |BG,X(n)| + |N ∩ BG,X(n)| / |BG,X(n)| ,
and by (8), (9) and Corollary 3.15,
lim_{n→∞} Cn / |BG,X(n)| = lim_{n→∞} ( Cn / |BG,X(n)| + |N ∩ BG,X(n)| / |BG,X(n)| ) = δ/r,
so we get
lim_{n→∞} |CG,X(n) ∩ Ar| / |BG,X(n)| = δ/r.
Hence the number of conjugacy classes of G that meet Ar is independent of the
generating set. Summing over the finitely many r gives the result.
Remark 3.16. The same ideas as those just presented can be used to show that,
if G is a finitely generated, virtually abelian group, and X is any finite generating
set, then crX (G) = inf N Ef G cr(G/N ). That is, the conjugacy ratio is equal to the
infimum of conjugacy ratios of the finite quotients. Hence, if one were to measure
the conjugacy ratio using invariant means, one would get the same numerical value.
Unpublished results indicate that this is the same as the degree of commutativity.
For similar reasons, the same is true whenever G is a finitely generated virtually
nilpotent group, the virtually abelian case being the key one.
4. Results for other families of groups
4.1. Hyperbolic groups. In this section we prove Conjecture 1.1 for non-elementary
hyperbolic groups.
We will write f (n) ∼ g(n) to mean f (n)/g(n) → 1 as n → ∞.
Theorem 4.1. Let G be a non-elementary hyperbolic group. Then crX (G) = 0 for
any finite generating set X.
Proof. Let G be a non-elementary hyperbolic group with finite generating set X.
Then by a result of Coornaert (see [Cor93]) there are positive constants A0 ,B0 , and
integer n0 , such that for all n ≥ n0
(10)
A0 enh ≤ |BG,X (n)| ≤ B0 enh ,
where h = ExpX (G).
By Theorem 1.2 in [AC17], there are positive constants A1 , B1 and n1 such that
(11)
A1
enh
enh
≤ |CG,X (n)| ≤ B1
n
n
for all n ≥ n1 . Thus from (10) and (11) we get
|CG,X (n)|
B1
6
|BG,X (n)|
A0 n
for all n ≥ max(n0 , n1 ), and by taking the limit we obtain that crX (G) = 0.
4.2. The lamplighter group.
We follow the notation in [Mer17]. Let I be a non-empty set. For η ∈ ⊕_{i∈I} G we write η(i) for the ith component of η, and if
moreover I is a group and x ∈ I, we define η^x ∈ ⊕_{i∈I} G by η^x(i) = η(x^{−1} i), and
say that η^x is the left translate of η by x.
Definition 4.2. Consider groups H and L with symmetric generating sets A and
B, and neutral elements e and e′ , respectively. The wreath product of H by L,
written H ≀ L, is defined as
H ≀ L := ( ⊕_{i∈L} H ) ⋊ L,
where for (η, m), (θ, n) ∈ H ≀ L, (η, m)(θ, n) = (ηθm , mn).
L
For h ∈ H, let ~h ∈ i∈L H be such that ~h(e′ ) = h and ~h(i) = e for i 6= e′ . Then
(12)
X := {(~e, a) : a ∈ A} ∪ {(~b, e′ ) : b ∈ B}.
generates H ≀ L.
For the lamplighter group G = C2 ≀ Z we let A := {a}, where a is the non-trivial
element of C2 , and let B be the standard generating set of Z.
Theorem 4.3. Let G be the lamplighter group, that is, the wreath product C2 ≀ Z.
Then crX (G) = 0 for the standard generating set X.
Proof. The statement follows immediately from [Mer17, Example 5.0.3], where it is shown that |CsG,X(n)| ∼ (2/n) ((1 + √5)/2)^n, and the fact that |SG,X(n)| ∼ ((1 + √5)/2)^n by [Par92].
4.3. Right-Angled Artin Groups. Let Γ = (V, E) be a simple graph (i.e. a
non-oriented graph without loops or multiple edges) with vertex set V and edge set
E. For each vertex v of Γ, let Gv be a group. The graph product of the groups Gv
with respect to Γ is defined to be the quotient of their free product by the normal
closure of the relators [gv , gw ] for all gv ∈ Gv , gw ∈ Gw for which {v, w} is an
edge of Γ. Here we consider right-angled Artin groups (RAAGs), which are graph
products with all Gv = Z, and denote by (GV , XV ) the RAAG based on the graph
Γ with generating set XV (in bijection to V ).
Conjugacy representatives in a RAAG come, to a large extent, from taking one
word out of each cyclic permutation class, so we first establish the asymptotics of
the language of cyclic representatives in a rather general setting.
Example 4.4. In a free group on the free generating basis, counting the conjugacy
classes with a minimal representative of length n is equivalent to counting the
number of cyclically reduced words of length n, up to cyclic permutation.
4.3.1. Cyclic representatives of languages. We follow the notation in [CHM17, Section 2.3]. Let L be a language over a finite alphabet X, that is, L ⊆ X ∗ , and
let L(n) denote the set of words of length ≤ n in L. For n ≥ 1, n ∈ N, let
L^n := {w^n | w ∈ L} and √[n]L := {v | v^n ∈ L}. Define Prim(L) := {w ∈ L | ∄k >
1, v ∈ L such that v k = w} to be the language of primitive words in L.
Suppose L is closed under cyclic permutations; then we construct a language
CycRep(L) of cyclic representatives of L out of the words wc , where wc the word
that is least lexicographically among all cyclic permutations of w, for w ∈ L:
CycRep(L) := {wc | w ∈ L}.
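As a minimal illustration (ours), the representative wc can be computed by taking the lexicographic minimum over all rotations; the character ordering used below is Python's default and is only a stand-in for the order on X fixed later in Definition 4.7.

def cyclic_representative(w):
    rotations = [w[i:] + w[:i] for i in range(len(w))]
    return min(rotations) if w else w

print(cyclic_representative("baA"))   # -> "Aba" with Python's default ordering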
Proposition 4.5 (see also Lemma 2.10 (4), [CHM17]). Let L be an exponential growth
language closed under cyclic permutations. Furthermore assume that L^k ⊆ L and √[k]L ⊆ L for all k ≥ 1. Then
lim_{n→∞} |CycRep(L)(n)| / |L(n)| = 0.
|L(n)|
Proof. For simplicity of notation let a(n) := |Ls (n)|, p(n) := |Prim(L)s (n)| and
c(n) := |CycRep(L)s (n)|, that is, we consider the numbers of words of length exactly
n in each language.
Write L as L = ∪_{k≥1} Prim^k(L), and notice that the number of cyclic representatives of length n in Prim(L) is p(n)/n, and the number of cyclic representatives of
length nk in Prim^k(L) is also p(n)/n. Thus a(n) = Σ_{d|n} p(d) and c(n) = Σ_{d|n} p(d)/d.
Let µ(n) and φ(n) be the standard number theoretic Möbius and Euler functions.
Then by Möbius inversion p(n) = Σ_{d|n} µ(n/d) a(d) and so
c(n) = Σ_{d|n} (a(d)/d) Σ_{l|(n/d)} µ(l)/l = Σ_{d|n} a(d) φ(n/d)/n ,
which follows from Σ_{d|n} φ(d) = n and Σ_{d|n} µ(d)/d = φ(n)/n .
Since a(n) is exponential, only the last term in the sum above is of the same
magnitude as a(n), so
(13)   c(n) ∼ a(n)/n   =⇒   lim_{n→∞} |CycRep(L)s(n)| / |Ls(n)| = 0.
|Ls (n)|
By Stolz-Cesàro we obtain the result.
4.3.2. Conjugacy representatives in RAAGs. We first establish a result about the
conjugacy ratio of direct products.
Lemma 4.6. Let H and K be two groups with finite generating sets X and Y , respectively. If either (i) crX (H) = crY (K) = 0 or, (ii) crX (H) = 0 and ExpX (H) >
ExpY (K), then crX∪Y (H × K) = 0.
Proof. We calculate the conjugacy ratio with respect to balls in H × K. To do
this we use balls in H and spheres in K. Let an := |CH,X (n)|, bn := |BH,X (n)|,
s
tn := |CK,Y
(n)| and sn := |SK,Y (n)|. Then
Pn
ai tn−i
.
crX∪Y (H × K) = lim sup Pni=0
n→∞
i=0 bi sn−i
If crX (H) = crY (K) = 0, then by Proposition 2.4 (putting tn = cbn , sn = dc
n ) we
get that crX∪Y (H × K) = 0. Similarly, if crX (H) = 0 and ExpX (H) > ExpY (K)
then Proposition 2.5 (putting cn = tn , dn = sn ) states that this limit is zero, so
crX∪Y (H × K) = 0.
Since RAAGs interpolate between free and free abelian groups, the presence of
commutativity does not allow us to simply consider cyclically reduced words up to
permutation, as in free groups. We need to single out the words for which taking
cyclic representatives produces conjugacy representatives, and use Crisp, Godelle
and Wiest’s approach from [CGW], which was further developed in [CHM17].
Definition 4.7 (Def 2.19, [CGW]). Let V = {a1 , . . . , aN } and set the total order
−1
a1 < a−1
1 < a2 < a2 < . . . . A cyclically reduced word w is in cyclic normal form
if it is in the shortlex language SL(GV , XV ) of GV with respect to XV and all its
cyclic conjugates are in SL(GV , XV ) as well.
Not all elements posses a cyclic normal form. For example, if [a1 , a2 ] = 1, the
word a1 a2 is in SL(GV , XV ), but its cyclic permutation a2 a1 is not. To deal with
this situation, [CGW] divides the words over XV into split and non-split.
Definition 4.8 (Definition 2.13, [CGW]). Let w be a cyclically reduced word over
XV and denote by ∆(w) the full subgraph spanned by Supp(w). Let ∆(w) be the
graph complement of ∆(w).
(i) The word w is split if ∆(w) is disconnected, which amounts to being able
to write w as a product of commuting subwords (or blocks).
(ii) The word w is non-split if ∆(w) is connected.
(iii) Let CycSL(GV , XV ) denote the set of all non-trivial cyclic normal forms
corresponding to non-split words in GV .
We say that a group element is non-split (split) if it can be represented by a
cyclically reduced word which is non-split (split).
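A short sketch (ours) of how the split/non-split dichotomy can be tested mechanically, assuming the defining graph Γ is available as a networkx.Graph on the vertex set V: a cyclically reduced word w is split exactly when the complement of the full subgraph spanned by Supp(w) is disconnected.

import networkx as nx

def is_split(graph, support):
    sub = graph.subgraph(support)
    return not nx.is_connected(nx.complement(sub))

# Example on the path 0-1-2 (so the generators at 0 and 1 commute):
print(is_split(nx.path_graph(3), [0, 1]), is_split(nx.path_graph(3), [0, 2]))  # True False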
Proposition 4.9 (Prop. 2.21, [CGW]). Two cyclic normal forms represent conjugate elements if and only if they are equal up to a cyclic permutation.
Proposition 4.10 (Remark 2.14, [CGW]). Let w and v be two cyclically reduced
split words. Then they are conjugate if and only if ∆(w) = ∆(v) and the words
corresponding to the commuting blocks are conjugate, respectively.
Lemma 4.11. [CHM17] Let CycSL(GV , XV ) be the set of cyclic normal forms in
GV . The following hold:
(1) CycSLk (GV , XV ) ⊆ CycSL(GV , XV ) for all k ≥ 1, and
(2) CycSL(GV , XV )) is closed under cyclic permutations.
Theorem 4.12. Let G = (GV , XV ) be a right-angled Artin group (RAAG) based
on a graph Γ = (V, E) with generating set XV . Then crXV (G) = 0 unless G is free
abelian, in which case crXV (G) = 1.
Proof. We use induction on the number of vertices. Let n := |V |. The result is
trivial for n = 1. If G is a direct product, then we get cr(G) = 0 if at least one of
the factors has cr = 0; this follows from Lemma 4.6(i) if both factors have cr = 0
and from Lemma 4.6(ii) if, say, the first factor has cr = 0, as the second is by
induction free abelian, and of strictly smaller growth rate than the first. We get
cr(G) = 1 when each factor is free abelian.
So suppose G is not a direct product. We split the conjugacy classes CGV ,XV of
G into two types: those which have a shortest length representative with support
XU , where U ( V , and denote these by CGV ,≤XV , and those which have a shortest
length representative with support exactly XV , and denote these by CGV ,=XV . By
Propositions 4.9 and 4.10, this is well defined. Moreover, by Propositions 4.9 and
4.10, two cyclically reduced words w1 , w2 with support XU are conjugate in GV if
and only if they are conjugate in GU (note that if a word w ∈ CycSL(GV , XV )∩XU∗ ,
where U ( V , then w ∈ CycSL(GU , XU )).
Thus we can write CGV ,≤XV ⊆
CGV ,XV ⊆
(14)
Then (14) implies that
(15)
|CGV ,XV (n)|
≤
|BGV ,XV (n)|
Now for U ( V
so
P
S
CGU ,XU and express the above as:
[
CGV ,=XV .
CGU ,XU
U(V
[
U(V
XU ,U(V
|CGU ,XU (n)| + |CGV ,=XV (n)|
|BGV ,XV (n)|
.
|CGU ,XU (n)|
|CGU ,XU (n)| |BGU ,XU (n)|
=
,
|BGV ,XV (n)|
|BGU ,XU (n)| |BGV XV (n)|
|BGU ,XU (n)|
|CGU ,XU (n)|
≤ crXU (GU ) lim sup
.
(n)|
|B
|B
n→∞
n→∞
GV ,XV
GV ,XV (n)|
The right hand side is equal to 0 since either (i) crXU (GU ) = 0 by induction, or (ii)
GU is free abelian (so of polynomial growth); if (ii), since G itself if not a direct
product by assumption, it is of exponential growth, and the last fraction is 0.
|C
(n)|
V
It remains to find lim supn→∞ |BGGV ,=X
(n)| , the second part of the right hand
V ,XV
side of (15). Since G is not a direct product, all conjugacy representatives with
support exactly XV are non-split, so it suffices to consider cyclic normal forms up
to cyclic permutations, that is
lim sup
|CGV ,=XV (n)|
|CycRep(CycSL(GV , XV )(n)|
≤
=
|BGV ,XV (n)|
|SL(G, XV )(n)|
|CycRep(CycSL(GV , XV )(n)| |CycSL(GV , XV )(n)|
,
|CycSL(G, XV )(n)|
|SL(G, XV )(n)|
and by Proposition 4.5 applied to the language CycSL(GV , XV ) (which satisfies the
hypothesis of Proposition 4.5 by Lemma 4.11)
lim
n→∞
This proves the result.
|CycRep(CycSL(GV , XV )(n)|
= 0.
|CycSL(G, XV )(n)|
5. Reflections and open questions
Our results on the conjugacy ratio values are essentially identical to those on the
degree of commutativity in [AMV17, Cox16, Val17]. That is, the two quantities
are equal for all the classes of groups we studied. However, we could not establish
a direct general link between them.
Question 1. Is the limsup in the definition of the conjugacy ratio a limit?
Question 2. What are the groups for which dcX (G) ≤ crX (G) (or vice versa)?
They are equal in the virtually nilpotent case, in the hyperbolic group case and in
many more.
As is the case for the degree of commutativity, we do not know whether the
conjugacy ratio might be influenced by a change of generators.
Question 3. Does there exist a group G with finite generating sets X and Y such
that crX (G) 6= crY (G)?
Finally, it would be interesting to unify the proofs confirming our conjecture for
larger classes of groups, such as all groups of exponential growth, for example.
References
[AC17] Y. Antolı́n and L. Ciobanu, Formal conjugacy growth in acylindrically hyperbolic groups,
Int. Math. Res. Notices, 1 (2017), 121–157.
[AMV17] Y. Antolı́n, A. Martino, and E. Ventura, Degree of commutativity of infinite groups,
Proceedings of the American Mathematical Society (Feb. 2017).
[BASS72] H. Bass, The degree of polynomial growth of finitely generated nilpotent groups, Proc.
London Math. Soc. (3) 25 (1972), 603–614.
[BV02] J. Burillo and E. Ventura, Counting primitive elements in free groups, Geom. Dedicata
93 (2002), 143–162.
[CHM17] L. Ciobanu, S. Hermiller, and V. Mercier, Conjugacy growth in graph products, Preprint
2017.
[Cor93] M. Coornaert, Mesures de Patterson–Sullivan dans les espaces hyperboliques au sens de
Gromov, Pacific J. Math. 159 (1993), 241–270.
[Cor05] M. Coornaert, Asymptotic growth of conjugacy classes in finitely-generated free groups,
IJAC. Vol. 15 (2005), 887–892.
[Cox16] C.G. Cox, The degree of commutativity and lamplighter groups, Preprint 2017,
https://arxiv.org/abs/1605.04829.
[CGW] Crisp, Godelle and Wiest Linear time solution to the conjugacy problem in right-angled
Artin groups and their subgroups, Journal of Topology, Vol. 2, (2009), 442 – 460.
[ET68] P. Erdős and P. Turán, On some problems of a statistical group-theory. IV, Acta Math.
Acad. Sci. Hungar 19 (1968), 413–435. MR 0232833
[Gal70] P.X. Gallagher, The number of conjugacy classes in a finite group, Math Z. 118 (1970),
175-179.
[Gus73] W. H. Gustafson, What is the probability that two group elements commute?, Amer.
Math. Monthly 80 (1973), 1031–1034. MR 0327901
[Mer17] V. Mercier, Conjugacy growth series in some wreath products, Preprint 2017,
https://arxiv.org/abs/1610.07868.
[Par92] W. Parry. Growth series of some wreath products. Trans. Amer. Math. Soc., 331 (1992),
2:751–759.
[Val17] M. Valiunas. Degree of commutativity for right-angled Artin groups. Preprint 2017,
https://arxiv.org/abs/1701.04374
Heriot-Watt University, Edinburgh, EH14 4AS, UK
E-mail address: [email protected]
URL: http://www.macs.hw.ac.uk/~lc45/
University of Bath, BA2 7AY, UK
E-mail address: [email protected]
Mathematical Sciences, University of Southampton, SO17 1BJ, UK
E-mail address: [email protected]
URL: http://www.personal.soton.ac.uk/am1t07/
| 4 |
Complex Systems Science meets 5G and IoT
arXiv:1710.11548v1 [cs.NI] 31 Oct 2017
Nicola Marchetti, Irene Macaluso, Nicholas Kaminski, Merim Dzaferagic, M. Majid Butt, Marco Ruffini, Saul Friedner, Julie Bradford, Andrea Zanella, Michele Zorzi, and Linda Doyle
Abstract
We propose a new paradigm for telecommunications, and develop a framework drawing on concepts
from information (i.e., different metrics of complexity) and computational (i.e., agent based modeling)
theory, adapted from complex system science. We proceed in a systematic fashion by dividing network
complexity understanding and analysis into different layers. Modelling layer forms the foundation of the
proposed framework, supporting analysis and tuning layers. The modelling layer aims at capturing the
significant attributes of networks and the interactions that shape them, through the application of tools
such as agent-based modelling and graph theoretical abstractions, to derive new metrics that holistically
describe a network. The analysis phase completes the core functionality of the framework by linking our
new metrics to the overall network performance. The tuning layer augments this core with algorithms
that aim at automatically guiding networks toward desired conditions. In order to maximize the impact
of our ideas, the proposed approach is rooted in relevant, near-future architectures and use cases in 5G
networks, i.e., Internet of Things (IoT) and self-organizing cellular networks.
Index Terms
Complex systems science, Agent-based modelling, Self-organization, 5G, Internet of Things.
Nicola Marchetti ([email protected]), Irene Macaluso, Nicholas Kaminski, Merim Dzaferagic, M. Majid Butt, Marco
Ruffini, and Linda Doyle are with CONNECT / The Centre for Future Networks and Communications, Trinity College, The
University of Dublin, Ireland. Saul Friedner and Julie Bradford are with Real Wireless, UK. Andrea Zanella and Michele Zorzi
are with the University of Padova, Italy.
This material is based upon works supported by the Science Foundation Ireland under Grants No. 13/RC/2077 and 10/CE/i853.
I. INTRODUCTION
The transition of humanity into the Information Age has precipitated the need for new
paradigms to comprehend and overcome a new set of challenges. Specifically, the telecommunication networks that underpin modern societies represent some of the largest scale construction and
deployment efforts ever attempted by humanity, with renovations occurring nearly continuously
over the course of decades. This results in networks that consist of numerous subsections, each
following its own trajectory of development, commingled into a complex1 cacophony.
A few emerging trends confirm the picture just drawn. Mobile and wireless networks are
getting denser and more heterogeneous in nature. Nodes in the network vary hugely in form and
functionality - ranging from tiny simple sensors to sophisticated cognitive entities. There is a
wider range of node and network-wide parameters to set, many of which are interdependent and
which impact heavily on network performance. Networks are becoming more and more adaptive
and dynamic, and many parameters are set during run-time in response to changing contexts.
As networks evolve, all of the above issues become more exaggerated - e.g., 5G networks will
see more antennas, more base stations and devices, more modes of operation, more variability,
and more dynamism. In a world like that, there is no way to systematically capture network
behaviour. There is no straightforward network theory or information theoretic approach that
can be used to describe the overall network or the interplay between the different networks.
We propose to tackle this by studying wireless networks from the perspective of Complex
Systems Science (CSS), developing complexity metrics and relating them to more traditional
measures of network performance. One of the key questions in CSS relates to the degree of
1
With the term ’complexity’ we refer to a specific set of complex systems science quantities, related to the interactions between
network entities (rather than to entities themselves) and between networks. As the current and future trend is towards more
diverse networks coexisting and more entities (e.g., within IoT, or ultra dense small cell networks), the amount of interactions
will increase, leading to an increase in complexity (in the meaning given to the word by complex systems science).
organization of a system [1], in terms of both the difficulty in describing its organizational
structure (A), and the amount of information shared between the parts of the system as a result
of the organizational structure (B). For example, the measure of excess entropy [2] (type (A))
can be used to describe the behaviour of a collection of self-organising networks [3], [4]; while
the signalling complexity associated with future network resource management can be analyzed
through a type (B) measure, i.e., functional complexity, introduced in [5], [6].
The above conceptual structure based on complexity informs an agent-based modelling (ABM)
paradigm to examine the interactions between the different entities that shape a network. ABM
provides a method of modelling complex systems from the ground up, which allows for a deeper
investigation of the interactions that shape the ultimate system performance. ABM provides
powerful modelling of entities in a variety of areas and contexts [7]–[9]. The attributes of ABM
can be applied to inform communication networks’ decision making; in particular, ABM can be
used to investigate the impact of several Medium Access Control (MAC) component technologies
on the Key Performance Indicators (KPI) of both telecom networks and applications, for example
in the case of a wireless sensor network aiding an Internet of Things (IoT) system [10].
In summary, we propose a new paradigm for telecommunications, drawing on concepts of a
complex systems science nature, to understand and model the behaviour of highly heterogeneous
networks and systems of networks. We also employ our framework to create new technologies
for supporting network operation.
II. MOTIVATION
We propose the development of a conceptual framework as a means of exploring a broad range
of possibilities in wireless networks, including a vast array of 5G technological possibilities. This
framework for thought applies concepts from complex systems science [11], [12] to provide
a means to understand wireless networks holistically on a variety of scales. Specifically, we
consider the communication patterns that enable network functions, by capturing all nodes
necessary to perform a given function; then by drawing connections between these nodes we
highlight their functional dependencies. We call a graph obtained in this way functional topology.
This approach allows us to analyze the communication patterns on multiple scales. The lowest
scale models the communication between individual devices/nodes. In other words, the lowest
scale focuses on the communication between a node and all the immediate neighbors of this
node in the functional topology. The second scale models the communication between a node,
all its immediate neighbors and all neighbors of its neighbors. The increasing scale size moves
the focus away from the communication between individual nodes, and allows us to analyze
communication patterns between groups of nodes (i.e., functional entities/groups).
Considering the high degree of heterogeneity and dense interplay of network elements in
proposed 5G and IoT systems, achieving a holistic understanding of network operation is poised
to become an even more challenging prospect in the near future. To address these challenges, we
demonstrate the power of our framework for the modeling and analysis of relevant 5G scenarios,
i.e., self-organizing cellular and IoT networks. While our framework supports innovation beyond
these concepts, we feel these scenarios adequately represent the near-future applications of our
work.
The development of our concept is organized in a layered fashion, with a modelling layer
forming the foundation of the framework and supporting analysis and tuning layers. The main
aspects of our framework are represented in Fig. 1 and will be discussed in detail in the remainder
of the paper.
As compared to the CSS literature addressing communication systems [13]–[17], we study
wireless networks from the infrastructure perspective. As a simple example, in [3], [4] excess entropy is used to measure complexity and, in combination with entropy, leads to an understanding
of the structure emerging in a lattice of self-organising networks. The self-organising systems
Fig. 1: Our complex systems science based layered approach to 5G networks. Functional topology graphs are
abstracted from the network, and are then used to compute complexity and telecom metrics, and find their
relations. The understanding of such relations will then feed an ABM approach to network tuning.
studied in [3], [4] exhibit a complex behaviour and this relates to robustness against changes in
the environment; in particular, exploring frequency planning from a complex systems perspective
leads to the conclusion that future networks shall eschew any current frequency planning approaches
and instead determine frequency of operation on the fly. This has enormous implications for
design and roll-out of networks, deployment of small cells, and network operation.
III. METHODOLOGY
Significant impacts have been made by CSS in a wide range of areas including physics, biology,
economics, social sciences, computer sciences, and various engineering domains. We claim that
the CSS perspective provides the necessary means to redefine the general understanding of
telecommunication networks. We draw on concepts from information theory and ABM, with each
concept augmenting and developing the understanding of wireless networks. We will now briefly
review some of the most important tools and concepts we use in our studies.
In order to specify and analyse the complexity of a network function, [5] introduced a framework representing an abstraction of a telecommunication network, by modelling its operation
and capturing all elements, i.e., nodes and connections, necessary to perform a given function.
Our framework includes functional topologies, i.e., graphs created based on the functional
connectivity between system entities (see Fig. 1). A node in our topology represents a functional
entity of a network node or any information source that is part of the given network function.
The links indicate dependencies between nodes. The definition of functional topologies allows
us to visualise the relationships between system entities, and enables the systematic study of
interactions between them. Based on these topologies one can define CSS inspired metrics such
as functional complexity [5], which quantifies the variety of structural patterns and roles of nodes
in the functional topology, or other information-theory-inspired metrics.
Agent-based modelling (ABM) is a useful method to model networks. In [3], [4], [10] ABM
was used to investigate the impact of several MAC component technologies, in terms of both
telecom and IoT application’s Key Performance Indicators (KPI). This is key for our framework’s
analysis and tuning layers.
Our framework enables multi-scale modelling, analysis and tuning of wireless networks, in
which changes in the 5G networks domain can be analysed and assessed. Indeed, in order to
maximize the impact of our framework, our proposed approach is rooted in relevant, near-future
architectures and use cases in 5G networks, such as self-organizing cellular and IoT networks.
The use cases define the expected parameters, types of users/devices and environments; a general
set of possible scenarios we could investigate using our framework is shown in Table I.
TABLE I: Possible use cases.
Parameters: Low latency, High throughput, High reliability, Extensive coverage, Energy efficiency.
Type of users: Typical mobile broadband, Healthcare, Automotive, Home/industrial automation, Wearable devices.
Environments: Busy train station, Emergency/disaster location, Busy office complex/campus, Large utility/manufacturing plant.
A. Solution Approach
Our framework is based around the idea of using concepts, tools and measures of a complex
systems science nature. The framework is based on a modelling layer which supports the analysis
and tuning layers (see Fig. 1).
1) Modeling Layer: The modelling phase focuses on developing techniques to capture the
significant attributes of networks and the interactions that shape them. Along with the traditional
attributes used to characterize networks (e.g. coverage and throughput), the modelling phase
develops new complexity metrics and investigates their relation to telecom KPIs. These metrics
shall be developed distinctly for each application, based on existing and new concepts we draw
from CSS.
The modelling component of the framework develops appropriate abstractions and formalisms
to enable metric calculation. To this end, we produce a multi-scale abstraction for networks. The
first level or device level of this abstraction focuses on individual elements within a network,
targeting the interplay that results from information being collected and used locally by a single
entity. Interference and stability of the connection (e.g., as a function of power available at
the node) between nodes in a network are two examples of notions studied at the device scale.
Available local information may (as in the case of the interference perceived at a certain network
node) or may not (battery level) result from the actions of other nodes. That is, the device scale
typically models the implicit exchange of information, where nodes infer information for each
other’s actions without directly exchanging messages, such as the interaction-through-interference
paradigm of a distributed Time Division Multiple Access (TDMA) system. The higher scales
model the explicit exchange of information between groups of nodes in the network; at this
level (interaction scale) the nodes act on the basis of information provided by some other node
directly, as occurs for example when assigning a slot in a centralized TDMA system.
The interactions that shape the network formation and operation are directly modelled using
ABM. Our model considers the interactions between the interests of different network operators.
These agents operate in a hierarchical fashion (see Fig. 2) with network operator agents who,
in turn, contain sub-agents that determine specific aspects of the network, based on technical
behaviours. Anything that makes decisions in a network can be viewed as an agent, and ABM
is applied to model interactions between agents. For example, IoT agents may attempt to use
the infrastructure provided by operator agents, as shown in Fig. 2. To capture the range of
possibilities, we can use nested subagents, in which major agents might represent a whole
network with subagents representing individual cells. ABM allows conversion of experience
with detailed processes (micro-level behaviours) into knowledge about complete systems (macro-level outcomes). In general we can consider several radio resources in our ABM model, e.g.,
resources belonging to frequency, power and space domains. Several alternative techniques and
technologies can be applied within each domain, which entails a wide set of resources and related
modes of utilisation.
2) Analysis Layer: In the analysis layer, the models are reviewed to determine the representative power and meaning of the metrics developed by linking: (i) the operator behaviours with
our new CSS metrics; (ii) the operator behaviours with network KPIs (fitness); (iii) our new CSS
metrics and KPIs. As an example, we could analyse the relationship between operator decisions
on the amount of shared resources (infrastructure and/or spectrum) and the resulting network
characteristics.
For each scenario, measures of network performance can be identified, including standard
Fig. 2: Agent organization. Our agent model is hierarchical, with major agents representing a whole network,
and subagents representing IoT agents or individual cells and access points agents.
operator KPIs such as cell edge, peak and mean throughput, spectrum utilisation relative to
available bandwidth, network reliability, and coverage.
For each type of the above mentioned relations (i), (ii), (iii), we can determine the most
promising pairing of elements (i.e., operator behaviour and CSS metric, or CSS metric and KPI)
within each scale and between scales for determining connections. In particular, we can identify
which behaviours correlate to specific network performance measures on each scale, and to what
extent and how our CSS metrics describe these relationships. Further, we can investigate how
a certain CSS metric-KPI relation at a certain scale affects another CSS metric-KPI relation at
a different scale (e.g. a strategy leading to throughput maximisation at the device level might
compromise the fairness objective of the resource allocation scheduler at the interaction level).
This process involves assessing the ability of the CSS metrics to describe the impact of operator
behaviours, analysing the effect of these behaviours on the network KPIs, and finally describing
the network KPIs in terms of the CSS metrics. Determining the link between network CSS
metrics and KPIs would allow us to attempt to answer fundamental questions such as whether
one needs a minimum complexity for achieving a given level of KPI (fitness), and what excess
complexity implies in terms of adaptivity and robustness vs. cost.
In summary, the analysis layer completes the development of the core of our framework, by
establishing a compact representation of the networks by linking complexity metrics to network
performance.
3) Tuning Layer: The tuning layer augments the framework with algorithms that automatically
guide the operation and management behaviours of relevant agents to achieve desired network
properties. This tuning approach utilizes the holistic information encoded into the complexity
based quantities, to select appropriate parameters and constraints for the behaviours of the agents.
The developed tuning approach can be based on the application of multi-objective optimization
techniques; the algorithms to be developed within this paradigm might apply multi-objective
optimization algorithms (e.g., NSGA-II, PGEN, SMS-EMOA, successive Pareto optimization) to
determine the Pareto fronts for the state spaces of the agent behaviours on the basis of achieving
desirable CSS metrics values. These Pareto fronts provide the parameters and constraints of the
operator behaviours, allowing operators to further optimize for specific differentiations while
maintaining desired holistic properties. A particular solution may be selected from the Pareto
front on the basis of agent preferences, such as a preference for high adaptivity and robustness
or low complexity, without compromising the overall quality of the solution.
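To make this tuning step more concrete, the following is a minimal sketch of the selection logic it describes. It is our own illustration, not an implementation of NSGA-II, PGEN or SMS-EMOA, and the behaviour names and objective scores are hypothetical: candidate agent behaviours are scored on a few objectives, the Pareto front is extracted by non-dominated filtering, and one solution is then picked according to agent preferences.

```python
# Minimal sketch of Pareto-front extraction and preference-based selection for the
# tuning layer (illustrative only; behaviour names and scores are hypothetical).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    name: str
    objectives: Tuple[float, ...]   # all objectives expressed as "higher is better"


def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a.objectives, b.objectives)) and a.objectives != b.objectives


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only the non-dominated candidates."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]


def select(front: List[Candidate], weights: Tuple[float, ...]) -> Candidate:
    """Pick one Pareto-optimal behaviour according to agent preferences (weights)."""
    return max(front, key=lambda c: sum(w * o for w, o in zip(weights, c.objectives)))


if __name__ == "__main__":
    # hypothetical behaviours scored as (throughput KPI, robustness, negated complexity)
    options = [
        Candidate("freq-reuse-1", (0.9, 0.4, -0.8)),
        Candidate("freq-reuse-3", (0.6, 0.7, -0.5)),
        Candidate("self-organised", (0.7, 0.9, -0.6)),
        Candidate("static-plan", (0.5, 0.5, -0.4)),
    ]
    front = pareto_front(options)
    print("Pareto front:", [c.name for c in front])
    print("preferred:", select(front, weights=(0.5, 0.4, 0.1)).name)
```

In a real deployment the weights would encode the agent preferences mentioned above (e.g., favouring robustness over low complexity) without discarding the other Pareto-optimal options.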
IV. APPLICATIONS OF THE PROPOSED FRAMEWORK
A. Modeling Layer
1) Agent-Based Modelling of the Internet of Things: We employ an instance of our framework
concept to investigate the tightened coupling between operative reality and information transfer
precipitated by IoT. As such, this investigation resides primarily in the modelling phase with
some extension into the analysis phase. Within this work, we apply the tool of ABM to study
the impact of communications technologies within the scope of IoT [10].
An automatic traffic management system is considered, where for the purposes of illustrating
Fig. 3: Single Intersection Diagram. Sensors are deployed alongside the roads and are represented as dots. Inactive
sensors are depicted as black dots; sensors detecting moving and static cars are shown as orange and purple dots
respectively.
the nature of our ABM approach, a single intersection is assumed, depicted in Fig. 3, controlled
with traffic lights, in which the avenue of the cross-road is observed by sensor nodes. A processing unit, here denoted as the decision maker (DM), serves as the sink of sensor information and
the source of light control commands. Sensor nodes mark the advancement of cars, here portrayed
as yellow squares and proceeding on the left side of the roadway, toward the intersection. Two
MAC protocols (CSMA and Aloha) are investigated, for communication between the sensors
and the DM. The DM applies the resultant information from this process to govern vehicular
progress through the coloration of traffic signals.
Notably, the semantics of communications greatly impact the operation of the physical system.
Fig. 4 exemplifies this notion through a depiction of the difference between the actual number
of cars waiting at a traffic light and the perceived number of cars known to the DM component
of the system. As revealed by ABM, the minor difference of pre-sensing a channel (CSMA) or
not (Aloha) causes either an over- or an under-estimation of the actual number of vehicles by
the controlling element in the system. As such, the application of ABM techniques allows the
development of an understanding of the various inter-relationships that direct the behavior of a
complete telecommunication system.
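A minimal agent-based sketch of this kind of investigation is given below. It is our own toy abstraction, not the model of [10]: sensor agents generate detections and report them to the DM over a shared channel using either Aloha or an idealised CSMA, and the gap between generated and delivered reports stands in for the DM's distorted perception of the traffic state. All parameter values are hypothetical.

```python
# Toy agent-based sketch (not the model of [10]): sensors report detections to the
# decision maker (DM) over a shared MAC; collisions make the DM's picture incomplete.
import random

random.seed(1)

N_SENSORS = 20     # sensor agents along the roads (hypothetical value)
P_DETECT = 0.3     # per-slot probability that a sensor has a new detection to report
P_TX_ALOHA = 0.5   # Aloha: a backlogged sensor transmits with this probability


def mac_slot(protocol, backlog):
    """One MAC slot: new detections join the backlog, then the MAC decides who transmits."""
    new = [i for i in range(N_SENSORS) if i not in backlog and random.random() < P_DETECT]
    backlog.update(new)
    if protocol == "aloha":
        transmitters = [s for s in backlog if random.random() < P_TX_ALOHA]
    else:  # idealised CSMA: carrier sensing lets at most one backlogged sensor transmit
        transmitters = [random.choice(sorted(backlog))] if backlog else []
    success = len(transmitters) == 1          # otherwise the slot is idle or a collision
    if success:
        backlog.discard(transmitters[0])
    return len(new), int(success)


def run(protocol, slots=500):
    backlog, generated, delivered = set(), 0, 0
    for _ in range(slots):
        g, d = mac_slot(protocol, backlog)
        generated += g
        delivered += d
    return generated, delivered


for proto in ("aloha", "csma"):
    g, d = run(proto)
    print(f"{proto}: DM received {d} of {g} detections (perception gap = {g - d})")
```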
Fig. 4: Impact of MAC on Perception of Situation on a single-intersection scenario. Vehicles always travel in a
straight line at constant speed, unless they need to stop due to traffic lights or other cars. At each iteration the
probability of a new car arriving at one of the four edges of the grid and travelling in the corresponding direction
is 50%.
2) Functional Complexity: As another example of work at the modelling layer, we have
developed a metric to capture the amount of information shared between elements of a network
(as a result of the organization of the network) in support of a network function. This analytical
approach to quantify the complexity of a functional topology provides us with the means to
capture the signaling complexity of functional operations within a network, such as handover or
frequency assignment. That is, our complexity metric provides a new method of describing the
functional operation of telecommunication networks.
Our complexity metric is built upon the concept of Shannon entropy (H_r(x_n)). We employ the Bernoulli random variable x_n to model the potential of a node to interact with other nodes. The probability of interaction p_r(x_n = 1) is defined as the reachability of a node n (p_r(x_n = 1) = i_n^r / j, where i_n^r is the number of nodes that can reach node n and j is the number of
nodes for the given subgraph). The definition of reachability, in terms of the number of hops
allowed between two nodes in the functional topology, enables the analysis of complexity on
multiple scales (r). The one hop reachability represents the lowest possible scale (r = 1), where
each node interacts only with its immediate neighbors. The increasing number of allowed hops
between the nodes brings the nodes closer to each other in terms of interactions, and moves the
focus from interactions among nodes to interactions among groups of nodes, i.e., analysis of
higher scales. The total amount of information of the k-th subgraph with j nodes for scale r is calculated as

I_r(\Lambda_k^j) = \sum_{n \in \Lambda_k^j} H_r(x_n),     (1)
where \Lambda_k^j is the k-th subgraph with j nodes. The total amount of information represents the
total uncertainty which is related to the actual roles of nodes that appear within a subgraph and
different subgraph patterns. Our complexity metric, which is calculated with Eq. (2), quantifies
the amount of order and structure in a system that is seemingly disordered.
C_F = \frac{1}{R-1} \sum_{r=1}^{R-1} \sum_{j=1+r}^{N} \left| \langle I_r(\Lambda^j) \rangle - \frac{r+1-j}{r+1-N} I_r(\Lambda^N) \right|     (2)
where R is the maximum scale size, which is defined as the diameter of the functional topology,
N is the number of nodes in the functional topology, \Lambda^N is the whole functional graph, and \langle I_r(\Lambda^j) \rangle is the average amount of information for a given subgraph size j. We call the metric
in Eq. (2) functional complexity.
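As an illustration of how Eq. (2) could be evaluated on a functional topology, the following is a minimal sketch under our own simplifying assumptions: the topology is given as a networkx graph, reachability excludes the node itself, and the average over subgraphs of size j is approximated by randomly grown connected subgraphs rather than by full enumeration. It is not the authors' reference implementation.

```python
# Minimal sketch of the functional complexity metric C_F in Eq. (2), under stated
# simplifications (random connected subgraphs approximate the subgraph average).
import itertools
import math
import random

import networkx as nx


def bernoulli_entropy(p):
    """Shannon entropy of a Bernoulli variable with success probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def information(subgraph, r):
    """I_r(Lambda): summed node entropies, with reachability within r hops."""
    j = subgraph.number_of_nodes()
    total = 0.0
    for n in subgraph.nodes():
        # nodes that can reach n within r hops (excluding n itself -- our assumption)
        reachable = nx.single_source_shortest_path_length(subgraph, n, cutoff=r)
        total += bernoulli_entropy((len(reachable) - 1) / j)
    return total


def sample_connected_subgraphs(graph, size, samples=200, seed=0):
    """Randomly grow connected subgraphs of a given size (approximation, not enumeration)."""
    rng = random.Random(seed)
    nodes = list(graph.nodes())
    out = []
    for _ in range(samples):
        current = {rng.choice(nodes)}
        while len(current) < size:
            frontier = set(itertools.chain.from_iterable(graph.neighbors(v) for v in current)) - current
            if not frontier:
                break
            current.add(rng.choice(sorted(frontier)))
        if len(current) == size:
            out.append(graph.subgraph(current))
    return out


def functional_complexity(graph):
    """C_F from Eq. (2): deviation of <I_r(Lambda^j)> from the linear reference (assumes diameter > 1)."""
    N = graph.number_of_nodes()
    R = nx.diameter(graph)          # maximum scale = diameter of the functional topology
    I_full = {r: information(graph, r) for r in range(1, R)}
    cf = 0.0
    for r in range(1, R):
        for j in range(1 + r, N + 1):
            subs = sample_connected_subgraphs(graph, j) if j < N else [graph]
            avg_I = sum(information(s, r) for s in subs) / len(subs)
            reference = (r + 1 - j) / (r + 1 - N) * I_full[r]
            cf += abs(avg_I - reference)
    return cf / (R - 1)


if __name__ == "__main__":
    g = nx.barabasi_albert_graph(12, 2, seed=1)   # stand-in functional topology
    print("C_F =", functional_complexity(g))
```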
Our approach holistically gauges the functional organization of a network by first describing
the interactions necessary to perform a given function topologically. Within this representation,
we capture the network elements involved in performing some function and the interactions that
support the operation of the function.
Our quantification of networks in terms of their functional relationships provides a wholly new
approach to understanding the operation of networks. As corroborated by Fig. 5, more typical
metrics for network topology do not capture the notions represented by our complexity metric
(in fact, the correlation of our complexity metric with other traditional metrics is lower than
Fig. 5: Correlation between the proposed complexity metric and the three most used measures of network topology
(i.e., average path length, average degree distribution, clustering coefficient).
0.5 in all the cases we consider); this complexity metric thus provides an alternative method of
describing network operation.
The above functional topology and complexity framework can be applied for instance to
understand the underlying mechanisms that lead to certain network properties (i.e., scalability,
energy efficiency) in Wireless Sensor Networks (WSN) as the result of different clustering
algorithms [6].
B. Analysis Layer
In the context of the analysis layer of our framework, we focus on a cellular network that self-organises from a frequency perspective to understand the collective behaviour of the network.
We calculate the excess entropy
E_C = \sum_{M=1}^{\infty} \left( h(M) - h \right)     (3)
to measure complexity, and the entropy
h = \lim_{M \to \infty} h(M),     (4)
where h(M ) is the entropy of the target cell X conditioned on M surrounding cells. By measuring
EC and h we gain an understanding of the structure emerging in the lattice for a self-organising
network.
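For illustration, a truncated empirical estimator of Eqs. (3) and (4) could look like the sketch below. This is our own simplification, not the procedure of [3], [4]: h(M) is estimated as the conditional entropy of a cell's channel given its M nearest neighbouring cells, the infinite sum is truncated at M_max, and h(M_max) is used as a proxy for the entropy rate h. The random lattice is only a stand-in for a real channel allocation.

```python
# Truncated empirical estimator of the excess entropy in Eq. (3) (our simplification).
from collections import Counter
import math

import numpy as np


def conditional_entropy(lattice, M):
    """H(X | M nearest neighbours), estimated from relative frequencies on the lattice."""
    rows, cols = lattice.shape
    # offsets of the M nearest cells (excluding the target), ordered by distance
    offsets = sorted(
        ((dr, dc) for dr in range(-2, 3) for dc in range(-2, 3) if (dr, dc) != (0, 0)),
        key=lambda o: (o[0] ** 2 + o[1] ** 2, o),
    )[:M]
    joint, context = Counter(), Counter()
    for r in range(2, rows - 2):
        for c in range(2, cols - 2):
            ctx = tuple(lattice[r + dr, c + dc] for dr, dc in offsets)
            joint[(ctx, lattice[r, c])] += 1
            context[ctx] += 1
    n = sum(joint.values())
    h = 0.0
    for (ctx, _), cnt in joint.items():
        h -= (cnt / n) * math.log2(cnt / context[ctx])
    return h


def excess_entropy(lattice, M_max=8):
    """Truncated E_C = sum_M (h(M) - h), with the entropy rate h approximated by h(M_max)."""
    hs = [conditional_entropy(lattice, M) for M in range(1, M_max + 1)]
    h_rate = hs[-1]
    return sum(hM - h_rate for hM in hs), h_rate


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    channels = rng.integers(0, 4, size=(40, 40))   # stand-in channel allocation lattice
    E_C, h = excess_entropy(channels)
    print(f"E_C ~ {E_C:.3f}, h ~ {h:.3f}")
```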
Based on Eqs. (3) and (4), it is shown in [4] that a self-organising cellular network can exhibit
a complex behaviour, and that it can be robust against changes in the environment. In more
detail, a self-organised and a centralised channel allocation are analyzed, with respect to their
robustness to local changes in the environment. In order to compare the stability of the two
types of channel allocation, 102 instances of the self-organising frequency allocation algorithm
are run using 102 × 102 lattices. Then, for each resulting channel allocation, all possible cells
n are considered, and for each cell all possible frequencies are in turn considered. Then the
optimal minimum distance c to an interference-free channel allocation is computed (we define
the distance between two channel allocations as the number of changes that are necessary to
move from one configuration to the other). We found that the locally perturbed channel allocation
matrices resulting from self-organisation are more stable than those resulting from a centralized
frequency planner.
What we know so far is that there is a relation between some complexity metrics and
some telecom KPIs (i.e., between excess entropy and robustness to changes [4], and between
functional complexity and the trade-off scalability-energy efficiency [6]). The complexity metrics
we introduced have shed some new light on very relevant telecom KPIs/properties in the context
of 5G networks, i.e., excess entropy can measure self-organization capabilities in the frequency
allocation context and functional complexity can measure scalability in WSN. As widely acknowledged, self-organization and scalability are very important properties of 5G systems (e.g.,
for IoT and dense small cell deployments). In the future we plan to improve and expand such
understanding to all the most prominent 5G network technologies and KPIs.
C. Tuning Layer
ABM rules will choose the technological behaviour options that maximize the targeted communication network KPI, subject to constraints defined by the correlation between CSS metrics
Fig. 6: The adaptation of network configuration parameters in the tuning layer. The set of available parameters
represents a virtual pool of all the available network resources. The fitness functions depict a relationship between
different network KPIs and complexity metrics which are calculated upon the set of available parameters.
and other telecom KPIs. Local decisions will be based on only a few CSS metrics, and will
lead to desired global behaviours/KPIs of the network. The local decisions are made according
to ABM rules, by exploring and selecting the fittest behaviours (where by behaviour we mean
some algorithm or policy acting on some radio resources).
Our goal, for different services (e.g., mobile broadband, M2M) is to choose behaviours that
allow the network to achieve satisfactory KPIs, in terms of, e.g., delay, throughput, coverage,
energy efficiency, out-of-band emission, etc. The question is whether we can keep achieving
globally satisfactory KPIs just by changing ABM rules in a distributed fashion at different nodes.
Such adaptation will act within a certain resource allocation domain (e.g., picking among different
massive MIMO schemes) or between performance-equivalent allocations using resources from
different domains (e.g., spectrum or infrastructure). The main ideas behind the tuning layer of
our framework are exemplified in Fig. 6.
Although our own work on the tuning layer is still in the initial phase, from a substantial
amount of literature, we can gather evidence that different physical layer (PHY) and Radio
Resource Management (RRM) techniques in the 5G domain should be chosen depending on
environmental conditions and network requirements, i.e., we are potentially in a situation where
our tuning layer is relevant and beneficial. We give a brief account of such evidence next.
In [18], it is shown that for a massive MIMO system, the sum-rate has a linear or sublinear
behaviour with respect to the number of base station antennas, depending on the spatial richness
of the environment; related work on adaptive precoding for distributed MIMO is explored in
[19]. Several works investigate the coexistence of various waveforms in terms of cross-waveform
leakage interference [20], [21] and possible implications for the waveform selection [22]. The
fraction of cells that have full duplex base stations can be used as a design parameter, to target an
optimal trade-off between area spectral efficiency and outage in a mixed full/half duplex cellular
system [23], [24]. In [25], it is shown that increasing the frequency reuse can improve the
throughput-coverage trade-off for ultra-dense small cell deployments, while a lower frequency
reuse should be favoured if the target is maximizing throughput given a certain BS density.
In summary, we plan to use the above understanding of the benefit of adaptation at PHY and
MAC layers in 5G networks, and extend it as needed in terms of technology components,
KPIs and adaptation criteria, to inform our framework, and show its immediate benefit in
understanding, operating and designing 5G systems.
V. OPEN CHALLENGES
Several more 5G component technologies, in addition to those considered in this paper, can
enrich the set of possible choices used to model, analyse and tune the network, including
(massive) co-located or distributed multiple antenna arrays; different waveforms and multiple
access schemes; different duplexing schemes; novel spectrum sharing schemes such as License
Assisted Access (LAA); and different frequency reuse schemes including probabilistic ones for
ultra-dense networks.
What we know so far is that there is a relation between some complexity metrics and some
telecom KPIs (i.e., between excess entropy and robustness to changes, and between functional
complexity and the trade-off scalability-energy efficiency). In the future the aim is to improve and
expand such understanding to all the most prominent technologies and KPIs for 5G networks. In
particular it is still an open question how to achieve the desired network tuning properties within
a large optimization space encompassing many different network resources, KPI objectives and
constraints, many different heterogeneous co-existing networks and a very large number of nodes
and decision points. We conjecture that ABM can help us achieve such an ambitious goal, as a key tool
to engineer desired emergent properties in such future challenging networks.
As the network graph representations discussed in the proposed framework might dynamically
change according to the different radio resource domains and related techniques used, one open
area of investigation is to study how the complexity metrics can be calculated and how they
evolve over time for such dynamic multi-dimensional resource allocation; and then use such
metrics to analyse and tune the network behaviour taking into account robustness, resilience,
network utilization, and other time-dependent network characteristics.
VI. CONCLUSION
Current complex systems science literature focusing on communication systems draws on
network science, studying applications and traffic modelling, but lacks considerations of architecture, infrastructure, and technology. We instead apply complex systems science to wireless
networks from the functional perspective, drawing on concepts from information (i.e., different
metrics of complexity) and computational (i.e., agent-based modeling) theory, adapted from complex systems science.
Since complex systems science metrics are currently absent from the quantities considered
when operating and designing communication networks, by introducing our proposed framework
we initiate a completely new way to model, analyse and engineer networks, founding a new
theory and practice of telecommunications not previously anticipated. As a simple example,
our work on exploring frequency planning from a complex systems perspective leads us to
conclude that future networks shall eschew any current frequency planning approaches and
instead determine frequency of operation on the fly, with enormous implications for design, roll-out and operation of networks. We believe such a distributed decision-making paradigm is likely
going to be the way forward for many of the future 5G and IoT resource allocation problems. In
particular, we have reasons to believe that complex systems science provides the key to unlock
the full potential of self-organization in telecom systems.
REFERENCES
[1] S. Lloyd, “Measures of complexity: A nonexhaustive list,” IEEE Control Systems Magazine, vol. 21, no. 4, pp. 7–8, Aug.
2001.
[2] D.P. Feldman, J.P. Crutchfield, “Structural information in two-dimensional patterns: Entropy convergence and excess
entropy,” Physical Review E, 2003.
[3] I. Macaluso, H. Cornean, N. Marchetti, L. Doyle, “Complex communication systems achieving interference-free frequency
allocation,” in IEEE ICC, 2014, pp. 1447–1452.
[4] I. Macaluso, C. Galiotto, N. Marchetti, L. Doyle, “A complex systems science perspective on cognitive networks,” Journal
of Systems Science and Complexity, vol. 29, no. 4, pp. 1034–1056, January 2016.
[5] M. Dzaferagic, N. Kaminski, N. McBride, I. Macaluso, N. Marchetti, “Functional Complexity Framework for the Analysis
of Telecommunication Networks,” Journal of Systems Science and Complexity, under review, available online on arXiv:
https://arxiv.org/abs/1607.02020.
[6] M. Dzaferagic, N. Kaminski, I. Macaluso, N. Marchetti, “Relation between Functional Complexity, Scalability and Energy
Efficiency in WSNs,” in International Wireless Communications and Mobile Computing Conference (IWCMC), Jun. 2017,
under review, available online on arXiv: https://arxiv.org/abs/1610.05970.
[7] M. Niazi, A. Hussain, “Agent-based tools for modeling and simulation of self-organization in peer-to-peer, ad hoc, and
other complex networks,” IEEE Communications Magazine, vol. 47, no. 3, pp. 166–173, Mar. 2009.
[8] R. Cirillo et al., “Evaluating the potential impact of transmission constraints on the operation of a competitive electricity
market in Illinois,” Report ANL, 16(06), 2006.
[9] A. Tonmukayakul, M. Weiss, “An agent-based model for secondary use of radio spectrum,” in IEEE International
Symposium on Dynamic Spectrum Access Networks (DySPAN), 2005, pp. 467–475.
[10] N. Kaminski, M. Murphy, N. Marchetti, “Agent-based Modelling of an IoT Network,” in IEEE International Symposium
on Systems Engineering (ISSE), Oct. 2016.
[11] J. Whitacre, “Degeneracy: a link between evolvability, robustness and complexity in biological systems,” Theoretical
Biology and Medical Modelling, 2010.
[12] C. Hooker, Philosophy of Complex Systems. Elsevier, 2011.
[13] J. Candia, et al., “Uncovering individual and collective human dynamics from mobile phone records,” Journal of Physics
A: Mathematical and Theoretical, vol. 41, no. 22, pp. 1–11, May 2008.
[14] P. Deville, C. Inard, S. Martin, M. Gilbert, F. Stevens, A. Gaughan, V. Blondel, and A. Tatem, “Dynamic population
mapping using mobile phone data,” Proceedings of the National Academy of Sciences, 2014.
[15] C. A. Hidalgo and C. Rodriguez-Sickert, “The dynamics of a mobile phone network,” Physica A: Statistical Mechanics
and its Applications, vol. 387, no. 12, pp. 3017–3024, May 2008.
[16] J.P. Onnela et al., “Structure and tie strengths in mobile communication networks,” Proceedings of the National Academy
of Sciences, vol. 104, no. 18, pp. 7332–7336, May 2007.
[17] P. Wang, M. Gonzalez, C.A. Hidalgo, and A.L. Barabasi, “Understanding the spreading patterns of mobile phone viruses,”
Science, vol. 324, no. 5930, pp. 1071–1076, March 2009.
[18] F. Bentosela, H. Cornean, A. Farhang, N. Marchetti, “On the Sublinear Behavior of Massive Multi User MIMO Sum Rate
for Deterministic Channel Models,” IEEE Transactions on Communications, Aug. 2016.
[19] Y.-S. Ryu, S.-H. Jung, and H.-K. Song, “Adaptive precoding scheme with efficient joint processing for downlink coordinated
multi-point transmission system,” Electronics Letters, 2015.
[20] H. Xing, M. Renfors, “Investigation of filter bank based device-to-device communication integrated into OFDMA cellular
system,” in International Symposium on Wireless Communications Systems (ISWCS), Aug. 2014.
[21] Q. Bodinier, F. Bader, and J. Palicot, “Modeling interference between OFDM/OQAM and CP-OFDM: Limitations of the
PSD-based model,” in International Conference on Telecommunications (ICT), May 2016.
[22] C. Sexton, Q. Bodinier, A. Farhang, N. Marchetti, F. Bader, L. DaSilva, “Coexistence of OFDM and FBMC for Underlay
D2D Communication in 5G Networks,” in IEEE Global Telecommunications Conference (GLOBECOM), Dec. 2016.
[23] S. Goyal, C. Galiotto, N. Marchetti, S. Panwar, “Throughput and coverage for a mixed full and half duplex small cell
network,” in IEEE International Conference on Communications (ICC), May 2016.
[24] A. Cirik, K. Rikkinen, M. Latva-aho, “Joint Subcarrier and Power Allocation for Sum-Rate Maximization in OFDMA
Full-Duplex Systems,” in IEEE Vehicular Technology Conference (VTC), May 2015.
[25] C. Galiotto, N. Pratas, L. Doyle, N. Marchetti, “Effect of LOS/NLOS Propagation on Ultra-Dense Networks,” Computer
Networks, under review, available on arXiv: https://arxiv.org/abs/1507.01757.
Analysis of Unprotected Intersection Left-Turn Conflicts based on
Naturalistic Driving Data
Xinpeng Wang1 , Ding Zhao2 , Huei Peng3 and David J. LeBlanc2
Abstract— Analyzing and reconstructing driving scenarios is
crucial for testing and evaluating highly automated vehicles
(HAVs). This research analyzed left-turn / straight-driving conflicts at unprotected intersections by extracting actual vehicle
motion data from a naturalistic driving database collected by
the University of Michigan. Nearly 7,000 left turn across path
- opposite direction (LTAP/OD) events involving heavy trucks
and light vehicles were extracted and used to build a stochastic
model of such LTAP/OD scenario, which is among the top
priority light-vehicle pre-crash scenarios identified by National
Highway Traffic Safety Administration (NHTSA). Statistical
analysis showed that vehicle type is a significant factor, whereas
the change of season seems to have limited influence on the
statistical nature of the conflict. The results can be used to
build testing environments for HAVs to simulate the LTAP/OD
crash cases in a stochastic manner.
I. INTRODUCTION
Before highly automated vehicles (HAVs) can be released
to the general public, a well-defined process for testing and
evaluating them must be established. The Google self-driving
car project experienced its first shared-responsibility crash in
February 2016 [1]. Moreover, Tesla Autopilot failed to detect
a semi-truck in its first fatal crash, which happened in May 2016,
and was criticized for using the consumers as beta testers [2].
Fig. 1 briefly demonstrates how this crash happened, with
the red sedan representing the Tesla. The National Highway
Traffic Safety Administration (NHTSA) is now considering
the possibility of putting a pre-market approval process into
place [3], in addition to a rigorous self-certification process
still anticipated from the vehicle manufacturers.
A key factor in HAV testing is the test scenarios and
behaviors of other road users, particularly those of other
vehicles. The test conditions need to be not only realistic
but also feasible for repeated safety tests. Test scenario
models can be divided into two types. The first type has
fixed scenarios, such as the tests of lane support systems
(LSS) [4] and autonomous emergency braking (AEB) [5]
launched by The European New Car Assessment Programme
(EURO NCAP). A major advantage of this type is that it is
repeatable. However, it is hard to use this type of model to
* This work is funded by the Mobility Transformation Center Denso
Tailor Project at the University of Michigan with grant No. N020210.
1 X. Wang is with the Department of Automation, Tsinghua University,
Beijing, China, 100084, and now he is a visiting scholar at the University
of Michigan, Ann Arbor, MI 48109, U.S.
2 D. Zhao (corresponding author: [email protected]) and D.
J. LeBlanc are with the University of Michigan Transportation Research
Institute, Ann Arbor, MI 48109, U.S.
3 H. Peng is with Department of Mechanical Engineering at the University
of Michigan Transportation Research Institute, Ann Arbor, MI 48109, U.S.
Fig. 1. A brief description of the Tesla accident.
represent the highly complex and variable nature of the human driving environment. Moreover, HAVs could be adjusted
to pass certain fixed scenarios while their performance under
broad conditions might not be well assessed. To overcome
these drawbacks, we proposed a second type of model. In
our previous works [6]–[8], we proposed a stochastic test
method and built a test environment for car-following and
lane-change scenarios. In this paper, we will focus on the
intersection scenario.
The intersection has been one of the most challenging
scenarios for HAVs, due to the variety of road users, complexity of traffic flow and the unpredictability of vehicles and
pedestrians. According to [9], crashes at intersections took
up a major portion, about 44 %, of all the traffic crashes
in the US. Among all kinds of scenarios with potential
risks at an intersection, unprotected left turn across path opposite direction (LTAP/OD) is a typical one. This scenario
is ranked second among 10 priority V2V light-vehicle pre-crash scenarios [10]. In an LTAP/OD scenario, two vehicles are considered: the turning vehicle (TV) and the straight-driving vehicle (SdV).
Although a lot of research, such as [11], [12], has been
conducted on traffic conflict analysis of LTAP/OD scenario,
the factor of vehicle type has not been widely investigated.
Now that the crash of the Tesla with Autopilot system has
been attributed to its failure to detect the truck turning ahead
[13], it is crucial that more attention should be paid to
scenarios involving heavy trucks. Moreover, there has been
insufficient research on the influence of season change on
driving behaviors at intersections. As extreme weather such
as storms and fog has a strong impact on the driving behaviors of human drivers, we propose that such weather is possibly
influential to HAVs as well.
This research focused on two major tasks: first, it built a
stochastic model of traffic conflicts in the LTAP/OD scenario
TABLE I: Introduction to IVBSS database.
Light vehicle: distance 213,309 mi; time Apr. 2009 - Apr. 2010; trips 22,657; vehicles / drivers 16 sedans / 108 drivers; front radar Bosch LRR2.
Heavy truck: distance 601,994 mi; time Feb. 2009 - Dec. 2009; trips 22,724; vehicles / drivers 10 tractors / 20 drivers; front radar TRW AC20.
based on naturalistic driving data. Events involving both
light vehicles (LVs) and heavy trucks (HTs) as the SdV
were extracted from the database, reconstructed into realistic
trajectories of TVs and SdVs, and finally described with
several key variables. Secondly, the influence of vehicle type
of the SdV and the season factor to the driving behavior
of the TV was analyzed by comparing the distribution of
these key variables between LVs and HTs as well as between
summer and winter.
II. DATA SOURCE
The data source for this research is the Integrated Vehicle-Based Safety Systems (IVBSS) database [14], which was
collected from 2009 to 2010 and maintained by the University of Michigan Transportation Research Institute (UMTRI).
The database consists of two parts: LV platform and HT
platform. It comes from a naturalistic field operational test
(N-FOT) which is to assess the potential safety benefits
and driver acceptance associated with a prototype integrated
crash warning system [15], [16]. As the system incorporates
forward crash warning (FCW), lateral drift warning (LDW),
lane-change/merge warning (LCM) and curve speed warning
(CSW), none of these functions is designed to deal with the
LTAP/OD scenario. Thus, it is assumed in this research that
whether the warning system is enabled will not affect driver
behavior at LTAP/OD scenario.
For LV platform [14], 16 identical prototype vehicles were
driven by 108 drivers for their own personal use for over
six weeks. On each test vehicle, there is one long-distance
77-GHz radar that looks forward and six 24-GHz radars
that cover the adjacent lanes as well as the area behind the
vehicle. In addition, there is a vision system, an automotive-grade non-differential global positioning system (GPS) and
an on-board digital map. Around 700 different channels of
signals have been collected.
For the HT platform [17], 18 male commercial truck
drivers from Con-way Freight drove 10 equipped Class
8 tractors for 10 months. There are eight radars, three
exterior cameras and several interior cameras on the test
truck, recording over 500 channels of data including the
driving environment, driver activity, system behaviors and
vehicle kinematics. Basic information about the LV and HT
platforms in the IVBSS database is listed in Table I; the
configuration of on-board sensors on each platform is shown
in Fig. 2.
The test area covered by IVBSS N-FOT is primarily in
Fig. 2. Sensor configuration of IVBSS test vehicles: (a) light vehicle; (b) heavy truck.
the Detroit area of the U.S. Most of the HT trips took place
in the lower peninsula of Michigan (63 %) and Ohio (33 %)
[16]. Most of the LV trips fell within a similar region [15].
The database provides adequate information for this research. For event extraction, data from the on-board GPS
sensor is used to locate the instrumented vehicle; data from
the front long-range radar is used to reconstruct the trajectory
of target vehicles; video recordings from vision cameras
around the vehicles are used as a supplemental tool for event
screening. In addition, as the IVBSS test lasted approximately one year, driving data under a variety of weather
conditions throughout the year were covered, enabling us to
uncover the influence of season factors.
III. EXTRACTION OF LEFT TURN SCENARIO
In order to extract eligible left-turn events from the
database for both the LV and the HV platform, three major
tasks were performed. First, we processed data from radar
for further use. Second, we searched the database for all
left-turn events that meet our criteria. Finally, data points in
each event were interpreted into trajectories of the TV and
the SdV.
A. Target Association of Truck Data
For radar data from the HT platform, we need to associate
and mark data points together that belong to the same target
in order to screen out unfit targets and create a trajectory for
every eligible TV. To cluster points of interest, we apply the following criteria when processing HT data (a code sketch is given after the list):
• Only objects (TVs) that move in the opposite direction
are detected (vTV < -0.3 m/s).
• Only points with small azimuth angle (|α| < 5.5°) are
considered, as the effective detecting range of the radar
is 11°.
• When the cluster with point i is expanded, only data
points within a small time slot ([t(i), t(i) + 0.85 s]) are
considered.
• Only neighbor points that satisfy the following rules are
grouped:
1) Strong correspondence between range, range rate
and time difference:
\left| 1 - \frac{\delta t_{pred}}{t(j) - t(i)} \right| < 0.3     (1)
where
\delta t_{pred} = 2 \cdot \frac{r(j) - r(i)}{rr(j) + rr(i)}
Fig. 3. An exemplary result of target association: (a) range; (b) transversal.
Fig. 4. Configuration of the instrumented vehicle and the target vehicle for event extraction in LTAP/OD scenario.
Here, r(i) is the range of point i, and rr(i) is the
range rate of point i
2) Reasonable difference in transversal.
\frac{tr(j) - tr(i)}{t(j) - t(i)} < 20 \ \mathrm{m/s}     (2)
Here, tr(i) is the transversal of data point i, and
t(i) is the time.
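As a concrete reading of these criteria, the sketch below filters radar returns and groups them greedily using Eq. (1) and Eq. (2). It is our own illustration: the field names are hypothetical (not the IVBSS channel names), the greedy grouping strategy is our simplification, and Eq. (2) is applied as a magnitude bound.

```python
# Minimal sketch of the target-association step (illustrative; field names hypothetical).
from dataclasses import dataclass
from typing import List


@dataclass
class RadarPoint:
    t: float      # timestamp [s]
    r: float      # range [m]
    rr: float     # range rate [m/s]
    tr: float     # transversal [m]
    v: float      # target longitudinal speed [m/s]
    az: float     # azimuth angle [deg]
    cluster: int = -1


def compatible(a: RadarPoint, b: RadarPoint) -> bool:
    """Eq. (1) and Eq. (2): b is a plausible continuation of the same target as a."""
    dt = b.t - a.t
    if not (0.0 < dt <= 0.85):
        return False
    if abs(b.rr + a.rr) < 1e-6:
        return False
    dt_pred = 2.0 * (b.r - a.r) / (b.rr + a.rr)
    if abs(1.0 - dt_pred / dt) >= 0.3:             # Eq. (1)
        return False
    return abs((b.tr - a.tr) / dt) < 20.0           # Eq. (2), taken as a magnitude bound


def associate(points: List[RadarPoint]) -> int:
    """Greedy clustering of eligible points; returns the number of clusters found."""
    eligible = sorted(
        (p for p in points if p.v < -0.3 and abs(p.az) < 5.5), key=lambda p: p.t
    )
    next_id = 0
    for i, p in enumerate(eligible):
        if p.cluster == -1:
            p.cluster = next_id
            next_id += 1
        for q in eligible[i + 1:]:
            if q.cluster == -1 and compatible(p, q):
                q.cluster = p.cluster
    return next_id
```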
Fig. 3 shows an example of data points that are associated
and divided into different groups in one event. The dots
with the same color show trajectories of targets, while red
dots do not belong to any group and are seen as noise. In
such a typical LTAP/OD scenario, a vehicle is turning in
front of the instrumented truck. Fig. 3(a) shows how the
range of target points change over time. As target vehicles
cross the intersection when the instrumented truck is moving
forward at a steady speed, the ranges to different targets
are decreasing linearly. Moreover, Fig. 3(b) shows that the
transversal of multiple targets is increasing from negative to
positive, indicating that they cross from left to right in the
view of the instrumented truck. Once data points from each
target are clustered, the HT platform can be used for further
event extraction for eligible LTAP/OD scenarios.
B. Event Screening
An unprotected LTAP/OD scenario can be recorded by
either the SdV or the TV. In this paper, we use only the
scenarios recorded by SdVs. Fig. 4 demonstrates the configuration of the instrumented vehicle, i.e., the SdV and the
target vehicle, i.e., the TV for event extraction in LTAP/OD
scenarios. For both the LV and HT platforms, eligible left-turn events are queried based on the following criteria (a sketch of the resulting screening predicate follows the list):
• The intersection has a stop sign or a set of signal lights.
Although there will be protected LTAP/OD events retrieved with this criterion, they can be screened out by
the following conditions, such as the constraint on the
velocity of the TV and the SdV.
• The instrumented vehicle is moving straight (speed
larger than 3 m/s & change of heading angle smaller
than 10°).
• The target vehicle is moving towards the instrumented
vehicle (the longitudinal projection of speed smaller
than -0.5 m/s) and moving from left to right (due to
the difference in radars, transversal goes from positive
to negative for LVs, and from negative to positive for
HTs).
• Time duration of the event is adequate (more than 1.5 s).
• The maximum time difference between two consecutive points (defined as δt) in an event should be small enough for the points to be seen as belonging to the same target (max{δt} < 1 s).
Fig. 5. Procedure and interim results for event extraction.
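A minimal sketch of the screening predicate implied by these criteria is given below. The event fields and their names are hypothetical stand-ins for the corresponding IVBSS channels, and the transversal sign convention follows the criterion above.

```python
# Minimal sketch of the LTAP/OD event screening predicate (illustrative only).
import numpy as np


def is_eligible_ltap_od(event, platform="LV"):
    """event: dict of synchronized numpy arrays for one candidate occurrence at an intersection."""
    t = event["t"]                      # timestamps of radar points on the TV [s]
    v_sdv = event["v_sdv"]              # SdV speed over the event [m/s]
    heading = event["sdv_heading"]      # SdV heading angle [deg]
    v_long = event["tv_v_long"]         # TV longitudinal speed projection [m/s]
    transversal = event["tv_transversal"]

    sdv_straight = np.all(v_sdv > 3.0) and np.ptp(heading) < 10.0
    tv_oncoming = np.all(v_long < -0.5)
    # left-to-right crossing: sign convention differs between the LV and HT radars
    if platform == "LV":
        crossing = transversal[0] > 0 > transversal[-1]
    else:
        crossing = transversal[0] < 0 < transversal[-1]
    long_enough = (t[-1] - t[0]) > 1.5
    continuous = np.max(np.diff(t)) < 1.0
    return sdv_straight and tv_oncoming and crossing and long_enough and continuous
```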
Event extraction follows a similar procedure for LV and
HT. For the LV platform, we first select all straight-driving
occurrences at intersections; we then extract those with left-turn objects from the opposite direction. These tasks are
completed in the Microsoft SQL Server Management Studio
(SSMS). Afterwards, the extracted events are exported to
MATLAB for the last round of screening, which guarantees
reasonable speed, targets, and time duration. For the HT
platform, the only difference is that after retrieving all the
occurrences of straight-driving at intersections, we export
data from the database server directly into MATLAB for
target association and the following extraction tasks.
The diagram in Fig. 5 illustrates the procedure and interim
results of each phase for event extraction. Finally, HT has
5,780 eligible LTAP/OD events, whereas LV has 1,055. The
location of these events is shown in Fig. 6.
C. Trajectory Reconstruction
Once all eligible events have been selected, the trajectories
of both TV and SdV in each event are then reconstructed.
The exact position of SdV comes from the on-board GPS
sensor; the data from the front long-range radar are used to
extract the relative position of the TV in the coordinate frame of the SdV.
After synchronization on GPS and radar data, the trajectories
of TV and SdV are generated. Fig. 7 shows the reconstructed
trajectories for SdV and TV in one event. Here, dots with
the same color represent the position of TV and SdV at the
same moment. The TV crossed intersection before the SdV
in this example.
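A minimal sketch of this reconstruction step is shown below. It is our own illustration, not the IVBSS processing chain: the body-frame convention (x forward along the range, y to the left along the transversal) and the synthetic inputs are assumptions.

```python
# Minimal sketch of trajectory reconstruction: radar measurements of the TV in the SdV
# body frame are rotated by the SdV heading and translated by the SdV GPS position.
import numpy as np


def tv_global_trajectory(sdv_xy, sdv_heading_rad, radar_range, radar_transversal):
    """All inputs are synchronized 1-D arrays of equal length (one sample per timestamp)."""
    sdv_xy = np.asarray(sdv_xy, dtype=float)       # SdV position, e.g. in a local ENU frame [m]
    cos_h, sin_h = np.cos(sdv_heading_rad), np.sin(sdv_heading_rad)
    # assumed body frame: x forward (range), y to the left (transversal)
    dx = radar_range * cos_h - radar_transversal * sin_h
    dy = radar_range * sin_h + radar_transversal * cos_h
    return sdv_xy + np.stack([dx, dy], axis=1)


if __name__ == "__main__":
    t = np.linspace(0.0, 3.0, 31)
    sdv = np.stack([15.0 * t, np.zeros_like(t)], axis=1)   # SdV driving straight at 15 m/s
    heading = np.zeros_like(t)
    rng_m = 60.0 - 12.0 * t                                 # closing range to the TV [m]
    trans_m = -8.0 + 5.0 * t                                # TV crossing from left to right [m]
    print(tv_global_trajectory(sdv, heading, rng_m, trans_m)[:3])
```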
Fig. 6. The location of extracted events: (a) light vehicle; (b) heavy truck.
Fig. 7. Example of reconstructed trajectory.
Fig. 8. Time to the conflict point of the SdV in a LTAP/OD scenario.
IV. CONFLICT ANALYSIS AND COMPARISON
A. Definition and Metrics of Conflicts
In this section, conflict is used to describe risky events in
traffic. According to [18], conflict is defined as an observational situation in which two or more road users approach
each other in space and time to such an extent that a collision
is imminent if their movements remain unchanged. Many
conflict metrics have been used for measuring the level of
safety for an LTAP/OD event, including post-encroachment
time (PET), leading buffer (LB) and trailing buffer (TB) used
by [19], and gap time (GT) used by [20]. For this paper,
as the goal is to construct a stochastic model, we choose
only the most representative time slice in each event to
model LTAP/OD conflicts. The heading angle of the SdV is
taken as constant during each event, with any small deviation
being ignored. Thus, a conflict point is naturally defined as
the location of the TV when its transversal in the radar of
SdV crosses zero, and this exact moment is regarded as
the representative moment of this conflict, defined as Tx .
Consequently, four variables at Tx are chosen to model the conflict, including two modified conflict metrics: time to the conflict point (Tcp) and distance to the conflict point (Dcp):
• Dcp: Distance to the conflict point for the SdV at Tx,
Dcp = dist(P_SdV(Tx), P_TV(Tx)).     (3)
Here P_SdV and P_TV are the positions of the SdV and the TV.
• Tcp: Time to the conflict point for the SdV at Tx,
Tcp = Dcp / vSdV.     (4)
• vSdV: Speed of the SdV at Tx.
• vTV: Speed of the TV at Tx.
First, we demonstrate an example of conflict analysis on a single LTAP/OD event. Here we use the aforementioned occurrence, where the TV crossed the intersection before the SdV did. Fig. 8 uses Tcp to demonstrate how the SdV and the TV interacted in one real LTAP/OD event. The vertical axis indicates the predicted time to the point of conflict of the SdV, whereas the horizontal axis shows the real elapsed time relative to the moment when the TV crosses the intersection, that is, Tx. In this scenario, the time to the conflict point decreased linearly over time, indicating that the margin between the TV and the SdV was large enough for the SdV to maintain a nearly constant speed while the TV was crossing. When the TV reached the conflict point, there was a 2.3-second margin for the SdV, that is, Tcp, which is demonstrated by the red dot. Here, Tcp described the essence of this interaction between the SdV and the TV.
Then, the following modeling and analysis will ignore the detailed interaction of the TV and the SdV, paying attention only to the four aforementioned variables in each event. We use all events we retrieved in the previous section from both the HT and LV platforms as the source for modeling.
B. Comparison between Light Vehicles and Heavy Trucks
In this section, the effect of vehicle type on traffic conflict in LTAP/OD scenarios is discussed. Distributions of variables for LVs and HTs are compared.
As events with smaller Dcp and Tcp are more dangerous, we generated the distributions of the reciprocal of Dcp and Tcp to put these risky but rare events in the tail, as shown in Fig. 9. The dots and bars at the top of the figures show the mean value and the standard deviation of each empirical distribution. From Fig. 9, we can see that when Dcp^{-1} or Tcp^{-1} increases, there are fewer points of data, giving rise to a shape with a long tail. Moreover, events with an HT as SdV tend to have both smaller Dcp^{-1} and smaller Tcp^{-1} than with an LV, indicating less severe conflicts.
Fig. 9. Comparison of Dcp^{-1} and Tcp^{-1} between heavy trucks and light vehicles: (a) distribution of Dcp^{-1}; (b) distribution of Tcp^{-1}.
Fig. 10. Comparison of the speed between heavy trucks and light vehicles: (a) speed distribution of straight-driving heavy trucks and straight-driving light vehicles; (b) speed distribution of turning heavy trucks and turning light vehicles.
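For concreteness, the four variables defined in Sec. IV-A could be computed from one reconstructed event as in the sketch below; it is our own illustration, and the input arrays and their names are hypothetical.

```python
# Minimal sketch of computing Tx, Dcp, Tcp, vSdV and vTV for one reconstructed event.
import numpy as np


def conflict_variables(t, p_sdv, p_tv, transversal, v_sdv, v_tv):
    """All inputs are synchronized arrays; p_sdv and p_tv are (N, 2) position arrays."""
    # index of the representative moment Tx: first zero crossing of the TV transversal
    sign_change = np.where(np.diff(np.sign(transversal)) != 0)[0]
    k = int(sign_change[0]) if sign_change.size else int(np.argmin(np.abs(transversal)))
    d_cp = float(np.linalg.norm(p_sdv[k] - p_tv[k]))        # Eq. (3)
    t_cp = d_cp / float(v_sdv[k])                            # Eq. (4)
    return {"Tx": float(t[k]), "Dcp": d_cp, "Tcp": t_cp,
            "vSdV": float(v_sdv[k]), "vTV": float(v_tv[k])}
```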
Fig. 10 shows the distributions of vSdV and vTV. The distributions of vSdV for the HT and LV platforms both have a triangular shape. Most vSdV ranges from 12 to 20 m/s, whereas most vTV is less than 10 m/s at the conflict point. Though there is no obvious difference in the distribution of vSdV between events with an LV and an HT as the SdV, vTV tends to be significantly higher when the SdV is an LV than when it is an HT. Combined with the previous results of
Dcp and Tcp , we can conclude that for left-turn conflicts
where HTs are SdVs, the conflict metrics have significantly higher values, and TVs tend to turn with less aggressive speed. This means that when the TV chooses the time of turning and commences the turning action, it behaves more conservatively when confronted by an HT coming from the opposite direction than by an LV. The difference in vehicle type
does influence the driving behavior of the TV and the severity
of the conflict.
C. Analysis of Season Factor
In this section, we uncover the influence of season factor
on behaviors of SdVs and TVs in LTAP/OD scenarios.
During the test, 7 % of driving for HTs [16] and 15 % [15]
for LVs took place in freezing temperatures. The months with
events that took place in freezing temperatures are defined
as winter, which includes December through March of the
following year. This period also coincides with the time when
the average snowfall in Ann Arbor is over 8 inches. On the
other hand, summer is defined as being from June to August.
We have retrieved 272 events in summer and 391 events in
winter for LVs, whereas the numbers for HTs are 1818 and
844 respectively. Tcp^{-1}, Dcp^{-1}, vSdV and vTV are compared for summer and winter driving.
The Mann-Whitney-Wilcoxon (MWW) test [21] is a non-parametric hypothesis test of the null hypothesis that two
populations are the same against an alternative hypothesis.
Here, we used it to determine whether the conflict metrics
differ between summer and winter.
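For reference, the season comparison for a single conflict variable can be reproduced with an off-the-shelf MWW test as sketched below; the gamma-distributed samples are synthetic stand-ins, not the extracted Tcp values.

```python
# Minimal sketch of the summer-vs-winter comparison with the MWW test (synthetic data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
tcp_summer = rng.gamma(shape=4.0, scale=0.6, size=272)   # stand-in Tcp samples [s]
tcp_winter = rng.gamma(shape=4.0, scale=0.6, size=391)

stat, p_value = mannwhitneyu(tcp_summer, tcp_winter, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")   # a large p-value means no detectable difference
```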
Fig. 11. Comparison between events in summer and winter.
Fig. 11 shows the result of the comparison. It can be
concluded that for both LV and HT platforms, the mean
values for summer and winter of all four variables that
describe the conflict at LTAP/OD for both SdVs and TVs
are very close. As the p-value from the MWW test is large
(p-value > 0.6) for all the eight distributions, we are not able
to distinguish between the left-turn pattern in summer and
in winter in terms of Dcp^{-1}, Tcp^{-1}, vTV and vSdV. This result
indicates that despite a large difference in climate, there is no
significant difference between the way people drive in winter
and in summer at LTAP/OD scenarios in the Great Lakes
area. This conclusion has its significance for designing and
testing of HAVs.
V. CONCLUSION
In this research, traffic conflicts of TVs and SdVs in
LTAP/OD scenarios are modeled and analyzed based on
nearly 7,000 left-turn events extracted and reconstructed
from the naturalistic database. The two modified conflict
metrics, Tcp and Dcp are used to model turning behavior
of the TV. This stochastic model can be further used for
developing simulation tools for evaluating HAVs.
The significance of vehicle type and season are also
addressed in the research. In general, when the SdV is an
HT, the driver of the TV tends to turn in a more conservative
fashion with a wider margin. Surprisingly, despite prevailing
snow and freezing weather in the winter of Michigan, driver
behavior at LTAP/OD scenarios during the N-FOT test did
not differ significantly between summer and winter. These
two conclusions can be useful for designing automated driving algorithms and for establishing regulations and policies
for HAVs.
In the following research, we will improve the accuracy of trajectory reconstruction by fusing data from the GPS and the yaw rate sensor, and by re-synchronizing data
from different channels. Moreover, we will further investigate
the reasons behind the conclusion on the similarity of driver
behavior between summer and winter. Possible causes could
be: snow on the road was shoveled promptly in winter
thus normal driving was almost unaffected; the N-FOT
trips avoided extreme weather in winter so that the data
was biased. We will also use the model to build a stochastic simulation environment for the testing and evaluation of HAVs.
D ISCLAIMERS
This work was funded in part by the University of Michigan Mobility Transformation Center Denso Pool project. The
findings and conclusions in the report are those of the authors
and do not necessarily represent the views of the MTC or
Denso.
R EFERENCES
[1] “Google Self-Driving Car Project Monthly Report (February 2016),”
pp. 14–16, 2016.
[2] O. Solon, “Should Tesla be ’beta testing’ autopilot if there is a chance someone might die?” [Online]. Available: https://www.theguardian.com/technology/2016/jul/06/tesla-autopilot-fatal-crash-public-beta-testing
[3] U.S. Department of Transportation and National Highway Traffic
Safety Administration, “Federal Automated Vehicles Policy,” Tech.
Rep. September, 2016. [Online]. Available: https://www.transportation.
gov/sites/dot.gov/files/docs/AVpolicyguidancePDF.pdf
[4] “European New Car Assessment Programme - TEST PROTOCOL
- Lane Support Systems,” 2015. [Online]. Available: http://www.
euroncap.com/en/for-engineers/protocols/safety-assist/
[5] “European New Car Assessment Programme - TEST PROTOCOL AEB systems,” 2015. [Online]. Available: http://www.euroncap.com/
en/for-engineers/protocols/safety-assist/
[6] D. Zhao, X. Huang, H. Peng, H. Lam, and D. J. LeBlanc, “Accelerated
Evaluation of Automated Vehicles in Car-Following Maneuvers,”
submitted to IEEE Transactions on Intelligent Transportation Systems,
7 2017. [Online]. Available: http://arxiv.org/abs/1607.02687
[7] D. Zhao, H. Lam, H. Peng, S. Bao, D. J. Leblanc, and C. S.
Pan, “Accelerated Evaluation of Automated Vehicles Safety in Lane
Change Scenarios based on Importance Sampling Techniques,” IEEE
Transactions on Intelligent Transportation Systems, 2016.
[8] Z. Huang, D. Zhao, H. Lam, and D. J. LeBlanc, “Accelerated
Evaluation of Automated Vehicles Using Piecewise Mixture Models,”
submitted to IEEE Transactions on Intelligent Transportation Systems,
7 2017. [Online]. Available: http://arxiv.org/abs/1701.08915
[9] C.-Y. Chan, “Defining safety performance measures of driver assistance systems for intersection left-turn conflicts,” 2006 IEEE Intelligent Vehicles Symposium., pp. 25–30, 2006.
[10] W. G. Najm, S. Toma, and J. Brewer, “Depiction of Priority Scenarios for Safety Applications Based on Vehicle-to-Vehicle Communications,” Tech. Rep., April 2013.
[11] C.-Y. Chan, “Characterization of driving behaviors based on field
observation of intersection left turn across path scenarios,” IEEE
Transactions on intelligent transportation systems, vol. 7, no. 3, pp.
322–331, 2006.
[12] K. Nobukawa, M. Barnes, R. Goodsell, T. Gordon, and A. Arbor, “Reconstruction of Vehicle Trajectories for Intersection Conflict Analysis
Using Vehicle-Based Sensors,” Conflict, no. July 2016, pp. 1–12, 2009.
[13] “Preliminary Report HWY16FH018,” 2016. [Online]. Available: http://www.ntsb.gov/investigations/AccidentReports/Pages/HWY16FH018-preliminary.aspx
[14] D. J. Leblanc, J. R. Sayer, S. Bao, S. Bogard, M. L. Buonarosa,
A. Blankespoor, and D. Funkhouser, “Driver Acceptance and
behavioral changes with an Integrated warning system: Key findings
from the IVBSS FOT,” Tech. Rep., 2011. [Online]. Available:
http://www-nrd.nhtsa.dot.gov/pdf/esv/esv22/22ESV-000260.pdf
[15] J. R. Sayer, M. L. Buonarosa, S. Bao, S. E. Bogard, D. J. LeBlanc,
A. D. Blankespoor, D. S. Funkhouser, and C. B. Winkler, “Integrated
Vehicle-Based Safety Systems Light-Vehicle Field Operational Test,
Methodology and Results Report,” no. December, 2010.
[16] J. R. Sayer, S. E. Bogard, D. Funkhouser, D. J. LeBlanc, S. Bao, A. D. Blankespoor, M. L. Buonarosa, and C. B. Winkler, “Integrated Vehicle-Based Safety Systems Heavy-Truck Field Operational Test Key Findings Report,” Tech. Rep., August 2010.
[17] D. Zhao, H. Peng, K. Nobukawa, S. Bao, D. J. LeBlanc, and C. S. Pan,
“Analysis of Mandatory and Discretionary Lane Change Behaviors for
Heavy Trucks,” in AVEC, no. Dlc, 2014, pp. 355–360.
[18] A. P. Tarko, “Use of crash surrogates and exceedance statistics to
estimate road safety,” Accident Analysis and Prevention, vol. 45, pp.
230–240, 2012.
[19] K. Nobukawa, “A model based approach to the analysis of intersection
conflicts and collision avoidance systems,” Ph.D. dissertation, The
University of Michigan, 2011.
[20] J. Misener, “California Intersection Decision Support: A Systems
Approach to Achieve Nationally Interoperable Solutions II California
PATH Research Report,” 2007.
[21] H. B. Mann and D. R. Whitney, “On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other,” The Annals of Mathematical Statistics, vol. 18, no. 1, pp. 50–60, 1947.
A Revised Incremental Conductance MPPT Algorithm
for Solar PV Generation Systems
Meng Yue and Xiaoyu Wang
Sustainable Energy Technologies Department
Brookhaven National Laboratory
Upton, NY 11973, USA
[email protected], [email protected]
Abstract—A revised Incremental Conductance (IncCond)
maximum power point tracking (MPPT) algorithm for PV
generation systems is proposed in this paper. The commonly
adopted traditional IncCond method uses a constant step size for voltage adjustment, which makes it difficult to achieve both good tracking performance and quick elimination of the oscillations, especially under dramatic changes of the environmental conditions. For the revised algorithm, the incremental voltage
change step size is adaptively adjusted based on the slope of the
power-voltage (P-V) curve. An accelerating factor and a
decelerating factor are further applied to adjust the voltage step
change considering whether the sign of the P-V curve slope
remains the same or not in a subsequent tracking step. In
addition, the upper bound of the maximum voltage step change
is also updated considering the information of sign changes. The
revised MPPT algorithm can quickly track the maximum power
points (MPPs) and remove the oscillation of the actual operation
points around the real MPPs. The effectiveness of the revised
algorithm is demonstrated using a simulation.
Index Terms—IncCond MPPT algorithm, fractional open-circuit/short-circuit MPPT algorithm, P&O MPPT algorithm, solar PV generation.
I. INTRODUCTION
As one of the most promising renewable energy
technologies, the installed capacity of the solar photovoltaic
(PV) generation has increased dramatically in recent years.
Although the cost of PV generation continues to drop, the
economic competitiveness of solar PV energy is still low
compared to the traditional energy sources, even with various
local and federal policy instruments [1]. While it is desirable
to further lower the cost and increase the efficiency of solar
energy systems including both the solar panels and the power
electronic devices, increasing the efficiency of the installed
PV energy systems by simply improving the existing control
algorithms should also be pursued. One way of achieving this
is to modify the existing MPPT algorithms to extract more
solar energy under various environmental conditions.
Many different types of MPPT algorithms have been
proposed in the literature. As summarized in [2], different
algorithms have their own pros and cons, in terms of
complexity, accuracy and convergence speed, etc. Among the
commonly used algorithms, the hill-climbing/perturbation and
observation (P&O) method [3-6] is easy to implement using
either analog or digital circuits. It periodically perturbs either
the duty ratio of the converter or the PV array operating
voltage even when the MPP is achieved. Further, the “true”
MPPT cannot be achieved using the P&O method since the
operation point of the PV system is oscillating around the
MPP. Under the conditions of continuous fast changing
irradiance, the operating point might continuously deviate
from the MPPs and eventually the optimal operation points
cannot be achieved at all. These issues degrade the
performance of the solar generation system. The fractional
open-circuit voltage (or short-circuit current) method [7-9]
needs only to sense one voltage (or current) parameter to
approximate the MPP by using empirical parameters. The
major issue related to this method is that the PV circuit has to
be periodically operated in open-circuit (or short-circuit)
conditions and may have significant impact on the grid
operation. Other types of algorithms, such as those based on
fuzzy logic control and neural network may accurately track
the MPPs under different environmental conditions [2]. The
MPPT performance, however, is not guaranteed since they
both rely heavily on the algorithm developers and/or a
significant volume of field data under all kinds of conditions
for the design and implementation of such algorithms.
The IncCond method (see, e.g., [10-17]) appears to be the
most popular one in practice due to its medium complexity
and the relatively good tracking performance. One of the
major difficulties implementing the IncCond method is the
selection of the (fixed) voltage change step size for
simultaneously satisfying the tracking speed and maintaining
the MPP. A large step size of voltage change helps the system
rapidly approach the MPPs. On the other hand, this large value
generally induces persisting oscillations around the MPP if no
other special countermeasures were taken. The issues with
using a small step size of voltage change are the opposite.
A simple and effective revised IncCond algorithm is
proposed in this paper. An adaptive voltage step change
scheme is first adopted based on the slope where the operation
point locates on the P-V curve. An accelerating factor and a
decelerating factor are then applied to further adjust the
voltage step change considering whether the sign of the P-V
curve slope remains the same or changes in a subsequent
tracking step. The same information of sign changes is also
used to update the upper bound of the maximum voltage step
change. The adaptive voltage step change enables the PV
system to quickly track the environment condition variations,
i.e. reach and stay at the MPPs. In this way, more solar energy
generation can be harvested from the PV energy systems.
These improvements enable the quick response to the
environment condition changes and rapid landing on the MPP.
The revised method is easy to implement since it does not
require knowledge of the I-V characteristics of specific PV
panels and the parameters are easy to tune.
The revised IncCond algorithm is described in detail in
Section II with an overview of various modified IncCond
methods. Modeling of generic PV generation systems is
presented in Section III for simulation purposes. Simulation
results using the proposed MPPT algorithm will be shown in
Section IV and concluding remarks are given in Section V.
II. A REVISED INCCOND MPPT METHOD
The MPP is achieved by adjusting the terminal output
voltage of a solar array through controlling the converter duty
ratio. While the cell temperature can be easily measured, the
irradiance is difficult to measure accurately, and the desired
voltage at the MPP is hard to know exactly. Therefore, a test
condition needs to be developed in order to determine whether
the current operating point is the MPP or not without
measuring the temperature and irradiance. For a solar panel,
there is only one maximum power point for a given irradiance
level and cell temperature. Note, the presence of a partial
shading condition of a panel may cause multiple local maxima
and is not considered in this paper, although the revised
algorithm can be used together with the two-staged methods
proposed in [16, 17].
The IncCond method uses the information of the solar P-V
curve, i.e., at the left hand side of the MPP the slope is greater
than zero, at the right hand side of the MPP the slope is less
than zero, and the slope is zero exactly at the MPP. Therefore,
the solar array terminal voltage needs to be increased when the
slope is positive and decreased when the slope is negative.
The slope dP/dV can be calculated as,
dP/dV = d(IV)/dV = I + V × dI/dV        (1)
with dI/dV ≈ ∆I/∆V (i.e., the incremental conductance) in an
implementation. Under the MPP condition, i.e., dP/dV = 0,
the following relationship holds,
ΔI/ΔV = −I/V        (2)
The major difficulty with the IncCond method is the
selection of the incremental step size of the duty ratio for
adjusting the solar terminal output voltage. A fixed
incremental step size of the duty ratio in general will not bring
the array to the MPP because the operating point will oscillate
around the MPP, i.e., either on the left or the right of the MPP.
Ref. [14] divided the entire I-V (current-voltage) curves
into two domains using "square root" functions with all of the
MPPs contained in only one of them. Therefore, the first step
of performing MPPT is to bring the operating point to the
domain that contains all of the MPPs. This method, however,
requires a good understanding of the PV panel I-V
characteristics that are panel specific. In [15], a so-called "Van
Allen's oscillator" was added between the solar panel and the
inverter for the purpose of balancing the power source and the continuously changing load. A simple proportional integral
(PI) controller was developed to track the MPPs based on this
configuration. It is intuitively easy to avoid a fixed voltage
change step size by adjusting the increment proportionally to
the steepness of the slope and eventually the increment of the
duty ratio will become zero at the MPP, where the slope is
zero, similar to the PI controller proposed in [15]. The
implementation, however, appears to be very difficult because
(1) the P-V curve steepness around the MPP can be very
different for different operating conditions (i.e., the P-V curve
for a lower irradiance level can be more flat) and (2) a sudden
change in the operating condition of the solar array may
produce a very large numerical difference in calculating the
slope when the change occurs. Since the duty ratio is between
0 and 1, this may cause unacceptable change in the solar
terminal output voltage and make it very difficult to bring the
voltage back to normal. Note, refs [16] and [17] proposed two-stage methods mainly to avoid the local maxima caused by the
non-uniform insolation experienced by the solar panels. The
traditional IncCond method was still used after bringing the
operating point close to the global MPP by using, e.g.,
monitoring cells in [17].
In this section, a simple and effective modified IncCond
method is proposed based on two observations: (1) in two consecutive tracking steps, a change in the sign of the slope (i.e., from positive to negative or from negative to positive) indicates that the incremental step size of the duty ratio is too large (otherwise, the operating point would land on the MPP or stay on the same side of the P-V curve); and (2) the same sign of the slope in two
consecutive tracking steps indicates that the increment step
size is too small (otherwise, the operating point may land on
the MPP or the other side on the P-V curve). Based on these
observations, the strategy proposed here is to (1) adjust the
incremental step size considering the steepness of the slope;
and (2) further adjust the incremental step size comparing the
sign of slopes in two consecutive tracking steps, i.e., decrease
the incremental size in the former case (e.g., by multiplying
the incremental size by a factor DEACC such as 0.7) and to
increase the incremental size in the latter case (e.g., by
multiplying the increment by a factor ACC such as 1.2).
By applying this improved strategy, the solar array will
approach the MPP in an accelerating manner after a change in
the operating condition(s), and the magnitude of oscillation
around the MPP may rapidly decrease until the test condition
is considered to be satisfied. After landing onto the MPP, the
duty ratio will not be adjusted until the operating condition
changes again.
In the implementation, the upper- and lower-bounds for
the incremental step size need to be defined to avoid
extremely drastic changes in the duty ratio. However, the
upper-bound is generally fixed and needs to be large enough
to permit the rapid tracking of the MPP for a sudden change of
the operation condition. The issue with a fixed upper bound is
when the array starts tracking the new MPP and it quickly
approaches the MPP and lands on the other side of the MPP,
the duty ratio needs to be adjusted in the reverse direction
using the incremental step, which could be large (due to the
factor ACC) and remain large for some time (although the
factor DEACC has been applied). At this point, having a large
incremental step does not help because it may cause very large
fluctuations or overshoot of the voltage before the MPP is
achieved. Therefore, the second proposed improvement is that, when the sign of the slope changes, the upper bound is also decreased together with the incremental size. It is also preferable to keep the upper bound small near the MPP until the MPP is reached.
Note, in the implementation of the algorithm, a nominal
incremental step size is pre-selected. After the test condition
of the slope is considered to be satisfied, the duty ratio will not
be changed but the incremental step size might become very
small now, which, if not corrected, will cause the very slow
response in the beginning of attempting to track the next MPP
under a different operating condition. A simple solution is to
reset the step size to the nominal value without adjusting the
duty ratio when the MPP is considered to be reached.
Based on the above discussions, the flow chart of the proposed modified IncCond algorithm is shown in Fig. 1. In Fig. 1, V(·) and I(·) are the terminal output voltage and current of the solar array, which will be adjusted according to the slope calculated by the dc-dc converter. The incremental step size of the duty ratio is updated as

δd(k) = δd(k−1) × ACC × |ΔI/ΔV + I/V|  or  δd(k) = δd(k−1) × DEACC × |ΔI/ΔV + I/V|

if the slope is not small enough (i.e., greater than a preselected constant ε and equation (2) is still not satisfied), and the duty ratio is updated as d(k) = d(k−1) + δd(k). δd(0) indicates the initial incremental size of the duty ratio and δdmax(0) the initial upper bound of the incremental size; δd(k) and δdmax(k) are the updated (based on the conditions discussed above, as indicated in Fig. 1) increment size and upper bound of the incremental size.

Fig. 1. Flowchart of the revised IncCond MPPT algorithm.
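As an illustration only (not the authors' code), one tracking step of this rule can be sketched in Python as follows. The parameter values mirror those used later in Section IV, and the sign convention relating the duty ratio to the PV terminal voltage is an assumption that depends on the converter.

# One step of the revised IncCond update described above (illustrative sketch).
ACC, DEACC = 1.2, 0.8            # acceleration / deceleration factors (Section IV values)
EPS = 5e-4                       # threshold on |dI/dV + I/V| for declaring the MPP reached
DELTA_D_NOM, DELTA_D_MAX0 = 0.001, 0.01   # nominal step and initial upper bound

def revised_inccond_step(V, I, dV, dI, d, delta_d, delta_d_max, prev_sign):
    # (dI/dV + I/V) equals (dP/dV)/V, so it is zero exactly at the MPP (equation (2)).
    slope = (dI / dV if dV != 0 else 0.0) + (I / V if V != 0 else 0.0)
    if abs(slope) <= EPS:
        # Test condition satisfied: hold the duty ratio and reset the step to its
        # nominal value so the next environmental change is tracked quickly.
        return d, DELTA_D_NOM, DELTA_D_MAX0, 0
    sign = 1 if slope > 0 else -1
    if prev_sign != 0 and sign != prev_sign:
        # Sign of the slope changed: overshoot, so decelerate and shrink the upper bound.
        delta_d = DEACC * delta_d * abs(slope)
        delta_d_max = DEACC * delta_d_max
    else:
        # Same side of the MPP in consecutive steps: accelerate.
        delta_d = ACC * delta_d * abs(slope)
    delta_d = min(delta_d, delta_d_max)
    # d(k) = d(k-1) + delta_d(k); the direction below (raise d when the slope is
    # positive) is an assumed convention and may be inverted for a given converter.
    d = max(0.0, min(1.0, d + sign * delta_d))
    return d, delta_d, delta_d_max, sign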
III. MODELING OF PV ENERGY SYSTEMS
A. Solar Array
In general, a solar array consists of many solar modules
connected in series and/or parallel, each module being
manufactured by serially connecting a certain number of solar
cells. A solar cell is essentially represented by an equivalent
electrical circuit as shown in Fig. 2. For illustration purposes,
modeling of a solar cell is briefly summarized in this part.
Interested readers can find details in other references, e.g.,
[18].
Fig. 2. An equivalent electrical circuit of a solar cell.
For the solar cell model in Fig. 2, the following equation
can be derived,
IPV = IPh − Id − (VPV + IPV × RS) / RP        (3)
where IPV and VPV represent the solar cell terminal output
current and voltage, respectively. IPh is the photon current
source, Id the diode current. The series resistance RS and shunt
resistance RP are used to represent the power losses while the
latter can generally be neglected. It is noted in equation (3)
that the photon current and the diode current are temperature
and irradiance dependent. For given panel temperature T
(Kelvin) and irradiance level G (W/m2), IPh and Id can be
calculated using the following equations,
IPh|T,G = (G/Gref) IPh|T,Gref = (G/Gref) ISC|Tref,Gref [1 + α(T − Tref)]

Id|T,G = I0|T,G × [exp(q(V + I × RS)/(nkT)) − 1]

I0|T,G = I0|Tref,G × (T/Tref)^(3/n) × exp((−qEg/(nk)) × (1/T − 1/Tref))

I0|Tref,G = ISC|Tref,Gref / [exp(qVOCref/(nkTref)) − 1]        (4)

In equation (4), Tref and Gref represent the reference cell temperature (Tref = 25˚C, i.e., 298 K) and reference irradiance (Gref = 1,000 W/m²) under the standard condition. α is the temperature coefficient and ISC is the short-circuit current of the solar cell. These are both constants that can be obtained from manufacturers’ data sheets. I0|T,G is the reverse saturation current of the diode, n the diode ideality factor, q = 1.602e-19 C the Coulomb constant, k = 1.38e-23 J/K the Boltzmann constant. Eg is the band-energy gap (eV) and is given as Eg = 1.16 − 0.000702×T²/(T − 1108). VOC is the open-circuit voltage of the solar cell.

The series resistance can be solved using parameters at reference temperature and irradiance, i.e.,

RS = −dV/dI|VOCref − nkTref / [I0|Tref,G × q × exp(qVOCref/(nkTref))]        (5)

where dV/dI|VOCref can be obtained from the manufacturers’ data sheet. After substituting the above parameters into equation (3), the I-V characteristics of the solar cell can be numerically computed for any given cell temperature and irradiance level and then used to represent a solar array consisting of interconnected modules and cells.
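For illustration, the temperature and irradiance dependence in equations (3)–(4) can be evaluated numerically as sketched below. The cell parameters are assumed placeholder values (not taken from this paper or a data sheet), and the series and shunt resistances are neglected so that equation (3) becomes explicit in IPV.

# Minimal sketch of the single-diode cell model of equations (3)-(4).
import math

q, k = 1.602e-19, 1.38e-23        # electron charge (C) and Boltzmann constant (J/K)
T_REF, G_REF = 298.0, 1000.0      # reference temperature (25 C) and irradiance (W/m^2)
N = 1.6                           # diode ideality factor (assumed)
ALPHA = 0.0006                    # temperature coefficient of I_SC (assumed, 1/K)
I_SC_REF, V_OC_REF = 4.75, 0.60   # per-cell short-circuit current / open-circuit voltage (assumed)

def band_gap_eV(T):
    # Eg = 1.16 - 0.000702*T^2/(T - 1108), as given in the text
    return 1.16 - 0.000702 * T * T / (T - 1108.0)

def photon_current(T, G):
    # I_Ph|T,G = (G/G_ref) * I_SC|Tref,Gref * [1 + alpha*(T - T_ref)]
    return (G / G_REF) * I_SC_REF * (1.0 + ALPHA * (T - T_REF))

def saturation_current(T):
    i0_ref = I_SC_REF / (math.exp(q * V_OC_REF / (N * k * T_REF)) - 1.0)
    return i0_ref * (T / T_REF) ** (3.0 / N) * math.exp(
        -q * band_gap_eV(T) / (N * k) * (1.0 / T - 1.0 / T_REF))

def cell_current(V, T, G):
    # Equation (3) with R_S and R_P neglected; including R_S makes the equation
    # implicit in I_PV and would require a numerical root-finder.
    return photon_current(T, G) - saturation_current(T) * (
        math.exp(q * V / (N * k * T)) - 1.0)

# Example: a few I-V points at 25 C and 800 W/m^2
print([round(cell_current(v / 100.0, 298.0, 800.0), 3) for v in range(0, 61, 15)])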
A buck-boost converter is used to step up the output dc voltage of the solar array, so that a bulky step-up transformer can be avoided, and to perform the MPPT by controlling the duty ratio of the converter. See, e.g., [19] for more details.

Note also, the PV terminal output voltage may be very sensitive to the duty ratio, especially for duty ratios close to 0.0 or 1.0. The dc-dc converter input and output voltages should thus be selected such that the duty ratio is in the middle of the duty ratio range.
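As a side note (a textbook relation for an ideal converter, e.g., [19], not a result of this paper), the steady-state gain of a buck-boost stage is |Vout|/Vin = D/(1 − D), so the operating duty ratio for a given voltage ratio can be estimated as below; equal input and output magnitudes correspond to D = 0.5, i.e., the middle of the duty-ratio range.

# Ideal (lossless, continuous-conduction) buck-boost relation, used here only
# as an illustration: |Vout| / Vin = D / (1 - D)  =>  D = |Vout| / (Vin + |Vout|).
def required_duty_ratio(v_in, v_out):
    return abs(v_out) / (v_in + abs(v_out))

print(required_duty_ratio(550.0, 1000.0))   # example voltages in volts (placeholders)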
IV. SIMULATION RESULTS
The revised IncCond algorithm was implemented in an
integrated Matlab-based power system simulation software
EPTOOL that was developed based on the Power System
Toolbox [20]. EPTOOL can be used to perform a transient
analysis of the grid under faulted conditions and the solar
irradiance and temperature changes. Only MPPT simulation
results are presented here to validate the effectiveness of the
revised algorithm using a hypothetical solar irradiance profile
as the input to the solar plant, as tabulated in Table I. The other input, the panel temperature, is assumed constant at 25˚C during the cloud transients.
A centralized solar PV plant consists of 17 × 170,000 BP SX 150 solar panels [21]. The capacity of the PV plant is 433 MW or 4.33 p.u. (on a 100 MVA base, under standard environmental conditions). A 50-machine, 145-bus system was used only as an example system for carrying out the simulation.
TABLE I: VARIATION OF IRRADIANCE AT A SOLAR PLANT DURING A CLOUD TRANSIENT

Time (s):            0.0   0.2   0.7   0.9   1.2
Irradiance (W/m²):  1000    20   200   300   400

Time (s):            1.5   1.9   2.5   3.0   4.0
Irradiance (W/m²):   500   650   850   990   150

Time (s):            4.2   4.3   4.4   4.5   4.8
Irradiance (W/m²):   120    20   210   330   340

Time (s):            4.9
Irradiance (W/m²):   350
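For driving a simulation, the profile of Table I can be encoded as a zero-order-hold lookup, for example as in the short sketch below (an assumption about the encoding, not the EPTOOL implementation).

# Table I irradiance profile as a zero-order-hold lookup (W/m^2).
import bisect

TIMES = [0.0, 0.2, 0.7, 0.9, 1.2, 1.5, 1.9, 2.5, 3.0, 4.0, 4.2, 4.3, 4.4, 4.5, 4.8, 4.9]
IRRAD = [1000, 20, 200, 300, 400, 500, 650, 850, 990, 150, 120, 20, 210, 330, 340, 350]

def irradiance(t):
    """Return the irradiance applied at time t (held constant between breakpoints)."""
    return IRRAD[max(0, bisect.bisect_right(TIMES, t) - 1)]

assert irradiance(0.1) == 1000 and irradiance(2.0) == 650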
The conventional IncCond algorithm with a fixed
incremental step size (0.001) of duty ratio is first applied and
the MPPT is performed every 10 ms. Other parameters are selected as follows: δdmax = 0.01 and ε = 5E-4.
Simulation results for the solar array terminal output voltage
and the deviation of the actual dc output power from the
calculated maximum power points are shown in Fig. 3, from
which one can observe persisting oscillations around the MPPs most of the time, i.e., the MPPs were not truly
achieved. The reason for the oscillations is that, as implied in
the algorithm description of Section II, the terminal voltage
continues to be adjusted. Fig. 3 also indicates that the
conventional IncCond algorithm is not able to track the MPP
for a rapid variation of the irradiance since the output power
deviations from the actual solar power generation are
significant for the large change in irradiance at 0.2 s, although
for slow variations it can provide acceptable performance.
This significant power deficiency around 0.2 s is caused by an
inability to adjust the panel voltage rapidly enough to
compensate for the large decrement of irradiance level, as can
be seen by comparing the top curves in Fig. 3 with those given
in Fig. 4-5. This highlights the inefficiencies of selecting
control parameters such as the incremental step size and the
upper-bound in accordance with conventional MPPT
algorithms. These inefficiencies related to the conventional
IncCond method can be addressed by the modified algorithm
proposed in this paper.
Fig. 3. Terminal voltage variation and deviated power output of the PV plant
using conventional IncCond algorithm.
For the modified MPPT algorithm proposed in Section II,
the deceleration and acceleration factors are applied in the
modified IncCond algorithm with DEACC = 0.8 and ACC =
1.2 while other parameters remain the same. In the first
scenario, the upper bound of the incremental step size of the
duty ratio is fixed, i.e., δdmax(k) = 0.01, k = 0, 1, …. As shown
in Fig. 4, the oscillations are eliminated quickly as the
irradiance changes.
Fig. 4 also shows that the revised algorithm with the fixed
upper-bound of the step size can quickly make the operating
point reach the MPP, as the voltage level becomes quickly
stable even after the sudden change of irradiance at time 0.2s.
However, relatively large terminal voltage overshoot at
changing points of irradiance is now introduced and must be
addressed.
Fig. 5 shows further improvement of the tracking performance for a second scenario where an adaptive upper bound of the incremental step is used. The overshoot at the change points of the irradiance has been significantly decreased. The output power deviation is also reduced.
Simulation also shows that the tracking performance is not
sensitive to the associated parameters (e.g., δd), which makes
parameter tuning very easy and the modified IncCond
algorithm very robust.
Fig. 4. Terminal voltage and deviated power output of the array using the
modified IncCond algorithm (fixed upper bound of the incremental step size
of duty ratio).
Fig. 5. Terminal voltage and deviated power output of the array using the
modified IncCond algorithm (with adaptive upper bound of the incremental
step size of duty ratio).
V. CONCLUSIONS
A revised IncCond algorithm was presented in this paper
for PV generation systems. Compared with traditional
IncCond methods, the voltage step change is adaptively
determined based on the slope of the P-V curve and the
location of the operating points in two consecutive tracking
steps such that the PV system can track the rapid change in
environmental conditions while the oscillation of the PV
system operating points around the MPP can be avoided. In
addition, when a change in the sign of the slope is detected, the upper bound of the voltage step change is scaled by the factor DEACC (less than 1) to constrain the step change. The simulation results demonstrate the effectiveness of the proposed algorithm. The robustness of the MPPT algorithm is also enhanced due to the fact that the parameters can be easily tuned regardless of the PV system, and the algorithm does not require knowledge of the I-V characteristics of specific PV panels.
REFERENCES
[1] “Trends in Photovoltaic Applications,” IEA Report IEA-PVPS T1-20:2011, 2011.
[2] T. Esram and P. L. Chapman, “Comparison of photovoltaic array maximum power point tracking techniques,” IEEE Transactions on Energy Conversion, vol. 22, pp. 439-449, 2007.
[3] N. Femia, G. Petrone, G. Spagnuolo, and M. Vitelli, “Optimization of perturb and observe maximum power point tracking method,” IEEE Transactions on Power Electronics, vol. 20, pp. 963-973, 2005.
[4] O. Wasynezuk, “Dynamic Behavior of a Class of Photovoltaic Power Systems,” IEEE Transactions on Power Apparatus and Systems, vol. PAS-102, pp. 3031-3037, 1983.
[5] N. S. D’Souza, L. A. C. Lopes, and L. Xuejun, “An Intelligent Maximum Power Point Tracker Using Peak Current Control,” in IEEE 36th Power Electronics Specialists Conference (PESC ’05), 2005, p. 172.
[6] N. Kasa, T. Iida, and L. Chen, “Flyback Inverter Controlled by Sensorless Current MPPT for Photovoltaic Power System,” IEEE Transactions on Industrial Electronics, vol. 52, pp. 1145-1152, 2005.
[7] J. J. Schoeman and J. D. v. Wyk, “A simplified maximal power controller for terrestrial photovoltaic panel arrays,” in 13th Annu. IEEE Power Electron. Spec. Conf., 1982, pp. 361-367.
[8] K. Kobayashi, H. Matsuo, and Y. Sekine, “A novel optimum operating point tracker of the solar cell power supply system,” in IEEE 35th Power Electronics Specialists Conference (PESC 04), 2004, pp. 2147-2151.
[9] N. Mutoh, T. Matuo, K. Okada, and M. Sakai, “Prediction-data-based maximum-power-point-tracking method for photovoltaic power generation systems,” in IEEE 33rd Power Electronics Specialists Conference (PESC 02), 2002, pp. 1489-1494.
[10] K. H. Hussein, I. Muta, T. Hoshino, and M. Osakada, “Maximum photovoltaic power tracking: an algorithm for rapidly changing atmospheric conditions,” IEE Proceedings - Generation, Transmission and Distribution, vol. 142, pp. 59-64, 1995.
[11] K. Tae-Yeop, A. Ho-Gyun, P. Seung Kyu, and L. Youn-Kyun, “A novel maximum power point tracking control for photovoltaic power system under rapidly changing solar radiation,” in IEEE International Symposium on Industrial Electronics (ISIE 2001), 2001, pp. 1011-1014.
[12] K. Yeong-Chau, L. Tsorng-Juu, and C. Jiann-Fuh, “Novel maximum-power-point-tracking controller for photovoltaic energy conversion system,” IEEE Transactions on Industrial Electronics, vol. 48, pp. 594-601, 2001.
[13] W. Wenkai, N. Pongratananukul, Q. Weihong, K. Rustom, T. Kasparis, and I. Batarseh, “DSP-based multiple peak power tracking for expandable power system,” in Eighteenth Annual IEEE Applied Power Electronics Conference and Exposition (APEC ’03), 2003, pp. 525-530.
[14] H. Koizumi and K. Kurokawa, “A Novel Maximum Power Point Tracking Method for PV Module Integrated Converter,” in IEEE 36th Power Electronics Specialists Conference (PESC ’05), 2005, pp. 2081-2086.
[15] K. Harada and G. Zhao, “Controlled power interface between solar cells and AC source,” IEEE Transactions on Power Electronics, vol. 8, pp. 654-662, 1993.
[16] K. Irisawa, T. Saito, I. Takano, and Y. Sawada, “Maximum power point tracking control of photovoltaic generation system under non-uniform insolation by means of monitoring cells,” in Conference Record of the Twenty-Eighth IEEE Photovoltaic Specialists Conference, 2000, pp. 1707-1710.
[17] K. Kobayashi, I. Takano, and Y. Sawada, “A study on a two stage maximum power point tracking control of a photovoltaic system under partially shaded insolation conditions,” in IEEE Power Engineering Society General Meeting, 2003, p. 2617.
[18] S.-K. Kim, J.-H. Jeon, C.-H. Cho, E.-S. Kim, and J.-B. Ahn, “Modeling and simulation of a grid-connected PV generation system for electromagnetic transient analysis,” Solar Energy, vol. 83, pp. 664-678, 2009.
[19] N. Mohan, T. M. Undeland, and W. P. Robbins, Power Electronics: Converters, Applications, and Design. John Wiley & Sons, Inc., 1995.
[20] Power System Toolbox Webpage, http://www.ecse.rpi.edu/pst/PST.html.
[21] “BP SX 150,” http://partsonsale.com/bpsx150.pdf.
Focus: Querying Large Video Datasets with Low Latency and Low Cost
Kevin Hsieh†§
Ganesh Ananthanarayanan§ Peter Bodik§ Paramvir Bahl§ Matthai Philipose§
Phillip B. Gibbons† Onur Mutlu∗†
† Carnegie Mellon University
§ Microsoft
∗ ETH Zürich
Abstract
Large volumes of videos are continuously recorded from
cameras deployed for traffic control and surveillance with
the goal of answering “after the fact” queries: identify
video frames with objects of certain classes (cars, bags)
from many days of recorded video. While advancements
in convolutional neural networks (CNNs) have enabled
answering such queries with high accuracy, they are too
expensive and slow. We build Focus, a system for low-latency and low-cost querying on large video datasets.
Focus uses cheap ingestion techniques to index the videos
by the objects occurring in them. At ingest-time, it uses
compression and video-specific specialization of CNNs.
Focus handles the lower accuracy of the cheap CNNs by
judiciously leveraging expensive CNNs at query-time. To
reduce query time latency, we cluster similar objects and
hence avoid redundant processing. Using experiments on
video streams from traffic, surveillance and news channels, we see that Focus uses 58× fewer GPU cycles than
running expensive ingest processors and is 37× faster
than processing all the video at query time.
Figure 1: Effectiveness of Focus at reducing both ingest cost
and query latency, for an example traffic video. We compare against two baselines: “Ingest-all” that runs ResNet152
on all video frames during ingest, and “Query-all” that runs
ResNet152 on all the video frames at query time. By zooming in, we see that Focus (the Focus-Balance point) is simultaneously 86× cheaper than Ingest-all in its GPU consumption
and 56× faster than Query-all in query latency, all the while
achieving at least 95% precision and recall. (Also shown
are two alternatives offering slightly different trade-offs.)
1. Introduction
Cameras are ubiquitous, with millions of them deployed
by government and private entities at traffic intersections,
enterprise offices, and retail stores. Videos from these
cameras are continuously recorded [2,7]. One of the main
purposes for recording the videos is answering “after-the-fact” queries: identify video frames with objects of certain
classes (like cars or bags) over many days of recorded
video. As results from these queries are used by analysts
and investigators, achieving low query latencies is crucial.
Advances in convolutional neural networks (CNNs)
backed by copious training data and hardware accelerators (e.g., GPUs [13]) have led to high accuracy in the
computer vision tasks like object detection and object classification. For instance, the ResNet152 object classifier
CNN [38] won the ImageNet challenge that evaluates classification accuracy on 1, 000 classes using a public image
dataset with labeled ground truths [63]. For each image,
these classifiers return a ranked list of 1, 000 classes in
decreasing order of confidence.
Despite the accuracy of image classifier CNNs (like ResNet152), using them for video analytics queries is both expensive and slow. Using the ResNet152 classifier at query-time to identify video frames with cars on a month-long traffic video requires 280 GPU hours and costs $250 in the Azure cloud. The latency for running queries is also high. To achieve a query latency of one minute on 280 GPU hours of work would require tens of thousands of GPUs classifying the frames of the video in parallel, which is many orders of magnitude more than what is typically provisioned (few tens or hundreds) by traffic jurisdictions or retail stores. Note that the above cost and latency values are after using motion detection techniques to exclude frames with no moving objects.
We believe that enabling low-latency and low-cost querying over large video datasets will make video analytics more useful and open up many new opportunities.
A natural approach to enabling low latency querying is doing all classifications with ResNet152 at ingest-time, i.e., on the live videos, and store the results in an index of object classes to video frames. Any queries for specific classes (e.g., cars) will thus involve only a simple index lookup at query-time. There are, however, at least two problems with this approach. First, the cost to index all the video at ingest-time, e.g., $250/month/stream in the
above example, is prohibitively high. Second, most of this
ingest-time cost is wasteful because typically only a small
fraction of recorded videos get queried [16]. Following a
theft, the police would query a few days of video from a
handful of surveillance cameras, but not all the videos.
We present Focus, a system to support low-latency
low-cost querying on large video datasets. To address
the above drawbacks, Focus has the following goals: (1)
low cost indexing of video at ingest-time, (2) providing
high accuracy and low latency for queries, and (3) allowing trade-offs between the cost at ingest-time and the latency at query-time. As input, the user specifies the
ground-truth CNN (or “GT-CNN”, e.g., the ResNet152
classifier) and the desired accuracy of results that Focus
needs to achieve relative to the GT-CNN.
Focus uses four key techniques – cheap CNNs for ingest, using top-K results from the ingest-time CNN, clustering similar objects, and judicious selection of system
and model parameters.
First, to make video ingestion cheap, Focus uses compressed and specialized versions of CNNs, to create an
ingest-time index of object classes to frames. CNN compression (e.g., [66]) creates new CNNs with fewer convolutional layers and smaller input images. Specialization [35, 65] trains those CNNs on a smaller set of object
classes specific to each video stream so that those cheaper
CNNs can classify these video-specific objects more accurately. Together, these techniques result in highly efficient
CNNs for video indexing.
Second, the cheap ingest CNNs, however, are also less
accurate than the expensive GT-CNN (like ResNet152),
measured in terms of recall and precision. Recall is the
fraction of frames in the video that contained objects of
the queried class that were actually returned in the query’s
results. Precision, on the other hand, is the fraction of
frames in the query’s results that contained objects of the
queried class. To increase recall, Focus relies on an empirical observation: while the top-most (i.e., most confident)
classification results of the cheap and expensive CNNs
may not always match, the top-most result of the expensive CNN falls within the top-K results of the cheap CNN.
Therefore, at ingest-time, Focus indexes each object with
the “top-K” results of the cheap CNN (instead of just the
top-most). To increase precision, at query-time, we first
filter the objects from the top-K index and then classify
the filtered objects with the expensive GT-CNN.
Third, to reduce the query-time latency of using the expensive GT-CNN, Focus relies on the significant similarity
between objects in videos. For example, a car moving
across an intersection will look very similar in consecutive frames. Focus leverages this similarity by clustering
the objects at ingest-time, classifying only the cluster centroids with the expensive GT-CNN at query-time, and
assigning the same class to all objects in the cluster, thus
considerably reducing query latency.
In a nutshell, Focus’s ingest-time and query-time operations are as follows. At ingest-time, it classifies the detected objects using a cheap CNN, clusters similar objects,
and indexes each cluster centroid using the top-K classification results. At query-time, when the user queries
for class X, Focus looks up the ingest index for centroids
that match class X and classifies them using the GT-CNN.
For centroids that were classified as class X, it returns all
objects from the corresponding clusters to the user.
Finally, Focus smartly chooses the ingest-time CNN
and its parameters to meet user-specified targets on precision and recall. Among the choices that meet the accuracy
targets, it allows the user to trade off between the ingest
cost and query latency. For example, using a cheaper ingest CNN reduces the ingest cost but increases the query
latency as Focus needs to use a larger K for the top-K index
to retain the accuracy targets. Focus identifies the “sweet
spot” in parameters that sharply improve one of ingest
cost or query latency for a small worsening of the other.
We built Focus and evaluated it on thirteen 12-hour
videos from three domains – traffic cameras, surveillance cameras, and news channels. We compare against
two baselines: “Ingest-all” that runs GT-CNN on all video
frames during ingest, and “Query-all” that runs GT-CNN on
all the video frames at query time. We use ResNet152 as
GT-CNN and augment both baselines with motion detection to remove frames with no objects, which is one of the
core techniques in a recent prior work, NoScope [44]. Figure 1 shows a representative result, for a traffic video from
a commercial intersection. On average, Focus is 58× (up
to 98×) cheaper than Ingest-all and 37× (up to 57×) faster
than Query-all. This leads to the cost of ingestion coming
down from $250/month/stream to $4/month/stream, and
the latency to query a 24 hour video dropping from 1 hour
to under 2 minutes. See §6 for the full details.
We make the following contributions.
1. We formulate the problem of querying video datasets
by showing the trade-offs between query latency, ingest cost, and accuracy (precision and recall) of results.
2. We propose techniques to ingest videos with low cost
by leveraging compressed and video-specific specialization of CNNs, while retaining high accuracy targets
by creating approximate (top-K) indexes.
3. We identify and leverage similarity between objects
in a video to cluster them using CNN features and
significantly speeding up queries.
4. We propose and build a new end-to-end system to
support low-latency, low-cost querying on large video
datasets. We show that our system offers new trade-off
options between ingestion cost and query latency, as
it is significantly cheaper than analyzing all videos
frames at ingest time and significantly faster than analyzing queried video frames at query time.
2. Background and Motivation
We first provide a brief overview of convolutional Neural
Networks (CNN), the state-of-the-art approach to detecting and classifying objects in images (§2.1). We then discuss new observations we made about real-world videos,
which motivate the design of our techniques (§2.2).
2.1. Convolutional Neural Networks
A Convolution Neural Network (CNN) [47] is a specific
class of neural networks that works by extracting the
visual features in images. During image classification, or
“inference”, a CNN takes an input image and outputs the
probability of each class (e.g., dog, flower, or car). CNNs
are the state-of-the-art method used for many computer
vision tasks, such as image classification (e.g., [38,45,71])
and face recognition (e.g., [46, 64]).
Figure 2: Architecture of an image classification CNN.

Figure 2 illustrates the architecture of an image classification CNN. Broadly, almost all CNNs consist of three key types of network layers: (1) convolutional and rectification layers, which detect visual features from input pixels, (2) pooling layers, which down-sample the input by merging neighboring pixel values, and (3) fully-connected layers, which provide the reasoning to classify the input object based on the outputs from previous layers. The outputs of an image classification CNN are the probabilities of all object classes, and the class with the highest probability is the predicted class for the input image.

The output of the penultimate (i.e., previous-to-last) layer can be considered as “representative features” of the input image [45]. The features are a real-valued vector, with lengths between 512 and 4096 in state-of-the-art classifier CNNs (e.g., [38, 45, 66, 71]). It has been shown that images with similar feature vectors (i.e., small Euclidean distances) are visually similar [22, 23, 45, 58].

The high accuracy of CNNs comes at a cost: inferring (or classifying) using state-of-the-art CNNs to classify objects in images requires significant computational resources. This is because the higher accuracy of CNNs comes from using deeper architectures (i.e., more layers) to obtain better visual features. For instance, ResNet152 [38], the winner of the ImageNet competition [63] in 2015, has been trained to classify across 1000 classes from the ImageNet dataset using 152 layers, but can only process 77 images/second even with a high-end GPU (NVIDIA K80 [13]). This makes querying on large video datasets using these CNNs slow and costly.

There are at least two recent techniques designed to reduce the cost of CNNs. First, compression is a set of techniques aiming to reduce the cost of CNN inference (classification) at the expense of reduced accuracy. Such techniques include removing some expensive convolutional layers [66], matrix pruning [31, 37], and others [42, 62] and can dramatically reduce the classification cost of a CNN. For example, ResNet18, which is a ResNet152 variant with only 18 layers, is 8× cheaper. Second, a more recent technique is CNN specialization [35], where the CNNs are trained on a subset of a dataset specific to a particular context, also making them much cheaper. Using the combination of cheap and expensive CNNs is a key facet of our solution, described in §4.
2.2. Characterizing Real-world Videos
We aim to support queries of the form, find all frames
in the video that contain objects of class X. We identify
some key characteristics of real-world videos towards
supporting these queries: (1) large portions of videos
can be excluded (§2.2.1), (2) only a limited set of object
classes occur in each video (§2.2.2), and (3) objects of
the same class have similar feature vectors (§2.2.3). The
design of Focus is based on these characteristics.
We have analyzed 12 hours of video from each of six video streams. The six video streams span traffic cameras, surveillance cameras, and news channels. (§6.1
provides the details.) We detect the objects in each frame
of these videos (using background subtraction [43]), and
classify each object with the ResNet152 CNN [38] among
the supported 1, 000 object classes. In this paper, we use
results from the costly ResNet152 CNN as ground truth.
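As an illustration of this object-extraction step, a standard OpenCV background subtractor can be used to obtain candidate object crops; the thresholds and file name below are placeholders, and this sketch is only an approximation of the pipeline of [43].

# Minimal sketch: extract moving-object crops from a video with background
# subtraction, so that only those crops need to be classified by a CNN.
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 900:                           # ignore tiny blobs (placeholder threshold)
            crop = frame[y:y + h, x:x + w]        # candidate object for CNN classification
cap.release()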
2.2.1. Excluding large portions of videos. We find considerable potential to avoid processing large portions
of videos at query-time. Significant portions of video
streams either have no objects at all (as in a garage camera at night) or the objects are stationary (like parked
cars). We find that in our video sets, one-third to one-half
of the frames fall in these categories. Therefore, queries
to any object class would benefit from pre-processing
filters applied to exclude these portions of the videos.
Even among the frames that do contain objects, not all
of them are relevant to a query because each query only
looks for a specific class of objects. In our video sets,
an object class occurs in only 0.01% of the frames on
average, and even the most frequent object classes occur
in no more than 16% − 43% of the frames in the different
videos. This is because while there are usually some dominant classes (e.g., cars in a traffic camera, people in a
news channel), most other classes are rare. Since queries
are for specific object classes, there is considerable potential in indexing frames by the classes of objects.

Figure 3: CDF of frequency of object classes. The x-axis is the fraction of classes out of the 1000 recognized by ResNet152 (truncated to 10%).

2.2.2. Limited set of object classes in each video. We next focus on the classes of objects that occur in each of the videos and the disparity in frequency among them.
Most video streams have a limited set of objects because each video has its own context (e.g., traffic cameras can have automobiles, pedestrians or bikes but not airplanes). It is rare that a video stream contains objects of all the classes recognized by state-of-the-art classifier CNNs. Figure 3 shows the cumulative distribution function (CDF) of the frequency of object classes in our videos (as classified by ResNet152). We make two observations.
First, objects of only 22% − 33% (not graphed) of the 1,000 object classes occur in the less busy videos (Auburn, Jackson Hole, Lausanne, and Sittard). Even in the busier videos (CNN, and MSNBC), objects of only 50% − 69% of the classes appear. Also, there is little overlap between the classes of objects among the different videos. On average, the Jaccard index [72] (i.e., intersection over union) between the videos based on their object classes is only 0.46. Second, even among the object classes that do occur, a small fraction of classes disproportionately dominate. Figure 3 shows that 3% − 10% of the most frequent object classes cover ≥ 95% of the objects in each video stream. This suggests that for each video stream, we can automatically (i) determine its most frequently occurring classes and (ii) train efficient CNNs specialized for classifying these classes (§2.1).
2.2.3. Feature vectors for finding duplicate objects. Objects moving in the video often stay in the frame for several seconds; for example, a pedestrian might take a minute to cross a street. Instead of classifying each instance of the same object across the frames, we would like to inexpensively find duplicate objects and only classify one of them using a CNN (and apply the same label to all duplicates). Thus, given n duplicate objects, this requires only one CNN classification operation instead of n.
Comparing pixel values across frames is an obvious choice to identify duplicate objects; however, they turn out to be highly sensitive to even small changes in the camera's real-time view of an object. Instead, feature vectors extracted from the CNNs are much more robust since they are specifically trained to extract visual features for classification. We verify the robustness of feature vectors using the following analysis. In each video, for each object i, we find its nearest neighbor j using feature vectors from the cheap ResNet18 CNN and compute the fraction of object pairs that belong to the same class. This fraction is over 99% in each of our videos, which shows using feature vectors from cheap CNNs can potentially help identify duplicate objects.
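A minimal sketch of this robustness check is shown below; the feature matrix and labels are assumed to come from running a cheap CNN (e.g., ResNet18) on the detected objects, and the random arrays are only placeholders.

# Minimal sketch (not the Focus implementation): for each object, find its
# nearest neighbor in CNN feature space and check whether the two share a class.
import numpy as np

def nn_class_agreement(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n, d) penultimate-layer vectors; labels: (n,) ground-truth classes."""
    # Pairwise squared Euclidean distances, with self-distances masked out.
    sq = np.sum(features ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(dists, np.inf)
    nearest = np.argmin(dists, axis=1)
    return float(np.mean(labels == labels[nearest]))

# Example with random placeholders; on real videos the paper reports > 99% agreement.
feats = np.random.rand(500, 512).astype(np.float32)
labs = np.random.randint(0, 10, size=500)
print(nn_class_agreement(feats, labs))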
3. Overview of Focus
The goal of Focus is to index live video streams by the
object classes occurring in them and enable answering
“after-the-fact” queries later on the stored videos of the
form find all frames that contain objects of class X. Optionally, the query can be restricted to a subset of cameras
and a time range. Such a query formulation is the basis for
many widespread applications and could be used either
on its own (such as for detecting all cars or bicycles in
the video) or used as a basis for further processing (e.g.,
finding all collisions between cars and bicycles).
Focus is designed to work with a wide variety of current and future CNNs. At system configuration time, the
user (system administrator) provides a ground-truth CNN
(GT-CNN), which serves as the accuracy baseline for Focus, but is far too costly to run on every video frame.
Through a sequence of techniques, Focus provides nearlycomparable accuracy but at greatly reduced cost. By
default, and throughout this paper, we use the ResNet152
image classifier as the GT-CNN.
Because the acceptable target accuracy is application-dependent, Focus permits the user to specify the target, while providing reasonable defaults. Accuracy is specified in terms of precision, i.e., fraction of frames output
by the query that actually contain an object of class X
according to GT-CNN, and recall, i.e., fraction of frames
that contain objects of class X according to GT-CNN that
were actually returned by the query. The lower the target,
the greater the cost-savings provided by Focus. Even for
high targets such as 95%–99%, Focus is able to achieve
order-of-magnitude or more cost savings.
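Concretely, for a query on class X these two accuracy measures can be computed against the GT-CNN labels as in the short helper below (an illustrative sketch, not part of Focus).

# Precision/recall of a query result relative to the GT-CNN ground truth.
def precision_recall(returned_frames: set, gt_frames_with_class: set):
    true_pos = len(returned_frames & gt_frames_with_class)
    precision = true_pos / len(returned_frames) if returned_frames else 1.0
    recall = true_pos / len(gt_frames_with_class) if gt_frames_with_class else 1.0
    return precision, recall

print(precision_recall({1, 2, 3, 4}, {2, 3, 4, 5, 6}))  # (0.75, 0.6)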
Figure 4 presents the design of Focus.
• At ingest-time (left part of Figure 4), Focus classifies
objects in the incoming video frames and extracts their
feature vectors. To make this step cheap, it uses a highly
compressed and specialized version of the GT-CNN
model (IT1 in Figure 4). Focus then clusters objects
based on their feature vectors (IT2 ) and assign to each
cluster the top K most likely classes these objects belong to (based on classification confidence of the ingest
CNN); (IT3 ). It creates a top-K index, which maps each
class to the set of object clusters (IT4 ). The top-K index
is the output of Focus’ ingest-time processing of videos.
• At query-time (right part of Figure 4), when the user
queries for a certain class X (QT1), Focus retrieves the matching clusters from the top-K index (QT2), runs the centroids of the clusters through GT-CNN (QT3), and returns all frames from the clusters whose centroids were classified by GT-CNN as class X (QT4).
Figure 4: Overview of Focus.
The top-K ingest index is a mapping from each object class to the clusters that may contain it. Specifically:
object class → ⟨cluster ID⟩
cluster ID → [centroid object, ⟨objects⟩ in cluster, ⟨frame IDs⟩ of objects]
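This structure can be pictured as two plain dictionaries plus the query-time lookup sketched below; names such as gt_cnn and Cluster are illustrative placeholders rather than Focus APIs.

# Illustrative sketch of the top-K ingest index and the query-time lookup
# (QT1-QT4); it is not the authors' implementation.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Cluster:
    centroid_object: object
    objects: list = field(default_factory=list)
    frame_ids: list = field(default_factory=list)

class TopKIndex:
    def __init__(self):
        self.class_to_clusters = defaultdict(set)   # object class -> {cluster ID}
        self.clusters = {}                           # cluster ID -> Cluster

    def add(self, cluster_id, cluster, top_k_classes):
        self.clusters[cluster_id] = cluster
        for cls in top_k_classes:                    # top-K results of the cheap CNN
            self.class_to_clusters[cls].add(cluster_id)

    def query(self, cls, gt_cnn):
        """Return frame IDs whose cluster centroid GT-CNN confirms as `cls`."""
        frames = []
        for cid in self.class_to_clusters.get(cls, ()):
            cluster = self.clusters[cid]
            if gt_cnn(cluster.centroid_object) == cls:   # expensive CNN, centroids only
                frames.extend(cluster.frame_ids)          # label propagated to the cluster
        return frames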
We next explain how Focus’ key techniques keep ingest cost and query latency low while also meeting the user-specified accuracy targets.
1) Cheap Ingest-time CNN: Focus makes indexing at
ingest-time cheap by compressing and specializing the
GT-CNN model for each video stream. (i) Compression
of CNN models [31, 37, 42, 62, 66] uses fewer convolutional layers and other approximation techniques (§2.1).
(ii) Specialization of CNNs [35, 65] uses the observation
that a specific video stream contains only a small number of object classes and their appearance is more constrained than in a generic video (§2.2.2). Both techniques
are done automatically and together result in ingest-time
CNN models that are up to 98× cheaper than GT-CNN.
2) Top-K ingest index: The cheap ingest-time CNNs
are less accurate, i.e., their top-most results do not often
match the top-most classifications of GT-CNN. Therefore,
to keep the recall high, Focus associates each object with
the top-K classification results of the cheap CNN, instead
of just its top-most result. Increasing the K increases
recall because the top-most result of GT-CNN often falls
within the ingest-time CNN’s top-K results. At query-time, Focus uses the GT-CNN to remove objects in this
larger set that do not match the class, to regain precision
lost by including all the top-K.
3) Clustering similar objects: A high value of K at
ingest-time increases the work to do at query time, thereby
increasing query latency. To reduce this overhead, Focus
clusters similar objects at ingest-time using feature vectors from the ingest-time CNN. In each cluster, at querytime, we run only the cluster centroid through GT-CNN
and apply the classified result from the GT-CNN to all
objects in the cluster. Thus, if the objects are not tightly
clustered, clustering can reduce precision and recall.
4) Trading off ingest vs. query costs: Focus automatically chooses the cheap CNN, its K, and specialization
and clustering parameters to achieve the desired precision
and recall targets. These choices also help Focus trade off
between the work done at ingest-time and query-time. For
instance, to save ingest work, Focus can select a cheaper
ingest-time CNN, and then counteract the resultant loss
in accuracy by running the expensive GT-CNN on more
objects at query time. Focus chooses its parameters so as
to offer a sharp improvement in one of the two costs for a
small degradation in the other cost. Because the desired
trade-off point is application-dependent, Focus provides
users with a choice of three options: ingest-optimized,
query-optimized, and balanced (the default).
Note that while our explanation is anchored on image
classification CNNs, the architecture of Focus is generally
applicable to all existing CNNs (e.g., face recognition).
Techniques that we use for CNN compression [42, 62]
and specialization [35], and feature extraction from the
CNNs are all broadly applicable to all CNNs.
4. Video Ingest & Querying Techniques
In this section, we describe the main techniques used in
Focus: using cheap CNN models at ingest-time (§4.1),
identifying similar objects and frames to save on redundant CNN processing (§4.2), and specializing the CNNs
to the specific videos that are being analyzed (§4.3). §4.4
describes setting parameters in Focus.
4.1. Cheap Ingestion
Focus indexes the live videos at ingest-time to reduce the
query-time latency. We perform object detection on each
frame, typically an inexpensive operation, and then
classify the extracted objects using ingest-time CNNs that
are far cheaper than the ground-truth GT-CNN. We use
these classifications to index objects by class.
Cheap ingest-time CNN: As noted earlier, the user provides Focus with a GT-CNN. Optionally, the user can
also provide other classifier architectures to be used in
Focus’ search for cheap CNNs, such as AlexNet [45] and
5
CheapCNN 1 (7X)
100%
CheapCNN 2 (28X)
CheapCNN 3 (58X)
class.
The selection of the cheap ingest-time CNN model
(CheapCNNi ) and the K value (for the top-K results) has a
significant influence on the recall of the outputs produced.
Lower values of K reduce recall, i.e., Focus will miss
returning frames that contain the queried objects. At the
same time, higher values of K increase the number of
objects to classify with GT-CNN at query time to keep
precision high, and hence adds to the latency. We defer to
§4.4 on how Focus sets these parameters as they have to
be jointly set with other parameters in §4.2 and §4.3.
Recall
80%
60%
40%
20%
0%
10
20
60
100
Number of selected results (K)
200
Figure 5: Effect of K on recall for three cheap CNNs. The
number within the parenthesis indicates how much cheaper
the model is compared to our GT-CNN, ResNet152.
VGG [66] (which vary in their resource costs and accuracies). Starting from these user-provided CNNs, Focus
applies various levels of compression, such as removing
convolutional layers and reducing the input image resolution (§2.1). This results in a large set of CNN options for
ingestion, {CheapCNN1 , . . . , CheapCNNn }, with a wide
range of costs and accuracies.
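To make the compression step concrete, here is a rough sketch of how such a family of cheap CNN candidates could be generated with PyTorch/torchvision. This is not Focus' actual implementation: the truncation strategy (dropping trailing residual stages of ResNet18) and the paired input resolutions are assumptions made for illustration.

```python
import torch.nn as nn
from torchvision import models

def truncated_resnet18(num_classes=1000, keep_stages=3):
    # Keep only the first `keep_stages` residual stages of ResNet18 and attach a
    # fresh classifier head, trading accuracy for a cheaper forward pass.
    base = models.resnet18()
    stages = [base.layer1, base.layer2, base.layer3, base.layer4][:keep_stages]
    out_channels = [64, 128, 256, 512][keep_stages - 1]
    return nn.Sequential(
        base.conv1, base.bn1, base.relu, base.maxpool,
        *stages,
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(out_channels, num_classes),
    )

# Candidate ingest-time models: fewer stages and smaller inputs are cheaper.
cheap_cnn_options = [
    (truncated_resnet18(keep_stages=4), 224),
    (truncated_resnet18(keep_stages=3), 112),
    (truncated_resnet18(keep_stages=2), 56),
]
```

Each candidate would then be retrained on its original training data (cf. the models evaluated in Figure 5).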
Top-K Ingest Index: To keep recall high, Focus indexes each object using the top K object classes from
CheapCNNi ’s output, instead of using just the top-most
class. Recall from §2.1 that the output of the CNN is a
list of object classes in descending order of confidence.
We empirically observe that the top-most output of the
expensive GT-CNN is often contained within the top-K
classes output by the cheap CNN (for a small value of K
relative to the 1, 000 classes recognized by the CNNs).
Figure 5 plots the effect of K on recall on one of our
video streams, lausanne (see §6.1). The three models
in the figure are ResNet18 [38], and ResNet18 with 3
and 5 layers removed; additionally, the input images were
rescaled to 224, 112, and 56 pixels, respectively. All
models were retrained on their original training data (ImageNet [63]). We make two observations.
First, we observe a steady increase in recall with increasing K, for all three CheapCNNs. As the figure shows,
CheapCNN1 , CheapCNN2 , and CheapCNN3 reach 90%
recall when K ≥ 60, K ≥ 100, and K ≥ 200, respectively.
Note that all these models recognize 1000 classes, so even
K = 200 represents only 20% of the possible classes. Second, there is a trade-off between different models – the
cheaper they are, the lower their recall with the same K.
Overall, we conclude that by selecting the appropriate K,
Focus can achieve the target recall.
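A minimal sketch of how the appropriate K could be chosen from a labeled sample is shown below. The sampling procedure, candidate K values, and function name are assumptions; the paper only states that K is selected to meet the recall target (jointly with the other parameters, §4.4).

```python
def smallest_k_for_recall(samples, recall_target=0.95, k_candidates=(10, 20, 60, 100, 200)):
    """samples: list of (gt_class, ranked_classes) pairs, where gt_class is the
    GT-CNN label of an object and ranked_classes is the cheap CNN's output,
    ordered by descending confidence."""
    for k in k_candidates:
        hits = sum(gt in ranked[:k] for gt, ranked in samples)
        if hits / len(samples) >= recall_target:
            return k
    return None  # this cheap CNN cannot reach the target with the candidate K values
```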
Focus creates the top-K index of an object’s top-K
classes output by CheapCNNi at ingest-time. While filtering for objects of the queried class X using the top-K
index (with the appropriate K) will have a high recall, it
will have very low precision. Since we associate each
object with K classes (while it has only one true class),
the average precision is only 1/K. Thus, at query time,
to keep the precision high, Focus determines the actual
class of objects from the top-K index using the expensive
GT-CNN and only return objects that match the queried class.
The selection of the cheap ingest-time CNN model (CheapCNNi) and of the value of K (for the top-K results) has a significant influence on the recall of the outputs produced. Lower values of K reduce recall, i.e., Focus will miss returning frames that contain the queried objects. At the same time, higher values of K increase the number of objects to classify with GT-CNN at query time to keep precision high, and hence add to the latency. We defer to §4.4 on how Focus sets these parameters, as they have to be jointly set with other parameters in §4.2 and §4.3.
4.2. Redundancy Elimination
At query time, Focus retrieves the objects likely matching
the user-specified class from the top-K index and infers
their actual class using the GT-CNN. This would ensure
precision of 100%, but could cause significant latency at
query time. Even if this inference is parallelized across
many GPUs, it would still incur a large cost. Focus uses
the following observation to reduce this cost: if two objects are visually similar, their feature vectors would be
closely aligned and they would likely be classified as the
same class (e.g., “cars”) by the GT-CNN model (§2.2.3).
Focus clusters objects that are similar, invokes the expensive GT-CNN only on the cluster centroids, and assigns the centroid’s label to all objects in each cluster.
Doing so dramatically reduces the work done by the GT-CNN classifier at query time. Focus uses the feature vector
output by the previous-to-last layer of the cheap ingest
CNN (see §2.1) for clustering. Note that Focus clusters
the objects in the frames and not the frames as a whole.
The key questions regarding clustering are how do we
cluster (algorithm) and when do we cluster (system). We
discuss both these key questions below.
Clustering Heuristic: We require two properties in our
clustering technique. First, given the high volume of
video data, it should be a single-pass algorithm to keep
the overhead low, as the complexities of most clustering
algorithms are quadratic. Second, it should make no
assumption on the number of clusters and adapt to outliers
in data points on the fly. Given these requirements, we use
the following simple approach for incremental clustering,
which has been well-studied in the literature [27, 55].
We put the first object into the first cluster c1 . To cluster
a new object i with a feature vector fi , we assign it to the
closest cluster cj if cj is at most distance T away from
fi . However, if none of the clusters are within a distance
T , we create a new cluster with centroid at fi , where T
is a distance threshold. We measure distance as the L2
norm [10] between cluster centroid and object feature
vector. We keep the number of clusters at a constant M by
removing the smallest ones and storing their data in the
top-K index. Using this algorithm, we can keep growing
the popular clusters (such as similar cars), while keeping
6
the complexity at O(Mn), which is linear in n, the total
number of objects.
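The sketch below captures this single-pass procedure. The running-mean update of the centroid and the eviction bookkeeping are our assumptions about reasonable implementation details; the paper itself only specifies the threshold test, the L2 distance, and the cap of M clusters.

```python
import numpy as np

def cluster_objects(features, T, M):
    """Single-pass clustering of object feature vectors.
    features: iterable of 1-D numpy arrays; T: distance threshold;
    M: maximum number of clusters kept in memory."""
    centroids, members = [], []      # members[i] holds the object ids of cluster i
    spilled = []                     # small clusters flushed to the top-K index
    for obj_id, f in enumerate(features):
        if centroids:
            dists = [np.linalg.norm(f - c) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] <= T:                       # assign to the closest cluster
                n = len(members[j])
                centroids[j] = (centroids[j] * n + f) / (n + 1)   # running centroid
                members[j].append(obj_id)
                continue
        centroids.append(f.astype(float))           # otherwise start a new cluster
        members.append([obj_id])
        if len(centroids) > M:                      # evict the smallest cluster
            k = min(range(len(members)), key=lambda i: len(members[i]))
            spilled.append((centroids.pop(k), members.pop(k)))
    return centroids, members, spilled
```

Each object is compared against at most M centroids, which is what gives the O(Mn) behavior described above.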
Clustering can reduce both precision and recall depending on parameter T . If the centroid is classified by
GT-CNN as the queried class X but the cluster contains
another object of a different class, it reduces precision. If
the centroid is classified as a class different than X but
the cluster has an object of class X, it reduces recall. We
discuss setting T in §4.4.
Clustering at Ingest vs. Query Time: Focus clusters the
objects at ingest-time rather than at query-time. Clustering at query-time would involve storing all feature vectors,
loading them for objects filtered from the ingest index and
then clustering them. Instead, clustering at ingest time
creates clusters right when the feature vectors are created
and only stores the cluster centroids in the top-K index.
This makes the query-time latency much lower and also
reduces the size of the top-K index. We observe that the
ordering of indexing and clustering operations is mostly
commutative in practice and has little impact on result
accuracy (we do not present these results due to space
constraints). We therefore use ingest-time clustering due
to its latency and storage benefits.
Pixel Differencing of Objects: While clustering primarily reduces work done at query-time (number of objects to
be classified by the GT-CNN), Focus also employs pixel
differencing among objects in adjacent incoming frames
to reduce ingest cost. Specifically, if two objects have
very similar pixel values, it only runs the cheap CNN on
one of them and assigns them both to the same cluster in
our top-K index.
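A minimal sketch of such a pixel-differencing test is given below; the mean-absolute-difference metric and its threshold are illustrative assumptions rather than the exact test used by Focus.

```python
import numpy as np

def nearly_identical(obj_a: np.ndarray, obj_b: np.ndarray, tol: float = 2.0) -> bool:
    # Mean absolute pixel difference between two object crops from adjacent frames.
    if obj_a.shape != obj_b.shape:
        return False
    diff = np.abs(obj_a.astype(np.int16) - obj_b.astype(np.int16))
    return float(diff.mean()) <= tol
```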
4.3. Video-specific Specialization of CNNs
Recall from §4.1 that Focus uses a cheap ingest-time CNN, CheapCNNi, to index object classes. Focus further reduces its cost by specializing the ingest-time CNN model to each video stream. Model specialization benefits from two properties of objects in each video stream. First, while object classification models are trained to differentiate between thousands of object classes, many video streams contain only a small number of classes (§2.2.2). Second, objects in a specific stream are often visually more constrained than objects in general (say, compared to the ImageNet [63] dataset). The cars and buses that occur in a specific traffic camera have much less variability than a generic set of vehicles, e.g., they have very similar angles, distortion, and sizes.
Instead of trying to differentiate among thousands of object classes, differentiating among just (say) fifty classes in a specific camera's video is a much simpler task, requiring simpler image features and smaller image resolutions. As a result, the specialized models are smaller and more accurate [35]. For example, by retraining a stream-specific CheapCNNi, we can achieve similar accuracy on video streams while removing 1/3 of the convolutional layers and making the input image 4× smaller in resolution. This leads to the specialized CheapCNNi being 10× cheaper than even the generic CheapCNNi.
Since the specialized CNNs classify across fewer classes, they are more accurate, which allows Focus to select a much smaller K (for the top-K ingest index) to meet the desired recall. We find that specialized models can use K = 2 or 4, much smaller than the typical K = 60 to 200 for the generic cheap CNNs (Figure 5). A smaller K directly translates to fewer objects that have to be classified by GT-CNN at query time, thus reducing latency.
Model Retraining: On each video stream, Focus periodically obtains a small sample of video frames and classifies their objects using GT-CNN to estimate the ground-truth distribution of object classes for the video (similar to Figure 3). From this distribution, Focus selects the most frequently occurring Ls object classes to retrain new specialized models. As we saw in §2.2.2, there is usually a "power law" in the distribution of classes, with just a handful of classes accounting for a dominant majority of the objects; thus, low values of Ls usually suffice.1
Specialization is also based on a family of CNN architectures (such as ResNet [38], AlexNet [45], and VGG [66]) with different numbers of convolution layers, similar to §4.1. Specialization adds to the set of options available for ingest CNNs ({CheapCNN1, ..., CheapCNNn} in §4.1), and Focus picks the best model (CheapCNNi) and the corresponding K for the index.
"OTHER" class: While Focus specializes the CNN towards the most frequently occurring Ls classes, we also want to support querying of the less frequent classes. For this purpose, Focus includes an additional class called "OTHER" in the specialized model.2 Being classified as OTHER simply means not being one of the Ls classes. At query time, if the queried class is among the OTHER classes of the ingest CNN's index, Focus extracts all the clusters that match the OTHER class and classifies their centroids through the GT-CNN model.
The parameter Ls (for each stream) exposes the following trade-off. Using a small Ls allows us to train a simpler model with cheaper ingest cost and lower query-time latency for the popular classes; however, it also leads to a larger fraction of objects falling in the OTHER class, and querying for them will be expensive because all those objects will have to be classified by the GT-CNN. Using a larger value of Ls, on the other hand, leads to more expensive ingest and query-time models, but cheaper querying for the OTHER classes. We describe how Ls is selected in §4.4.
1 Specialized CNNs can be retrained quickly on a small dataset. Retraining is relatively infrequent and done once every few days.
2 Since there will be considerably fewer objects in the video belonging to the OTHER class, we proportionally re-weight the training data to contain an equal number of objects of all the classes.
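A minimal sketch of the per-stream specialization setup is shown below: estimating the class distribution with GT-CNN on sampled frames, keeping the Ls most frequent classes plus OTHER, and computing re-weighting factors for the training data. The function names and the exact weighting formula are illustrative assumptions.

```python
from collections import Counter

def build_specialized_label_set(sampled_objects, gt_cnn_classify, Ls):
    """Keep the Ls most frequent classes seen by GT-CNN on a sample of the
    stream; every other class is mapped to an extra OTHER label."""
    counts = Counter(gt_cnn_classify(obj) for obj in sampled_objects)
    frequent = [cls for cls, _ in counts.most_common(Ls)]
    label_of = {cls: i for i, cls in enumerate(frequent)}
    OTHER = Ls                                  # index of the OTHER class
    return frequent, lambda cls: label_of.get(cls, OTHER)

def class_weights(labels, num_classes):
    # Inverse-frequency weights so the under-represented OTHER class is not
    # drowned out during retraining (one plausible form of re-weighting).
    counts = Counter(labels)
    return {c: len(labels) / (num_classes * counts[c]) for c in counts}
```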
4.4. Balancing Accuracy, Latency, and Cost
Focus' goals of high accuracy, low ingest cost and low query latency are impacted by the parameters in Focus' techniques: K, the number of top results from the ingest-time CNN used to index an object; Ls, the number of popular object classes we use to create a specialized model;
CheapCNNi , the specialized ingest-time cheap CNN; and
T , the distance threshold for clustering objects.
The effect of these four parameters is intertwined. All
the four parameters impact ingest cost, query latency, and
recall, but only T impacts precision. This is because we
apply the cluster centroid’s classification by GT-CNN to
all the objects in its cluster. Thus, if the clustering is not
tight (i.e., high value of T ), we lose precision.
Parameter Selection: Focus selects parameter values
per video stream. It samples a representative fraction
of frames of the video stream and classifies them using
GT-CNN for the ground truth. For each combination of
parameter values, Focus computes the expected precision
and recall (using the ground truths generated by GT-CNN)
that would be achieved for each of the object classes.
To navigate the combinatorial space of options for these
parameters, we adopt a two-step approach. In the first
step, Focus chooses CheapCNNi , Ls and K using only the
recall target. In the next step, Focus iterates through the
values of T , the clustering distance threshold, and only
selects values that meet the precision target.
Trading off Ingest Cost and Query Latency: Among
the combination of values that meet the precision and
recall targets, the selection is based on balancing the
ingest- and query-time costs. For example, picking a
model CheapCNNi that is more accurate will have higher
ingest cost, but lower query cost because we can use
a lower K. Using a less accurate CheapCNNi will have
the opposite effect. Focus identifies “intelligent defaults”
that sharply improve one of the two costs for a small
worsening of the other cost.
Figure 6 illustrates the parameter selection based on
the ingest cost and query latency for one of our video
streams (auburn_c). The figure plots all the viable “configurations” (i.e., set of parameters that meet the precision
and recall target) based on their ingest cost (i.e., cost
of CheapCNNi ) and query latency (i.e., the number of
clusters according to K, Ls , T ). We first draw the Pareto
boundary [17], which is the set of configurations that
cannot improve one metric without worsening the other.
Focus can discard all the other configurations because at
least one point on the Pareto boundary is better than them
in both metrics. Focus balances between the ingest cost
and query latency (Balance in Figure 6) by selecting the
configuration that minimizes the sum of ingest and query
cost (measured in total GPU cycles).
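The sketch below illustrates this selection over an already-enumerated set of viable configurations. Representing a configuration as a dict with normalized ingest and query costs, and summing the two normalized costs for the balanced choice, are simplifications we make for illustration (the paper measures the sum in total GPU cycles).

```python
def pareto_boundary(configs):
    """configs: dicts with 'ingest_cost' and 'query_latency' (both normalized),
    already filtered down to those meeting the precision and recall targets."""
    def dominates(a, b):
        return (a['ingest_cost'] <= b['ingest_cost'] and
                a['query_latency'] <= b['query_latency'] and
                (a['ingest_cost'] < b['ingest_cost'] or
                 a['query_latency'] < b['query_latency']))
    return [c for c in configs if not any(dominates(o, c) for o in configs)]

def pick_balanced(configs):
    # Balance policy: minimize the combined ingest + query cost on the boundary.
    return min(pareto_boundary(configs),
               key=lambda c: c['ingest_cost'] + c['query_latency'])
```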
Focus also allows for other configurations based on
the application’s preferences and query rates. Opt-Ingest
Figure 6: Parameter selection based on trading off ingest
cost and query latency. The ingest cost is normalized to
ingesting all objects with ResNet152, while the query latency is normalized to the time for querying all objects with
ResNet152. The dashed line is the Pareto boundary.
minimizes the ingest cost and is applicable when the application expects most of the video streams to not get queried (such as surveillance cameras), as this policy
also minimizes the amount of wasted ingest work. On the
other hand, Opt-Query minimizes query latency even if it
incurs a heavy ingest cost. Such flexibility allows Focus
to fit different applications.
5. Implementation Details
We describe the key aspects in Focus’s implementation.
Worker Processes. Focus’s ingest-time work is distributed across many machines, with each machine running one worker process for each video stream’s ingestion. The ingest worker receives the live video stream,
and extracts the moving objects (using background subtraction [81]); it is extensible to plug in any other object
detector. The detected objects are sent to the ingest-time
CNN to infer the top-K classes and the feature vectors.
The ingest worker uses the features to cluster objects in its
video stream and stores the top-K index in MongoDB [12]
for efficient retrieval at query-time.
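A minimal sketch of this storage path with pymongo is shown below. The database and collection names, the document schema, and the index layout are our assumptions; the paper only states that the top-K index is stored in MongoDB for efficient retrieval.

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient("mongodb://localhost:27017")["focus"]   # connection details are placeholders
topk = db["topk_index"]
topk.create_index([("stream", ASCENDING), ("topk_classes", ASCENDING)])

def store_cluster(stream, cluster_id, centroid_id, object_ids, frame_ids, topk_classes):
    # One document per cluster; the multikey index on topk_classes makes
    # class lookups at query time cheap.
    topk.insert_one({
        "stream": stream, "cluster_id": cluster_id, "centroid": centroid_id,
        "objects": object_ids, "frames": frame_ids, "topk_classes": topk_classes,
    })

def clusters_for_class(stream, cls):
    return list(topk.find({"stream": stream, "topk_classes": cls}))
```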
Worker processes also serve queries by fetching the relevant frames off the top-K index database and classifying
the objects with GT-CNN. We parallelize a query’s work
across many worker processes if resources are idle.
GPUs for CNN classification. The cheap CNNs and GT-CNN execute on GPUs (or other hardware accelerators for
CNNs) which could either be local on the same machine
as the worker processes or “disaggregated” on a remote
cluster. This detail is abstracted away from our worker
process and it seamlessly works with both designs.
Dynamically adjusting K at query-time. As an enhancement, we can select a new Kx ≤ K at query
time and only extract clusters where class X appears
among the top-Kx classes; this will result in fewer clusters
and thus also lower query-time latency. This technique
is useful in two scenarios: 1) some classes might be very
accurately classified by the cheap CNN; using a lower Kx
will still meet the user-specified accuracy, yet will result
in much lower latency; 2) if we want to retrieve only some
objects of class X, we can use very low Kx to quickly retrieve them. If more objects are required, we can increase
Kx to extract a new batch of results.

Table 1: Video dataset characteristics

Type          Name        Location      Description
Traffic       auburn_c    AL, USA       A commercial area intersection in the City of Auburn [6]
Traffic       auburn_r    AL, USA       A residential area intersection in the City of Auburn [5]
Traffic       city_a_d    USA           A downtown intersection in City A3
Traffic       city_a_r    USA           A residential area intersection in City A3
Traffic       bend        OR, USA       A road-side camera in the City of Bend [8]
Traffic       jacksonh    WY, USA       A busy intersection (Town Square) in Jackson Hole [9]
Surveillance  church_st   VT, USA       A video stream that rotates among cameras in a shopping mall (Church Street Marketplace) [3]
Surveillance  lausanne    Switzerland   A pedestrian plaza (Place de la Palud) in Lausanne [11]
Surveillance  oxford      England       A bookshop street in the University of Oxford [15]
Surveillance  sittard     Netherlands   A market square in Sittard [4]
News          cnn         USA           News channel
News          foxnews     USA           News channel
News          msnbc       USA           News channel

3 The video streams are obtained from real and operational traffic cameras in a city. We mask the city name for anonymity.
6. Evaluation
We evaluate the Focus prototype with more than 150 hours
of videos from 13 real video streams that span across
traffic cameras, surveillance cameras, and news channels.
Our highlights are:
1. On average, Focus is simultaneously 58× (up to 98×)
cheaper than the Ingest-all baseline in its GPU consumption and 37× (up to 57×) faster than the Query-all
baseline in query latency, all the while achieving at
least 95% precision and recall (§6.2, §6.3).
2. Focus provides a rich trade-off space between ingest
cost and query latency. Among the video streams,
the ingest cost is up to 141× cheaper than the Ingest-all baseline (and reduces query latency by 46×) if
optimizing for low-cost ingest. The query latency is
reduced by up to 66× (with 11× cheaper ingest) if
optimizing for query latency (§6.4).
3. Focus is effective under broad conditions such as high
accuracy targets (one order-of-magnitude savings even
for 99% accuracy target, §6.5) and various frame sampling rates (30 fps-1 fps, §6.6).
6.1. Setup
Software Tools. We use OpenCV 3.2.0 [14] to decode the
videos into frames, and then use the built-in background
subtraction algorithm [43] in OpenCV to extract moving
objects from video frames. We use background subtraction instead of object detector CNNs (e.g., YOLOv2 [59]
or Faster R-CNN [60]) to detect objects because: (1)
running background subtraction is orders of magnitude
faster than running these CNNs, and (2) background subtraction can detect moving objects more reliably, while
object detector CNNs usually have difficulties on small
objects [52]. Nonetheless, our system can seamlessly use
object detector CNNs as well. We run and train CNNs
with Microsoft Cognitive Toolkit 2.1 [54], an open-source
deep learning system.
Video Datasets. We evaluate 13 live video streams that
span across traffic cameras, surveillance cameras, and
news channels. We evaluate each video stream for 12
hours, which evenly cover day time and night time. Table 1 summarizes the video characteristics. By default,
we evaluate each video at 30 fps and also evaluate the
sensitivity to other frame rates (§6.6). In some figures
we only show a representative sample of 9 cameras to
improve legibility.
Accuracy Target. We use ResNet152, a state-of-the-art
CNN, as our ground-truth CNN (GT-CNN). We evaluate all extracted objects with the GT-CNN and use the
results as the correct answers. We define a class as present in a one-second segment of video if the GT-CNN reports that class in 50% of the frames in that segment. We use this criterion as our ground truth because our GT-CNN (ResNet152) sometimes gives different answers for the exact same object in consecutive frames, and this criterion effectively eliminates these random, erroneous results. We set our default accuracy target as 95% recall and 95% precision. We also evaluate the results with other accuracy targets such as 97%, 98% and 99% (§6.5). Note that in most practical cases, only one of the two metrics (recall or precision) needs to be high. For example, an investigator cares about high recall, and looking through some irrelevant results is an acceptable trade-off. By setting both targets high, we are lower-bounding the performance improvements that Focus can achieve.
Baselines. We use two baselines for comparisons: (1)
Ingest-all, the baseline system that uses GT-CNN to analyze all objects at ingest time, and stores the inverted
index for query; and (2) Query-all, the baseline system that
simply extracts objects at ingest time, and uses GT-CNN
to analyze all the objects that fall into the query interval at
query time. Note that we strengthen both baselines with
basic motion detection (background subtraction). Therefore, the baselines do not run any GT-CNN on the frames
that have no moving objects. Note that not running GT-CNN on frames with no moving objects is one of the core
techniques in the recent NoScope work [44].
Metrics. We use two performance metrics. The first
metric is ingest cost, which is the GPU time to ingest each
video. The second metric is query latency, which is the
latency for an object class query. Specifically, for each
video stream, we evaluate all dominant object classes
and take the average of their latencies. (Querying for
non-dominant “OTHER” classes is much cheaper than
querying popular classes, and would skew the results
because there are far more such classes; thus, we focus on
the popular ones.) Both metrics include only GPU time spent classifying images and exclude other (CPU) time spent decoding video frames, detecting moving objects, recording and loading video, and reading and writing to the top-K index. We focus solely on GPU time because when the GPU is involved, it is the bottleneck resource. The query latency of Ingest-all is 0 and the ingest cost of Query-all is 0.
Experiment Platform. We run the experiments on our local cluster. Each machine in the cluster is equipped with a state-of-the-art GPU (NVIDIA GTX Titan X), a 16-core Intel Xeon CPU (E5-2698), 64 GB RAM, a 40 GbE NIC, and runs 64-bit Ubuntu 16.04 LTS.
6.2. End-to-End Performance
We first show the end-to-end performance of Focus by showing its ingest cost and query latency when Focus aims to balance these two metrics (§4.4). Figure 7 compares the ingest cost of Focus with Ingest-all and the query latency of Focus with Query-all. We make two main observations.
Figure 7: (Top) Focus ingest cost compared to Ingest-all. (Bottom) Focus query latency compared to Query-all.
First, Focus significantly improves query latency with a very small ingest cost. Focus makes queries an average of 37× faster than Query-all at a very small cost at ingest time (an average of 58× cheaper than Ingest-all). With a 10-GPU cluster, the query latency on a 24-hour video goes down from one hour to less than two minutes. The processing cost of each video stream also goes down from $250/month to $4/month. This shows that Focus can strike a very good balance between these two competing goals.
Second, Focus is effective across different video streams with various characteristics. It makes queries 11× to 57× faster with a very small ingest-time cost (48× to 98× cheaper) across busy intersections (auburn_c, city_a_d and jacksonh), normal intersections or roads (auburn_r, city_a_r, and bend), rotating cameras (church_st), busy plazas (lausanne and sittard), a university street (oxford), and different news channels (cnn, foxnews, and msnbc). Among these videos, the gains in query latency are smaller for the relatively less busy videos (auburn_r, bend, lausanne, and oxford). This is because these videos are dominated by fewer object classes, and Focus has more work (i.e., analysis using GT-CNN) to do at query time for these classes. We conclude that the core techniques of Focus are general and effective on a variety of real-world videos.
6.3. Effect of Different Focus Components
Figure 8 shows the breakdown of ingest-time cost and
query latency across different design points of Focus: (1)
Compressed model, which applies a generic compressed
model for indexing at ingest time, (2) Compressed + Specialized model, which uses a per-stream specialized and
compressed model for indexing, and (3) Compressed +
Specialized model + Clustering, which adds feature-based
clustering at ingest time to reduce redundant work at query
time. All of the above include the top-K index and using
GT-CNN at query-time, and achieve the same accuracy
of 95%. Three main observations are in order.
First, generic compressed models provide benefits for
both ingest cost and query latency, but they are not the
major source of improvement. This is because the accuracy of a generic compressed model degrades significantly
when we remove convolutional layers. In order to retain
the accuracy target, we need to choose relatively expensive compressed models (CheapCNNi ) and a larger K,
which incur higher ingest cost and query latency.
Second, specializing the model (in addition to compressing it) greatly reduces ingest cost and query latency.
Because of fewer convolutional layers and smaller input
resolution, our specialized models are 7× to 71× cheaper
than the GT-CNN, while retaining the accuracy target for
each video streams. Running a specialized model at ingest
time speeds up query latency by 5× to 25× (Figure 8b).
Third, clustering is a very effective technique to further
reduce query latency with unnoticeable costs at ingest
time. As Figure 8b shows, using clustering (on top of a
specialized compressed model) reduces the query latency
by up to 56×, significantly better than just running a specialized model at ingest time. This gain comes with a
negligible cost (Figure 8a), because we run our clustering algorithm (§4.2) on the CPUs of the ingest machine,
which is fully pipelined with the GPUs that run the specialized CNN model.
6.4. Ingest Cost vs. Query Latency Trade-off
One of the interesting features of Focus is the flexibility to
tune its system parameters to achieve different application
goals (§4.4). Figure 1 from §1 depicted three alternative settings for Focus that illustrate the trade-off space between ingest cost and query latency, using the auburn_c video stream: (1) Focus-Opt-Query, which optimizes for query latency by increasing ingest cost, (2) Focus-Balance, which is the default option that balances these two metrics (§4.4), and (3) Focus-Opt-Ingest, which is the opposite of Focus-Opt-Query. The results are shown relative to the two baselines. The chart at the right of the figure is the zoomed-in region that covers the three settings of Focus, and each data label (I, Q) indicates that its ingest cost is I× cheaper than Ingest-all, while its query latency is Q× faster than Query-all.
As Figure 1 shows, Focus offers very good options in the trade-off space between ingest cost and query latency. Focus-Opt-Ingest achieves 141× cheaper cost than Ingest-all to ingest the video stream, and makes the query 46× faster than doing nothing at ingest (Query-all). On the other hand, Focus-Opt-Query reduces query latency by 63× with a relatively higher ingest cost, but it is still 26× cheaper than Ingest-all. As they are all good options compared to the baselines, such flexibility allows a user to tailor Focus for different contexts. For example, a traffic camera that requires fast turnaround time for queries can use Focus-Opt-Query, while a surveillance video stream that will be queried very rarely would choose Focus-Opt-Ingest to reduce the amount of wasted ingest cost.
Figure 9 shows the (I, Q) values for both Focus-Opt-Ingest (Opt-I) and Focus-Opt-Query (Opt-Q) for the representative videos. As the figure shows, this trade-off flexibility exists among all the other videos as well. On average, Focus-Opt-Ingest makes ingestion 95× cheaper while still providing a 35× reduction in query latency. On the other hand, Focus-Opt-Query makes queries 49× faster with a higher ingest cost (15× cheaper than Ingest-all). We conclude that Focus provides good flexibility between ingest cost and query latency, which makes it a better fit for different contexts.
Figure 8: Effect of different Focus components: (a) ingest cost and (b) query latency.
Figure 9: Trade-offs between ingest cost and query latency for Focus-Opt-Ingest (Opt-I) and Focus-Opt-Query (Opt-Q).
6.5. Sensitivity to Accuracy Target
Figures 10 and 11 illustrate the improvements in ingest cost and query latency of Focus compared to the baselines under different accuracy targets. Other than the default 95% accuracy target (recall and precision), we evaluate three higher targets: 97%, 98%, and 99%.
As the figures show, with higher accuracy targets, the ingest costs are about the same, and the improvement in query latency decreases. Focus keeps the ingest cost similar (62× to 64× cheaper than the baseline) because it still runs the specialized and compressed CNN at ingest time. However, when the accuracy targets are higher, Focus needs to select more top-K classification results, which increases the work at query time. On average, the query latency of Focus is faster than Query-all by 15×, 12×, and 8× for the 97%, 98%, and 99% accuracy targets, respectively. We conclude that the techniques of Focus can achieve higher accuracy targets with significant improvements in both ingest cost and query latency.
Figure 10: Ingest cost sensitivity to accuracy target.
Figure 11: Query latency sensitivity to accuracy target.
6.6. Sensitivity to Frame Sampling
A common approach to reducing video processing time is to use frame sampling (i.e., periodically selecting a frame to process). However, not all applications can use frame sampling because it can miss objects that show up and disappear within a frame sampling window. As the frame sampling rate is an application-dependent choice, we study the sensitivity of Focus's performance to different frame rates. Figures 12 and 13 show the ingest cost and query latency of Focus at different frame rates (i.e., 30 fps, 10 fps, 5 fps, and 1 fps) compared to Ingest-all and Query-all, respectively. We make two observations.
First, the ingest cost reduction is roughly the same across the different frame rates. On average, the ingest
cost of Focus is 62× cheaper than Ingest-all at 30 fps, and it is 64× to 58× cheaper at lower frame rates. This is because the major ingest cost saving comes from the specialized and compressed CNN models (§6.3), which are orthogonal to the frame sampling rate.
Second, the query latency improvement of Focus degrades with lower frame rates. This is expected because one of our key techniques to reduce query latency is redundancy elimination, especially clustering similar objects using CNN feature vectors. At lower frame rates, the benefit of this technique reduces because there are fewer redundancies. Nonetheless, on average, Focus is still one order of magnitude faster than Query-all at a very low frame rate (1 fps).
Figure 12: Ingest cost sensitivity to frame sampling.
Figure 13: Query latency sensitivity to frame sampling.
6.7. Applicability with Different Query Rate
There are two factors that can affect the applicability of Focus: 1) the number of classes that get queried over time and 2) the fraction of videos that get queried. In the first extreme case, where all the classes and all the videos are queried, Ingest-all could be a good option because its cost is amortized among all the queries. In our study, even in such an extreme case, the overall cost of Focus is still 4× cheaper than Ingest-all on average (up to 6× cheaper), because we run a very cheap CNN at ingest time and we run GT-CNN only once per object cluster (§5), so the overall cost is still lower than Ingest-all.
The second extreme case is when only a tiny fraction of the videos gets queried. While Focus can save the ingest cost by up to 141× (§6.4), it can be more costly than Query-all if the fraction of videos that gets queried is less than 1/141 ≈ 0.7%. In such a case, we can choose to do nothing at ingest time and run all the techniques of Focus only at query time, when we know the fraction of videos that get queried. While this approach increases query latency, it still reduces the query latency by an average of 22× (up to 34×) compared to Query-all in our evaluation. We conclude that Focus is still better than both baselines even under extreme query rates.
7. Related Work
To our best knowledge, Focus is the first system that offers low-cost, low-latency, and high-accuracy video queries by balancing between ingest-time cost and query latency. We now discuss work related to our key techniques.
1) Cascaded classification. Various works in vision research propose speeding up classification by cascading a series of classifiers. Viola et al. [75] is the earliest work which cascades a series of classifiers (from the simplest to the most complicated) to quickly disregard regions in an image. Many improvements followed (e.g., [50, 76, 77]). CNNs are also cascaded (e.g., [26, 35, 49, 70]) to reduce object detection latency. Our work is different in two major ways. First, we decouple the compressed CNN from the GT-CNN, which allows us to choose from a wider range of ingest-time CNNs and allows for better trade-offs between ingest cost and query latency, a key aspect of our work. Second, we cluster similar objects using CNN features to eliminate redundant work, which is a new and effective technique for video streams.
2) Neural network compression. Recent work proposes various techniques to reduce the running time of CNNs. These techniques include shallow models [21], predicting weights [33], matrix pruning [31, 37], model quantization [36], and others (e.g., [20, 34, 39, 41, 42, 57, 62]). Our work is largely orthogonal to these, in that our system is not tied to a specific model compression technique, and we can employ any of these techniques.
3) Context-specific model specialization. Context-specific specialization of models can improve accuracy [53] or reduce running time [35, 44, 65]. Among these, the closest to our work is Kang et al.'s proposal, NoScope [44], which aims to optimize CNN-based video queries. A few key differences stand out. First, NoScope applies all the optimizations at query-time, while Focus adopts a different architecture by splitting work between ingest- and query-time. Thus, Focus trades off higher ingest cost for even lower query latency. Second, NoScope optimizes CNNs for a single class, while we optimize ingest CNNs for all frequent classes in the stream and allow queries even for the rare – OTHER – classes. Finally, we use the object feature vectors to cluster similar objects and create an index to map classes to clusters; this allows us to efficiently query across all classes, while NoScope has to redo all query-time work, including training specialized CNNs, for each query.
4) Stream processing systems. Systems for general stream data processing (e.g., [1, 18, 19, 24, 28, 29, 51, 56, 73, 74, 79]) and specific to video analytics (e.g., [80]) mainly focus on general stream processing challenges such as load shedding, fault tolerance, distributed execution, or limited network bandwidth. In contrast, our work is specific to querying recorded video data with ingest and query trade-offs, and thus is mostly orthogonal to these.
We can integrate Focus with one of these general stream processing systems to build a more fault-tolerant system.
5) Video indexing and retrieval. A large body of works
in multimedia and information retrieval research propose
various content-based video indexing and retrieval techniques to facilitate queries on videos (e.g., [40,48,68,69]).
Among them, most works focus on indexing videos for
different types of queries such as shot boundary detection [78], semantic video search [30], video classification [25], or spatio-temporal information-based video retrieval [61]. Some works (e.g., [32, 67]) focus on the
query interface to enable query by keywords, concepts,
or examples. These works are largely orthogonal to our
work because we focus on the cost and latency of video
queries, not query types or interfaces. We believe our idea
of splitting ingest-time and query-time work is generic
for videos queries, and can be extended to different types
of queries.
8. Conclusion
Answering queries of the form "find me frames that contain objects of class X" is an important workload on recorded video datasets. Such queries are used by analysts and investigators, and it is crucial to answer them with low latency and low cost. We present Focus, a system that performs low-cost ingest-time analytics on live video that later facilitates low-latency queries on the recorded videos. Focus uses compressed and specialized CNNs at ingest-time that substantially reduce cost. It also clusters similar objects to reduce the work done at query-time, and hence the latency. Focus selects the ingest-time CNN and its parameters to smartly trade off between the ingest-time cost and query-time latency. Our evaluations using 150 hours of video from traffic, surveillance, and news domains show that Focus reduces GPU consumption by 58× and makes queries 37× faster compared to current baselines. We conclude that Focus is a promising approach to querying large video datasets. We hope that Focus will enable future work on better determining the ingest-time and query-time trade-offs in video querying systems. Our next steps include training a specialized and highly accurate query-time CNN for each stream and object to further reduce query latency.
References
[1] "Apache Storm." [Online]. Available: http://storm.apache.org/index.html
[2] "Avigilon," http://avigilon.com/products/.
[3] "Church Street Market Place." [Online]. Available: https://www.youtube.com/watch?v=S3Bl8AuKPds
[4] "City Cam, WebcamSittard: Town Square Sittard (NL)." [Online]. Available: https://www.youtube.com/watch?v=Zb9koIwo3Js
[5] “City of Auburn North Ross St and East Magnolia Ave.”
[Online]. Available: https://www.youtube.com/watch?v=
cjuskMMYlLA
[6] “City of Auburn Toomer’s Corner Webcam.” [Online].
Available: https://www.youtube.com/watch?v=yJAk_
FozAmI
[7] “Genetec,” https://www.genetec.com/.
[8] “Greenwood Avenue Bend, Oregon.” [Online]. Available:
https://www.youtube.com/watch?v=SNz323Cyago
[9] “Jackson Hole Wyoming USA Town Square.” [Online]. Available: https://www.youtube.com/watch?v=
psfFJR3vZ78
[10] "L2 Norm." [Online]. Available: http://mathworld.
wolfram.com/L2-Norm.html
[11] “Lausanne, Place de la Palud.” [Online]. Available:
https://www.youtube.com/watch?v=GdhEsWcV4iE
[12] “MongoDB.” [Online]. Available: https://www.mongodb.
com/
[13] “Nvidia Tesla K80.” [Online]. Available: http://www.
nvidia.com/object/tesla-k80.html
[14] “Opencv 3.2.” [Online]. Available: http://opencv.org/
opencv-3-2.html
[15] “Oxford Martin School Webcam - Broad Street, Oxford.”
[Online]. Available: https://www.youtube.com/watch?v=
Qhq4vQdfrFw
[16] “Top Video Surveillance Trends for 2016.” [Online].
Available: https://technology.ihs.com/api/binary/572252
[17] “Wikipedia: Pareto efficiency.” [Online]. Available:
https://en.wikipedia.org/wiki/Pareto_efficiency
[18] D. J. Abadi, Y. Ahmad, M. Balazinska, U. Çetintemel,
M. Cherniack, J. Hwang, W. Lindner, A. Maskey, A. Rasin,
E. Ryvkina, N. Tatbul, Y. Xing, and S. B. Zdonik, “The
design of the Borealis stream processing engine,” in CIDR,
2005.
[19] L. Amini, H. Andrade, R. Bhagwan, F. Eskesen, R. King,
P. Selo, Y. Park, and C. Venkatramani, “SPC: A distributed,
scalable platform for data mining,” in DM-SSP, 2006.
[20] S. Anwar, K. Hwang, and W. Sung, “Fixed point optimization of deep convolutional neural networks for object
recognition,” in ICASSP, 2015.
[21] J. Ba and R. Caruana, “Do deep nets really need to be
deep?” in NIPS, 2014.
[22] A. Babenko and V. S. Lempitsky, “Aggregating deep convolutional features for image retrieval,” in ICCV, 2015.
[23] A. Babenko, A. Slesarev, A. Chigorin, and V. S. Lempitsky,
“Neural codes for image retrieval,” in ECCV, 2014.
[24] P. Bailis, E. Gan, S. Madden, D. Narayanan, K. Rong, and
S. Suri, “MacroBase: Prioritizing attention in fast data,” in
SIGMOD, 2017.
[25] D. Brezeale and D. J. Cook, “Automatic video classification: A survey of the literature,” IEEE Trans. Systems,
Man, and Cybernetics, Part C.
[26] Z. Cai, M. J. Saberian, and N. Vasconcelos, “Learning
complexity-aware cascades for deep pedestrian detection,”
in ICCV, 2015.
[27] F. Cao, M. Ester, W. Qian, and A. Zhou, “Density-based
clustering over an evolving data stream with noise,” in
SIAM International Conference on Data Mining, 2006.
[28] D. Carney, U. Çetintemel, M. Cherniack, C. Convey,
S. Lee, G. Seidman, M. Stonebraker, N. Tatbul, and S. B.
Zdonik, "Monitoring streams - A new class of data management applications," in VLDB, 2002.
[29] S. Chandrasekaran, O. Cooper, A. Deshpande, M. J. Franklin, J. M. Hellerstein, W. Hong, S. Krishnamurthy, S. Madden, F. Reiss, and M. A. Shah, "TelegraphCQ: Continuous dataflow processing," in SIGMOD, 2003.
[30] S. Chang, W. Ma, and A. W. M. Smeulders, "Recent advances and challenges of semantic image/video search," in ICASSP, 2007.
[31] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," CoRR, vol. abs/1504.04788, 2015.
[32] M. G. Christel and R. M. Conescu, "Mining novice user activity with TRECVID interactive retrieval tasks," in CIVR.
[33] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. de Freitas, "Predicting parameters in deep learning," in NIPS, 2013.
[34] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, "Exploiting linear structure within convolutional networks for efficient evaluation," in NIPS, 2014.
[35] S. Han, H. Shen, M. Philipose, S. Agarwal, A. Wolman, and A. Krishnamurthy, "MCDNN: An approximation-based execution framework for deep stream processing under resource constraints," in MobiSys, 2016.
[36] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding," in ICLR, 2016.
[37] S. Han, J. Pool, J. Tran, and W. Dally, "Learning both weights and connections for efficient neural network," in NIPS, 2015.
[38] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016.
[39] G. E. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," CoRR, vol. abs/1503.02531, 2015.
[40] W. Hu, N. Xie, L. Li, X. Zeng, and S. J. Maybank, "A survey on visual content-based video indexing and retrieval," IEEE Trans. Systems, Man, and Cybernetics, Part C.
[41] K. Hwang and W. Sung, "Fixed-point feedforward deep neural network design using weights +1, 0, and -1," in SiPS, 2014.
[42] M. Jaderberg, A. Vedaldi, and A. Zisserman, "Speeding up convolutional neural networks with low rank expansions," CoRR, vol. abs/1405.3866, 2014.
[43] P. KaewTraKulPong and R. Bowden, "An improved adaptive background mixture model for real-time tracking with shadow detection," in AVSS, 2001.
[44] D. Kang, J. Emmons, F. Abuzaid, P. Bailis, and M. Zaharia, "NoScope: Optimizing deep CNN-based queries over video streams at scale," PVLDB, 2017.
[45] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[46] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, "Face recognition: a convolutional neural-network approach," IEEE Trans. Neural Networks, 1997.
[47] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, 1989.
[48] M. S. Lew, N. Sebe, C. Djeraba, and R. Jain, “Contentbased multimedia information retrieval: State of the art
and challenges,” TOMCCAP.
[49] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, “A convolutional neural network cascade for face detection,” in
CVPR, 2015.
[50] R. Lienhart and J. Maydt, “An extended set of Haar-like
features for rapid object detection,” in ICIP, 2002.
[51] W. Lin, H. Fan, Z. Qian, J. Xu, S. Yang, J. Zhou, and
L. Zhou, “StreamScope: Continuous reliable distributed
processing of big data streams,” in NSDI, 2016.
[52] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed,
C. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in ECCV.
[53] A. Mhalla, H. Maâmatou, T. Chateau, S. Gazzah, and
N. E. B. Amara, “Faster R-CNN scene specialization with
a sequential monte-carlo framework,” in DICTA.
[54] Microsoft, “Microsoft Cognitive Toolkit.” [Online]. Available: https://www.microsoft.com/en-us/cognitive-toolkit/
[55] L. O’Callaghan, N. Mishra, A. Meyerson, and S. Guha,
“Streaming-data algorithms for high-quality clustering,” in
ICDE, 2002.
[56] A. Rabkin, M. Arye, S. Sen, V. S. Pai, and M. J. Freedman,
“Aggregation and degradation in JetStream: Streaming
analytics in the wide area,” in NSDI, 2014.
[57] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi,
“XNOR-Net: ImageNet classification using binary convolutional neural networks,” in ECCV, 2016.
[58] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson,
“CNN features off-the-shelf: An astounding baseline for
recognition,” in CVPR Workshops, 2014.
[59] J. Redmon and A. Farhadi, “YOLO9000: Better, faster,
stronger,” CoRR, vol. abs/1612.08242, 2016.
[60] S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN:
Towards real-time object detection with region proposal
networks,” in NIPS, 2015.
[61] W. Ren, S. Singh, M. Singh, and Y. S. Zhu, “State-of-theart on spatio-temporal information-based video retrieval,”
Pattern Recognition.
[62] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta,
and Y. Bengio, “FitNets: Hints for thin deep nets,” CoRR,
vol. abs/1412.6550, 2014.
[63] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh,
S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,
A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual
recognition challenge,” IJCV, 2015.
[64] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A
unified embedding for face recognition and clustering,” in
CVPR, 2015.
[65] H. Shen, S. Han, M. Philipose, and A. Krishnamurthy,
“Fast video classification via adaptive cascading of deep
models,” in CVPR, 2017.
[66] K. Simonyan and A. Zisserman, “Very deep convolutional
networks for large-scale image recognition,” in ICLR,
2015.
[67] C. Snoek, K. E. A. van de Sande, O. de Rooij, B. Huurnink,
E. Gavves, D. Odijk, M. de Rijke, T. Gevers, M. Worring, D. Koelma, and A. W. M. Smeulders, “The mediamill TRECVID 2010 semantic video search engine,” in
TRECVID 2010 workshop participants notebook papers,
2010.
[68] C. Snoek and M. Worring, “Multimodal video indexing:
A review of the state-of-the-art,” Multimedia Tools Appl.
[69] C. G. M. Snoek and M. Worring, “Concept-based video retrieval,” Foundations and Trends in Information Retrieval.
[70] Y. Sun, X. Wang, and X. Tang, “Deep convolutional network cascade for facial point detection,” in CVPR, 2013.
[71] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich,
“Going deeper with convolutions,” in CVPR, 2015.
[72] P.-N. Tan, M. Steinbach, and V. Kumar, Introduction
to Data Mining, (First Edition). Boston, MA, USA:
Addison-Wesley Longman Publishing Co., Inc., 2005.
[73] N. Tatbul, U. Çetintemel, and S. B. Zdonik, “Staying FIT:
efficient load shedding techniques for distributed stream
processing,” in VLDB, 2007.
[74] Y. Tu, S. Liu, S. Prabhakar, and B. Yao, “Load shedding
in stream databases: A control-based approach,” in VLDB,
2006.
[75] P. A. Viola and M. J. Jones, “Rapid object detection using
a boosted cascade of simple features,” in CVPR, 2001.
[76] Z. E. Xu, M. J. Kusner, K. Q. Weinberger, and M. Chen,
“Cost-sensitive tree of classifiers,” in ICML, 2013.
[77] Q. Yang, C. X. Ling, X. Chai, and R. Pan, “Test-cost
sensitive classification on data with missing values,” IEEE
Trans. Knowl. Data Eng., 2006.
[78] J. Yuan, H. Wang, L. Xiao, W. Zheng, J. Li, F. Lin, and
B. Zhang, “A formal study of shot boundary detection,”
IEEE Trans. Circuits Syst. Video Techn.
[79] M. Zaharia, T. Das, H. Li, T. Hunter, S. Shenker, and
I. Stoica, “Discretized streams: fault-tolerant streaming
computation at scale,” in SOSP, 2013.
[80] H. Zhang, G. Ananthanarayanan, P. Bodík, M. Philipose,
P. Bahl, and M. J. Freedman, “Live video analytics at scale
with approximation and delay-tolerance,” in NSDI, 2017.
[81] Z. Zivkovic, “Improved adaptive gaussian mixture model
for background subtraction,” in ICPR, 2004.
arXiv:1605.03662v2 [math.ST] 21 Jan 2018
Subspace Perspective on Canonical Correlation Analysis:
Dimension Reduction and Minimax Rates
Zhuang Ma and Xiaodong Li
Abstract
Canonical correlation analysis (CCA) is a fundamental statistical tool for exploring the
correlation structure between two sets of random variables. In this paper, motivated by the
recent success of applying CCA to learn low dimensional representations of high dimensional
objects, we propose two losses based on the principal angles between the model spaces spanned
by the sample canonical variates and their population correspondents, respectively. We further
characterize the non-asymptotic error bounds for the estimation risks under the proposed error
metrics, which reveal how the performance of sample CCA depends adaptively on key quantities
including the dimensions, the sample size, the condition number of the covariance matrices and
particularly the population canonical correlation coefficients. The optimality of our uniform
upper bounds is also justified by lower-bound analysis based on stringent and localized parameter
spaces. To the best of our knowledge, for the first time our paper separates p1 and p2 for the
first order term in the upper bounds without assuming the residual correlations are zeros. More
significantly, our paper derives $(1-\lambda_k^2)(1-\lambda_{k+1}^2)/(\lambda_k-\lambda_{k+1})^2$ for the first time in the non-asymptotic CCA estimation convergence rates, which is essential to understand the behavior of
CCA when the leading canonical correlation coefficients are close to 1.
1 Introduction
Canonical correlation analysis (CCA), first introduced by Hotelling (1936), is a fundamental
statistical tool to characterize the relationship between two groups of random variables and finds
a wide range of applications across many different fields. For example, in genome-wide association
study (GWAS), CCA is used to discover the genetic associations between the genotype data
of single nucleotide polymorphisms (SNPs) and the phenotype data of gene expression levels
(Witten et al., 2009; Chen et al., 2012). In information retrieval, CCA is used to embed both
the search space (e.g. images) and the query space (e.g. text) into a shared low dimensional
latent space such that the similarity between the queries and the candidates can be quantified
(Rasiwasia et al., 2010; Gong et al., 2014). In natural language processing, CCA is applied to
the word co-occurrence matrix and learns vector representations of the words which capture the
semantics (Dhillon et al., 2011; Faruqui and Dyer, 2014). Other applications, to name a few,
include fMRI data analysis (Friman et al., 2003), computer vision (Kim et al., 2007) and speech
recognition (Arora and Livescu, 2013; Wang et al., 2015).
The enormous empirical success motivates us to revisit the estimation problem of canonical
correlation analysis. Two theoretical questions are naturally posed: What are proper error metrics
to quantify the discrepancy between population CCA and its sample estimates? And under such
metrics, what are the quantities that characterize the fundamental statistical limits?
The justification of loss functions, in the context of CCA, has seldom appeared in the literature.
From first principles that the proper metric to quantify the estimation loss should depend on the
specific purpose of using CCA, we find that the applications discussed above mainly fall into two
categories: identifying variables of interest and dimension reduction.
The first category, mostly in genomic research (Witten et al., 2009; Chen et al., 2012), treats
one group of variables as responses and the other group of variables as covariates. The goal is to
discover the specific subset of the covariates that are most correlated with the responses. Such
applications are featured by low signal-to-noise ratio and the interpretability of the results is the
major concern.
In contrast, the second category is investigated extensively in statistical machine learning and
engineering community where CCA is used to learn low dimensional latent representations of
complex objects such as images (Rasiwasia et al., 2010), text (Dhillon et al., 2011) and speeches
(Arora and Livescu, 2013). These scenarios are usually accompanied by relatively high signal-to-noise ratios, and the prediction accuracy, using the learned low dimensional embeddings as the
new set of predictors, is of primary interest. In recent years, there has been a series of publications
establishing fundamental theoretical guarantees for CCA to achieve sufficient dimension reduction
(Kakade and Foster (2007); Foster et al. (2008); Sridharan and Kakade (2008); Fukumizu et al.
(2009); Chaudhuri et al. (2009) and many others).
In this paper, we aim to address the problems raised above by treating CCA as a tool for
dimension reduction.
1.1 Population and Sample CCA
Suppose $x = [X_1, \ldots, X_{p_1}]^\top \in \mathbb{R}^{p_1}$ and $y = [Y_1, \ldots, Y_{p_2}]^\top \in \mathbb{R}^{p_2}$ are two sets of variates with the
joint covariance matrix
$$
\mathrm{Cov}\left( \begin{bmatrix} x \\ y \end{bmatrix} \right) = \Sigma :=
\begin{bmatrix} \Sigma_x & \Sigma_{xy} \\ \Sigma_{xy}^\top & \Sigma_y \end{bmatrix}. \tag{1.1}
$$
For simplicity, we assume
$$
E(X_i) = 0, \ i = 1, \ldots, p_1, \qquad E(Y_j) = 0, \ j = 1, \ldots, p_2.
$$
On the population level, CCA is designed to extract the most correlated linear combinations
between two sets of random variables sequentially: the $i$th pair of canonical variables $U_i = \phi_i^\top x$
and $V_i = \psi_i^\top y$ maximizes
$$
\lambda_i = \mathrm{Corr}(U_i, V_i)
$$
such that $U_i$ and $V_i$ have unit variances and they are uncorrelated to all previous pairs of canonical
variables. Here $(\phi_i, \psi_i)$ is called the $i$th pair of canonical loadings and $\lambda_i$ is the $i$th canonical
correlation.
It is well known in multivariate statistical analysis that the canonical loadings can be found
recursively by the following criterion:
$$
(\phi_i, \psi_i) = \arg\max_{\phi, \psi} \ \phi^\top \Sigma_{xy} \psi
\quad \text{subject to} \quad \phi^\top \Sigma_x \phi = 1, \ \psi^\top \Sigma_y \psi = 1;
\quad \phi^\top \Sigma_x \phi_j = 0, \ \psi^\top \Sigma_y \psi_j = 0, \ \forall\, 1 \le j \le i-1. \tag{1.2}
$$
Although this criterion is a nonconvex optimization, it can be solved easily by spectral methods:
define $\Phi := [\phi_1, \cdots, \phi_{p_1 \wedge p_2}]$, $\Psi := [\psi_1, \cdots, \psi_{p_1 \wedge p_2}]$ and $\Lambda := \mathrm{diag}(\lambda_1, \cdots, \lambda_{p_1 \wedge p_2})$. Then
$\lambda_1, \ldots, \lambda_{p_1 \wedge p_2}$ are the singular values of $\Sigma_x^{-1/2} \Sigma_{xy} \Sigma_y^{-1/2}$, and $\Sigma_x^{1/2}\Phi$, $\Sigma_y^{1/2}\Psi$ are the corresponding left and right
singular vectors of $\Sigma_x^{-1/2} \Sigma_{xy} \Sigma_y^{-1/2}$, respectively.
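To make the spectral characterization concrete, the following minimal sketch (ours, not part of the paper; it assumes numpy is available and the helper name `population_cca` is our own) computes the canonical correlations and loadings from given population covariance blocks by whitening and an SVD.

```python
import numpy as np

def inv_sqrt(S):
    """Inverse square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def population_cca(Sigma_x, Sigma_xy, Sigma_y):
    """Canonical correlations Lambda and loadings Phi, Psi via the whitened SVD."""
    Sx_ih, Sy_ih = inv_sqrt(Sigma_x), inv_sqrt(Sigma_y)
    # Singular values of Sigma_x^{-1/2} Sigma_xy Sigma_y^{-1/2} are the lambdas
    U, lam, Vt = np.linalg.svd(Sx_ih @ Sigma_xy @ Sy_ih)
    r = min(Sigma_x.shape[0], Sigma_y.shape[0])
    Phi = Sx_ih @ U[:, :r]      # since Sigma_x^{1/2} Phi are the left singular vectors
    Psi = Sy_ih @ Vt.T[:, :r]   # since Sigma_y^{1/2} Psi are the right singular vectors
    return lam[:r], Phi, Psi

# Tiny check with identity marginals (the same setup as the toy example in Section 3.1)
lam, Phi, Psi = population_cca(np.eye(2), np.array([[1.0, 0.0], [0.0, 0.5]]), np.eye(2))
print(lam)        # [1.0, 0.5]
print(Phi[:, 0])  # +/- [1, 0]
```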
1.2 Canonical variables versus canonical loadings
For any given estimates of the leading $k$ canonical loadings, denoted by $\{(\hat\phi_i, \hat\psi_i)\}_{i=1}^{k}$, the
corresponding estimates for the canonical variables can be represented by
$$
\hat U_i = \hat\phi_i^\top x, \quad \hat V_i = \hat\psi_i^\top y, \quad i = 1, \ldots, p_1 \wedge p_2.
$$
To quantify the estimation loss, generally speaking, we can either focus on measuring the difference
between the canonical loadings $\{(\phi_i, \psi_i)\}_{i=1}^{k}$ and $\{(\hat\phi_i, \hat\psi_i)\}_{i=1}^{k}$, or measuring the difference between
the canonical variables $\{(U_i, V_i)\}_{i=1}^{k}$ and $\{(\hat U_i, \hat V_i)\}_{i=1}^{k}$. Here $x, y$ in the definition of $\{(U_i, V_i)\}_{i=1}^{k}$
and $\{(\hat U_i, \hat V_i)\}_{i=1}^{k}$ are independent of the samples based on which $\{(\hat\phi_i, \hat\psi_i)\}_{i=1}^{k}$ are constructed.
Therefore, for the discrepancy between the canonical variables, there is an extra layer of randomness.
As discussed above, in modern machine learning applications such as natural language
processing and information retrieval, the leading sample canonical loadings are used for dimension
reduction, i.e., for a new observation $(x_0, y_0)$, ideally we hope to use the corresponding values of
the canonical variables $(u_i = \phi_i^\top x_0)_{i=1}^{k}$ and $(v_i = \psi_i^\top y_0)_{i=1}^{k}$ to represent the observation in a low
dimensional space. Empirically, the actual low dimensional representations are $(\hat u_i = \hat\phi_i^\top x_0)_{i=1}^{k}$
and $(\hat v_i = \hat\psi_i^\top y_0)_{i=1}^{k}$. Therefore, the discrepancy between the ideal dimension reduction and the
actual dimension reduction should be explained by how well $\{(\hat U_i, \hat V_i)\}_{i=1}^{k}$ approximate $\{(U_i, V_i)\}_{i=1}^{k}$.
Consequently, we choose to quantify the difference between the sample and population canonical
variables instead of the canonical loadings.
1.3 Linear span
However, there are still many options to quantify how well the sample canonical variables
approximate their population correspondents. To choose suitable losses, it is convenient to come
back to specific applications to get some inspiration.
Motivated by applications in natural language processing and information retrieval, the model
of multi-view sufficient dimension reduction has been studied in Foster et al. (2008). Roughly
speaking, a statistical model was proposed by Foster et al. (2008) to study how to predict Z using
two sets of predictors denoted by $x = [X_1, \ldots, X_{p_1}]^\top$ and $y = [Y_1, \ldots, Y_{p_2}]^\top$, where the joint
covariance of $(Z, x, y)$ is
$$
\mathrm{Cov}\left( \begin{bmatrix} x \\ y \\ Z \end{bmatrix} \right) =
\begin{bmatrix}
\Sigma_x & \Sigma_{xy} & \sigma_{xz} \\
\Sigma_{xy}^\top & \Sigma_y & \sigma_{yz} \\
\sigma_{xz}^\top & \sigma_{yz}^\top & \sigma_z^2
\end{bmatrix}.
$$
It was proven in Foster et al. (2008) that under certain assumptions, the leading $k$ canonical
variables $U_1, \ldots, U_k$ form a sufficient dimension reduction for the linear prediction of $Z$; that is, the
best linear predictor of $Z$ based on $X_1, \ldots, X_{p_1}$ is the same as the best linear predictor based on
$U_1, \ldots, U_k$. (Similarly, the best linear predictor of $Z$ based on $Y_1, \ldots, Y_{p_2}$ is the same as the best
linear predictor based on $V_1, \ldots, V_k$.)
Notice that the best linear predictor is actually determined by the set of all linear combinations
of $U_1, \ldots, U_k$ (referred to as the "model space" in the literature of linear regression for prediction),
which we denote as $\mathrm{span}(U_1, \ldots, U_k)$. Inspired by Foster et al. (2008), we propose to quantify the
discrepancy between $\{U_i\}_{i=1}^{k}$ and $\{\hat U_i\}_{i=1}^{k}$ by the discrepancy between the corresponding subspaces
$\mathrm{span}(\hat U_1, \ldots, \hat U_k)$ and $\mathrm{span}(U_1, \ldots, U_k)$ (and similarly measure the difference between $\{V_i\}_{i=1}^{k}$ and
$\{\hat V_i\}_{i=1}^{k}$ by the distance between $\mathrm{span}(\hat V_1, \ldots, \hat V_k)$ and $\mathrm{span}(V_1, \ldots, V_k)$).
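To illustrate the model-space point, here is a short sketch of ours (a synthetic one-factor multi-view model that we make up purely for illustration; it is not the model of Foster et al. (2008)): when $Z$ depends on $x$ only through a shared latent direction, regressing $Z$ on the leading sample canonical variable $U_1$ gives essentially the same linear prediction error as regressing on all of $x$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2 = 2000, 5, 4
# Toy model: one shared factor h drives x, y and Z.
h = rng.normal(size=(n, 1))
x = h @ rng.normal(size=(1, p1)) + rng.normal(size=(n, p1))
y = h @ rng.normal(size=(1, p2)) + rng.normal(size=(n, p2))
Z = 2.0 * h[:, 0] + 0.5 * rng.normal(size=n)
x, y, Z = x - x.mean(0), y - y.mean(0), Z - Z.mean()

# Leading sample canonical variable U1 of x (whitened SVD as in Section 1.1)
Sx, Sy, Sxy = x.T @ x / n, y.T @ y / n, x.T @ y / n
def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T
U, lam, Vt = np.linalg.svd(inv_sqrt(Sx) @ Sxy @ inv_sqrt(Sy))
U1 = x @ (inv_sqrt(Sx) @ U[:, 0])

# Best linear prediction of Z from all of x versus from U1 alone
beta_full = np.linalg.lstsq(x, Z, rcond=None)[0]
mse_full = np.mean((Z - x @ beta_full) ** 2)
mse_u1 = np.mean((Z - U1 * (U1 @ Z) / (U1 @ U1)) ** 2)
print(mse_full, mse_u1)   # nearly identical up to sampling error
```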
1.4 Hilbert spaces and principal angles
In this section, we define the discrepancy between $\hat{\mathcal{M}}_{(U,k)} = \mathrm{span}(\hat U_1, \ldots, \hat U_k)$ and $\mathcal{M}_{(U,k)} =
\mathrm{span}(U_1, \ldots, U_k)$ by introducing a Hilbert space. Note that for any given sample $\{(x_i, y_i)\}_{i=1}^{n}$,
both $\hat{\mathcal{M}}_{(U,k)}$ and $\mathcal{M}_{(U,k)}$ are composed of linear combinations of $X_1, \ldots, X_{p_1}$. Denote the set of all
possible linear combinations as
$$
\mathcal{H} = \mathrm{span}(X_1, \ldots, X_{p_1}). \tag{1.3}
$$
Moreover, for any $X_1, X_2 \in \mathcal{H}$, we define a bilinear function $\langle X_1, X_2 \rangle := \mathrm{Cov}(X_1, X_2) = E(X_1 X_2)$.
It is easy to show that $\langle \cdot, \cdot \rangle$ is an inner product and $(\mathcal{H}, \langle \cdot, \cdot \rangle)$ is a $p_1$-dimensional Hilbert space,
which is isomorphic to $\mathbb{R}^{p_1}$.
With the natural covariance-based inner product, we know both $\hat{\mathcal{M}}_{(U,k)}$ and $\mathcal{M}_{(U,k)}$ are
subspaces of $\mathcal{H}$, so it is natural to define their discrepancy based on their principal angles
$\frac{\pi}{2} \ge \theta_1 \ge \ldots \ge \theta_k \ge 0$. In the literature of statistics and linear algebra, two loss functions
are usually used:
$$
L_{\max}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k)) = \sin^2(\theta_1)
$$
and
$$
L_{\mathrm{ave}}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k)) = \frac{1}{k}\left( \sin^2(\theta_1) + \ldots + \sin^2(\theta_k) \right).
$$
In spite of a somewhat abstract definition, we have the following clean formula for these two losses:
Theorem 1.1. Suppose for any $p_1 \times k$ matrix $A$, $P_A$ represents the orthogonal projector onto the
column span of $A$. Assume the observed sample is fixed. Then
$$
L_{\mathrm{ave}}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k))
= \frac{1}{2k}\left\| P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}} \right\|_F^2
= \frac{1}{k}\left\| \left( I_{p_1} - P_{\Sigma_x^{1/2}\Phi_{1:k}} \right) P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} \right\|_F^2
$$
$$
= \frac{1}{k}\min_{Q \in \mathbb{R}^{k \times k}} E\left[ \| u^\top - \hat u^\top Q \|_2^2 \right]
:= L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \tag{1.4}
$$
and
$$
L_{\max}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k))
= \left\| P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}} \right\|^2
= \left\| \left( I_{p_1} - P_{\Sigma_x^{1/2}\Phi_{1:k}} \right) P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} \right\|^2
$$
$$
= \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left[ \left( \left( u^\top - \hat u^\top Q \right) g \right)^2 \right]
:= L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \tag{1.5}
$$
Here $\Phi_{1:k} = [\phi_1, \ldots, \phi_k]$ is the $p_1 \times k$ matrix consisting of the leading $k$ population canonical
loadings for $x$, and $\hat\Phi_{1:k}$ is its estimate based on a given sample. Moreover $u^\top := (U_1, \ldots, U_k)$ and
$\hat u^\top := (\hat U_1, \ldots, \hat U_k)$.
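As a sanity check of these identities, the following sketch (ours, not from the paper) evaluates $L_{\mathrm{ave}}$ and $L_{\max}$ in two ways: from the projection matrices onto the column spans of $\Sigma_x^{1/2}\Phi_{1:k}$ and $\Sigma_x^{1/2}\hat\Phi_{1:k}$, and from the principal angles read off the singular values of the product of orthonormal bases. All matrices here are arbitrary illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
p1, k = 6, 2
Sigma_x = np.cov(rng.normal(size=(50, p1)), rowvar=False) + np.eye(p1)  # some SPD matrix
Phi = rng.normal(size=(p1, k))
Phi_hat = Phi + 0.3 * rng.normal(size=(p1, k))

w, V = np.linalg.eigh(Sigma_x)
Sx_half = V @ np.diag(np.sqrt(w)) @ V.T
orth = lambda A: np.linalg.qr(A)[0]          # orthonormal basis of the column span

Q1, Q2 = orth(Sx_half @ Phi), orth(Sx_half @ Phi_hat)
P1, P2 = Q1 @ Q1.T, Q2 @ Q2.T

# Losses via projection matrices (first equalities of (1.4)-(1.5))
L_ave_proj = np.linalg.norm(P1 - P2, 'fro') ** 2 / (2 * k)
L_max_proj = np.linalg.norm(P1 - P2, 2) ** 2

# Losses via principal angles: cos(theta_i) are the singular values of Q1^T Q2
cos_t = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
sin2 = 1 - cos_t ** 2
print(L_ave_proj, np.mean(sin2))   # agree
print(L_max_proj, np.max(sin2))    # agree
```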
1.5 Uniform upper bounds and minimax rates
The most important contribution of this paper is to establish sharp upper bounds for the
estimation/prediction of CCA based on the proposed subspace losses $L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k})$ and
$L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k})$. It is noteworthy that both upper bounds hold uniformly for all invertible $\Sigma_x, \Sigma_y$
provided $n > C(p_1 + p_2)$ for some numerical constant $C$. Furthermore, in order to justify the
sharpness of these bounds, we also establish minimax lower bounds under a family of stringent and
localized parameter spaces. These results will be detailed in Section 2.
1.6 Notations and the Organization
Throughout the paper, we use lower-case and upper-case non-bold letters to represent fixed and
random variables, respectively. We also use lower-case and upper-case bold letters to represent
vectors (which could be either deterministic or random) and matrices, respectively. For any matrix
$U \in \mathbb{R}^{n \times p}$ and vector $u \in \mathbb{R}^{p}$, $\|U\|$ and $\|U\|_F$ denote the operator (spectral) norm and the Frobenius norm
respectively, $\|u\|$ denotes the vector $\ell_2$ norm, $U_{1:k}$ denotes the submatrix consisting of the first $k$
columns of $U$, and $P_U$ stands for the projection matrix onto the column space of $U$. Moreover, we
use $\sigma_{\max}(U)$ and $\sigma_{\min}(U)$ to represent the largest and smallest singular values of $U$ respectively, and
$\kappa(U) = \sigma_{\max}(U)/\sigma_{\min}(U)$ to denote the condition number of the matrix. We use $I_p$ for the identity
matrix of dimension $p$ and $I_{p,k}$ for the submatrix composed of the first $k$ columns of $I_p$. Further,
$O(m, n)$ (and simply $O(n)$ when $m = n$) stands for the set of $m \times n$ matrices with orthonormal
columns, and $S_p^{+}$ denotes the set of $p \times p$ strictly positive definite matrices. For a random vector
$x \in \mathbb{R}^{p}$, $\mathrm{span}(x^\top) = \{x^\top w,\ w \in \mathbb{R}^p\}$ denotes the subspace of all the linear combinations of $x$. Other
notations will be specified within the corresponding context.
In the following, we will introduce our main upper and lower bound results in Section 2. To
highlight our contributions in the new loss functions and theoretical results, we will compare our
results to existing work in the literature in Section 3. All proofs are deferred to Section 4.
2 Theory
In this section, we introduce our main results on non-asymptotic upper and lower bounds for
estimating CCA under the proposed loss functions. It is worth recalling that $\lambda_1, \ldots, \lambda_{p_1 \wedge p_2}$ are the
singular values of $\Sigma_x^{-1/2} \Sigma_{xy} \Sigma_y^{-1/2}$.
It is natural to estimate population CCA by its sample counterparts. Similar to equation (1.2),
the sample canonical loadings are defined recursively by
$$
(\hat\phi_i, \hat\psi_i) = \arg\max_{\phi, \psi} \ \phi^\top \hat\Sigma_{xy} \psi
\quad \text{subject to} \quad \phi^\top \hat\Sigma_x \phi = 1, \ \psi^\top \hat\Sigma_y \psi = 1;
\quad \phi^\top \hat\Sigma_x \phi_j = 0, \ \psi^\top \hat\Sigma_y \psi_j = 0, \ \forall\, 1 \le j \le i-1, \tag{2.1}
$$
where $\hat\Sigma_x, \hat\Sigma_y, \hat\Sigma_{xy}$ are the sample covariance matrices. The sample canonical variables are
defined as the following linear combinations by the sample canonical loadings:
$$
\hat U_i = \hat\phi_i^\top x, \quad \hat V_i = \hat\psi_i^\top y, \quad i = 1, \ldots, p_1 \wedge p_2.
$$
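For completeness, here is a minimal sketch (ours; the data-generating model is an arbitrary illustration) of the sample counterpart: form the empirical covariance blocks, apply the whitened SVD, and check that the resulting loadings satisfy the unit-variance and orthogonality constraints of (2.1) with respect to the sample covariances.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 500, 4, 3
x = rng.normal(size=(n, p1))
y = x[:, :p2] * 0.6 + rng.normal(size=(n, p2))
x, y = x - x.mean(0), y - y.mean(0)

Sx, Sy, Sxy = x.T @ x / n, y.T @ y / n, x.T @ y / n
def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T
Sx_ih, Sy_ih = inv_sqrt(Sx), inv_sqrt(Sy)
U, lam_hat, Vt = np.linalg.svd(Sx_ih @ Sxy @ Sy_ih)
r = min(p1, p2)
Phi_hat, Psi_hat = Sx_ih @ U[:, :r], Sy_ih @ Vt.T[:, :r]

# Constraints of (2.1): unit sample variances and zero sample correlations
print(np.allclose(Phi_hat.T @ Sx @ Phi_hat, np.eye(r), atol=1e-8))
print(np.allclose(Psi_hat.T @ Sy @ Psi_hat, np.eye(r), atol=1e-8))
print(lam_hat)   # sample canonical correlations, in decreasing order
```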
We prove the following upper bound for the estimate based on sample CCA.
Theorem 2.1. (Upper bound) Suppose $\begin{bmatrix} x \\ y \end{bmatrix} \sim N(0, \Sigma)$, where $\Sigma$ is defined as in (1.1). Assume
$\Sigma_x$ and $\Sigma_y$ are invertible. Moreover, assume $\lambda_k > \lambda_{k+1}$ for some predetermined $k$. Then there
exist universal positive constants $\gamma, C, C_0$ such that if $n \ge C(p_1 + p_2)$, the top-$k$ sample canonical
coefficients matrix $\hat\Phi_{1:k}$ satisfies
$$
E\left[ L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \le C_0 \left[ \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1}{n} + \frac{(p_1 + p_2)^2}{n^2 (\lambda_k - \lambda_{k+1})^4} + e^{-\gamma(p_1 \wedge p_2)} \right],
$$
$$
E\left[ L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \le C_0 \left[ \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1 - k}{n} + \frac{(p_1 + p_2)^2}{n^2 (\lambda_k - \lambda_{k+1})^4} + e^{-\gamma(p_1 \wedge p_2)} \right].
$$
The upper bounds for $\hat\Psi_{1:k}$ can be obtained by switching $p_1$ and $p_2$.
Since we pursue a nonasymptotic theoretical framework for CCA estimates, and the loss
functions we propose are nonstandard in the literature, the standard minimax lower bound results
in parametric maximum likelihood estimation do not apply straightforwardly. Instead, we turn to
the nonparametric minimax lower bound frameworks, particularly those in PCA and CCA; see,
e.g., Vu et al. (2013); Cai et al. (2013); Gao et al. (2015). Compared to these existing works, the
technical novelties of our results and proofs are summarized in Sections 3.3 and 6.
We define the parameter space $\mathcal{F}(p_1, p_2, k, \lambda_k, \lambda_{k+1}, \kappa_1, \kappa_2)$ as the collection of joint covariance
matrices $\Sigma$ satisfying
1. $\kappa(\Sigma_x) = \kappa_1$ and $\kappa(\Sigma_y) = \kappa_2$;
2. $0 \le \lambda_{p_1 \wedge p_2} \le \cdots \le \lambda_{k+1} < \lambda_k \le \cdots \le \lambda_1 \le 1$.
We deliberately set $\kappa(\Sigma_x) = \kappa_1$, $\kappa(\Sigma_y) = \kappa_2$ to demonstrate that the lower bound is independent
of the condition number. For the rest of the paper, we will use the shorthand $\mathcal{F}$ to represent this
parameter space for simplicity.
Theorem 2.2. (Lower bound) There exists a universal constant $c$ independent of $n, p_1, p_2$ and $\Sigma$
such that
$$
\inf_{\hat\Phi_{1:k}} \sup_{\Sigma \in \mathcal{F}} E\left[ L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \right]
\ge c\left\{ \left( \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1 - k}{n} \right) \wedge 1 \wedge \frac{p_1 - k}{k} \right\},
$$
$$
\inf_{\hat\Phi_{1:k}} \sup_{\Sigma \in \mathcal{F}} E\left[ L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \right]
\ge c\left\{ \left( \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1 - k}{n} \right) \wedge 1 \wedge \frac{p_1 - k}{k} \right\}.
$$
The lower bounds for $\hat\Psi_{1:k}$ can be obtained by replacing $p_1$ with $p_2$.
Corollary 2.3. When $p_1, p_2 \ge (2k) \vee C(\log n)$ and
$$
n \ge C\, \frac{(p_1 + p_2)(1 + p_2/p_1)}{(\lambda_k - \lambda_{k+1})^2 (1-\lambda_k^2)(1-\lambda_{k+1}^2)} \tag{2.2}
$$
for some universal positive constant $C$, the minimax rates can be characterized by
$$
\inf_{\hat\Phi_{1:k}} \sup_{\Sigma \in \mathcal{F}} E\left[ L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \asymp \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1}{n},
$$
$$
\inf_{\hat\Phi_{1:k}} \sup_{\Sigma \in \mathcal{F}} E\left[ L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \asymp \frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\, \frac{p_1}{n}.
$$
3 Related Work and Our Contributions
Recently, the non-asymptotic rate of convergence of CCA has been studied by Gao et al. (2015,
2017) under a sparse setup and by Cai and Zhang (2017) under the usual non-sparse setup.
Cai and Zhang (2017) appeared on arXiv almost at the same time as the first version of our paper
was posted. In this section, we state our contributions by detailed comparison with these works.
3.1 Novel loss functions
We proposed new loss functions based on the principal angles between the subspace spanned by
the population canonical variates and the subspace spanned by the estimated canonical variates.
In contrast, Gao et al. (2017) proposed and studied the loss $\mathcal{L}_{\mathrm{ave}}$; Cai and Zhang (2017) proposed
$\mathcal{L}_{\max}$ and studied both $\mathcal{L}_{\mathrm{ave}}$ and $\mathcal{L}_{\max}$, where
$$
\mathcal{L}_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) = \min_{Q \in O(k,k)} E\left[ \left\| x^\top \Phi_{1:k} - x^\top \hat\Phi_{1:k} Q \right\|_2^2 \,\middle|\, \hat\Phi_{1:k} \right],
$$
$$
\mathcal{L}_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) = \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in O(k,k)} E\left[ \left( \left( x^\top \Phi_{1:k} - x^\top \hat\Phi_{1:k} Q \right) g \right)^2 \,\middle|\, \hat\Phi_{1:k} \right].
$$
$\mathcal{L}_{\mathrm{ave}}$ and $\mathcal{L}_{\max}$ resemble our loss functions $L_{\mathrm{ave}}$ and $L_{\max}$ respectively. By Theorem 1.1, we also
have
$$
L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) = \frac{1}{k}\min_{Q \in \mathbb{R}^{k \times k}} E\left[ \left\| x^\top \Phi_{1:k} - x^\top \hat\Phi_{1:k} Q \right\|_2^2 \,\middle|\, \hat\Phi_{1:k} \right],
$$
$$
L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) = \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left[ \left( \left( x^\top \Phi_{1:k} - x^\top \hat\Phi_{1:k} Q \right) g \right)^2 \,\middle|\, \hat\Phi_{1:k} \right].
$$
By these two expressions, we can easily obtain
$$
L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \le 2\,\mathcal{L}_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}), \qquad
L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \le \mathcal{L}_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}). \tag{3.1}
$$
However, $L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k})$ and $\mathcal{L}_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k})$ are not equivalent up to a constant. Neither are
$L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k})$ and $\mathcal{L}_{\max}(\Phi_{1:k}, \hat\Phi_{1:k})$. In fact, we can prove that as long as $n > \max(p_1, p_2)$, if
$\lambda_k = 1 > \lambda_{k+1}$, then
$$
L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) = L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) = 0,
$$
while almost surely $\mathcal{L}_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \ne 0$ and $\mathcal{L}_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}) \ne 0$.
To illustrate this comparison, we can consider the following very simple simulation: suppose
$p_1 = p_2 = 2$, $n = 3$ and
$$
\Sigma_x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad
\Sigma_y = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad
\Sigma_{xy} = \begin{bmatrix} 1 & 0 \\ 0 & 0.5 \end{bmatrix}.
$$
In this setup, we know the population canonical correlation coefficients are $\lambda_1 = 1$ and $\lambda_2 = 0.5$, and the leading
canonical loadings are $\phi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\psi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. In our simulation, we generated the following data
matrices
$$
X = \begin{bmatrix} 0.0736 & 1.5496 \\ 1.5390 & -0.0415 \\ 0.9331 & -0.4776 \end{bmatrix}
\quad \text{and} \quad
Y = \begin{bmatrix} 0.0736 & 2.8982 \\ 1.5390 & -1.2214 \\ 0.9331 & 2.5931 \end{bmatrix}.
$$
Furthermore, we can obtain the sample canonical correlations $\hat\lambda_1 = 1$ and $\hat\lambda_2 = 0.5210$, as well as
the leading sample canonical loadings $\hat\phi_1 = \begin{bmatrix} -0.9616 \\ 0 \end{bmatrix}$ and $\hat\psi_1 = \begin{bmatrix} -0.9616 \\ 0 \end{bmatrix}$. Then $L_{\mathrm{ave}}(\phi_1, \hat\phi_1) =
L_{\max}(\phi_1, \hat\phi_1) = 0$ while $\mathcal{L}_{\mathrm{ave}}(\phi_1, \hat\phi_1) \ne 0$, $\mathcal{L}_{\max}(\phi_1, \hat\phi_1) \ne 0$.
This numerical example clearly shows that the sample CCA can exactly identify that among all
linear combinations of $X_1$ and $X_2$ and all linear combinations of $Y_1$ and $Y_2$, $aX_1$ and $bY_1$ are the most
correlated. Our loss functions $L_{\mathrm{ave}}$ and $L_{\max}$ do characterize this exact identification, whereas $\mathcal{L}_{\mathrm{ave}}$
and $\mathcal{L}_{\max}$ do not.
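The comparison can be reproduced with a few lines of code (our own sketch; it uses the data matrices printed above, and since $\Sigma_x = I_2$ the $\Sigma_x^{1/2}$ weighting drops out of the subspace loss):

```python
import numpy as np

X = np.array([[0.0736, 1.5496], [1.5390, -0.0415], [0.9331, -0.4776]])
Y = np.array([[0.0736, 2.8982], [1.5390, -1.2214], [0.9331, 2.5931]])
n = X.shape[0]
Sx, Sy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

U, lam_hat, Vt = np.linalg.svd(inv_sqrt(Sx) @ Sxy @ inv_sqrt(Sy))
phi1_hat = inv_sqrt(Sx) @ U[:, 0]        # approx [-0.9616, 0] up to sign
phi1 = np.array([1.0, 0.0])              # population loading; Sigma_x = I_2

# Subspace loss of (1.4)-(1.5): sin^2 of the principal angle -- essentially 0
cos_t = abs(phi1 @ phi1_hat) / (np.linalg.norm(phi1) * np.linalg.norm(phi1_hat))
print("subspace loss:", 1 - cos_t ** 2)

# Procrustes-type loss over orthogonal (here: sign) alignment -- bounded away from 0
print("loading loss:", min(np.sum((phi1 - q * phi1_hat) ** 2) for q in (1.0, -1.0)))
```

The first printed value is numerically zero, while the second is roughly $1.5 \times 10^{-3}$, matching the discussion above.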
Moreover, the following joint loss was studied in Gao et al. (2015):
$$
L_{\mathrm{joint}}\left( (\Phi_{1:k}, \Psi_{1:k}), (\hat\Phi_{1:k}, \hat\Psi_{1:k}) \right)
= E\left[ \left\| \hat\Phi_{1:k} \hat\Psi_{1:k}^\top - \Phi_{1:k} \Psi_{1:k}^\top \right\|_F^2 \right].
$$
Similarly, $L_{\mathrm{joint}}\left( (\Phi_{1:k}, \Psi_{1:k}), (\hat\Phi_{1:k}, \hat\Psi_{1:k}) \right) \ne 0$ almost surely under the special case $\lambda_k = 1 >
\lambda_{k+1}$.
3.2 Sharper upper bounds
Regardless of loss functions, we explain in the following why Theorem 2.1 implies sharper upper
bounds than the existing rates in Gao et al. (2015), Gao et al. (2017) and Cai and Zhang (2017)
under the nonsparse case. Our discussion focuses on $L_{\mathrm{ave}}$; the discussion for $L_{\max}$ is similar.
Notice that if we only apply Wedin's sin-theta law, i.e., replacing the fine bound Lemma 5.4
with the rough bound Lemma 5.2 (also see Gao et al. (2015) for similar ideas), we can obtain the
following rough bound:
$$
E\left[ L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \le C_0 \left[ \frac{p_1 + p_2}{n(\lambda_k - \lambda_{k+1})^2} \right]. \tag{3.2}
$$
In order to decouple the estimation error bound of $\hat\Phi_{1:k}$ from $p_2$, both Gao et al. (2017) and
Cai and Zhang (2017) assume the residual canonical correlations are zero, i.e.,
$$
\lambda_{k+1} = \ldots = \lambda_{p_1 \wedge p_2} = 0.
$$
This assumption is essential for the proofs in both Gao et al. (2017) and Cai and Zhang (2017) under
certain sample size conditions. We got rid of this assumption by developing new proof techniques,
and these techniques actually work for $\mathcal{L}_{\mathrm{ave}}$, $\mathcal{L}_{\max}$ as well. A detailed comparison between our
result and that in Cai and Zhang (2017) is summarized in Table 3.2 (the results of Gao et al.
(2017) in the non-sparse regime can be implied by Cai and Zhang (2017) under milder sample size
conditions).
\begin{tabular}{lll}
 & Cai and Zhang (2017) & Our work \\
Loss function & $\mathcal{L}_{\mathrm{ave}}$ ($\ge L_{\mathrm{ave}}$) & $L_{\mathrm{ave}}$ \\
Sample size & $n > C\left( \dfrac{p_1 + \sqrt{p_1 p_2}}{\lambda_k^2} + \dfrac{p_2}{\lambda_k^{4/3}} \right)$ & $n > C(p_1 + p_2)$ \\
$\lambda_{k+1} = \cdots = \lambda_{p_1} = 0$ & Yes & No \\
Upper bound rates & $\dfrac{p_1}{n\lambda_k^2} + \dfrac{p_1 p_2}{n^2 \lambda_k^4}$ & $\dfrac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\dfrac{p_1 - k}{n} + \dfrac{(p_1 + p_2)^2}{n^2(\lambda_k - \lambda_{k+1})^4} + e^{-\gamma(p_1 \wedge p_2)}$
\end{tabular}
Perhaps the most striking contribution of our upper bound is that we are the first to derive the factors
$(1-\lambda_k^2)$ and $(1-\lambda_{k+1}^2)$ in the literature of nonasymptotic CCA estimation. We now explain why
these factors are essential when the leading canonical correlation coefficients are close to 1.

Example 1: $\lambda_k = 1$ and $\lambda_{k+1} = 0$

Consider the example where $k = 1$, $p_1 = p_2 := p \gg \log n$, $\lambda_1 = 1$ and $\lambda_2 = 0$. Then our bound rate
$\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\frac{p_1 - k}{n} + \frac{(p_1 + p_2)^2}{n^2(\lambda_k - \lambda_{k+1})^4} + e^{-\gamma(p_1 \wedge p_2)}$ actually implies that
$$
E\, L_{\mathrm{ave}}(\phi_1, \hat\phi_1) \le C\, \frac{p^2}{n^2},
$$
while the rates in Gao et al. (2017) and Cai and Zhang (2017) imply that
$$
E\, L_{\mathrm{ave}}(\phi_1, \hat\phi_1) \le 2\, E\, \mathcal{L}_{\mathrm{ave}}(\phi_1, \hat\phi_1) \le C\, \frac{p}{n}.
$$
This shows that even under the condition $\lambda_{k+1} = 0$, under our loss $L_{\mathrm{ave}}(\phi_1, \hat\phi_1)$, our result could
imply sharper convergence rates than those in Gao et al. (2017) and Cai and Zhang (2017) if $\lambda_k = 1$.
Notice that, as aforementioned, when $\lambda_k = 1$ we can actually prove $E\, L_{\mathrm{ave}}(\phi_1, \hat\phi_1) = 0$ through
a separate argument. How to improve Theorem 2.1 to imply this result is an open problem for
future research.
Example 2: Both $\lambda_k$ and $\lambda_{k+1}$ are close to 1

Consider the example where $k = 1$, $p_1 = p_2 := p \gg \log n$, $\lambda_1 = 1 - \sqrt[4]{\frac{p}{n}}$ and $\lambda_2 = 1 - 2\sqrt[4]{\frac{p}{n}}$. Then
our bound rate
$\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{(\lambda_k - \lambda_{k+1})^2}\frac{p_1 - k}{n} + \frac{(p_1 + p_2)^2}{n^2(\lambda_k - \lambda_{k+1})^4} + e^{-\gamma(p_1 \wedge p_2)}$ actually implies that
$$
E\, L_{\mathrm{ave}}(\phi_1, \hat\phi_1) \le C\, \frac{p}{n},
$$
while the rough rate (3.2) obtained from Wedin's sin-theta law implies
$$
E\, L_{\mathrm{ave}}(\phi_1, \hat\phi_1) \le C\, \sqrt{\frac{p}{n}}.
$$
This shows that our upper bound rates could be much sharper than the rough rate (3.2) when
both $\lambda_k$ and $\lambda_{k+1}$ are close to 1.
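A quick numerical illustration of this gap (ours; the values of $p$ and $n$ are arbitrary, chosen only so that $(p/n)^{1/4}$ is small):

```python
import numpy as np

p, n = 100, 10**7
q = (p / n) ** 0.25
lam_k, lam_k1 = 1 - q, 1 - 2 * q
gap = lam_k - lam_k1

sharp = ((1 - lam_k**2) * (1 - lam_k1**2) / gap**2 * p / n
         + (2 * p) ** 2 / (n**2 * gap**4))       # Theorem 2.1 rate, up to constants
rough = 2 * p / (n * gap**2)                      # rough rate (3.2), up to constants
print(sharp, rough)   # roughly 1e-4 versus 6e-3: the sharp rate is far smaller
```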
New proof techniques and connection to asymptotic theory

To the best of our knowledge, none of the analyses in Gao et al. (2015), Gao et al. (2017) and
Cai and Zhang (2017) can be used to obtain the multiplicative factor $(1-\lambda_k^2)(1-\lambda_{k+1}^2)/(\lambda_k - \lambda_{k+1})^2$
in the first order term of the upper bound, even under the strong condition that $\lambda_{k+1} = \cdots = \lambda_{p_1 \wedge p_2} = 0$.
Following a different path, we carry out a careful non-asymptotic entry-wise perturbation analysis of the
estimating equations of CCA to avoid the loss of precision caused by applying matrix inequalities
in the early stage of the proof. The main challenge is to analyze the properties of matrix Hadamard
products, especially to derive tight operator norm bounds for certain Hadamard products. We are
particularly fortunate to find a divide-and-conquer approach ($\lambda_k \ge \frac{1}{2}$ and $\lambda_k < \frac{1}{2}$ in the proof of
Lemma 5.4) to decompose the target matrices into simple-structured matrices to which we can apply
the tools developed in Lemma 5.6.
The asymptotic distribution of the canonical loadings $\{(\hat\phi_i, \hat\psi_i)\}_{i=1}^{p_1 \wedge p_2}$ has been studied by
Anderson (1999) under the assumption that all the canonical correlations are distinct and $\lambda_1 \ne 1$.
Since we focus on subspaces, we only require $\lambda_k > \lambda_{k+1}$ for the given $k$. Both Anderson (1999)
and our work are based on analyzing the estimating equations ((5.5)) of CCA. Our analysis is more
involved because completely novel techniques are required to obtain the factor $(1-\lambda_k^2)(1-\lambda_{k+1}^2)$
in the nonasymptotic framework.
3.3 Sharper lower bounds under parameter spaces with fixed $\lambda_k$ and $\lambda_{k+1}$
The minimax lower bounds for the estimation rates of CCA were first established by Gao et al.
(2015, 2017) under the losses $L_{\mathrm{joint}}$ and $\mathcal{L}_{\mathrm{ave}}$. However, the parameter space discussed in Gao et al.
(2017) requires $\lambda_{k+1} = 0$. Moreover, the parameter space in Gao et al. (2015) is parameterized by
$\lambda$ satisfying $\lambda_k \ge \lambda$, but $\lambda_{k+1}$ is not specified. In fact, they also constructed the hypothesis class
with $\lambda_{k+1} = 0$, and the resulting minimax lower bound is proportional to $\frac{1}{\lambda^2}$.
However, this minimax lower bound is not sharp when $\lambda_k$ and $\lambda_{k+1}$ are close. Suppose $p_1 =
p_2 := p$, $k = 1$, $\lambda_1 = \frac{1}{2}$ and $\lambda_2 = \frac{1}{2} - \sqrt{\frac{1}{np}}$. Our minimax lower bound in Theorem 2.2 leads to a
constant-order lower bound:
$$
\inf_{\hat\Phi_{1:k}} \sup_{\Sigma \in \mathcal{F}} E\left[ L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) \right] \ge c.
$$
In contrast, to capture the fundamental limit of CCA estimates in this scenario under the framework
of Gao et al. (2015), one needs to choose $\lambda$ to capture both $\lambda_k$ and $\lambda_{k+1}$, i.e., $\lambda_{k+1} \le \lambda \le \lambda_k$, and
hence $\lambda \approx 1/2$. Then the resulting minimax lower bound rate will be $\frac{p}{n\lambda^2} = O(\frac{p}{n})$, which is much
looser than the constant order.
Technically speaking, we follow the analytical framework of Gao et al. (2015) and Gao et al.
(2017), but the hypothesis class construction requires any given $\lambda_{k+1} > 0$ instead of $\lambda_{k+1} = 0$,
and this brings in new technical challenges. More detailed technical discussions are deferred to
Section 6.
4 Proof of Theorem 1.1
Suppose the observed sample of $(x, y)$ is fixed and consider the correlation between
the two subspaces of $\mathcal{H}$ (defined in (1.3)): $\mathrm{span}(U_1, \ldots, U_k)$ and $\mathrm{span}(\hat U_1, \ldots, \hat U_k)$. Let
$(W_1, \hat W_1), (W_2, \hat W_2), \ldots, (W_k, \hat W_k)$ be the first, second, ..., and $k$th pair of canonical
variates between $U_1, \ldots, U_k$ and $\hat U_1, \ldots, \hat U_k$. Then $\mathrm{span}(W_1, \ldots, W_k) = \mathrm{span}(U_1, \ldots, U_k)$,
$\mathrm{span}(\hat W_1, \ldots, \hat W_k) = \mathrm{span}(\hat U_1, \ldots, \hat U_k)$, and $\langle W_i, W_j \rangle = \langle W_i, \hat W_j \rangle = \langle \hat W_i, \hat W_j \rangle = 0$ for any $i \ne j$ and
$\mathrm{Var}(W_i) = \mathrm{Var}(\hat W_i) = 1$ for $i = 1, \ldots, k$.
By the definition of principal angles, we know $\angle(W_i, \hat W_i)$ is actually the $i$th principal angle
between $\mathrm{span}(U_1, \ldots, U_k)$ and $\mathrm{span}(\hat U_1, \ldots, \hat U_k)$, i.e., $\theta_i := \angle(W_i, \hat W_i)$. This implies that
$$
L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}) := \frac{1}{k}\sum_{i=1}^{k} \sin^2 \theta_i
= \frac{1}{k}\sum_{i=1}^{k} \left( 1 - \left| \left\langle W_i, \hat W_i \right\rangle \right|^2 \right).
$$
Since $U_1, \ldots, U_k, \hat U_1, \ldots, \hat U_k$ are linear combinations of $X_1, \ldots, X_{p_1}$, we can write
$$
w^\top := (W_1, \ldots, W_k) = x^\top \Sigma_x^{-1/2} B, \quad \text{and} \quad
\hat w^\top := (\hat W_1, \ldots, \hat W_k) = x^\top \Sigma_x^{-1/2} \hat B,
$$
where $B := [b_1, \ldots, b_k]$, $\hat B := [\hat b_1, \ldots, \hat b_k] \in \mathbb{R}^{p_1 \times k}$.
By the definition of $w$, we have
$$
I_k = \mathrm{Cov}(w) = B^\top \Sigma_x^{-1/2} \mathrm{Cov}(x) \Sigma_x^{-1/2} B = B^\top B
$$
and similarly $I_k = \hat B^\top \hat B$. Then $B, \hat B$ are $p_1 \times k$ basis matrices. Moreover, we have $b_i^\top \hat b_j =
\langle W_i, \hat W_j \rangle = 0$ for all $i \ne j$, and
$$
\mathrm{Diag}(\cos(\theta_1), \ldots, \cos(\theta_k)) = \mathrm{Cov}(w, \hat w) = B^\top \Sigma_x^{-1/2} \mathrm{Cov}(x) \Sigma_x^{-1/2} \hat B = B^\top \hat B.
$$
Notice that $\mathrm{span}(U_1, \ldots, U_k) = \mathrm{span}(W_1, \ldots, W_k)$, $(U_1, \ldots, U_k) = x^\top \Phi_{1:k}$, and $(W_1, \ldots, W_k) =
x^\top \Sigma_x^{-1/2} B$. Then
$$
\Phi_{1:k} = \Sigma_x^{-1/2} B C \ \Rightarrow \ \Sigma_x^{1/2} \Phi_{1:k} = B C
$$
for some nonsingular $k \times k$ matrix $C$. This implies that $B$ and $\Sigma_x^{1/2}\Phi_{1:k}$ have the same column
space. Since $B \in \mathbb{R}^{p_1 \times k}$ is a basis matrix, we have
$$
B B^\top = P_{\Sigma_x^{1/2}\Phi_{1:k}}.
$$
Similarly, we have
$$
\hat B \hat B^\top = P_{\Sigma_x^{1/2}\hat\Phi_{1:k}}.
$$
Straightforward calculation gives
$$
\left\| B B^\top - \hat B \hat B^\top \right\|_F^2
= \mathrm{trace}\left( B B^\top B B^\top - B B^\top \hat B \hat B^\top - \hat B \hat B^\top B B^\top + \hat B \hat B^\top \hat B \hat B^\top \right)
$$
$$
= 2k - 2\,\mathrm{trace}\left( B^\top \hat B \hat B^\top B \right)
= 2k - 2\,\mathrm{trace}\left( \mathrm{Diag}(\cos^2(\theta_1), \ldots, \cos^2(\theta_k)) \right)
= 2\left( \sin^2(\theta_1) + \ldots + \sin^2(\theta_k) \right)
= 2k\, L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k})
$$
and
$$
\left\| \left( I_{p_1} - B B^\top \right) \hat B \hat B^\top \right\|_F^2
= \mathrm{trace}\left( \left( I_{p_1} - B B^\top \right) \hat B \hat B^\top \left( I_{p_1} - B B^\top \right) \right)
= k - \mathrm{trace}\left( B^\top \hat B \hat B^\top B \right)
= k\, L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}).
$$
The above equalities yield the first two equalities in (1.4).
Notice that $U_1, \ldots, U_k$ and $W_1, \ldots, W_k$ are both orthonormal bases of $\mathrm{span}(U_1, \ldots, U_k)$.
(Similarly, $\hat U_1, \ldots, \hat U_k$ and $\hat W_1, \ldots, \hat W_k$ are both orthonormal bases of $\mathrm{span}(\hat U_1, \ldots, \hat U_k)$.) Then we
have $u^\top = w^\top R$ where $R$ is a $k \times k$ orthogonal matrix. Then
$$
\min_{Q \in \mathbb{R}^{k \times k}} E\| u^\top - \hat u^\top Q \|_2^2
= \min_{Q \in \mathbb{R}^{k \times k}} E\| u^\top - \hat w^\top Q \|_2^2
= \min_{Q \in \mathbb{R}^{k \times k}} E\| w^\top R - \hat w^\top Q \|_2^2
$$
$$
= \min_{Q \in \mathbb{R}^{k \times k}} E\| w^\top - \hat w^\top Q R^\top \|_2^2
= \min_{Q \in \mathbb{R}^{k \times k}} E\| w^\top - \hat w^\top Q \|_2^2
= \min_{q_i \in \mathbb{R}^k,\, i=1,\ldots,k} E \sum_{i=1}^{k} \left( W_i - \hat w^\top q_i \right)^2
= \sum_{i=1}^{k} \min_{q_i \in \mathbb{R}^k} E\left( W_i - \hat w^\top q_i \right)^2.
$$
Notice that $\min_{q_i \in \mathbb{R}^k} E(W_i - \hat w^\top q_i)^2$ is attained by the best linear predictor, so
$$
\min_{q_i \in \mathbb{R}^k} E(W_i - \hat w^\top q_i)^2
= \mathrm{Var}(W_i) - \mathrm{Cov}(\hat w, W_i)^\top \mathrm{Cov}^{-1}(\hat w)\, \mathrm{Cov}(\hat w, W_i)
= 1 - \cos^2\theta_i = \sin^2\theta_i.
$$
Therefore,
$$
\min_{Q \in \mathbb{R}^{k \times k}} E\| u^\top - \hat u^\top Q \|_2^2 = \sum_{i=1}^{k} \sin^2\theta_i = k\, L_{\mathrm{ave}}(\Phi_{1:k}, \hat\Phi_{1:k}),
$$
which implies the third equality in (1.4). Similarly,
$$
\max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left( \left( u^\top - \hat u^\top Q \right) g \right)^2
= \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left( \left( u^\top - \hat w^\top Q \right) g \right)^2
$$
$$
= \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left( \left( w^\top R - \hat w^\top Q \right) R^\top g \right)^2
= \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{Q \in \mathbb{R}^{k \times k}} E\left( \left( w^\top - \hat w^\top Q \right) g \right)^2
$$
$$
= \max_{g \in \mathbb{R}^k, \|g\| = 1}\ \min_{q_i \in \mathbb{R}^k,\, i=1,\ldots,k} E \sum_{i=1}^{k} g_i^2 \left( W_i - \hat w^\top q_i \right)^2
= \max_{g \in \mathbb{R}^k, \|g\| = 1} \sum_{i=1}^{k} g_i^2 \sin^2\theta_i
= \sin^2\theta_1.
$$
Finally, we prove (1.5). By Wedin (1983), we have
$$
\left\| B B^\top - \hat B \hat B^\top \right\|^2
= \left\| \left( I_{p_1} - B B^\top \right) \hat B \hat B^\top \right\|^2
= \left\| \left( I_{p_1} - B B^\top \right) \hat B \right\|^2
$$
$$
= \lambda_{\max}\left( \hat B^\top \left( I_{p_1} - B B^\top \right)^\top \left( I_{p_1} - B B^\top \right) \hat B \right)
= \lambda_{\max}\left( I_k - \mathrm{Diag}(\cos^2(\theta_1), \ldots, \cos^2(\theta_k)) \right)
$$
$$
= 1 - \cos^2(\theta_1) = \sin^2(\theta_1) = L_{\max}(\Phi_{1:k}, \hat\Phi_{1:k}),
$$
which implies the equalities in (1.5).
5 Proof of Upper Bound
Throughout this proof, we denote $\Delta := \lambda_k - \lambda_{k+1}$.
5.1 Linear Invariance
Without loss of generality, we assume $p_2 \ge p_1 := p$. By the definition of canonical variables, we know
that $U_1, \ldots, U_p$ and $V_1, \ldots, V_p$ are only determined by $\mathrm{span}(X_1, \ldots, X_{p_1})$ and $\mathrm{span}(Y_1, \ldots, Y_{p_2})$. In
other words, for any invertible $C_1 \in \mathbb{R}^{p_1 \times p_1}$ and $C_2 \in \mathbb{R}^{p_2 \times p_2}$, the canonical pairs of $(X_1, \ldots, X_{p_1})C_1$
and $(Y_1, \ldots, Y_{p_2})C_2$ are still $(U_1, V_1), \ldots, (U_{p_1}, V_{p_1})$. Therefore, we can consider the following
orthonormal bases
$$
U_1, \ldots, U_{p_1} \in \mathrm{span}(X_1, \ldots, X_{p_1})
$$
and
$$
V_1, \ldots, V_{p_1}, V_{p_1+1}, \ldots, V_{p_2} \in \mathrm{span}(Y_1, \ldots, Y_{p_2}).
$$
Here $(V_1, \ldots, V_{p_1}, V_{p_1+1}, \ldots, V_{p_2})$ is an orthonormal extension of $V_1, \ldots, V_{p_1}$. Therefore, we know
that $(U_1, V_1), \ldots, (U_{p_1}, V_{p_1})$ are also the canonical pairs between $U_1, \ldots, U_{p_1}$ and $V_1, \ldots, V_{p_2}$.
Similarly, for a fixed sample of the variables $x$ and $y$, the sample canonical pairs
$(\hat U_1, \hat V_1), \ldots, (\hat U_{p_1}, \hat V_{p_1})$ are also the sample canonical pairs of the corresponding sample of
$(X_1, \ldots, X_{p_1})C_1$ and $(Y_1, \ldots, Y_{p_2})C_2$. This can be easily seen from the concept of sample
canonical variables. For example, $\hat U_1$ and $\hat V_1$ are respectively the linear combinations of
$X_1, \ldots, X_{p_1}$ and $Y_1, \ldots, Y_{p_2}$ such that their corresponding sample variances are both 1 and their sample
correlation is maximized. If we replace $(X_1, \ldots, X_{p_1})$ and $(Y_1, \ldots, Y_{p_2})$ with $(X_1, \ldots, X_{p_1})C_1$
and $(Y_1, \ldots, Y_{p_2})C_2$ respectively and seek the first sample canonical pair, the constraints
(linear combinations of the two sets of variables and unit sample variances) and the objective
(sample correlation is maximized) are the same as before, so $(\hat U_1, \hat V_1)$ is still the answer. Similarly,
$(\hat U_1, \hat V_1), \ldots, (\hat U_{p_1}, \hat V_{p_1})$ are the sample canonical pairs of $(X_1, \ldots, X_{p_1})C_1$ and $(Y_1, \ldots, Y_{p_2})C_2$. In
particular, they are the sample canonical pairs of $U_1, \ldots, U_{p_1}$ and $V_1, \ldots, V_{p_2}$.
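The invariance argument is easy to check numerically. The following sketch (ours, with arbitrary synthetic data) verifies that the sample canonical correlations are unchanged when the two views are replaced by invertible linear transformations of themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 200, 4, 6
X = rng.normal(size=(n, p1))
Y = X @ rng.normal(size=(p1, p2)) * 0.3 + rng.normal(size=(n, p2))

def sample_ccor(X, Y):
    """Sample canonical correlations via the whitened SVD."""
    n = X.shape[0]
    Sx, Sy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    def ih(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.svd(ih(Sx) @ Sxy @ ih(Sy), compute_uv=False)

C1 = rng.normal(size=(p1, p1))   # generic (hence invertible) transforms of the two views
C2 = rng.normal(size=(p2, p2))
print(sample_ccor(X, Y))
print(sample_ccor(X @ C1, Y @ C2))   # identical up to numerical error
```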
The above argument gives the following convenient fact: in order to bound
$L_{\mathrm{ave}/\max}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k))$,
we can replace $X_1, \ldots, X_{p_1}, Y_1, \ldots, Y_{p_2}$ with $U_1, \ldots, U_{p_1}, V_1, \ldots, V_{p_2}$. In other words, we can assume
$x$ and $y$ satisfy the standard form
$$
\Sigma_x = I_{p_1}, \quad \Sigma_y = I_{p_2}, \quad \Sigma_{xy} = [\Lambda, 0_{p_1 \times (p_2 - p_1)}] := \tilde\Lambda,
$$
where $\Lambda = \mathrm{Diag}(\lambda_1, \lambda_2, \ldots, \lambda_{p_1}) \in \mathbb{R}^{p_1 \times p_1}$. Moreover
$$
\Phi_{1:p_1} = I_{p_1}, \quad \Psi_{1:p_1} = \begin{bmatrix} I_{p_1} \\ 0_{(p_2 - p_1) \times p_1} \end{bmatrix},
$$
which implies that
$$
\Phi_{1:k} = \begin{bmatrix} I_k \\ 0_{(p_1 - k) \times k} \end{bmatrix}, \quad
\Psi_{1:k} = \begin{bmatrix} I_k \\ 0_{(p_2 - k) \times k} \end{bmatrix}.
$$

5.2 Upper Bound Under the Standard Form
Under the standard form, by (1.4) and (1.5), we have
$$
L_{\mathrm{ave}}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k))
= \frac{1}{k}\left\| \left( I_{p_1} - P_{\Phi_{1:k}} \right) P_{\hat\Phi_{1:k}} \right\|_F^2 \tag{5.1}
$$
and
$$
L_{\max}(\mathrm{span}(\hat U_1, \ldots, \hat U_k), \mathrm{span}(U_1, \ldots, U_k))
= \left\| \left( I_{p_1} - P_{\Phi_{1:k}} \right) P_{\hat\Phi_{1:k}} \right\|^2. \tag{5.2}
$$
p 1:k
Denote Φ
«
ff
pu
Φ
p u and Φ
p l are the upper k ˆ k and lower pp1 ´ kq ˆ k sub“ p l1:k where Φ
1:k
1:k
Φ1:k
p 1:k respectively. Then
matrices of Φ
›2
›
´
¯
›
›
J p
´1 p J
p
p
“
trace
pI
´
P
q
Φ
p
Φ
Φ
q
Φ
pI
´
P
q
,
›
›pIp1 ´ PΦ1:k q PΦ
p
Φ
p
Φ
1:k
1:k
p 1:k
1
1
1:k
1:k
1:k
1:k
F
›
›2
¯
´
›
›
´1 p J
p 1:k pΦ
pJ Φ
p
›pIp1 ´ PΦ1:k q PΦ
› “ λmax pIp1 ´ PΦ1:k q Φ
p
1:k 1:k q Φ1:k pIp1 ´ PΦ1:k q
1:k
Since
´1 p J
p 1:k pΦ
pJ Φ
p
pIp1 ´ PΦ1:k q Φ
1:k 1:k q Φ1:k pIp1 ´ PΦ1:k q
„
ı
0kˆk ”
1
1
J
p
p
l qJ ,
p
pIp1 ´ PΦ1:k q Φ1:k Φ1:k pIp1 ´ PΦ1:k q “
ĺ
0
p
Φ
kˆk
pl
1:k
p 1:k q
p 1:k q Φ
σk2 pΦ
σk2 pΦ
1:k
we have
›2
›
›
›
pI
´
P
q
P
› p1
Φ1:k
p 1:k › ď trace
Φ
F
and
›2
›
›
›
›pIp1 ´ PΦ1:k q PΦ
p 1:k › ď λmax
˜
˜
¸
„
ı
p l }2
}Φ
1
0kˆk ”
1:k F
pJ
“
,
0
Φ
kˆk
l
1:k
p
2
2
p
p 1:k q
Φ
σk pΦ1:k q
σk pΦ
1:k
¸
„
ı
p l }2
}Φ
0kˆk ”
1
1:k
J
p
.
“
Φ
0
kˆk
1:k
pl
2 pΦ
p 1:k q Φ
p 1:k q
σk2 pΦ
σ
1:k
k
(5.3)
(5.4)
p l }2 , as well as a lower bound of
p l }2 and }Φ
Therefore, it suffices to give upper bounds of }Φ
1:k
1:k F
p 1:k q.
σk2 pΦ
5.3 Basic bounds
Recall that
Then
and
r
Σx “ Ip1 , Σy “ Ip2 , Σxy “ rΛ, 0p1 ˆpp2 ´p1 q s :“ Λ.
«
ff
ˆ„ ˙
r
x
Ip1 Λ
Cov
:“ Σ “ r J
y
Λ
Ip2
«
ff
ˆ„ ˙
px Σ
p xy
x
Σ
y
p “
Cov
:“ Σ
py .
p yx Σ
y
Σ
p 2p as the left upper p2p1 q ˆ p2p1 q principal submatrix of Σ.
p We can
Moreover, we can define Σ
1
similarly define Σ2p1 .
Lemma 5.1. There exist universal constants $\gamma, C$ and $C_0$ such that when $n \ge C_0 p_1$, then with
probability at least $1 - e^{-\gamma p_1}$, the following inequalities hold:
$$
\left\| \hat\Sigma_{2p_1} - \Sigma_{2p_1} \right\|, \ \left\| I_{p_1} - \hat\Sigma_x \right\|, \ \left\| \hat\Sigma_x^{1/2} - I_{p_1} \right\| \le C\sqrt{\frac{p_1}{n}}.
$$
Proof.
It is obvious that }Σ2p1 } ď 2. By Lemma 5.9, there exist constants γ, C0 and C1 , such that when
n ě C0 p1 , with probability at least 1 ´ e´γp1 there holds
c
p1
p
}Σ2p1 ´ Σ2p1 } ď C1
.
n
b
p x } ď C1 p1 . Moreover,
As submatrices, we have }Ip1 ´ Σ
n
p x } “ }pIp ´ Σ
p 1{2 qpIp ` Σ
p 1{2 q} ě σmin pIp ` Σ
p 1{2 q}Ip ´ Σ
p 1{2 } ě }Ip ´ Σ
p 1{2 },
}Ip1 ´ Σ
1
x
1
x
1
x
1
x
1
x
b
p1 `p2
p 1{2
which implies }Ip1 ´ Σ
x } ď C1
n .
Lemma 5.2. There exist universal constants $c, C$ and $C_0$ such that when $n \ge C_0(p_1 + p_2)$, then
with probability at least $1 - e^{-c(p_1 + p_2)}$, the following inequalities hold:
$$
\left\| \hat\Sigma - \Sigma \right\|, \ \left\| I_{p_2} - \hat\Sigma_y \right\|, \ \left\| \hat\Sigma_{xy} - \Sigma_{xy} \right\|, \ \left\| \hat\Sigma_y^{1/2} - I_{p_2} \right\| \le C\sqrt{\frac{p_1 + p_2}{n}},
$$
$$
\left\| \hat\Lambda - \Lambda \right\| \le \left\| \hat\Sigma_x^{-1/2}\hat\Sigma_{xy}\hat\Sigma_y^{-1/2} - \Sigma_{xy} \right\| \le C\sqrt{\frac{p_1 + p_2}{n}},
$$
$$
\sigma_k^2(\hat\Phi_{1:k}) \ge \frac{1}{2}, \quad \|\hat\Phi_{1:k}\|^2 \le \frac{3}{2}, \quad
\sigma_k^2(\hat\Psi_{1:k}) \ge \frac{1}{2}, \quad \|\hat\Psi_{1:k}\|^2 \le \frac{3}{2},
$$
$$
\|\hat\Phi_{1:k}^l\|, \ \|\hat\Psi_{1:k}^l\| \le \frac{C}{\Delta}\sqrt{\frac{p_1 + p_2}{n}},
$$
where $\Delta = \lambda_k - \lambda_{k+1}$ is the eigen-gap.
The proof is deferred to Section 5.7.
5.4 Estimating Equations and upper bound of $\|\hat\Phi_{1:k}^l\|^2$

In this section, we aim to give a sharp upper bound for $\|\hat\Phi_{1:k}^l\|^2$. Notice that we have already
established an upper bound in Lemma 5.2, where Wedin's $\sin\theta$ law plays the essential role. However,
this bound is actually too loose for our purpose. Therefore, we need to develop new techniques to
sharpen the results.
p P Rp1 ˆp1 , Ψ
p P Rp2 ˆp1 consist of the sample canonical coefficients. By definition,
Recall that Φ
p x1{2 Φ
p and
the sample canonical coefficients satisfy the following two estimating equations (because Σ
1{2
´1{2
´1{2
py Ψ
p are left and right singular vectors of Σ
px Σ
p xy Σ
py
Σ
respectively),
If we define define
Λ“
„
Λ1
Λ2
p xy Ψ
p “Σ
p xΦ
pΛ
p
Σ
p yx Φ
p “Σ
p yΨ
p Λ.
p
Σ
PR
p1 ˆp1
p “
, Λ
16
«
p1
Λ
(5.5)
ff
P Rp1 ˆp1 ,
p
Λ2
(5.6)
p 1 are k ˆ k diagonal matrices while Λ2 , Λ
p 2 are pp1 ´ kq ˆ pp1 ´ kq diagonal matrices.
where Λ1 , Λ
Then (5.5) imply
p xy Ψ
p 1:k “ Σ
p xΦ
p 1:k Λ
p1
Σ
(5.7)
p yx Φ
p 1:k “ Σ
p yΨ
p 1:k Λ
p 1.
Σ
Divide the matrices into blocks,
ff
ff
«
ff
«
ff
«
«
p 11 Σ
p 12
p 11 Σ
p 12
p 11 Σ
p 12
p 11 Σ
p 12
Σ
Σ
Σ
Σ
xy
xy
yx
yx
y
y
x
x
py “
p xy “
p
px “
, Σ
, Σ
Σ
p 22
p 21
p 22
p 21
p 22 , Σyx “ Σ
p 21
p 22
p 21
Σ
Σ
Σ
Σ
Σ
x
y
y
xy Σxy
yx Σyx
x
kˆk , Ψ
p l P Rpp2 ´kqˆk in
p 11 p 11 p 11
pu
p 11
where Σ
x , Σy , Σxy , Σyx are k ˆ k matrices. Finally, we define Ψ1:k P R
1:k
pu ,Φ
p l . With these blocks, (5.7) can be rewritten as
the same way as Φ
1:k
1:k
p 21 p u p
p 22 p l p
p 22 p l
p 21 Ψ
pu
Σ
xy 1:k ` Σxy Ψ1:k “ Σx Φ1:k Λ1 ` Σx Φ1:k Λ1 ,
p 21 Φ
pu
p 22 p l
p 21 p u p
p 22 p l p
Σ
yx 1:k ` Σyx Φ1:k “ Σy Ψ1:k Λ1 ` Σy Ψ1:k Λ1 ,
p 11 Φ
pu Λ
p1 ` Σ
p 12 Φ
pl Λ
p 1,
p 12 Ψ
pl “ Σ
p 11 Ψ
pu ` Σ
Σ
xy 1:k
p
pu
Σ11
yx Φ1:k
Define the zero-padding of Λ2 :
`
xy 1:k
p
pl
Σ12
yx Φ1:k
“
x
1:k
11 p u p
p
Σy Ψ1:k Λ1
`
(5.8)
(5.9)
(5.10)
x
1:k
12 p l p
p
Σy Ψ1:k Λ1 .
(5.11)
r 2 :“ rΛ2 , 0s “ Σ22 P Rpp1 ´kqˆpp2 ´kq .
Λ
xy
The above equations imply the following lemma:
Lemma 5.3. The equality (5.7) gives the following result
pu
pl
p l Λ2 ´ Λ2 Φ
Φ
2 1:k “ B Φ1:k ` R
1:k 1
r
r 2 pΣ
p 21 ´ Σ
pu ` R
p 21 Λ1 qΦ
p 21 ´ Σ
p 21 Λ1 qΨ
p u Λ1 ` Λ
“ pΣ
yx
y
xy
x
1:k
1:k
where
(5.12)
(5.13)
p 21 Λ1 ` Λ
r 2Σ
p 21 ´ Σ
p 21 Λ2 ´ Λ
r 2Σ
p 21 Λ1 ,
B :“ Σ
xy
yx
x
1
y
21
21
r
p
r
p
R :“ pΣ R1 ´ R3 qΛ1 ´ Λ2 pΣ R2 ` R4 q,
x
y
r ´ pΣ
p 21 ´ Σ
p 21 Λ1 qR2 .
R :“ R
x
xy
and
p 12 p l
p 1 ´ Λ1 q ` pΣ
p 11 ´ Ik qΦ
pu Λ
p
p 12 p l p
p 11
pu
p u pΛ
R1 :“ Φ
x
1:k 1 ` Σx Φ1:k Λ1 ´ pΣxy ´ Λ1 qΨ1:k ´ Σxy Ψ1:k ,
1:k
p 12 Φ
pl ,
pu ´ Σ
pu Λ
p1 ` Σ
p 12 Ψ
pl Λ
p 1 ´ pΣ
p 11 ´ Λ1 qΦ
p 1 ´ Λ1 q ` pΣ
p 11 ´ Ik qΨ
p u pΛ
R2 :“ Ψ
R3 :“
R4 :“
1:k
1:k
y
1:k
yx
y
1:k
22
l
22
l
l
21
u
r p
p
p 1 ´ Λ1 q ` pΣ
p Φ
p Λ
p
p
p Φ
p pΛ
Σ
x
x
1:k 1 ´ Φ1:k Λ1 q ´ pΣxy ´ Λ2 qΨ1:k ,
1:k
p 22 r J p l
p 22 p l p
pl
p 21
pu p
Σ
y Ψ1:k pΛ1 ´ Λ1 q ` pΣy Ψ1:k Λ1 ´ Ψ1:k Λ1 q ´ pΣyx ´ Λ2 qΦ1:k .
17
yx
1:k
The proof is deferred to Section 5.7.
By Lemma 5.2, one can easily obtain that
}R1 }, }R2 } ď C
c
p1 ` p2
.
n
Recall that
p 21 Φ
p u pΛ
p 1 ´ Λ1 q ` pΣ
p 22 Φ
pl Λ
p
pl
p 22 r p l
R3 :“ Σ
x
1:k
x
1:k 1 ´ Φ1:k Λ1 q ´ pΣxy ´ Λ2 qΨ1:k
By Lemma 5.2, we have
and
p 21 Φ
p u pΛ
p 1 ´ Λ1 q} ď C p1 ` p2 , }pΣ
p 22 ´ Λ
r 2 qΨ
p l } ď C p1 ` p2 ,
}Σ
x
xy
1:k
1:k
n
∆n
p 22 Φ
pl Λ
p
pl
p 22
pl p
pl p
}Σ
x
1:k 1 ´ Φ1:k Λ1 } ď }pΣx ´ Ip1 ´k qΦ1:k Λ1 ` Φ1:k pΛ1 ´ Λ1 q}
p 22 ´ Ip ´k qΦ
pl Λ
p
pl p
ď }pΣ
x
1:k 1 } ` }Φ1:k pΛ1 ´ Λ1 q} ď C
1
p1 ` p2
.
∆n
`p2
`p2
. Similarly, }R4 } ď C p1∆n
.
Therefore, we get }R3 } ď C p1∆n
Combined with Lemma 5.2, we have
and
r “ }pΣ
p 21 R1 ´ R3 qΛ1 ´ Λ
r 2 pΣ
p 21 R2 ` R4 q} ď C p1 ` p2
}R}
x
y
∆n
r ` }Σ
p 21 ´ Σ
p 21 Λ1 }}R2 } ď C p1 ` p2 .
}R} ď }R}
xy
x
∆n
The proof of the following lemma is deferred to Section 5.7:
Lemma 5.4. If $n \ge C_0(p_1 + p_2)$, then with probability $1 - c_0\exp(-\gamma p_1)$,
$$
\|\hat\Phi_{1:k}^l\| \le C\left[ \sqrt{\frac{p_1(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{n\Delta^2}} + \frac{(p_1 + p_2)}{n\Delta^2} \right].
$$
5.5 Upper bounds of risks
Notice that the inequality (5.4) yields
›2
›
p l }2
}Φ
›
›
1:k
.
›pIp1 ´ PΦ1:k q PΦ
p 1:k › ď 2
p 1:k q
σk pΦ
By Lemma 5.4 and Lemma 5.2, we know on an event G with probability at least 1 ´ Ce´γp1 ,
ff
«
›
›2
p1 p1 ´ λ2k qp1 ´ λ2k`1 q pp1 ` p2 q2
›
›
`
.
›pIp1 ´ PΦ1:k q PΦ
p 1:k › ď C
n∆2
n 2 ∆4
18
›2
›
›
›
Moreover, since ›pIp1 ´ PΦ1:k q PΦ
p 1:k › ď 1, by (5.2), we have
«
ff
›
›2
p1 p1 ´ λ2k qp1 ´ λ2k`1 q pp1 ` p2 q2
›
›
p 1:k q “ E ›pIp ´ PΦ q P p › ď C
ELmax pΦ1:k , Φ
`
` e´γp1 .
1
1:k
Φ1:k
n∆2
n 2 ∆4
Since pIp1 ´ PΦ1:k q PΦ
p 1:k is of at most rank-k, we have
›
›2
›2
1 ››
›
›
›
ď
pI
´
P
q
P
›
›pIp1 ´ PΦ1:k q PΦ
›
p1
Φ1:k
p 1:k ›
p 1:k
Φ
k
F
Then by (5.1) and the previous inequality, we have
›
›2
›
›
p
ELave pΦ1:k , Φ1:k q “ E ›pIp1 ´ PΦ1:k q PΦ
p 1:k ›
›2
1 ››
›
“ E ›pIp1 ´ PΦ1:k q PΦ
p 1:k ›
k
F
›2
›
›
›
ď E ›pIp1 ´ PΦ1:k q PΦ
p 1:k ›
ff
«
p1 p1 ´ λ2k qp1 ´ λ2k`1 q pp1 ` p2 q2
`
` e´γp1 .
ďC
n∆2
n 2 ∆4
In fact, the factor p1 in the main term can be reduced to p1 ´ k by similar arguments as done
for the operator norm. The Frobenius norm version of Lemma 5.4 is actually much simpler. We
omit the proof to avoid unnecessary redundancy and repetition.
5.6 Supporting lemmas in linear algebra and probability
Definition 5.5. (Hadamard Operator Norm) For $A \in \mathbb{R}^{m \times n}$, define the Hadamard operator norm
as
$$
|||A||| = \sup\left\{ \| A \circ B \| : \| B \| \le 1, \ B \in \mathbb{R}^{m \times n} \right\}.
$$
Let $\alpha_1, \cdots, \alpha_m$ and $\beta_1, \cdots, \beta_n$ be arbitrary positive numbers lower bounded by a positive constant
$\delta$.

Lemma 5.6. Let $\{\alpha_i\}_{i=1}^{m}$ and $\{\beta_i\}_{i=1}^{n}$ be two sequences of positive numbers. For any $X \in \mathbb{R}^{m \times n}$,
there hold
$$
\left\| \left[ \frac{\sqrt{\alpha_i \beta_j}}{\alpha_i + \beta_j} \right] \circ X \right\| \le \frac{1}{2}\| X \|, \tag{5.14}
$$
and
$$
\left\| \left[ \frac{\min(\alpha_i, \beta_j)}{\alpha_i + \beta_j} \right] \circ X \right\| \le \frac{1}{2}\| X \|, \qquad
\left\| \left[ \frac{\max(\alpha_i, \beta_j)}{\alpha_i + \beta_j} \right] \circ X \right\| \le \frac{3}{2}\| X \|. \tag{5.15}
$$
Proof. The proof of (5.14) can be found in "Norm Bounds for Hadamard Products and an
Arithmetic-Geometric Mean Inequality for Unitarily Invariant Norms" by Horn.
Denote
$$
G_1 = \left[ \frac{\max(\alpha_i, \beta_j)}{\alpha_i + \beta_j} \right], \qquad
G_2 = \left[ \frac{\min(\alpha_i, \beta_j)}{\alpha_i + \beta_j} \right].
$$
The proof of (5.15) relies on the following two results.
Lemma 5.7. (Theorem 5.5.18 of Horn and Johnson (1991)) If $A, B \in \mathbb{R}^{n \times n}$ and $A$ is positive
semidefinite, then
$$
\| A \circ B \| \le \left( \max_{1 \le i \le n} A_{ii} \right) \| B \|,
$$
where $\| \cdot \|$ is the operator norm.
Lemma 5.8. (Theorem 3.2 of Mathias (1993)) The symmetric matrix
$$
\left( \frac{\min(a_i, a_j)}{a_i + a_j} \right)_{1 \le i, j \le n}
$$
is positive semidefinite if $a_i > 0$, $1 \le i \le n$.
Define $\gamma_i = \beta_i$ for $1 \le i \le n$ and $\gamma_i = \alpha_{i-n}$ for $n+1 \le i \le m+n$. Define $M \in \mathbb{R}^{(m+n) \times (m+n)}$ by
$$
M_{ij} = \frac{\min\{\gamma_i, \gamma_j\}}{\gamma_i + \gamma_j}.
$$
By Lemma 5.8, $M$ is also positive semidefinite. Again applying Lemma 5.7 and noticing that $G_2$ is the
lower left sub-matrix of $M$, it is easy to obtain
$$
|||G_2||| \le |||M||| \le \frac{1}{2}.
$$
Finally, since $G_1 \circ B = B - G_2 \circ B$ for any $B$, we have
$$
\| G_1 \circ B \| \le \| B \| + \| G_2 \circ B \|,
$$
which implies
$$
|||G_1||| \le 1 + |||G_2||| \le \frac{3}{2}.
$$
Lemma 5.9. (Covariance Matrix Estimation, Remark 5.40 of Vershynin (2010)) Assume $A \in
\mathbb{R}^{n \times p}$ has independent sub-gaussian random rows with second moment matrix $\Sigma$. Then there exists a
universal constant $C$ such that for every $t \ge 0$, the following inequality holds with probability at
least $1 - e^{-ct^2}$:
$$
\left\| \frac{1}{n} A^\top A - \Sigma \right\| \le \max\{\delta, \delta^2\}\, \|\Sigma\|, \qquad \delta = C\sqrt{\frac{p}{n}} + \frac{t}{\sqrt{n}}.
$$
Lemma 5.10. (Bernstein inequality, Proposition 5.16 of Vershynin (2010)) Let $X_1, \cdots, X_n$ be
independent centered sub-exponential random variables and $K = \max_i \|X_i\|_{\psi_1}$. Then for every
$a = (a_1, \cdots, a_n) \in \mathbb{R}^n$ and every $t \ge 0$, we have
$$
P\left\{ \left| \sum_{i=1}^{n} a_i X_i \right| \ge t \right\}
\le 2\exp\left\{ -c\min\left( \frac{t^2}{K^2\|a\|_2^2}, \frac{t}{K\|a\|_\infty} \right) \right\}.
$$
Lemma 5.11. (Hanson-Wright inequality, Theorem 1.1 of Rudelson and Vershynin (2013)) Let
$x = (x_1, \cdots, x_p)$ be a random vector with independent components $x_i$ which satisfy $E x_i = 0$ and
$\|x_i\|_{\psi_2} \le K$. Let $A \in \mathbb{R}^{p \times p}$. Then there exists a universal constant $c$ such that for every $t \ge 0$,
$$
P\left\{ \left| x^\top A x - E\, x^\top A x \right| \ge t \right\}
\le 2\exp\left\{ -c\min\left( \frac{t^2}{K^4\|A\|_F^2}, \frac{t}{K^2\|A\|} \right) \right\}.
$$
Lemma 5.12. (Covering Number of the Sphere, Lemma 5.2 of Vershynin (2010)) The unit
Euclidean sphere $S^{n-1}$ equipped with the Euclidean metric satisfies, for every $\epsilon > 0$,
$$
|\mathcal{N}(S^{n-1}, \epsilon)| \le \left( 1 + \frac{2}{\epsilon} \right)^n,
$$
where $\mathcal{N}(S^{n-1}, \epsilon)$ is an $\epsilon$-net of $S^{n-1}$ with minimal cardinality.
The following variant of Wedin's $\sin\theta$ law (Wedin, 1972) is proved in Proposition 1 of Cai et al.
(2015).

Lemma 5.13. For $A, E \in \mathbb{R}^{m \times n}$ and $\hat A = A + E$, define the singular value decompositions of $A$
and $\hat A$ as
$$
A = U D V^\top, \qquad \hat A = \hat U \hat D \hat V^\top.
$$
Then the following perturbation bound holds:
$$
\left\| \left( I - P_{U_{1:k}} \right) P_{\hat U_{1:k}} \right\|
= \left\| P_{U_{1:k}} - P_{\hat U_{1:k}} \right\|
\le \frac{2\|E\|}{\sigma_k(A) - \sigma_{k+1}(A)},
$$
where $\sigma_k(A), \sigma_{k+1}(A)$ are the $k$th and $(k+1)$th singular values of $A$.
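This perturbation bound is also straightforward to verify numerically; the sketch below (ours, with an arbitrary random matrix and a small perturbation) compares the projection distance of the leading singular subspaces with the stated bound.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 8, 6, 2
A = rng.normal(size=(m, n))
E = 0.01 * rng.normal(size=(m, n))

def left_k(M, k):
    """Leading k left singular vectors of M."""
    return np.linalg.svd(M)[0][:, :k]

U, Uh = left_k(A, k), left_k(A + E, k)
P, Ph = U @ U.T, Uh @ Uh.T
s = np.linalg.svd(A, compute_uv=False)

lhs = np.linalg.norm(P - Ph, 2)
rhs = 2 * np.linalg.norm(E, 2) / (s[k - 1] - s[k])   # 2||E|| / (sigma_k - sigma_{k+1})
print(lhs, rhs, lhs <= rhs)
```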
5.7 Proofs of key lemmas

5.7.1 Proof of Lemma 5.2
(1) The proof of
c
›
›
p1 ` p2
›
› p 1{2
p
p
p
}Σ ´ Σ}, }Ip2 ´ Σy }, }Σxy ´ Σxy }, ›Σy ´ Ip2 › ď C
n
is exactly the same as that of Lemma 5.1.
(2) Observe that
p x´1{2 Σ
p xy Σ
p y´1{2 ´ Σxy “ pIp ´ Σ
p x1{2 qΣ
p x´1{2 Σ
p xy Σ
p y´1{2
Σ
1
p 1{2 Σ
p ´1{2 Σ
p xy Σ
p ´1{2 pIp ´ Σ
p 1{2 q ` pΣ
p xy ´ Σxy q.
`Σ
x
x
y
p1 ď 1. Then
p x´1{2 Σ
p xy Σ
p y´1{2 } “ λ
and }Σ
2
y
p ´1{2 Σ
p xy Σ
p ´1{2 ´ Σxy } ď }Ip ´ Σ
p 1{2 } ` }Σ
p x }}Ip ´ Σ
p 1{2 } ` }Σ
p xy ´ Σxy }.
}Σ
x
y
x
y
1
2
21
p and Λ are singular values of Σ
p x´1{2 Σ
p xy Σ
p y´1{2 and Σxy respectively. Hence by the
Notice that Λ
famous Weyl’s inequality for singular values,
p ´ Λ} ď }Σ
p ´1{2 Σ
p xy Σ
p ´1{2 ´ Σxy }
}Λ
x
y
p
p x }}Ip ´ Σ
p 1{2 } ` }Σ
p xy ´ Σxy }
ď }Ip1 ´ Σx } ` }Σ
y
2
¸
˜
c
c
c
p1 ` p2
p1 ` p2
p1 ` p2
C1
ď C2
.
ď 3 ` C1
n
n
n
Ip1
p 1{2
p
p ´1{2 Σ
p xy Σ
p y´1{2 , we have }Σ
p 1{2
p
pJp p
(3) Since Σ
x Φ are left singular vectors of Σx
x Φ} “ 1, Φ Σx Φ “
p JΦ
p ´ Ip “ ´Φ
p J pΣ
p x ´ Ip qΦ.
p Then we have,
and Φ
1
1
p JΦ
p ´ Ip } “ }Φ
p J pΣ
p x ´ Ip qΦ}
p ď }Φ
p JΣ
p 1{2 }}Σ
p ´1{2 pΣ
p x ´ Ip qΣ
p ´1{2 }}Σ
p 1{2 Φ}
p
}Φ
1
1
x
x
1
x
x
p ´1{2 pΣ
p x ´ Ip qΣ
p ´1{2 }.
“ }Σ
x
1
x
As a submatrix,
pJ Φ
p
p ´1{2 pΣ
p x ´ Ip qΣ
p ´1{2 } ď }Σ
p ´1 }}Σ
p x ´ Ip }
}Φ
x
x
1:k 1:k ´ Ik } ď }Σx
1
1
p
1
p x ´ Ip } ď }Σ ´ Σ} ď 1
ď
}Σ
1
p x ´ Ip }
p ´ Σ}
2
1 ´ }Σ
1 ´ }Σ
1
as long as n ě C0 pp1 ` p2 q for sufficiently large C0 . In this case,
By the same argument,
(4) Recall that
p 1:k q ě 1{2, }Φ
p 1:k }2 ď 3{2.
σk2 pΦ
p 1:k q ě 1{2, }Ψ
p 1:k }2 ď 3{2.
σk2 pΨ
Φ1:k “
„
Ik
0pp1 ´kqˆk
, Ψ1:k “
„
Ik
0pp2 ´kqˆk
.
p 1{2
p
The last inequality in the lemma relies on the fact that Σ
x Φ1:k and Φ1:k are leading k singular
´1{2 p
´1{2
p
p
vectors of Σx Σxy Σy
and Σxy respectively. By a variant of Wedin’s sin θ law as stated in
Lemma 5.13,
c
› 2}Σ
›
p x´1{2 Σ
p xy Σ
p y´1{2 ´ Σxy }
2C2 p1 ` p2
›
›
pIp1 ´ PΦ1:k q› ď
ď
.
›P Σ
p
p 1{2
x Φ1:k
∆
∆
n
On the other hand,
› ›
›
›
› › p 1{2 p
›
›
1{2 p
J
p
pIp1 ´ PΦ1:k q› “ ›Σx Φ1:k pΣx Φ1:k q pIp1 ´ PΦ1:k q›
›PΣ
p
p 1{2
x Φ1:k
›
›
›
› p 1{2 p
J
Φ
q
pI
´
P
q
“ ›pΣ
p1
Φ1:k ›
1:k
x
›
›
› p 1{2 p
l›
Φ
q
“ ›pΣ
1:k › ,
x
22
p 1{2
p
Here the second equality is due to the fact that Σ
x Φ1:k has orthonormal columns. Moreover,
1{2
px Φ
p 1:k ql denotes the lower pp1 ´ kq ˆ k sub-matrix of Σ
p 1{2
p
pΣ
x Φ1:k . Again, by triangle inequality,
› ›
›
´
¯l ››
› p l › ›› p 1{2 p
l
1{2
p
p
›Φ1:k › “ ›pΣx Φ1:k q ´ pΣx ´ Ip1 qΦ1:k ››
›
›
› ›
››
› p 1{2 p
› p 1{2
››p ›
l›
Φ
ď ›pΣ
Φ
q
`
Σ
´
I
q
›p x
p1 › › 1:k ›
1:k ›
x
c
c
c
c
2C2 p1 ` p2
C3 p 1 ` p 2
3
p1 ` p2
ď
`
C1
ď
.
∆
n
2
n
∆
n
The last inequality is due to ∆ ď 1. Let C “ maxpC1 , C2 , C3 q, the proof is done.
5.7.2 Proof of Lemma 5.3
The equality (5.10) implies
pu ´ Φ
p u Λ1 “ Φ
p u pΛ
p 1 ´ Λ1 q ` pΣ
p 11 ´ Ik qΦ
pu Λ
p
p 12 p l p
Λ1 Ψ
1:k
1:k
1:k
x
1:k 1 ` Σx Φ1:k Λ1
p 12
p l :“ R1 .
p 11
pu ´ Σ
´ pΣ
xy Ψ
xy ´ Λ1 qΨ
(5.16)
p 1 ´ Λ1 q ` pΣ
p 11
pu p
p 12 p l p
p u pΛ
p u Λ1 “ Ψ
pu ´ Ψ
Λ1 Φ
y ´ Ik qΨ1:k Λ1 ` Σy Ψ1:k Λ1
1:k
1:k
1:k
p 12 Φ
p l :“ R2 .
p 11 ´ Λ1 qΦ
pu ´ Σ
´ pΣ
(5.17)
1:k
1:k
Similarly, (5.11) implies
yx
yx
1:k
1:k
The equality (5.8) is equivalent to
p 21 p u p
p 21 p u
p 22 r p l
r pl
p 21 Ψ
pu
Σ
xy 1:k ` Λ2 Ψ1:k ` pΣxy ´ Λ2 qΨ1:k “ Σx Φ1:k Λ1 ` Σx Φ1:k pΛ1 ´ Λ1 q
p 22
pl p
pl
p l Λ1 ` pΣ
`Φ
x Φ Λ1 ´ Φ Λ1 q,
1:k
1:k
1:k
which can be written as
p 21 p u p
pl
r pl
p 21 p u
p 21
pu
Σ
xy Ψ1:k ` Λ2 Ψ1:k ´ Σx Φ1:k Λ1 ´ Φ1:k Λ1 “ Σx Φ1:k pΛ1 ´ Λ1 q
p 22 Φ
pl Λ
p1 ´ Φ
p l Λ1 q ´ pΣ
p 22 ´ Λ
r 2 qΨ
p l :“ R3 .
` pΣ
(5.18)
p 21 Φ
pu
rJ pl
p 21 p u
pl
p 21 p u p
Σ
yx 1:k ` Λ2 Φ1:k ´ Σy Ψ1:k Λ1 ´ Ψ1:k Λ1 “ Σy Ψ1:k pΛ1 ´ Λ1 q
p 22 r J p l :“ R4 .
p 22
pl p
pl
` pΣ
y Ψ Λ1 ´ Ψ Λ1 q ´ pΣyx ´ Λ2 qΦ
(5.19)
x
1:k
1:k
xy
1:k
Apply the same argument to (5.9), we obtain
1:k
1:k
1:k
r 2 ˆ (5.19), then
Consider (5.18) ˆ p´Λ1 q ´ Λ
that is
r p 21 p u
r p 21 p u
p 21 p u 2 p 21 p u
pl
p l Λ2 ´ Λ2 Φ
Φ
2 1:k ` Σx Φ1:k Λ1 ´ Σxy Ψ1:k Λ1 ´ Λ2 Σyx Φ1:k ` Λ2 Σy Ψ1:k Λ1
1:k 1
r 2 R4 q,
“ ´pR3 Λ1 ` Λ
p l Λ2 ´ Λ2 Φ
pl
p 21 p u
r p 21 p u
Φ
1:k 1
2 1:k “Σxy Ψ1:k Λ1 ` Λ2 Σyx Φ1:k
r 2 R4 q.
r 2Σ
p 21 Ψ
p u Λ1 ´ pR3 Λ1 ` Λ
p 21 Φ
p u Λ2 ´ Λ
´Σ
y
x
1:k
1:k 1
23
(5.20)
Combined with (5.16) and (5.17),
p l Λ2 ´ Λ2 Φ
pl
p 21 p u
r p 21 p u
p 21 p u
p 21
Φ
2 1:k “ Σxy Ψ1:k Λ1 ` Λ2 Σyx Φ1:k ´ Σx Λ1 Ψ1:k Λ1 ` Σx R1 Λ1
1:k 1
r 2Σ
p 21
r 2Σ
pu ´ Λ
p 21
r
´Λ
y Λ1 Φ
y R2 ´ pR3 Λ1 ` Λ2 R4 q
1:k
p 21 ´ Σ
p 21 Λ1 qΨ
p u Λ1 ` Λ
r 2 pΣ
p 21 ´ Σ
p 21 Λ1 qΦ
pu
“ pΣ
xy
x
yx
y
1:k
1:k
21
21
p
r
p
` pΣx R1 ´ R3 qΛ1 ´ Λ2 pΣy R2 ` R4 q.
This finishes the proof of (5.13).
Plug (5.17) into (5.21), we get
p l Λ2 ´ Λ
r 2Φ
pl
p 21 p 21
pu
r p 21 p 21
pu
r
Φ
1:k 1
2 1:k “ pΣxy ´ Σx Λ1 qpΛ1 Φ1:k ´ R2 q ` Λ2 pΣyx ´ Σy Λ1 qΦ1:k ` R
r ´ pΣ
p 21 ´ Σ
p 21 Λ1 qR2 q.
p u ` pR
“ BΦ
xy
1:k
x
This finishes the proof of (5.12).
5.7.3 Proof of Lemma 5.4
First, we discuss two quite different cases: λk ě
Case 1: λk ě
1
2
and λk ă 21 .
1
2
Let
1
δ :“ λ2k ´ λ2k`1 “ pλk ´ λk`1 qpλk ` λk`1 q ě ∆.
2
Define the pp1 ´ kq ˆ k matrices A by
b
b
λ2j ´ λ2k ` 2δ λ2k`1 ´ λ2k`i ` 2δ
Aij “
, 1 ď i ď p1 ´ k, 1 ď j ď k
λ2j ´ λ2k`i
By (5.12) in Lemma 5.3, there holds
where
and
By Lemma 5.6, we have
p u D2 q ` A ˝ pD1 RD2 q,
p l “ A ˝ pD1 B Φ
Φ
1:k
1:k
¨
1
D1 “ diag ˝ b , ¨ ¨ ¨ , b
¨
δ
2
D2 “ diag ˝ b
1
λ2k`1
1
λ21 ´ λ2k `
δ
2
´
λ2p1
`
δ
2
˛
˛
‚
1
, ¨ ¨ ¨ , b ‚.
δ
2
p u D2 } ` 1 }pD1 RD2 q}
p l } ď 1 }D1 B Φ
}Φ
1:k
1:k
2
2
1
1
u
p
ď }D1 B}}Φ1:k }}D2 } ` }D1 }}R}}D2 }.
2
2
24
(5.21)
p u } ď }Φ
p 1:k } ď
Recall that }Φ
1:k
b
3
2
and it is obvious that }D1 }, }D2 } ď
b
2
δ.
Moreover, in the
1 `p2 q
previous section, we also have shown that }R} ď Cppn∆
. It suffices to bound }D1 B} and to this
end we apply the standard covering argument.
Step 1. Reduction. Denote by Nǫ pSd q the d-dimensional unit ball surface. For ǫ ą 0 and
any pair of vectors u P Rp1 ´k , v P Rk , we can choose uǫ P Nǫ pSp1 ´k´1 q, vǫ P Nǫ pSk´1 q such that
}u ´ uǫ }, }v ´ vǫ } ď ǫ. Then
J
J
J
uJ D1 Bv “ uJ D1 Bv ´ uJ
ǫ D1 Bv ` uǫ D1 Bv ´ uǫ D1 Bvǫ ` uǫ D1 Bvǫ
J
ď }u ´ uǫ }}D1 Bv} ` }uJ
ǫ D1 B}}v ´ vǫ } ` uǫ D1 Bvǫ
ď 2ǫ}D1 B} ` uJ
ǫ D1 Bvǫ
ď 2ǫ}D1 B} ` max uJ
ǫ D1 Bvǫ .
uǫ ,vǫ
Maximize over u and v, we obtain
}D1 B} ď 2ǫ}D1 B} ` max uJ
ǫ D1 Bvǫ .
uǫ ,vǫ
Therefore, }D1 B} ď p1 ´ 2ǫq´1 max uJ
ǫ D1 Bvǫ . Let ǫ “ 1{4. Then it suffices to give an upper
uǫ ,vǫ
bound max uJ
ǫ D1 Bvǫ with high probability.
uǫ ,vǫ
Step 2. Concentration.
1 ď i ď p1 ´ k and 1 ď j ď k
rD1 Bsi,j
“b
1
λ2k`1 ´ λ2k`i `
δ
2
Y ´λl Xl
? 2 for all 1 ď α ď n and 1 ď l ď p1 . Then for
Let Zα,l “ α,l
1´λl
n
1 ÿ
pλj Xα,k`i Yα,j ´ λ2j Xα,k`i Xα,j ` λk`i Yα,k`i Xα,j ´ λk`i λj Yα,k`i Yα,j q
n α“1
n
1 ÿ!
p1 ´ λ2j qλk`i λj Xα,k`i Xα,j ´ λ2j pYα,k`i ´ λk`i Xα,k`i qpYα,j ´ λj Xα,j q
δ n
2
2
λk`1 ´ λk`i ` 2 α“1
)
` p1 ´ λ2j qλj pYα,k`i ´ λk`i Xα,k`i qXα,j ` p1 ´ λ2j qλk`i pYα,j ´ λj Xα,j qXα,k`i .
“b
1
n
1 ÿ!
p1 ´ λ2j qλk`i λj Xα,k`i Xα,j
δ n
2
2
λk`1 ´ λk`i ` 2 α“1
b
b
b
´ λ2j 1 ´ λ2k`i 1 ´ λ2j Zα,k`i Zα,j ` p1 ´ λ2j qλj 1 ´ λ2k`i Zα,k`i Xα,j
b
)
` p1 ´ λ2j qλk`i 1 ´ λ2k`i Xα,k`i Zα,j .
“b
1
In this way, tXα,k`i , Zα,k`i , 1 ď i ď p1 , 1 ď α ď nu are mutually independent standard gaussian
25
random variables. For any given pair of vectors u P Rp1 ´k , v P Rk ,
uJ D1 Bv “
n p1 ´k ÿ
k
!
ui vj
1 ÿ ÿ
b
p1 ´ λ2j qλk`i λj Xα,k`i Xα,j
n α“1 i“1 j“1 λ2 ´ λ2 ` δ
k`1
k`i
2
b
b
b
´ λ2j 1 ´ λ2k`i 1 ´ λ2j Zα,k`i Zα,j ` p1 ´ λ2j qλj 1 ´ λ2k`i Zα,k`i Xα,j
b
)
` p1 ´ λ2j qλk`i 1 ´ λ2k`i Xα,k`i Zα,j
n
. 1 ÿ J
“
w Aα wα ,
n α“1 α
where
J
wαJ “ rxJ
α , zα s “ rXα,1 , . . . , Xα,p1 , Zα,1 , . . . , Zα,p1 s
and Aα P Rp2p1 qˆp2p1 q is symmetric and determined by the corresponding quadratic form. This
yields
}Aα }2F “
p1 ´k ÿ
k
!
u2i vj2
1 ÿ
p1 ´ λ2j q2 λ2k`i λ2j ` λ4j p1 ´ λ2k`i qp1 ´ λ2j q
2 i“1 j“1 λ2k`1 ´ λ2k`i ` 2δ
)
` p1 ´ λ2j q2 λ2j p1 ´ λ2k`i q ` p1 ´ λ2j q2 λ2k`i p1 ´ λ2k`i q
p1 ´k ÿ
k
`
˘`
˘
u2i vj2
1 ÿ
1 ´ λ2j λ2k`i ` λ2j ´ 2λ2k`i λ2j
δ
2
2
2 i“1 j“1 λk`1 ´ λk`i ` 2
¸
˜p ´k ¸ ˜
k
1
ÿ
p1 ´ λ2j qpλ2k`i ` λ2j ´ 2λ2k`i λ2j q
1 ÿ
max
vj2
u2i
ď
1ďiďp1 ´k
2 i“1
λ2k`1 ´ λ2k`i ` 2δ
j“1
“
ď
p1
1
max
2 1ďiďp1 ´k
1ďjďk
ď p1 ´ λ2k q
max
1ďiďp1 ´k δ
1ďjďk 2
ď p1 ´ λ2k q
ď
2p1 ´
1ďjďk
2
2
´ λk qp2λj ´ 2λ2k`i λ2j q
λ2k`1 ´ λ2k`i ` 2δ
max
λ2j p1 ´ λ2k`i q
` λ2k`1 ´ λ2i`k
p1 ´ λ2k`1 q
1ďiďp1 ´k
1ďjďk
2
λk qp1 ´ λ2k`1 q
δ
δ
2
.
“ K 2,
where the second last inequality is due to the facts that λj ď 1 and
δ
2
p1 ´ λ2k`i q
` λ2k`1 ´ λ2i`k
ď
p1 ´ λ2k`1 q
δ
2
Moreover }Aα }22 ď }Aα }2F ď K 2 .
26
p7
δ
` λ2k`1 ă λ2k ď 1q.
2
Now define w J :“ rw1J , . . . , wnJ s and
»
A1
—
A2
—
A“—
–
Then we have
}A} ď max }Aα } ď K,
1ďαďn
..
.
}A}2F
and
fi
An
ď
ffi
ffi
ffi.
fl
n
ÿ
α“1
}Aα }2F ď nK 2
1 J
w Aw, where w P N2p1 n p0, I2p1 n q.
n
Therefore, By the classic Hanson-Wright inequality (Lemma 5.11), there holds
"
ˆ 2
˙*
(
t
t
J
P n|u D1 Bv| ě t ď 2 exp ´c0 min
,
nK 2 K
uJ D1 Bv “
for some numerical constant c0 ą 0. Without loss of generality, we can also assume c0 ď 1. Let
?
t “ c40 np1 K. By n ě p1 , straightforward calculation gives
"
*
4?
J
P n|u D1 Bv| ě
np1 K ď 2e´4p1 .
c0
Step 3. Union Bound. By Lemma 5.12, we choose 1{4-net such that
,
$
d
/
’
ˆ ? ˙c
&
2
2
p1 p1 ´ λk qp1 ´ λk`1 q .
4 2
J
P
max
uǫ D1 Bvǫ ě
/
’
c0
n
δ
%uǫ PNǫ pSp1 ´k´1q
vǫ PNǫ pSk´1 q
3
ď 9p1 ´k 9k ˆ 2e´4p1 ď 2e´ 2 p1 .
3
In other words, with probability at least 1 ´ 2e´ 2 p1 , we have
ˆ ? ˙c
p1
8 2
J
´1
}D1 B} ď p1 ´ 2ǫq max uǫ D1 Bvǫ ď
uǫ ,vǫ
c0
n
d
p1 ´ λ2k qp1 ´ λ2k`1 q
.
δ
In summary, we have as long as n ě C0 pp1 ` p2 q, with probability 1 ´ c0 expp´γp1 q,
»d
fi
2
2
p1 p1 ´ λk qp1 ´ λk`1 q pp1 ` p2 q
pl } ď C –
fl
}Φ
`
1:k
nδ2
n∆δ
fi
»d
p1 p1 ´ λ2k qp1 ´ λ2k`1 q pp1 ` p2 q
fl.
`
ďC–
n∆2
n∆2
Here the last inequality is due to δ “ pλk ` λk`1 q∆ ě 21 ∆. Here C0 , C, c0 , γ are absolute constants.
27
Case 2: λk ď
1
2
By (5.13), we have
p l “ GΛ1 ` Λ2 F ,
p l Λ21 ´ Λ22 Φ
Φ
1:k
1:k
where
and
p 21 R1 ´ R3 q
p 21 ´ Σ
p 21 Λ1 qΨ
p u ` pΣ
G :“ pΣ
x
xy
x
1:k
”
ı
p 21 ´ Σ
p 21 Λ1 qΦ
p u ´ pΣ
p 21 R2 ` R4 q .
F :“ rIp1 , 0p1 ˆpp2 ´p1 q s pΣ
yx
y
1:k
y
p 21 and Σ
p 21 are submatrices of Σ
p 2p . By Lemma 5.1, we have
Notice that Σ
xy
x
1
Moreover, by }R1 } ď C
b
p1 `p2
n ,
p 21 ´ Σ
p 21 Λ1 } ď C
}Σ
xy
x
c
p1
.
n
`p2
}R3 } ď C p1n∆
and Lemma 5.2, there holds
}G} ď C
ˆc
p1 p1 ` p2
`
n
n∆
˙
.
p 2p . By a similar
p 21 are submatrices of Σ
p 21 and rIp , 0p ˆpp ´p q sΣ
Similarly, rIp1 , 0p1 ˆpp2 ´p1 q sΣ
x
1
yx
1
1
2
1
argument,
˙
ˆc
p1 p1 ` p2
`
.
}F } ď C
n
n∆
Then
pl “
Φ
1:k
„
„
„
„
λj
1
1
λk`i
˝
˝
˝G`
˝F
λk`i ` λj
λj ´ λk`i
λk`i ` λj
λj ´ λk`i
Here 1 ď i ď p1 ´ k and 1 ď j ď k. By Lemma 5.6, there holds for any X,
›„
› ›„
›
›
› › maxpλk`i , λj q
› 3
λj
›
›“›
X
X ›› ď }X}
› λk`i ` λj
› ›
λk`i ` λj
2
and
Finally, for any X,
where
›„
› ›„
›
› › minpλk`i , λj q
› 1
›
λ
k`i
› ›
›
X ›› ď }X}.
› λk`i ` λj X › “ › λk`i ` λj
2
„
1
X “ A ˝ pD1 XD2 q
λj ´ λk`i
»b
A :“ –
λj ´ λk `
∆
2
b
λk`1 ´ λk`i `
λj ´ λk`i
28
∆
2
fi
fl,
¨
1
D1 “ diag ˝ b , ¨ ¨ ¨ , b
and
¨
Since }D1 }, }D2 } ď
b
In summary, we have
Since
1
2
2
∆,
D2 “ diag ˝ b
λk`1 ´ λp1 `
1
λ1 ´ λk `
∆
2
∆
2
˛
‚,
1
, ¨ ¨ ¨ , b ‚.
∆
2
by Lemma 5.6,
›„
›
›
› 1
1
1
›
›
› λj ´ λk`i X › ď 2 }D1 XD2 } ď ∆ }X}.
pl } ď C
}Φ
1:k
ě λk ě λk`1 , there holds
»d
pl } ď C –
}Φ
1:k
6
∆
2
1
˛
ˆc
p1
p1 ` p2
`
2
n∆
n∆2
˙
.
fi
p1 p1 ´ λ2k qp1 ´ λ2k`1 q pp1 ` p2 q
fl.
`
n∆2
n∆2
6 Lower Bound: Proof of Theorem 2.2
To establish the minimax lower bounds of CCA estimates for our proposed losses, we follow the
analytical frameworks in the literature of PCA and CCA, e.g., Vu et al. (2013); Cai et al. (2013);
Gao et al. (2015), where the calculation is focused on the construction of the hypothesis class to
which the packing lemma and Fano's inequality are applied. However, since we fix both $\lambda_k$ and
$\lambda_{k+1}$ in the localized parameter spaces, new technical challenges arise and consequently we construct
hypothesis classes based on the equality (6.1). In this section we also denote $\Delta := \lambda_k - \lambda_{k+1}$.
6.1 On Kullback-Leibler Divergence
The following lemma can be viewed as an extension of Lemma 14 in Gao et al. (2015) from $\lambda_{k+1} = 0$
to arbitrary $\lambda_{k+1}$. The proof of the lemma can be found in Section 6.4.

Lemma 6.1. For $i = 1, 2$ and $p_2 \ge p_1 \ge k$, let $[U_{(i)}, W_{(i)}] \in O(p_1, p_1)$ and $[V_{(i)}, Z_{(i)}] \in O(p_2, p_1)$,
where $U_{(i)} \in \mathbb{R}^{p_1 \times k}$, $V_{(i)} \in \mathbb{R}^{p_2 \times k}$. For $0 \le \lambda_2 < \lambda_1 < 1$, let $\Delta = \lambda_1 - \lambda_2$ and define
$$
\Sigma_{(i)} = \begin{bmatrix}
\Sigma_x & \Sigma_x^{1/2}\left( \lambda_1 U_{(i)} V_{(i)}^\top + \lambda_2 W_{(i)} Z_{(i)}^\top \right)\Sigma_y^{1/2} \\
\Sigma_y^{1/2}\left( \lambda_1 V_{(i)} U_{(i)}^\top + \lambda_2 Z_{(i)} W_{(i)}^\top \right)\Sigma_x^{1/2} & \Sigma_y
\end{bmatrix}, \qquad i = 1, 2.
$$
Let $P_{(i)}$ denote the distribution of a random i.i.d. sample of size $n$ from $N(0, \Sigma_{(i)})$. If we further
assume
$$
[U_{(1)}, W_{(1)}] \begin{bmatrix} V_{(1)}^\top \\ Z_{(1)}^\top \end{bmatrix}
= [U_{(2)}, W_{(2)}] \begin{bmatrix} V_{(2)}^\top \\ Z_{(2)}^\top \end{bmatrix}, \tag{6.1}
$$
then one can show that
$$
D(P_{(1)} \| P_{(2)}) = \frac{n\Delta^2(1 + \lambda_1\lambda_2)}{2(1-\lambda_1^2)(1-\lambda_2^2)} \left\| U_{(1)} V_{(1)}^\top - U_{(2)} V_{(2)}^\top \right\|_F^2.
$$
Remark 6.2. The condition in (6.1) is crucial for obtaining the eigen-gap factor $1/\Delta^2$ in the lower
bound and is the key insight behind the construction of the hypothesis class in the proof. Gao et al.
(2015) has a similar lemma but only deals with the case where the residual canonical correlations
are zero. To the best of our knowledge, the proof techniques in Gao et al. (2015, 2017) cannot be
directly used to obtain our results.
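The identity of Lemma 6.1 can be checked numerically. In the sketch below (ours; it takes $\Sigma_x = \Sigma_y = I$, $p_1 = p_2$, and enforces (6.1) by rotating the first pair of bases with a common orthogonal matrix, as in the proof of the lower bound), the exact Gaussian Kullback-Leibler divergence is compared with the closed-form expression of the lemma.

```python
import numpy as np

rng = np.random.default_rng(4)
p, k, n = 3, 1, 50                                # p1 = p2 = p; Sigma_x = Sigma_y = I
lam1, lam2 = 0.7, 0.3
UW1 = np.linalg.qr(rng.normal(size=(p, p)))[0]    # [U_(1), W_(1)]
VZ1 = np.linalg.qr(rng.normal(size=(p, p)))[0]    # [V_(1), Z_(1)]
Q = np.linalg.qr(rng.normal(size=(p, p)))[0]
UW2, VZ2 = UW1 @ Q, VZ1 @ Q                       # enforces condition (6.1)

def joint_cov(UW, VZ):
    U, W, V, Z = UW[:, :k], UW[:, k:], VZ[:, :k], VZ[:, k:]
    C = lam1 * U @ V.T + lam2 * W @ Z.T
    return np.block([[np.eye(p), C], [C.T, np.eye(p)]])

S1, S2 = joint_cov(UW1, VZ1), joint_cov(UW2, VZ2)

# Exact Gaussian KL divergence for n i.i.d. samples
S2inv = np.linalg.inv(S2)
kl = 0.5 * n * (np.trace(S2inv @ S1) - 2 * p
                + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# Closed-form expression of Lemma 6.1
U1, V1, U2, V2 = UW1[:, :k], VZ1[:, :k], UW2[:, :k], VZ2[:, :k]
formula = (n * (lam1 - lam2) ** 2 * (1 + lam1 * lam2)
           / (2 * (1 - lam1 ** 2) * (1 - lam2 ** 2))
           * np.linalg.norm(U1 @ V1.T - U2 @ V2.T, 'fro') ** 2)
print(kl, formula)   # agree up to numerical error
```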
6.2 Packing Number and Fano's Lemma
The following result on the packing number is based on the metric entropy of the Grassmannian
manifold $G(k, r)$ due to Szarek (1982). We use the version adapted from Lemma 1 of Cai et al.
(2013), which is also used in Gao et al. (2015).

Lemma 6.3. Fix $U_0 \in O(p, k)$ and let $B_{\epsilon_0} = \{U \in O(p, k) : \|U U^\top - U_0 U_0^\top\|_F \le \epsilon_0\}$ with
$\epsilon_0 \in \left(0, \sqrt{2[k \wedge (p-k)]}\,\right)$. Define the semi-metric $\rho(\cdot, \cdot)$ on $B_{\epsilon_0}$ by
$$
\rho(U_1, U_2) = \| U_1 U_1^\top - U_2 U_2^\top \|_F.
$$
Then there exists a universal constant $C$ such that for any $\alpha \in (0, 1)$, the packing number
$M(B_{\epsilon_0}, \rho, \alpha\epsilon_0)$ satisfies
$$
M(B_{\epsilon_0}, \rho, \alpha\epsilon_0) \ge \left( \frac{1}{C\alpha} \right)^{k(p-k)}.
$$
The following corollary is used to prove the lower bound.
Corollary 6.4. If we change the set in Lemma 6.3 to Brǫ0 “ tU P Opp, kq : }U ´ U0 }F ď ǫ0 u, then
we still have
˙
ˆ
1 kpp´kq
r
.
MpBǫ0 , ρ, αǫ0 q ě
Cα
Proof. Apply Lemma 6.3 to Bǫ0 , there exists U1 , ¨ ¨ ¨ , Un with n ě p1{Cαqkpp´kq such that
}Ui UiJ ´ U0 U0J }F ď ǫ0 , 1 ď i ď n, }Ui UiJ ´ Uj UjJ }F ě αǫ0 , 1 ď i ď j ď n.
r i “ arg
Define U
min
U PtUi Q, QPOpkqu
}U ´ U0 }F , by Lemma 6.5,
r i ´ U 0 }F ď }U
riU
r iJ ´ U0 U0J }F ď ǫ0 .
}U
r1 , ¨ ¨ ¨ , U
rn P Brǫ and
Therefore, U
0
which implies,
riU
r iJ ´ U
rj U
r jJ }F “ }Ui UiJ ´ Uj UjJ }F ě αǫ0 .
}U
MpBrǫ0 , ρ, αǫ0 q ě n ě
30
ˆ
1
Cα
˙kpp´kq
.
Lemma 6.5. For any matrices U1 , U2 P Opp, kq,
inf
QPOpk,kq
}U1 ´ U2 Q}F ď }PU1 ´ PU2 }F
Proof. By definition
}U1 ´ U2 Q}2F “ 2k ´ 2trpU1J U2 Qq
Let U1J U2 “ U DV J be the singular value decomposition. Then V U J P Opk, kq and
inf
QPOpk,kq
}U1 ´ U2 Q}2F ď 2k ´ 2trpU1J U2 V U J q
“ 2k ´ 2trpU DU J q
“ 2k ´ 2trpDq.
On the other hand,
}PU1 ´ PU2 }2F “ }U1 U1J ´ U2 U2J }2F
“ 2k ´ 2trpU1 U1J U2 U2J q
“ 2k ´ 2trpU1J U2 U2J U1 q
“ 2k ´ 2trpD 2 q.
Since U1 , U2 P Opp, kq, }U1J U2 } ď 1 and therefore all the diagonal elements of D is less than 1,
which implies that trpDq ě trpD 2 q and
inf
QPOpk,kq
}U1 ´ U2 Q}2F ď }PU1 ´ PU2 }2F .
Lemma 6.6 (Fano's Lemma, Yu (1997)). Let $(\Theta, \rho)$ be a (semi)metric space and $\{P_\theta : \theta \in \Theta\}$ a
collection of probability measures. For any totally bounded $T \subset \Theta$, denote by $M(T, \rho, \epsilon)$ the $\epsilon$-packing
number of $T$ with respect to the metric $\rho$, i.e., the maximal number of points in $T$ whose pairwise
minimum distance in $\rho$ is at least $\epsilon$. Define the Kullback-Leibler diameter of $T$ by
$$
d_{KL}(T) = \sup_{\theta, \theta' \in T} D(P_\theta \| P_{\theta'}).
$$
Then,
$$
\inf_{\hat\theta} \sup_{\theta \in \Theta} E_\theta\left[ \rho^2(\hat\theta, \theta) \right]
\ge \sup_{T \subset \Theta} \sup_{\epsilon > 0} \frac{\epsilon^2}{4}\left( 1 - \frac{d_{KL}(T) + \log 2}{\log M(T, \rho, \epsilon)} \right).
$$
6.3 Proof of Lower Bound
‰
‰
“
“
For any fixed Up0q , Wp0q P Opp1 , p1 q and Vp0q , Zp0q P Opp2 , p1 q where Up0q P Rp1ˆk , Vp0q P
Rp2 ˆk , Wp0q P Rp1 ˆpp1 ´kq , Vp0q P Rp2 ˆpp2 ´kq , define
!`
˘ “
‰
“
‰
Hǫ0 “ U , W , V , Z : U , W P Opp1 , p1 q with U P Rp1 ˆk , V , Z P Opp2 , p1 q
«
ff
« Jff
J
Vp0q )
V
with V P Rp2 ˆk , }U ´ Up0q }F ď ǫ0 , rU , W s
“
rU
,
W
s
.
p0q
p0q
J
ZJ
Zp0q
For any fixed Σx P Sp`1 , Σy P Sp`2 with κpΣx q “ κx , κpΣy q “ κy , consider the parametrization
Σxy “ Σx ΦΛΨJ Σy , for 0 ď λk`1 ă λk ă 1, define
«
ff
1{2
1{2
!
Σx
Σx pλk U V J ` λk`1 W Z J qΣy
Tǫ0 “ Σ “
,
1{2
1{2
Σy
Σy pλk V U J ` λk`1 ZW J qΣx
)
`
˘
Φ “ Σx´1{2 rU , W s, Ψ “ Σy´1{2 rV , Zs, U , W , V , Z P Hǫ0 .
It is straightforward to verify that Tǫ0 Ă Fpp1 , p2 , k, λk , λk`1 , κx , κy q. For any Σpiq P Tǫ0 , i “ 1, 2,
they yield to the parametrization,
«
ff
1{2
1{2
J `λ
J
Σx
Σx pλk Upiq Vpiq
k`1 Wpiq Zpiq qΣy
Σpiq “
,
1{2
1{2
J `λ
J
Σy pλk Vpiq Upiq
Σy
k`1 Zpiq Wpiq qΣx
˘
`
´1{2
piq
piq
where Upiq , Wpiq , Vpiq , Zpiq P Hǫ0 and the leading-k canonical vectors are Φ1:k “ Σx Upiq , Ψ1:k “
´1{2
Σy
Vpiq . We define a semi-metric on Tǫ0 as
›
›
›
›
›
›
›
›
ρpΣp1q , Σp2q q “ ›PΣ1{2 Φp1q ´ PΣ1{2 Φp2q › “ ›PUp1q ´ PUp2q › .
x
x
1:k
1:k
F
F
By Lemma 6.1,
$$D(P_{\Sigma^{(1)}} \| P_{\Sigma^{(2)}}) = \frac{n\Delta^2(1+\lambda_k\lambda_{k+1})}{2(1-\lambda_k^2)(1-\lambda_{k+1}^2)}\,\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Further, by the definition of $d_{KL}(T)$,
$$d_{KL}(T) = \frac{n\Delta^2(1+\lambda_k\lambda_{k+1})}{2(1-\lambda_k^2)(1-\lambda_{k+1}^2)} \sup_{\Sigma^{(1)}, \Sigma^{(2)} \in T_{\epsilon_0}} \|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2. \tag{6.2}$$
To bound the Kullback--Leibler diameter, note that for any $\Sigma^{(1)}, \Sigma^{(2)} \in T_{\epsilon_0}$, by definition,
$$[U_{(1)}, W_{(1)}]\begin{bmatrix} V_{(1)}^\top \\ Z_{(1)}^\top \end{bmatrix} = [U_{(2)}, W_{(2)}]\begin{bmatrix} V_{(2)}^\top \\ Z_{(2)}^\top \end{bmatrix},$$
which implies that the two sides are singular value decompositions of the same matrix. Therefore, there exists $Q \in O(p_1, p_1)$ such that
$$[U_{(2)}, W_{(2)}] = [U_{(1)}, W_{(1)}]Q, \qquad [V_{(2)}, Z_{(2)}] = [V_{(1)}, Z_{(1)}]Q. \tag{6.3}$$
Decompose $Q$ into four blocks,
$$Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}.$$
Substituting into (6.3),
$$U_{(2)} = U_{(1)}Q_{11} + W_{(1)}Q_{21}, \qquad V_{(2)} = V_{(1)}Q_{11} + Z_{(1)}Q_{21}.$$
Then,
$$\|U_{(2)} - U_{(1)}\|_F^2 = \|U_{(1)}(Q_{11} - I_k) + W_{(1)}Q_{21}\|_F^2 = \|U_{(1)}(Q_{11} - I_k)\|_F^2 + \|W_{(1)}Q_{21}\|_F^2 = \|Q_{11} - I_k\|_F^2 + \|Q_{21}\|_F^2.$$
The second equality holds because $U_{(1)}$ and $W_{(1)}$ have orthogonal column spaces, and the third because $U_{(1)}$ and $W_{(1)}$ have orthonormal columns. By the same argument,
$$\|V_{(2)} - V_{(1)}\|_F^2 = \|Q_{11} - I_k\|_F^2 + \|Q_{21}\|_F^2.$$
Notice that
$$\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2 = \|(U_{(1)} - U_{(2)})V_{(1)}^\top + U_{(2)}(V_{(1)} - V_{(2)})^\top\|_F^2 \le 2\|U_{(1)} - U_{(2)}\|_F^2 + 2\|V_{(1)} - V_{(2)}\|_F^2 = 4\|U_{(1)} - U_{(2)}\|_F^2 \le 8\big(\|U_{(1)} - U_{(0)}\|_F^2 + \|U_{(0)} - U_{(2)}\|_F^2\big) \le 16\epsilon_0^2.$$
Substituting into (6.2),
$$d_{KL}(T) \le \frac{8n\Delta^2(1+\lambda_k\lambda_{k+1})}{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}\,\epsilon_0^2. \tag{6.4}$$
Let $B_{\epsilon_0} = \{U \in O(p_1, k) : \|U - U_{(0)}\|_F \le \epsilon_0\}$. Under the semi-metric $\tilde\rho(U_{(1)}, U_{(2)}) = \|U_{(1)}U_{(1)}^\top - U_{(2)}U_{(2)}^\top\|_F$, we claim that the packing number of $H_{\epsilon_0}$ is lower bounded by the packing number of $B_{\epsilon_0}$. To prove this claim, it suffices to show that for any $U \in B_{\epsilon_0}$, there exist corresponding $W, V, Z$ such that $(U, W, V, Z) \in H_{\epsilon_0}$. First, by definition, $\|U - U_{(0)}\|_F \le \epsilon_0$. Let $W \in O(p_1, p_1-k)$ be the orthogonal complement of $U$. Then $[U, W] \in O(p_1, p_1)$, and therefore there exists $Q \in O(p_1, p_1)$ such that
$$[U, W] = [U_{(0)}, W_{(0)}]Q.$$
Set $[V, Z] = [V_{(0)}, Z_{(0)}]Q \in O(p_2, p_1)$; then
$$[U, W]\begin{bmatrix} V^\top \\ Z^\top \end{bmatrix} = [U_{(0)}, W_{(0)}]\begin{bmatrix} V_{(0)}^\top \\ Z_{(0)}^\top \end{bmatrix},$$
which implies $(U, W, V, Z) \in H_{\epsilon_0}$. Let
$$\epsilon = \alpha\epsilon_0 = c\Big(\sqrt{k \wedge (p_1 - k)}\ \wedge\ \sqrt{\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{n\Delta^2(1+\lambda_k\lambda_{k+1})}\,k(p_1-k)}\Big),$$
where $c \in (0,1)$ depends on $\alpha$ and is chosen small enough that $\epsilon_0 = \epsilon/\alpha \in (0, \sqrt{2[k \wedge (p_1-k)]}\,)$. By Corollary 6.4,
$$M(T_{\epsilon_0}, \rho, \alpha\epsilon_0) = M(H_{\epsilon_0}, \tilde\rho, \alpha\epsilon_0) \ge M(B_{\epsilon_0}, \tilde\rho, \alpha\epsilon_0) \ge \Big(\frac{1}{C\alpha}\Big)^{k(p_1-k)}.$$
Apply Lemma 6.6 with $T_{\epsilon_0}$, $\rho$, $\epsilon$:
$$\inf_{\hat\Phi_{1:k}}\sup_{\Sigma \in F} E\Big[\big\|P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}}\big\|_F^2\Big] \ge \sup_{T \subset \Theta}\sup_{\epsilon > 0}\frac{\epsilon^2}{4}\Big(1 - \frac{8c^2k(p_1-k) + \log 2}{k(p_1-k)\log\frac{1}{C\alpha}}\Big).$$
Choose $\alpha$ small enough that
$$1 - \frac{8c^2k(p_1-k) + \log 2}{k(p_1-k)\log\frac{1}{C\alpha}} \ge \frac{1}{2}.$$
Then the lower bound reduces to
$$\inf_{\hat\Phi_{1:k}}\sup_{\Sigma \in F} E\Big[\big\|P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}}\big\|_F^2\Big] \ge \frac{c^2}{8}\Big\{\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{n\Delta^2(1+\lambda_k\lambda_{k+1})}\,k(p_1-k)\ \wedge\ k\ \wedge\ (p_1-k)\Big\} \ge C^2 k\Big\{\Big(\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{\Delta^2}\cdot\frac{p_1-k}{n}\Big)\ \wedge\ 1\ \wedge\ \frac{p_1-k}{k}\Big\}.$$
By symmetry,
$$\inf_{\hat\Psi_{1:k}}\sup_{\Sigma \in F} E\Big[\big\|P_{\Sigma_y^{1/2}\hat\Psi_{1:k}} - P_{\Sigma_y^{1/2}\Psi_{1:k}}\big\|_F^2\Big] \ge C^2 k\Big\{\Big(\frac{(1-\lambda_k^2)(1-\lambda_{k+1}^2)}{\Delta^2}\cdot\frac{p_1-k}{n}\Big)\ \wedge\ 1\ \wedge\ \frac{p_1-k}{k}\Big\}.$$
The lower bound for the operator norm error can be obtained immediately by noticing that $P_{\Sigma_y^{1/2}\hat\Psi_{1:k}} - P_{\Sigma_y^{1/2}\Psi_{1:k}}$ has rank at most $2k$ and
$$\big\|P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}}\big\|^2 \ge \frac{1}{2k}\big\|P_{\Sigma_x^{1/2}\hat\Phi_{1:k}} - P_{\Sigma_x^{1/2}\Phi_{1:k}}\big\|_F^2.$$

6.4 Proof of Lemma 6.1
By simple algebra, the Kullback--Leibler divergence between two multivariate Gaussian distributions satisfies
$$D(P_{\Sigma^{(1)}} \| P_{\Sigma^{(2)}}) = \frac{n}{2}\Big\{\mathrm{Tr}\big(\Sigma_{(2)}^{-1}(\Sigma_{(1)} - \Sigma_{(2)})\big) - \log\det\big(\Sigma_{(2)}^{-1}\Sigma_{(1)}\big)\Big\}.$$
Notice that
$$\Sigma^{(i)} = \begin{bmatrix} \Sigma_x^{1/2} & \\ & \Sigma_y^{1/2} \end{bmatrix}\Omega^{(i)}\begin{bmatrix} \Sigma_x^{1/2} & \\ & \Sigma_y^{1/2} \end{bmatrix},$$
where
$$\Omega^{(i)} = \begin{bmatrix} I_{p_1} & \lambda_1 U_{(i)}V_{(i)}^\top + \lambda_2 W_{(i)}Z_{(i)}^\top \\ \lambda_1 V_{(i)}U_{(i)}^\top + \lambda_2 Z_{(i)}W_{(i)}^\top & I_{p_2} \end{bmatrix}.$$
Then,
$$D(P_{\Sigma^{(1)}} \| P_{\Sigma^{(2)}}) = \frac{n}{2}\Big\{\mathrm{Tr}\big(\Omega_{(2)}^{-1}\Omega_{(1)}\big) - (p_1 + p_2) - \log\det\big(\Omega_{(2)}^{-1}\Omega_{(1)}\big)\Big\}.$$
Also notice that
$$\Omega^{(i)} = \begin{bmatrix} I_{p_1} & \\ & I_{p_2} \end{bmatrix} + \frac{\lambda_1}{2}\begin{bmatrix} U_{(i)} \\ V_{(i)} \end{bmatrix}\begin{bmatrix} U_{(i)}^\top & V_{(i)}^\top \end{bmatrix} - \frac{\lambda_1}{2}\begin{bmatrix} U_{(i)} \\ -V_{(i)} \end{bmatrix}\begin{bmatrix} U_{(i)}^\top & -V_{(i)}^\top \end{bmatrix} + \frac{\lambda_2}{2}\begin{bmatrix} W_{(i)} \\ Z_{(i)} \end{bmatrix}\begin{bmatrix} W_{(i)}^\top & Z_{(i)}^\top \end{bmatrix} - \frac{\lambda_2}{2}\begin{bmatrix} W_{(i)} \\ -Z_{(i)} \end{bmatrix}\begin{bmatrix} W_{(i)}^\top & -Z_{(i)}^\top \end{bmatrix}.$$
Therefore $\Omega^{(1)}$ and $\Omega^{(2)}$ share the same set of eigenvalues: $1+\lambda_1$ with multiplicity $k$, $1-\lambda_1$ with multiplicity $k$, $1+\lambda_2$ with multiplicity $p_1-k$, $1-\lambda_2$ with multiplicity $p_1-k$, and $1$ with multiplicity $p_2-p_1$. This implies $\log\det(\Omega_{(2)}^{-1}\Omega_{(1)}) = 0$. On the other hand, by the block inversion formula, we can compute
$$\Omega_{(2)}^{-1} = \begin{bmatrix} I_{p_1} + \frac{\lambda_1^2}{1-\lambda_1^2}U_{(2)}U_{(2)}^\top + \frac{\lambda_2^2}{1-\lambda_2^2}W_{(2)}W_{(2)}^\top & -\frac{\lambda_1}{1-\lambda_1^2}U_{(2)}V_{(2)}^\top - \frac{\lambda_2}{1-\lambda_2^2}W_{(2)}Z_{(2)}^\top \\ -\frac{\lambda_1}{1-\lambda_1^2}V_{(2)}U_{(2)}^\top - \frac{\lambda_2}{1-\lambda_2^2}Z_{(2)}W_{(2)}^\top & I_{p_2} + \frac{\lambda_1^2}{1-\lambda_1^2}V_{(2)}V_{(2)}^\top + \frac{\lambda_2^2}{1-\lambda_2^2}Z_{(2)}Z_{(2)}^\top \end{bmatrix}.$$
Divide $\Omega_{(2)}^{-1}\Omega_{(1)} - I_{p_1+p_2}$ into blocks,
$$\Omega_{(2)}^{-1}\Omega_{(1)} - I_{p_1+p_2} = \begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}, \quad\text{where } J_{11} \in \mathbb{R}^{p_1\times p_1},\ J_{22} \in \mathbb{R}^{p_2\times p_2},$$
and
$$J_{11} = \frac{\lambda_1^2}{1-\lambda_1^2}\big(U_{(2)}U_{(2)}^\top - U_{(2)}V_{(2)}^\top V_{(1)}U_{(1)}^\top\big) + \frac{\lambda_2^2}{1-\lambda_2^2}\big(W_{(2)}W_{(2)}^\top - W_{(2)}Z_{(2)}^\top Z_{(1)}W_{(1)}^\top\big) - \frac{\lambda_1\lambda_2}{1-\lambda_1^2}\,U_{(2)}V_{(2)}^\top Z_{(1)}W_{(1)}^\top - \frac{\lambda_1\lambda_2}{1-\lambda_2^2}\,W_{(2)}Z_{(2)}^\top V_{(1)}U_{(1)}^\top,$$
$$J_{22} = \frac{\lambda_1^2}{1-\lambda_1^2}\big(V_{(2)}V_{(2)}^\top - V_{(2)}U_{(2)}^\top U_{(1)}V_{(1)}^\top\big) + \frac{\lambda_2^2}{1-\lambda_2^2}\big(Z_{(2)}Z_{(2)}^\top - Z_{(2)}W_{(2)}^\top W_{(1)}Z_{(1)}^\top\big) - \frac{\lambda_1\lambda_2}{1-\lambda_1^2}\,V_{(2)}U_{(2)}^\top W_{(1)}Z_{(1)}^\top - \frac{\lambda_1\lambda_2}{1-\lambda_2^2}\,Z_{(2)}W_{(2)}^\top U_{(1)}V_{(1)}^\top.$$
We spell out the algebra for $\mathrm{tr}(J_{11})$; $\mathrm{tr}(J_{22})$ can be computed in exactly the same fashion. First,
$$\mathrm{tr}\big(U_{(2)}U_{(2)}^\top - U_{(2)}V_{(2)}^\top V_{(1)}U_{(1)}^\top\big) = \frac{1}{2}\mathrm{tr}\big(U_{(2)}V_{(2)}^\top V_{(2)}U_{(2)}^\top + U_{(1)}V_{(1)}^\top V_{(1)}U_{(1)}^\top - 2U_{(2)}V_{(2)}^\top V_{(1)}U_{(1)}^\top\big) = \frac{1}{2}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Similarly,
$$\mathrm{tr}\big(W_{(2)}W_{(2)}^\top - W_{(2)}Z_{(2)}^\top Z_{(1)}W_{(1)}^\top\big) = \frac{1}{2}\|W_{(1)}Z_{(1)}^\top - W_{(2)}Z_{(2)}^\top\|_F^2.$$
By assumption (6.1), i.e., $U_{(1)}V_{(1)}^\top + W_{(1)}Z_{(1)}^\top = U_{(2)}V_{(2)}^\top + W_{(2)}Z_{(2)}^\top$, we have
$$\mathrm{tr}\big(W_{(2)}W_{(2)}^\top - W_{(2)}Z_{(2)}^\top Z_{(1)}W_{(1)}^\top\big) = \frac{1}{2}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Further,
$$\mathrm{tr}\big(U_{(2)}V_{(2)}^\top Z_{(1)}W_{(1)}^\top\big) = \mathrm{tr}\Big(U_{(2)}V_{(2)}^\top\big(U_{(2)}V_{(2)}^\top + W_{(2)}Z_{(2)}^\top - U_{(1)}V_{(1)}^\top\big)^\top\Big) = \mathrm{tr}\Big(U_{(2)}V_{(2)}^\top\big(U_{(2)}V_{(2)}^\top - U_{(1)}V_{(1)}^\top\big)^\top\Big) = \frac{1}{2}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2,$$
and by the same argument,
$$\mathrm{tr}\big(W_{(2)}Z_{(2)}^\top V_{(1)}U_{(1)}^\top\big) = \frac{1}{2}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Summing these equations,
$$\mathrm{tr}(J_{11}) = \frac{1}{2}\Big\{\frac{\lambda_1^2}{1-\lambda_1^2} + \frac{\lambda_2^2}{1-\lambda_2^2} - \frac{\lambda_1\lambda_2}{1-\lambda_1^2} - \frac{\lambda_1\lambda_2}{1-\lambda_2^2}\Big\}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2 = \frac{\Delta^2(1+\lambda_1\lambda_2)}{2(1-\lambda_1^2)(1-\lambda_2^2)}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Repeating the argument for $J_{22}$, one can show that
$$\mathrm{tr}(J_{22}) = \mathrm{tr}(J_{11}) = \frac{\Delta^2(1+\lambda_1\lambda_2)}{2(1-\lambda_1^2)(1-\lambda_2^2)}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
Therefore,
$$D(P_{\Sigma^{(1)}} \| P_{\Sigma^{(2)}}) = \frac{n}{2}\Big(\mathrm{Tr}\big(\Omega_{(2)}^{-1}\Omega_{(1)}\big) - (p_1+p_2)\Big) = \frac{n}{2}\big(\mathrm{tr}(J_{11}) + \mathrm{tr}(J_{22})\big) = \frac{n\Delta^2(1+\lambda_1\lambda_2)}{2(1-\lambda_1^2)(1-\lambda_2^2)}\|U_{(1)}V_{(1)}^\top - U_{(2)}V_{(2)}^\top\|_F^2.$$
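The Gaussian Kullback--Leibler identity used at the start of this proof can be checked numerically; the sketch below (ours, assuming NumPy; the function names are illustrative and not part of the derivation) compares the closed form with a Monte Carlo estimate:

import numpy as np

def gaussian_kl(sigma1, sigma2, n=1):
    # KL between N(0, Sigma1) and N(0, Sigma2) for n i.i.d. samples:
    # (n/2) * [ Tr(Sigma2^{-1}(Sigma1 - Sigma2)) - log det(Sigma2^{-1} Sigma1) ].
    s2_inv = np.linalg.inv(sigma2)
    trace_term = np.trace(s2_inv @ (sigma1 - sigma2))
    _, logdet = np.linalg.slogdet(s2_inv @ sigma1)
    return 0.5 * n * (trace_term - logdet)

def logpdf(x, s):
    # Log density of N(0, s) evaluated at the rows of x.
    _, logdet = np.linalg.slogdet(2 * np.pi * s)
    return -0.5 * (np.einsum("ij,jk,ik->i", x, np.linalg.inv(s), x) + logdet)

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
S1, S2 = A @ A.T + np.eye(4), B @ B.T + np.eye(4)
x = rng.multivariate_normal(np.zeros(4), S1, size=100000)

# Closed form vs. Monte Carlo estimate of E_1[log p1(X) - log p2(X)]:
print(gaussian_kl(S1, S2), np.mean(logpdf(x, S1) - logpdf(x, S2)))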
References
Anderson, T. W. (1999). Asymptotic theory for canonical correlation analysis. Journal of Multivariate Analysis 70 (1), 1–29.
Arora, R. and K. Livescu (2013). Multi-view cca-based acoustic features for phonetic recognition
across speakers and domains. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE
International Conference on, pp. 7135–7139. IEEE.
Cai, T., Z. Ma, and Y. Wu (2015). Optimal estimation and rank detection for sparse spiked
covariance matrices. Probability theory and related fields 161 (3-4), 781–815.
Cai, T. T., Z. Ma, Y. Wu, et al. (2013). Sparse pca: Optimal rates and adaptive estimation. The
Annals of Statistics 41 (6), 3074–3110.
Cai, T. T. and A. Zhang (2017). Rate-optimal perturbation bounds for singular subspaces with
applications to high-dimensional statistics. The Annals of Statistics, to appear .
Chaudhuri, K., S. M. Kakade, K. Livescu, and K. Sridharan (2009). Multi-view clustering via
canonical correlation analysis. In Proceedings of the 26th annual international conference on
machine learning, pp. 129–136. ACM.
Chen, X., H. Liu, and J. G. Carbonell (2012). Structured sparse canonical correlation analysis. In
International Conference on Artificial Intelligence and Statistics, pp. 199–207.
Dhillon, P. S., D. Foster, and L. Ungar (2011). Multi-view learning of word embeddings via cca.
In Advances in Neural Information Processing Systems (NIPS), Volume 24.
Faruqui, M. and C. Dyer (2014). Improving vector space word representations using multilingual
correlation. Association for Computational Linguistics.
Foster, D. P., R. Johnson, S. M. Kakade, and T. Zhang (2008). Multi-view dimensionality reduction
via canonical correlation analysis. Technical report.
Friman, O., M. Borga, P. Lundberg, and H. Knutsson (2003). Adaptive analysis of fmri data.
NeuroImage 19 (3), 837–845.
Fukumizu, K., F. R. Bach, and M. I. Jordan (2009). Kernel dimension reduction in regression. The
Annals of Statistics, 1871–1905.
Gao, C., Z. Ma, Z. Ren, H. H. Zhou, et al. (2015). Minimax estimation in sparse canonical
correlation analysis. The Annals of Statistics 43 (5), 2168–2197.
Gao, C., Z. Ma, and H. H. Zhou (2017). Sparse cca: Adaptive estimation and computational
barriers. The Annals of Statistics, to appear .
Gong, Y., Q. Ke, M. Isard, and S. Lazebnik (2014). A multi-view embedding space for modeling
internet images, tags, and their semantics. International journal of computer vision 106 (2),
210–233.
Horn, R. A. and C. R. Johnson (1991). Topics in Matrix Analysis. Cambridge University Press, New York.
Hotelling, H. (1936). Relations between two sets of variables. Biometrika 28, 312–377.
Kakade, S. M. and D. P. Foster (2007). Multi-view regression via canonical correlation analysis. In
In Proc. of Conference on Learning Theory.
Kim, T.-K., S.-F. Wong, and R. Cipolla (2007). Tensor canonical correlation analysis for action
classification. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference
on, pp. 1–8. IEEE.
Mathias, R. (1993). The hadamard operator norm of a circulant and applications. SIAM journal
on matrix analysis and applications 14 (4), 1152–1167.
Rasiwasia, N., J. Costa Pereira, E. Coviello, G. Doyle, G. R. Lanckriet, R. Levy, and N. Vasconcelos
(2010). A new approach to cross-modal multimedia retrieval. In Proceedings of the 18th ACM
international conference on Multimedia, pp. 251–260. ACM.
Rudelson, M. and R. Vershynin (2013). Hanson-wright inequality and sub-gaussian concentration.
Electron. Commun. Probab. 18, no. 82, 1–9.
Sridharan, K. and S. M. Kakade (2008). An information theoretic framework for multi-view
learning. In R. A. Servedio and T. Zhang (Eds.), COLT, pp. 403–414. Omnipress.
Szarek, S. J. (1982). Nets of grassmann manifold and orthogonal group. In Proceedings of research
workshop on Banach space theory (Iowa City, Iowa, 1981), Volume 169, pp. 185.
Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv
preprint arXiv:1011.3027 .
Vu, V. Q., J. Lei, et al. (2013). Minimax sparse principal subspace estimation in high dimensions.
The Annals of Statistics 41 (6), 2905–2947.
Wang, W., R. Arora, K. Livescu, and J. Bilmes (2015). On deep multi-view representation learning.
In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1083–
1092.
Wedin, P.-Å. (1972). Perturbation bounds in connection with singular value decomposition. BIT
Numerical Mathematics 12 (1), 99–111.
Wedin, P. Å. (1983). On angles between subspaces of a finite dimensional inner product space. In
Matrix Pencils, pp. 263–285. Springer.
Witten, D. M., R. Tibshirani, and T. Hastie (2009). A penalized matrix decomposition, with
applications to sparse principal components and canonical correlation analysis. Biostatistics,
kxp008.
Yu, B. (1997). Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pp. 423–435. Springer.
Using Deep Convolutional Networks for
Gesture Recognition in American Sign Language
Vivek Bheda and N. Dianna Radpour
Department of Computer Science, Department of Linguistics
State University of New York at Buffalo
{vivekkan, diannara}@buffalo.edu
Abstract
In the realm of multimodal communication, sign
language is, and continues to be, one of the most
understudied areas. In line with recent advances in
the field of deep learning, there are far reaching
implications and applications that neural networks can
have for sign language interpretation. In this paper, we
present a method for using deep convolutional
networks to classify images of both the letters and
digits in American Sign Language.
1. Introduction
Sign Language is a unique type of communication that
often goes understudied. While the translation process
between signs and a spoken or written language is formally
called ‘interpretation,’ the function that interpreting plays
is the same as that of translation for a spoken language. In
our research, we look at American Sign Language (ASL),
which is used in the USA and in English-speaking Canada
and has many different dialects. There are 22 handshapes
that correspond to the 26 letters of the alphabet, and you
can sign the 10 digits on one hand.
Figure 1. American Sign Language Alphabet
Figure 2. American Sign Language Numbers
One of the nuances in sign language is how often
fingerspelling is used. Fingerspelling is a method of
spelling words using only hand gestures. One of the
reasons the fingerspelling alphabet plays such a vital role
in sign language is that signers used it to spell out names of
anything for which there is not a sign. People's names,
places, titles, brands, new foods, and uncommon animals
or plants all fall broadly under this category, and this list is
by no means exhaustive. Due to this reason, the recognition
process for each individual letter plays quite a crucial role
in its interpretation.
2. Related Work
Convolutional Neural Networks have been extremely
successful in
image recognition and classification
problems, and have been successfully implemented for
human gesture recognition in recent years. In particular,
there has been work done in the realm of sign language
recognition using deep CNNs, with input-recognition that
is sensitive to more than just pixels of the images. With the
use of cameras that sense depth and contour, the process is
made much easier via developing characteristic depth and
motion profiles for each sign language gesture [5].
The use of depth-sensing technology is quickly
growing in popularity, and other tools have been
incorporated into the process that have proven successful.
Developments such as custom-designed color gloves have
been used to facilitate the recognition process and make the
feature extraction step more efficient by making certain
gestural units easier to identify and classify [8].
Until recently, however, methods of automatic
sign language recognition weren’t able to make use of the
depth-sensing technology that is as widely available today.
Previous works made use of very basic camera technology
to generate datasets of simply images, with no depth or
contour information available, just the pixels present.
Attempts at using CNNs to handle the task of classifying
images of ASL letter gestures have had some success [7],
but using a pre-trained GoogLeNet architecture.
3. Method
Our overarching approach was one of basic supervised
learning using mini-batch stochastic gradient descent. Our
task was that of classification using deep convolutional
neural networks to classify every letter and the digits, 0-9,
in ASL. The inputs were fixed size high-pixel images, 200
by 200 or 400 by 400, being padded and resized to 200 by
200.
3.1 Architecture
Most implementations surrounding this task have
attempted it via transfer learning, but our network was
trained from scratch. Our general architecture was a fairly
common CNN architecture, consisting of multiple
convolutional and dense layers. The architecture comprised 3
groups of 2 convolutional layers, each group followed by a
max-pooling layer and a dropout layer; then two fully
connected layers, each followed by a dropout layer; and one
final output layer.
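The paper does not list exact layer sizes; the following Keras sketch (our illustration, not the authors' code) shows a network of this shape, with filter counts, dense widths, kernel sizes, and dropout rates chosen as placeholder assumptions. The optimizer and loss match the training setup described in Sections 3 and 5.1 (mini-batch SGD, categorical cross-entropy).

from tensorflow.keras import layers, models

def build_model(num_classes, input_shape=(200, 200, 3)):
    # 3 blocks of (Conv-Conv-MaxPool-Dropout), then 2 (Dense-Dropout) blocks,
    # followed by a softmax output layer.
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (32, 64, 128):          # filter counts are illustrative
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    for units in (512, 256):               # dense widths are illustrative
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="sgd",
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_model(num_classes=26)        # 26 letters; use 10 for the digit model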
4. Data
We initially trained and tested on a self-generated dataset
of images we took ourselves. This dataset was a collection
of 25 images from 5 people for each alphabet and the digits
1-9. Since our dataset was not constructed in a controlled
setting, it was especially prone to differences in light, skin
color, and other differences in the environment that the
images were captured in, so we also used a premade
dataset to compare our dataset’s performance with [3].
Additionally, a pipeline was developed that can be used so
people are able to generate and continue adding images to
this dataset.
4.1 Preprocessing
For generating our own dataset, we captured the images for
each sign, then removed the backgrounds from each of the
images using background-subtraction techniques. When we
initially split the dataset into two for training and
validation, the validation accuracy showed to be high.
However, when we used datasets from two different
sources, i.e. training on ours and testing on the premade
and vice versa, the validation accuracy drastically
decreased. Since training on one dataset and validating on
another was not yielding as accurate of results, we used the
premade dataset for the different gestures to train the
network which yielded the following results.
Figure 4. Training Accuracy on ASL Alphabets
Figure 3. Network Architecture
Figure 5. Training Accuracy on ASL Digits
Figure 6. Training Loss on ASL Alphabet
Figure 8. Validation Accuracy on ASL Letters
Figure 7. Training Loss on ASL Digits
4.2 Data Augmentation
We saw performance improve differently on our two
datasets with data augmentation. By transforming the
images slightly (rotating by up to 20 degrees, translating
by up to 20% on both axes), accuracy increased by
approximately 0.05. We also flipped the images
horizontally, since signs can be made with either hand. While it
wasn’t extremely effective, we saw that with better and
more representative initial training data, augmenting
improved the performance more drastically. This was
observed after augmentation of the premade dataset, which
improved the performance by nearly 20%.
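A sketch of this augmentation, assuming the Keras ImageDataGenerator API (our illustration; the exact augmentation code used for the experiments is not given in the paper):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation matching the transforms described above: small rotations,
# translations of up to 20% on both axes, and horizontal flips.
augmenter = ImageDataGenerator(
    rotation_range=20,        # rotate by up to 20 degrees
    width_shift_range=0.2,    # translate by up to 20% horizontally
    height_shift_range=0.2,   # translate by up to 20% vertically
    horizontal_flip=True,     # signs can be made with either hand
)

# Example usage with an image tensor x_train and one-hot labels y_train:
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=20)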
5. Results
We observed 82.5% accuracy on the alphabet gestures, and
97% validation set accuracy on digits, when using the NZ
ASL dataset. On our self-generated dataset, we observed
much lower accuracy measures, as was expected since our
data was less uniform than that which was collected under
studio settings with better equipment. We saw 67%
accuracy on letters of the alphabet, and 70% accuracy on
the digits. In terms of training time, the letter model
converged in approximately 25 minutes, and the digit
model in nearly 10 minutes.
Figure 9. Validation Accuracy on ASL Digits
Figure 10. Validation Loss on ASL Letters
Figure 11. Validation Loss on ASL Digits
5.1 Evaluation
We trained with a categorical cross entropy loss function
for both our datasets. It is a fairly common loss function
used along with image classification problems.
Initially, we observed low accuracy measures when testing
on the validation set of the self-generated data, which we
accounted largely to the lighting and skin tone variations in
the images. The higher accuracy measure for the digits was
expected, since the gestures for the digits are much more
distinguishable and easier to classify. Compared to
previous methods working on this same task, our network
performed quite well, considering that RF-JA used both a
color glove and a depth-sensing Kinect camera. Our higher
accuracy compared to Stanford's method was likely due to
their lack of background subtraction for the images, since
they used a large dataset from ILSVRC2012 as part of a
competition.
Method                  Accuracy (%)
deepCNN (our method)    82.5
Stanford deepCNN [7]    72
RF-JA+C (h-h) [8]       90
RF-JA+C (l-o-o) [8]     70
Figure 12. Comparison of previous methods with ours; Stanford
didn’t use background subtraction, RF-JA(h-h) split the training
and validation set 50-50, (l-o-o) omitted specific data.
6. Conclusions and Future Work
In this paper, we described a deep learning approach for
a classification algorithm of American Sign Language.
Our results and process were severely affected and
hindered by skin color and lighting variations in our
self-generated data which led us to resort to a pre-made
professionally constructed dataset. With a camera like
Microsoft’s Kinect that has a depth sensor, this problem
is easy to solve [5]. However, such cameras and
technology are not widely accessible, and can be costly.
Our method shows to have potential in solving this
problem using a simple camera, if enough substantial
training data is provided, which can be continuously
done and added via the aforementioned processing
pipeline. Since more people have access to simple
camera technologies, this could contribute to a scalable
solution.
In recognizing that classification is a limited
goal, we plan on incorporating structured PGMs in
future implementations of this classification schema that
would describe the probability distributions of the
different letters’ occurrences based on their sequential
contexts. We think that by accounting for how the
individual letters interact with each other directly (e.g.
the likelihood of the vowel 'O' following the letter 'J'),
the accuracy of the classification would increase. This
HMM approach with sequential pattern boosting
(SP-boosting) has been done with the actual gesture
units that occur in certain gestures’ contexts, i.e.
capturing the upper-arm movements that precede a
certain letter to incorporate that probability weight into
the next unit’s class, [6] and processing sequential
phonological information in tandem with gesture
recognition [4], but not for part-of-word tagging with an
application like what we hope to achieve.
We also recognize that the representation itself
makes a huge difference in the performance of
algorithms like ours, so we hope to find the best
representation of our data, and building off our results
from this research, incorporate it into a zero-shot
learning process. We see zero-shot learning as having
the potential to facilitate the translation process from
American Sign Language into English. Implementing
one-shot learning for translating the alphabet and
numbers from American Sign Language to written
English, and comparing it with a pure deep learning
heuristic could be successful and have the potential to
benefit from error correction via language models.
Recent implementations of one-shot adaptation have also
had success in solving real world computer vision tasks,
and effectively trained deep convolutional neural
networks using very little domain-specific data, even as
limited as single-image datasets. We ultimately aim to
create a holistic and comprehensive representation
learning system for which we have designed a set of
features that can be recognized from simple gesture
images that will optimize the translation process.
7. References
[1] X. Chen and A. Yuille. Articulated pose estimation by
a graphical model with image dependent pairwise
relations. In Advances in Neural Information
Processing Systems (NIPS), 2014.
[2] T. Pfister, J. Charles, and A. Zisserman. Flowing
convnets for human pose estimation in videos. In IEEE
International Conference on Computer Vision, 2015.
[3] Barczak, A.L.C., Reyes, N.H., Abastillas, M., Piccio,
A., Susnjak, T. (2011), A new 2D static hand gesture
colour image dataset for ASL gestures, Research Letters in
the Information and Mathematical Sciences, 15, 12-20
[4] Kim, Taehwan & Livescu, K & Shakhnarovich, Greg.
(2012). American sign language fingerspelling recognition
with phonological feature-based tandem models. In IEEE
Spoken Language Technology Workshop (SLT), 119-124.
[5] Agarwal, Anant & Thakur, Manish. Sign Language
Recognition using Microsoft Kinect. In IEEE International
Conference on Contemporary Computing, 2013.
[6] Cooper, H., Ong, E.J., Pugeault, N., Bowden, R.: Sign
language recognition using sub-units. The Journal of
Machine Learning Research, 13(1), 2205–2231, 2012.
[7] Garcia, Brandon and Viesca, Sigberto. Real-time
American Sign Language Recognition with Convolutional
Neural Networks. In Convolutional Neural Networks for
Visual Recognition at Stanford University, 2016.
[8] Cao Dong, Ming C. Leu and Zhaozheng Yin. American
Sign Language Alphabet Recognition Using Microsoft
Kinect. In IEEE International Conference on Computer
Vision and Pattern Recognition Workshops, 2015.
arXiv:1705.10091v1 [cs.IT] 29 May 2017
Rate (n − 1)/n Systematic
MDS Convolutional Codes over GF(2m)
Ángela Barbero
Universidad de Valladolid
47011 Valladolid, Spain
Email: [email protected]
Øyvind Ytrehus
Simula@UiB and University of Bergen
N-5020 Bergen, Norway
Email: [email protected]
Abstract
A systematic convolutional encoder of rate $(n-1)/n$ and maximum degree $D$ generates a code of free distance at most $\mathcal{D} = D + 2$ and, at best, a column distance profile (CDP) of $[2, 3, \ldots, \mathcal{D}]$. A code is Maximum Distance Separable (MDS) if it possesses this CDP. Applied on a communication channel over which packets are transmitted sequentially and which loses (erases) packets randomly, such a code allows the recovery from any pattern of $j$ erasures in the first $j$ $n$-packet blocks for $j < \mathcal{D}$, with a delay of at most $j$ blocks counting from the first erasure. This paper addresses the problem of finding the largest $\mathcal{D}$ for which a systematic rate $(n-1)/n$ code over $GF(2^m)$ exists, for given $n$ and $m$. In particular, constructions for rates $(2^m-1)/2^m$ and $(2^{m-1}-1)/2^{m-1}$ are presented which provide optimum values of $\mathcal{D}$ equal to 3 and 4, respectively. A search algorithm is also developed, which produces new codes for field sizes $2^m \le 2^{14}$. Using a complete search version of the algorithm, the maximum value of $\mathcal{D}$, and codes that achieve it, are determined for all code rates $\ge 1/2$ and every field size $GF(2^m)$ for $m \le 5$ (and for some rates for $m = 6$).
I. INTRODUCTION
In many practical communication applications, such as multimedia transmission over packet erasure channels, on-time
delivery is an important quality-of-service criterion. Traditional ARQ systems, for example the one used by TCP for transport
layer unicast service, suffer from long delays due to erasures when the round-trip time is large. This has led to an increased
interest in the design and analysis of systems based on packet-level error correcting codes. Such coded schemes are also known
to be beneficial in other transport layer models, for example in the multi-path case.
Two main approaches to this coding problem have been discussed in the literature. The deterministic approach [1], [2], [3]
is to send packets using a fixed 2m -ary convolutional code with a good column distance profile. This approach is discussed
in Subsection II-B. Random coding was proposed as a solution in [4], [5]. In these schemes, the sender transmits k uncoded
information packets, followed by n − k parity check packets formed by random linear combinations of all information packets
that have not been acknowledged by the receiver so far. Subsection II-C describes this approach, and also discusses a hybrid
approach that combines deterministic and random coding.
A. Contributions
We present new codes in Section III. In Section III-A we present two new, general, and optimum constructions of MDS
convolutional codes. In the literature, there exist only a few general constructions of high-rate convolutional codes: As far as
we know, only the Wyner-Ash codes [6] and their binary generalizations ([7], and Thms. 7.10 and 7.13 in [8]). We present a
simple (but as far as we can see, not previously described in the literature) distance-3 construction. This code has the same
rate and Viterbi complexity as the binary Wyner-Ash code, but has a better column distance profile. We also present a much
more interesting algebraic distance-4 construction in Proposition 3. In Section III-B we describe a search algorithm and in
Section III-C we present the codes found by the algorithm. For most parameters, these codes are better (in a sense which will
be made more precise) than previously known codes. Further, we present simple upper bounds in Section IV.
By convention we will call a convolutional code systematic if it has a systematic encoder; i.e. one that preserves all
information symbols and obtains redundancy by extra parity symbols. 2m -ary systematic rate (n − 1)/n convolutional encoders
are useful in order to obtain fast recovery of packet erasures in the common case of channels with moderate erasure rates, and
we will focus only on this class of codes.
1 This work is supported by Ministerio de Economı́a, Industria y Competitividad, Gobierno de España, through project MTM2013-46949-P, the Estonian
Research council through project EMP133, the Norwegian Research Council through the SARDS project.
II. BACKGROUND
A. Notation
For a thorough introduction to convolutional codes, please see [9]. In the following we describe the concept of a $2^m$-ary MDS convolutional code in a way which is convenient for our purposes in this paper.
Let $m \ge 1$, $n \ge 2$, $k = n-1$ be integers, $F = GF(2^m)$, and define the matrices and vectors
$$R_0 = (r_{0,1}, \ldots, r_{0,k}) \in F^k,$$
where $F^k$ is the $k$-dimensional space of row vectors over $F$,
$$H_0 = (R_0 \mid 1) \in F^n,$$
where $F^{r\times c}$ denotes the space of matrices with $r$ rows and $c$ columns over $F$. For $i \ge 1$ define
$$R_i = (r_{i,1}, \ldots, r_{i,k}) \in F^k, \qquad H_i = \begin{pmatrix} H_{i-1} \\ (R_i \mid 0) \end{pmatrix} \in F^{(i+1)\times n},$$
and, for an integer $L \ge 2$, let
$$H^{(L)} = \Big(H_L,\ \begin{pmatrix} 0_{1\times n} \\ H_{L-1} \end{pmatrix},\ \ldots,\ \begin{pmatrix} 0_{L\times n} \\ H_0 \end{pmatrix}\Big) \in F^{(L+1)\times n(L+1)}, \tag{1}$$
where $0_{r\times c}$ is the all-zero matrix with $r$ rows and $c$ columns. Then $H^{(L)}$ is the parity check matrix of the $L$th truncated block code $C^{(L)}$ of a (systematic) convolutional code $C$; thus a vector of length $(l+1)n$, $l \le L$,
$$[v]_l = (v_1^{(0)}, \ldots, v_n^{(0)}, v_1^{(1)}, \ldots, v_n^{(1)}, \ldots, v_1^{(l)}, \ldots, v_n^{(l)}) \in F^{(l+1)n}$$
is a codeword in $C^{(l)}$ if and only if the syndrome
$$H^{(l)}[v]_l^\top = (0, \ldots, 0)^\top \in F^{(l+1)\times 1}.$$
A systematic encoder for the code $C^{(L)}$ is represented by
$$G^{(L)} = \begin{pmatrix} G_0 & G_1 & \cdots & G_L \\ & G_0 & \cdots & G_{L-1} \\ & & \ddots & \vdots \\ & & & G_0 \end{pmatrix} \in F^{k(L+1)\times n(L+1)}, \tag{2}$$
where
$$G_0 = (I_k \mid R_0^\top) \in F^{k\times n}, \qquad G_i = (0_k \mid R_i^\top) \in F^{k\times n} \text{ for } i > 0,$$
and $I_k$ and $0_k$ are the $k\times k$ identity and zero matrices, respectively. It is straightforward to verify that $G^{(L)} \times (H^{(L)})^\top = 0_{k(L+1)\times(L+1)}$.
Example 1. Let $F = GF(2^3)$ with primitive element $\alpha$ defined by $\alpha^3 + \alpha + 1 = 0$. Then the parity and generator matrices
$$H^{(2)} = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & \alpha & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ \alpha^3 & 1 & 0 & 1 & \alpha & 0 & 1 & 1 & 1 \end{pmatrix}$$
and
$$G^{(2)} = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & \alpha^3 \\ 0 & 1 & 1 & 0 & 0 & \alpha & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & \alpha \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}$$
define a truncated code $C^{(2)}$, which is a rate $6/9$ block code over $F$. Note that the matrices are completely determined by the parity check coefficients $r_{i,j}$, $i = 0, \ldots, L$, $j = 1, \ldots, k$.
In the conventional polynomial notation of convolutional codes [9], the parity check matrix can be described as
$$H(x) = \Big(\sum_{i=0}^{D} r_{i,1}x^i,\ \ldots,\ \sum_{i=0}^{D} r_{i,k}x^i,\ 1\Big) \in F[x].$$
In Example 1, $H(x) = (1 + x + \alpha^3x^2,\ 1 + \alpha x + x^2,\ 1)$. Similarly, the corresponding polynomial generator matrix is
$$G(x) = \begin{pmatrix} 1 & 0 & 1 + x + \alpha^3x^2 \\ 0 & 1 & 1 + \alpha x + x^2 \end{pmatrix}.$$
B. MDS convolutional codes constructed from superregular matrices
In the deterministic approach [1], [2], [3], the goal is to design codes with an optimum column distance profile, which we
will define below.
The $l$-th column distance $d_l = d_l(C)$ of a convolutional code $C$ is the minimum Hamming weight of any truncated codeword $[c]_l$ with the first block $(c_1^{(0)}, \ldots, c_n^{(0)})$ nonzero, and the column distance profile (CDP) is the non-decreasing sequence $(d_0, d_1, d_2, \ldots, d_D = \mathcal{D}, \mathcal{D}, \mathcal{D}, \ldots)$, where $\mathcal{D}$ is the free distance of the code and $D$ is the index at which the CDP reaches $\mathcal{D}$. The CDP was originally studied for its significance for the performance of sequential decoding (please see Ch. 13 of [9]). Recently the CDP has received renewed attention in the context of $2^m$-ary codes, due to its importance for fast recovery from losses of symbols on an erasure channel.
Recall that we consider only convolutional codes of rate $k/n = (n-1)/n$ that have a systematic encoder. In this case, by the Singleton bound for truncated block codes, $d_0 \le 2$, and by similar linear algebra arguments, $d_l \le d_{l-1} + 1$ for $l > 0$. Moreover, $d_l = d_{l-1}$ for $l > D$. So the best column distance profile one can hope to find in a code with a systematic encoder is
$$d_0 = 2,\ d_1 = 3,\ \ldots,\ d_j = j+2,\ \ldots,\ d_D = D + 2 = \mathcal{D}. \tag{3}$$
By an MDS convolutional code, in this paper we will mean a code with a CDP as in (3).
Remark 1. The concept of Strongly-MDS codes was introduced in [2]. This concept takes into account that for some codes
that do not possess a systematic encoder, the free distance may grow beyond δ + 2, where δ is the memory of a minimal
encoder. In order not to complicate the notation, and since Viterbi complexity is not an issue in this paper, we omit the details.
Definition 1. Consider a lower triangular matrix
$$SR = \begin{pmatrix} r_0 & 0 & 0 & \cdots & 0 \\ r_1 & r_0 & 0 & \cdots & 0 \\ r_2 & r_1 & r_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ r_L & r_{L-1} & r_{L-2} & \cdots & r_0 \end{pmatrix}$$
where each element $r_i \in F$.
Consider a square submatrix $P$ of size $p$ of $SR$, formed by the entries of $SR$ in the rows with indices $1 \le i_1 < i_2 < \cdots < i_p \le L+1$ and columns with indices $1 \le j_1 < \cdots < j_p \le L+1$. $P$, and its corresponding minor, are proper if $j_l \le i_l$ for all $l \in \{1, \ldots, p\}$.
$SR$ is superregular if all its proper $p\times p$ minors are nonsingular for every $p \le L+1$.
When the matrix $SR$ is upper triangular, the definition of proper submatrices is analogous.
A $\gamma\times\gamma$ superregular matrix can be used to construct a rate $1/2$ code in two ways: (1) [1] a systematic MDS convolutional code with CDP as in (3) with $\mathcal{D} = D + 2 = \gamma + 1$, (2) [2] when $\gamma = 2\delta + 1$, a strongly-MDS code (in general nonsystematic) with a parity check matrix of maximum degree $\delta$ and the same CDP as for the systematic codes in case (1).
with a parity check matrix of max degree δ and the same CDP as for the systematic codes in case (1).
While superregular matrices are known to exist for all dimensions if the field is large enough, general efficient constructions
are not known, and for γ & 10 the minimum field size for which a γ × γ superregular matrix exists is not known. Another
problem with the deterministic approach is that the existing design methods do not allow a simple construction of codes of
high rate and/or high degree. Codes of higher rates (which are desirable in many practical cases) can also be constructed
from these superregular matrices, but this involves deleting columns, so that the conditions on a superregular matrix are too
strict. This means that in practice only simple codes can be constructed in this way. Since superregular matrices are so hard to
construct, the reduction to the superregular matrix problem blocks the code construction. Therefore we generalize Definition 1
as follows:
Definition 2. Consider an $s$-lower triangular matrix (where $s$ is a positive integer)
$$SSR = \begin{pmatrix}
r_{0,1} & \cdots & r_{0,s} & 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
r_{1,1} & \cdots & r_{1,s} & r_{0,1} & \cdots & r_{0,s} & 0 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
r_{2,1} & \cdots & r_{2,s} & r_{1,1} & \cdots & r_{1,s} & r_{0,1} & \cdots & r_{0,s} & \cdots & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\
r_{L-1,1} & \cdots & r_{L-1,s} & r_{L-2,1} & \cdots & r_{L-2,s} & r_{L-3,1} & \cdots & r_{L-3,s} & \cdots & 0 & \cdots & 0 \\
r_{L,1} & \cdots & r_{L,s} & r_{L-1,1} & \cdots & r_{L-1,s} & r_{L-2,1} & \cdots & r_{L-2,s} & \cdots & r_{0,1} & \cdots & r_{0,s}
\end{pmatrix} \tag{4}$$
Consider a square submatrix $P$ of size $p$ of $SSR$, formed by the entries of $SSR$ in the rows with indices $1 \le i_1 < i_2 < \cdots < i_p \le L+1$ and columns with indices $1 \le j_1 < \cdots < j_p \le s(L+1)$. $P$, and its corresponding minor, are proper if $j_l \le s\cdot i_l$ for all $l \in \{1, \ldots, p\}$.
The matrix $SSR$ is called $s$-superregular iff all of its proper $p\times p$ minors, for any $p \le L+1$, are nonsingular.
Table I. Some rate (n − 1)/n MDS codes (not necessarily systematic) described in the literature.
Rate   Field size   D   Description
1/2    4            4   [11]
1/2    8            6   [11]
1/2    8            6   [2] superregular
1/2    32           8   [2] superregular
2/3    16           5   [2] ad hoc
2/3    64           5   [2] superregular
3/4    16           3   [2] superregular
The following lemma is a restatement of Theorem 1 in [1], using the terminology of this section.
Lemma 1. Let $H^{(D)}$ be the parity check matrix of the $D$-th truncation of a systematic convolutional code, given by
$$H^{(D)} = \begin{pmatrix}
r_{0,1} & \cdots & r_{0,k} & 1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots & 0 & 0 \\
r_{1,1} & \cdots & r_{1,k} & 0 & r_{0,1} & \cdots & r_{0,k} & 1 & 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots & 0 & 0 \\
r_{2,1} & \cdots & r_{2,k} & 0 & r_{1,1} & \cdots & r_{1,k} & 0 & r_{0,1} & \cdots & r_{0,k} & 1 & \cdots & 0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots & & \vdots & \vdots \\
r_{D-1,1} & \cdots & r_{D-1,k} & 0 & r_{D-2,1} & \cdots & r_{D-2,k} & 0 & r_{D-3,1} & \cdots & r_{D-3,k} & 0 & \cdots & 0 & \cdots & 0 & 0 \\
r_{D,1} & \cdots & r_{D,k} & 0 & r_{D-1,1} & \cdots & r_{D-1,k} & 0 & r_{D-2,1} & \cdots & r_{D-2,k} & 0 & \cdots & r_{0,1} & \cdots & r_{0,k} & 1
\end{pmatrix} \tag{5}$$
and let $H'^{(D)}$ be the matrix obtained from $H^{(D)}$ by removing the columns in positions $(k+1), 2(k+1), 3(k+1), \ldots, (D+1)(k+1)$, that is,
$$H'^{(D)} = \begin{pmatrix}
r_{0,1} & \cdots & r_{0,k} & 0 & \cdots & 0 & 0 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
r_{1,1} & \cdots & r_{1,k} & r_{0,1} & \cdots & r_{0,k} & 0 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
r_{2,1} & \cdots & r_{2,k} & r_{1,1} & \cdots & r_{1,k} & r_{0,1} & \cdots & r_{0,k} & \cdots & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\
r_{D-1,1} & \cdots & r_{D-1,k} & r_{D-2,1} & \cdots & r_{D-2,k} & r_{D-3,1} & \cdots & r_{D-3,k} & \cdots & 0 & \cdots & 0 \\
r_{D,1} & \cdots & r_{D,k} & r_{D-1,1} & \cdots & r_{D-1,k} & r_{D-2,1} & \cdots & r_{D-2,k} & \cdots & r_{0,1} & \cdots & r_{0,k}
\end{pmatrix} \tag{6}$$
Then the CDP of the convolutional code given by $H^{(D)}$ is $(2, 3, \ldots, D+2)$ if and only if $H'^{(D)}$ is a $k$-superregular matrix.
Theorem 1 in [1] is stated without proof. For reference, we include a formal proof in Appendix A.
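To illustrate the criterion of Lemma 1, the following sketch (ours; plain Python with GF(2^m) arithmetic implemented from a primitive-polynomial bit mask, and helper names of our own choosing) tests k-superregularity by brute-force enumeration of proper minors. It is only practical for small parameters.

import itertools

def gf_mul(a, b, poly, m):
    # Multiply in GF(2^m), poly given as a bit mask including the degree-m term.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> m:
            a ^= poly
        b >>= 1
    return r

def gf_det(mat, poly, m):
    # Determinant over GF(2^m) by Laplace expansion (signs vanish in characteristic 2).
    if len(mat) == 1:
        return mat[0][0]
    det = 0
    for j, pivot in enumerate(mat[0]):
        if pivot:
            minor = [row[:j] + row[j + 1:] for row in mat[1:]]
            det ^= gf_mul(pivot, gf_det(minor, poly, m), poly, m)
    return det

def is_k_superregular(H, k, poly, m):
    rows, cols = len(H), len(H[0])
    for p in range(1, rows + 1):
        for ri in itertools.combinations(range(rows), p):
            for ci in itertools.combinations(range(cols), p):
                # proper: j_l <= k * i_l (1-based), i.e. c <= k*(r+1) - 1 (0-based)
                if all(c <= k * (r + 1) - 1 for r, c in zip(ri, ci)):
                    sub = [[H[r][c] for c in ci] for r in ri]
                    if gf_det(sub, poly, m) == 0:
                        return False
    return True

# Example 1 over GF(2^3) with x^3 + x + 1 (poly 0b1011): alpha = 0b010, alpha^3 = 0b011.
a, a3 = 0b010, 0b011
H = [[1, 1, 0, 0, 0, 0],
     [1, a, 1, 1, 0, 0],
     [a3, 1, 1, a, 1, 1]]
print(is_k_superregular(H, 2, 0b1011, 3))  # expected True if the CDP is [2,3,4] as in Example 2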
Definition 3. Let $\Delta(2^m, n)$ be the largest free distance $\mathcal{D}$ such that there exists a rate $(n-1)/n$ systematic MDS convolutional code over $GF(2^m)$ with column distance profile as in (3).
The main problem that we address in this paper is to determine exact values, or constructive lower bounds, for $\Delta(2^m, n)$. Please note that there is no restriction on the degree $D$ in Definition 3.
There are few known code constructions in the literature, beyond those based on superregular matrices. Table I contains the
current world records with respect to rate (n − 1)/n MDS codes, to the best of our knowledge. We will describe new codes in
Section III.
Although this paper focuses on rate (n − 1)/n MDS codes, we observe that the following lemma, that follows directly from
Theorem 2 in [1], implies that our results will also provide rate 1/n MDS codes.
Lemma 2. If a systematic rate (n − 1)/n MDS code of memory D and free distance D + 2 exists, then its dual code is equivalent
to a systematic rate 1/n MDS code of memory D and free distance (n − 1)(D + 1) + 1.
C. Random convolutional codes
In the terminology of this paper, the random approach [4], [5] consists of selecting the coefficients of ri j independently at
random. The advantage of this is that one can pick codes with large degrees, and that over large fields the expected performance
is “reasonably good”, although the exact loss compared to optimum average performance or optimum guaranteed worst case
performance remains to be determined.
Coefficients need to be transmitted in the headers of the data packets, but this represents only a small rate loss when large
packets are transmitted.
Proposition 1. Consider a hybrid scheme where the first blocks of coefficients $r_{i,j}$ (until time $i = D$) are fixed, and subsequent coefficients $r_{i,j}$ for $i > D$ are selected at random. Thus the parity check equation will be of the form
$$H(x) = H_{CDP}(x) + H_{Random}(x), \tag{7}$$
where
$$H_{CDP}(x) = \Big(\sum_{i=0}^{D} r_{i,1}x^i,\ \ldots,\ \sum_{i=0}^{D} r_{i,k}x^i,\ 1\Big)$$
and
$$H_{Random}(x) = \Big(\sum_{i=D+1}^{?} R_{i,1}x^i,\ \ldots,\ \sum_{i=D+1}^{?} R_{i,k}x^i,\ 0\Big),$$
where all $R_{i,j}$ are nonzero randomly selected coefficients and where the degree of the random polynomials does not need to be fixed (except by the application protocol). Then the initial CDP (until time $D$) is not affected by the random part of the code construction.
Proof. Obvious: only the first component $H_{CDP}(x)$ of the parity check matrix determines the initial part of the CDP.
Our suggestion is to use such hybrid codes, i. e. codes where the terms of degree 0, . . . , D of the parity check polynomials
are preselected constants yielding an optimum initial column distance profile, while subsequent random parity checks are added
as needed. This guarantees optimum recovery for the simplest and most likely erasure patterns, and hence better performance
than random codes for light to moderate erasure patterns, while still allowing the degree to grow if required by the application.
III. NEW CODES
Gluesing-Luerssen et. al. [2] use superregular matrices to design codes. However, the authors also give examples of codes
that are better than the ones constructed from superregular matrices, and note that ”..the abundance of (small) examples suggests
that such a construction might be possible and might lead to smaller alphabets for given parameters than the construction ,[...]
We will leave this as an open question for future research.”
So here comes the future research. In this section we present constructions and a new search algorithm that, in combination,
improve our knowledge of ∆(2m , n) for almost all sets of parameters, with respect to what we find in the literature.
A. Codes with free distance D ∈ {3, 4}
We present two optimum constructions, for D ∈ {3, 4}. For D = 3 the construction is simple, but we have not seen it
presented in prior literature.
We have tacitly assumed the following fact for the constant terms. Here comes the justification.
Lemma 3. We can w.l.o.g assume r0,1 = · · · = r0,n = 1.
Proof. If there is a r0, j equal to zero, then d0 < 2. We don’t want that. Then assume some nonzero r0, j 6= 1. If we multiply
−1
the corresponding column of G(D) by r0,
j , we obtain a new code with the same CDP and weight structure.
Proposition 2. $\Delta(q^m, q^m) = 3$ for $q$ prime and $m \ge 1$.
Proof. Select $r_{0,i} = 1$ and $r_{1,i}$, $i = 1, \ldots, q^m-1$, as the $q^m-1$ distinct nonzero elements of $GF(q^m)$. Without loss of generality, the parity check matrix of (1) takes the form
$$H^{(1)} = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 2 & \cdots & q^m-1 & 0 & 1 & \cdots & 1 & 1 \end{pmatrix},$$
and
$$H'^{(1)} = \begin{pmatrix} 1 & 1 & \cdots & 1 & 0 & \cdots & 0 \\ 1 & 2 & \cdots & q^m-1 & 1 & \cdots & 1 \end{pmatrix}$$
is $(q^m-1)$-superregular because it is obvious that all the proper minors of sizes 1 and 2 are nonsingular. Clearly, $d_0 = 2$ and $d_1 = 3$.
Remark 2. It is instructive to compare the construction of Proposition 2 with the binary Wyner-Ash codes [6]. Wyner-Ash codes were considered for digital media transmission already in 1974 [10]. The Wyner-Ash code of length 4 has the binary polynomial parity check matrix
$$H_{WA} = \big(1 + x + x^2,\ 1 + x,\ 1 + x^2,\ 1\big).$$
It is easy to see that the CDP of the Wyner-Ash code is $[2, 2, 3]$, i.e. this is not an MDS code. The construction of Proposition 2 can be considered as a $q^m$-ary generalization of the Wyner-Ash code of memory 2, but this code is an MDS code, with CDP $[2, 3]$.
For D = 4, we present an optimum construction in Proposition 3. Complete computer searches for m ≤ 5 indicate that the
construction is unique and, in a sense, much better than what can be achieved through other choices of the set of first degree
coefficients {r1,i }.
Lemma 4. For a code with a CDP of $[2, 3, 4]$, its parity check matrix $H^{(2)}$ must satisfy
(i) $r_{i,s} \ne 0$ for $i = 1, 2$, $s = 1, \ldots, k$;
(ii) $r_{i,s} \ne r_{i,t}$ for $i = 1, 2$, $1 \le s < t \le k$;
(iii) $r_{1,t} \ne r_{2,s}/r_{1,s}$ for $1 \le s, t \le k$;
(iv) $r_{2,s}/r_{1,s} \ne r_{2,t}/r_{1,t}$ for $1 \le s < t \le k$;
(v) $r_{2,s} - r_{2,t} \ne r_{1,u}(r_{1,s} - r_{1,t})$ for $1 \le s < t \le k$, $1 \le u \le k$;
(vi) $r_{2,s} \ne \big(r_{1,s}(r_{2,u} - r_{2,t}) - r_{1,t}r_{2,u} + r_{1,u}r_{2,t}\big)/(r_{1,u} - r_{1,t})$ for $1 \le s < t < u \le k$.
Proof. From Lemma 1 we need $H'^{(2)}$ to be $k$-superregular.
That all $1\times1$ proper minors of $H'^{(2)}$ are nonsingular is equivalent to condition (i).
Proper minors of size $2\times2$ are of the following types:
$$\begin{pmatrix} 1 & 0 \\ r_{i,s} & 1 \end{pmatrix},\quad \begin{pmatrix} 1 & 0 \\ r_{2,s} & r_{1,t} \end{pmatrix},\quad \begin{pmatrix} 1 & 1 \\ r_{i,s} & r_{i,t} \end{pmatrix},\quad \begin{pmatrix} r_{1,s} & 1 \\ r_{2,s} & r_{1,t} \end{pmatrix},\quad \begin{pmatrix} r_{1,s} & r_{1,t} \\ r_{2,s} & r_{2,t} \end{pmatrix}.$$
Minors of the first type are trivially nonzero, and those of the second type are nonzero when condition (i) is satisfied. The third type being nonsingular is equivalent to condition (ii), the fourth type is nonzero if and only if condition (iii) is satisfied, and the fifth type being nonsingular is equivalent to condition (iv).
Finally, $3\times3$ proper minors can be of four different types:
$$\begin{pmatrix} 1 & 0 & 0 \\ r_{1,s} & 1 & 0 \\ r_{2,s} & r_{1,t} & 1 \end{pmatrix},\quad \begin{pmatrix} 1 & 1 & 0 \\ r_{1,s} & r_{1,t} & 0 \\ r_{2,s} & r_{2,t} & 1 \end{pmatrix},\quad \begin{pmatrix} 1 & 1 & 0 \\ r_{1,s} & r_{1,t} & 1 \\ r_{2,s} & r_{2,t} & r_{1,u} \end{pmatrix},\quad \begin{pmatrix} 1 & 1 & 1 \\ r_{1,s} & r_{1,t} & r_{1,u} \\ r_{2,s} & r_{2,t} & r_{2,u} \end{pmatrix}.$$
Those of the first type are trivially nonsingular. Condition (ii) ensures that those of the second type are nonsingular. Those of the third type are nonsingular if and only if condition (v) is satisfied, and those of the fourth type are nonsingular if and only if condition (vi) is satisfied.
Example 2. Consider the code in Example 1. By checking conditions (i)-(vi) in Lemma 4 we observe that the code has CDP
equal to [2,3,4].
Proposition 3. ∆(2m , 2m−1 ) = 4.
Proof. Let F = GF(2m ).
(≥:) The following construction gives a code that meets the requirements: The trace function [12] is defined by
Trm () : F
x
→ GF(2)
2i
→ Trm (x) = ∑m−1
i=0 x .
Consider the set
Hβ = {x ∈ F|Trm (β x) = 0}.
When F is regarded as an m-dimensional vector space over GF(2), the set Hβ is a hyperplane (an (m − 1)-dimensional linear
subspace) of F. Let k = 2m−1 − 1, select β as an arbitrary nonzero field element, select c as an arbitrary constant in F \ Hβ .
Then select a1 , . . . , ak := r1,1 , . . . , r1,k as all distinct nonzero elements in Hβ , and set bs := r2,s = as (as + c) = r1,s (r1,s + c) for
s = 1, . . . , k. We need to verify that this construction satisfies the conditions in Lemma 4.
(i) This holds because bs = as (as + c) is a product of two nonzeros.
(ii) All as ’s are distinct. Assume that bs = bt , s 6= t. Then 0 = as (as + c) = at (at + c) = (as + at )c + a2s + at2 = (as + at )c + (as +
at )2 = (as + at )(c + as + at ). The first factor is nonzero since as 6= at . The second factor is also nonzero since as + at ∈ Hβ
(because Hβ is closed under addition) while c 6∈ Hβ , a contradiction.
(iii) Assume that as at = bs . Then as at = as (as + c) ⇒ at = as + c, a contradiction, since at ∈ Hβ and as + c 6∈ Hβ .
(iv) Assume that bs /as = bt /at , s 6= t. Then as + c = at + c ⇒ as = at , a contradiction.
(v)
bs + bt + au (as + at ) = as (as + c) + at (at + c) + au (as + at )
= a2s + at2 + (as + at )(c + au )
= (as + at )2 + (as + at )(c + au )
= (as + at )(as + at + c + au )
7
which again is a product of nonzero factors, because c 6∈ Hβ and as + at + au ∈ Hβ , and hence nonzero.
(vi)
bs +
as (bt + bu ) + au bt + at bu
at + au
as (at (at + c) + au (au + c)) + au at (at + c) + at au (au + c)
at + au
as (at + au )2 + as c(at + au ) + at au (at + au )
= as (as + c) +
at + au
= as (as + c) + as (at + au ) + as c + at au
= as (as + c) +
= a2s + as at + as au + at au
= (as + at )(as + au ) 6= 0.
(≤:) This follows from Theorem 1 in Section IV.
Remark 3. By Theorem 1 later, the construction in Proposition 3 is optimum not only in the sense that it offers the maximum
distance 4 for the given field size and code rate, but also it offers the minimum field size for a code of the given rate and
distance 4, and the maximum code rate given the field size and distance 4. Moreover, complete computer searches for field sizes $2^m \le 32$ show that the construction is unique for these parameters.
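A sketch (ours; helper names are illustrative) of the construction in Proposition 3, producing the degree-1 coefficients $r_{1,s} = a_s$ from the hyperplane $H_\beta$ and the degree-2 coefficients $r_{2,s} = a_s(a_s+c)$:

def gf_mul(a, b, poly, m):
    # Multiply in GF(2^m); poly is the primitive polynomial as a bit mask.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> m:
            a ^= poly
        b >>= 1
    return r

def trace(x, poly, m):
    # Tr_m(x) = x + x^2 + x^4 + ... + x^(2^(m-1)), which lies in {0, 1}.
    t, y = 0, x
    for _ in range(m):
        t ^= y
        y = gf_mul(y, y, poly, m)
    return t

def distance4_code(poly, m, beta=1):
    hyperplane = [x for x in range(1, 1 << m) if trace(gf_mul(beta, x, poly, m), poly, m) == 0]
    c = next(x for x in range(1, 1 << m) if trace(gf_mul(beta, x, poly, m), poly, m) != 0)
    a = hyperplane                               # r_{1,s}: the k = 2^(m-1) - 1 nonzero elements of H_beta
    b = [gf_mul(x, x ^ c, poly, m) for x in a]   # r_{2,s} = a_s (a_s + c); '^' is field addition
    return a, b, c

# GF(2^4) with x^4 + x + 1 (poly 0b10011): a rate 7/8 code with CDP [2, 3, 4], i.e.
# parity check H(x) = (1 + a_1 x + b_1 x^2, ..., 1 + a_7 x + b_7 x^2, 1).
r1, r2, c = distance4_code(0b10011, 4)
print(len(r1), r1, r2)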
B. Computer search algorithm
The goal of the search algorithm is to select the coefficients ri, j successively, ordered first on i and then reversely on j, in
such a way that the conditions on the minors are met.
1) Some useful facts: First, as for the constructions in Section III-A, we use Lemma 3 in order to set r0,1 = · · · = r0,k = 1.
In order to simplify the search we apply the following results.
Lemma 5. We can w.l.o.g assume r1,i < r1,i+1 , i = 1, . . . , k − 1 for any choice of ordering <.
Lemma 6. Consider an MDS convolutional code $C$ with polynomial parity check matrix
$$H(x) = \Big(1 + \sum_{i=1}^{D} r_{i,1}x^i,\ \ldots,\ 1 + \sum_{i=1}^{D} r_{i,k}x^i,\ 1\Big) \in F[x].$$
Then the code $C_c$ with parity check matrix
$$H_c(x) = \Big(1 + \sum_{i=1}^{D} c^ir_{i,1}x^i,\ \ldots,\ 1 + \sum_{i=1}^{D} c^ir_{i,k}x^i,\ 1\Big) \in F[x]$$
is also MDS for any $c \in F \setminus \{0\}$.
Proof. Let $v(x) = (v_1(x), \ldots, v_n(x)) = \big(\sum_{i=0}^{D} v_{1,i}x^i, \ldots, \sum_{i=0}^{D} v_{n,i}x^i\big)$. Then $v(x)H(x)^\top = 0$ iff $v_c(x)H_c(x)^\top = 0$ for
$$v_c(x) = \Big(\sum_{i=0}^{D} c^iv_{1,i}x^i,\ \ldots,\ \sum_{i=0}^{D} c^iv_{n,i}x^i\Big),$$
i.e. $v_c(x) = v(cx)$. Since the map $v \mapsto v_c$ is a bijection between the codewords of $C$ and $C_c$ that preserves the block-wise weight structure, $C_c$ has the same CDP as $C$.
Corollary 1. If a systematic MDS convolutional code exists, we can w.l.o.g. assume that it has a parity check matrix with
r1,k = 1.
Proof. Assume that a systematic MDS convolutional code exists with a parity check matrix with $r_{1,k} = a \in F \setminus \{0\}$. Then apply Lemma 6 with $c = a^{-1}$.
Lemma 7. Let $M$ be a $k$-superregular matrix over $GF(q^m)$, with $q$ a prime. Raising each element of $M$ to the power $q$ yields another $k$-superregular matrix.
Proof. Given any square matrix
$$A = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{pmatrix}$$
with $a_{i,j} \in GF(q^m)$, by definition
$$\det(A) = \sum_{\sigma \in S_n} s(\sigma)\,a_{1,\sigma(1)}\cdots a_{n,\sigma(n)},$$
where $S_n$ is the group of permutations of $n$ elements and $s(\sigma)$ is the sign of each permutation $\sigma$. Also we have
$$(x_1 + x_2 + \cdots + x_n)^q = \sum_{\substack{q_1+\cdots+q_n = q \\ 0 \le q_i \le q}} c(q_1, \ldots, q_n)\,x_1^{q_1}x_2^{q_2}\cdots x_n^{q_n}$$
with $c(q_1, \ldots, q_n) = \binom{q}{q_1}\binom{q-q_1}{q_2}\cdots\binom{q-q_1-\cdots-q_{n-1}}{q_n}$.
When $q$ is prime, $q$ divides the coefficient $c(q_1, \ldots, q_n)$ except in the cases $q_i = q$, $q_j = 0$ for $j \ne i$, for each $i \in \{1, \ldots, n\}$. Therefore, in characteristic $q$ we have
$$(x_1 + x_2 + \cdots + x_n)^q = x_1^q + x_2^q + \cdots + x_n^q.$$
Back in the definition of the determinant, in characteristic $q$ we have
$$(\det(A))^q = \Big(\sum_{\sigma \in S_n} s(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}\Big)^q = \sum_{\sigma \in S_n} s(\sigma)^qa_{1,\sigma(1)}^q\cdots a_{n,\sigma(n)}^q.$$
Finally, $s(\sigma)$ is either $1$ or $-1$; in case $q = 2$ we have $s(\sigma) = 1$ for any $\sigma \in S_n$, so $s(\sigma)^2 = s(\sigma)$, and in case $q$ is odd we have $s(\sigma) = s(\sigma)^q$. This gives
$$(\det(A))^q = \sum_{\sigma \in S_n} s(\sigma)^qa_{1,\sigma(1)}^q\cdots a_{n,\sigma(n)}^q = \sum_{\sigma \in S_n} s(\sigma)a_{1,\sigma(1)}^q\cdots a_{n,\sigma(n)}^q = \det(A^{\circ q}),$$
where $A^{\circ q}$ denotes the $q$-th Hadamard (or Schur) power of $A$, that is, the matrix whose entries are the entries of $A$ raised to the power $q$.
Now it is clear that given any proper minor $P$ of size $p$ of the matrix $M$ over $GF(q^m)$, $P^{\circ q}$ is the corresponding proper minor of $M^{\circ q}$, and $P$ is nonsingular if and only if $P^{\circ q}$ is nonsingular, so $M$ is $k$-superregular if and only if $M^{\circ q}$ is $k$-superregular.
Corollary 2. In particular, let M be a k-superregular matrix over GF(2m ). Squaring each element of M yields another
k-superregular matrix.
Proof. This is just the particular case for q = 2 of the Lemma above.
Hence we also have:
Corollary 3. Assume that the values for r0,i , i = 1, . . . , k and for r1,k are all fixed to 1, as allowed by Lemma 3 and Corollary 1.
Then, for r1,k−1 , it suffices to consider one representative of each cyclotomic coset.
Proof. Consider any minor of M. Squaring all coefficients in M will not change the values of r0,i , i = 1, . . . , k or r1,k . Thus if
there is a k-superregular matrix with r1,k−1 = v, then there is also a k-superregular matrix with r1,k−1 = v2 .
The search can be simplified (by constant factors $O((2^m-1)^n \cdot (2^m-1) \cdot (n-2)!)$) by use of Lemma 3, Corollary 1, and Lemma 5, respectively. Corollary 3 reduces complexity by an extra factor of approximately $\log_2(2^m) = m$, but this reduction
is not entirely independent of the other reductions. In summary, the search algorithm is highly exponential in complexity, but
the tricks allow a deeper search than would otherwise be possible.
The search algorithm is sketched in Algorithm 1. The trickier steps are explained in some detail in Remark 4.
Algorithm 1: A computer search algorithm
Result: finds good $2^m$-ary MDS codes of rate $(n-1)/n$
Input: field size $2^m$, target distance $\mathcal{D}^*$, code length $n$
Data: $\rho$ points to the current position
 1: initialization;
 2: value($r_{0,i}$) := 1, $i = 1, \ldots, k$; value($r_{1,k}$) := 1;
 3: precompute the set of proper submatrices $M = \bigcup_{\rho = r_{1,k-1}}^{r_{\mathcal{D}^*-2,1}} M_\rho$;
 4: $\rho := r_{1,k-1}$;
 5: precompute the set of legal values $L(\rho)$;
 6: while $\rho \le r_{\mathcal{D}^*-2,1}$ and more coefficient values to check for $\rho$ do
 7:   if more coefficient values to check for $\rho$ then
 8:     assign next value to coefficient at $\rho$;
 9:     update determinants needed for $M_{\rho+1}$, and $L(\rho+1)$;
10:     if deepest level so far then
11:       record selected values of coefficients;
12:     end
13:     $\rho := \rho + 1$;
14:   else
15:     $\rho := \rho - 1$;
16:   end
17: end

Remark 4. Here we explain the steps of Algorithm 1.
(i) In essence, the algorithm runs through a search tree. At each depth of the tree, $\rho$ points to one of the variables $r_{i,j}$ in (6). Abusing notation, we will also say that $\rho$ points to the current depth. Throughout the course of the algorithm, $\rho$ goes back and forth along $r_{1,k-1}, \ldots, r_{1,1}, r_{2,k}, \ldots, r_{2,1}, r_{3,k}, \ldots$, i.e. along the values of the last row of (6) in reverse order starting at $r_{1,k-1}$ (since $r_{1,k}, r_{0,1}, \ldots, r_{0,k}$ can be assumed to be all equal to 1 by Lemma 3 and Corollary 1). We will in this context use the ordering "<" to refer to the reverse order of the last row of (6), and addition and subtraction on $\rho$ move $\rho$ left and right, respectively, on this row.
(ii) Line 3: Let $\rho$ refer to one element $r_{i,j}$ in the last row of (6). Then $M_\rho$ is the set of (formal) proper submatrices of (6) which have $\rho$ in their lower left corner. If all matrices in $M = \bigcup_{\rho = r_{0,k}}^{r_{D',1}} M_\rho$ for some $D' \le D$ are nonsingular, where $D = \mathcal{D}^* - 2$ is the target maximum degree, then the submatrix of (6) with $r_{D',1}$ in the lower left corner is $k$-superregular. The set $M_\rho$ can be found by recursion. (The number of proper submatrices in $M$ is related to the Catalan numbers. We omit the details.)
(iii) Line 5: At depth $\rho$, values have already been assigned for each depth $\rho' < \rho$. Hence, by keeping track of subdeterminants already computed, for each determinant corresponding to a proper submatrix in $M_\rho$, the value in $GF(2^m)$ that would make the determinant zero can be obtained in constant time. In other words, going once through $M_\rho$, we can identify the set of all illegal values for coefficient $\rho$, and hence its complement $L(\rho)$ of legal values.
(iv) Line 6: (a) A complete search version of the algorithm will successively try all values in $L(\rho)$. For a faster but incomplete search, the algorithm may be set to skip an arbitrary subset of values in $L(\rho)$ at each depth $\rho$. (b) The target distance $\mathcal{D}^*$ is an input parameter of the algorithm. In order to determine that a code has maximum degree $D$ (and hence maximum free distance $D+2$), it is necessary to verify that a complete search version of the algorithm will pass depth $\rho = r_{D,1}$ but not depth $\rho = r_{D+1,1}$.
(v) Line 9: Using the set of values currently assigned to coefficients at all depths $\rho' \le \rho$, compute subdeterminants that will be useful for computing determinants in $M_{\rho+1}$, and initialize the set of legal values $L(\rho+1)$ for the next depth.
(vi) Complexity: The assumptions enabled by the lemmas of this section, together with the efficient computation of the determinants, allow a deeper search than would be possible with a naive search. However, the depth of the search tree that finds a code of degree $D$ is $(n-1)D$, and for many of the "early depths", a complete search needs to go through almost $2^m$ values. The size of the set of proper submatrices also grows exponentially with $n$ and $D$. So the overall complexity is at least $O(2^{mn\cdot(\mathcal{D}^*-2)} \cdot |M| \cdot w_d)$, where $|M|$ is the number of proper submatrices.
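The proper-submatrix bookkeeping of items (ii)-(iii) can be illustrated by the following brute-force sketch (ours; the actual search uses the recursion mentioned above, which we do not reproduce, and the function name is our own):

from itertools import combinations

def proper_column_sets(row_indices, s, num_cols):
    # All column index tuples (j_1 < ... < j_p), 1-based, with j_l <= s * i_l,
    # i.e. the column choices that make the submatrix on `row_indices` proper.
    p = len(row_indices)
    bounds = [s * i for i in row_indices]
    for cols in combinations(range(1, num_cols + 1), p):
        if all(j <= b for j, b in zip(cols, bounds)):
            yield cols

# Number of proper submatrices of (6) that use all D+1 rows, for k = 2 and D = 3
# (so (6) has 4 rows and k(D+1) = 8 columns):
rows = [1, 2, 3, 4]
print(sum(1 for _ in proper_column_sets(rows, 2, 8)))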
C. Codes found by computer search with D ≥ 5
Here we present codes found from computer search, for field sizes of characteristic 2 ranging from 8 to 16384, and free
distances D ≥ 5. Exact values of ∆(qm , qm ) = 3 and ∆(qm , qm−1 ) = 4 are provided by Propositions 2 and 3.
In Tables II–XIII, each row summarizes what we have discovered about rate k/n = (n − 1)/n codes. The ∆ column lists
the maximum value of D for which we have found a code with CDP [2, 3, . . . , D]. The absence of a ≥ sign in this column
indicates that we have established, through an exhaustive search, that this value of D is indeed maximum for this rate and
field size. The Coefficients column presents one encoder that possesses this CDP, in terms of logα () of the coefficients
(r1,k , . . . , r1,1 ), (r2,k , . . . , r2,1 ), (r3,k , . . . , r3,1 ), . . ., where α is the primitive element of the field. Note that the degree zero terms,
(r0,1 , . . . , r0,k ) are suppressed since they are assumed to be identically 1 = α 0 . The R column contains the rareness of the code,
which will be explained in Section IV-B. In the Reference column we include references in the few cases where “similar
codes” (i. e. codes over the same field which have the same CDP, but that do not necessarily possess a systematic encoder)
have previously been described in the literature. We do not list encoders found by the search if we also found codes with the
same set of parameters (rate, CDP) over a smaller field. Also, due to Lemma 8, we do not list codes of rate (n − 2)/(n − 1)
if there exist codes of rate (n − 1)/n with the same CDP:
Lemma 8. If a systematic MDS code with free distance D and rate k/(k + 1) exists for k > 1, then there is also a systematic
MDS code with free distance D and rate (k − 1)/k.
Proof. Shorten the $k$-superregular matrix $H'^{(D)}$ by selecting any $j_0 \in \{1, \ldots, k\}$ and removing columns $j_0, j_0+k, j_0+2k, \ldots$ in $H'^{(D)}$.
Table II. Table of bounds on $\Delta(2^3, n)$ for the field defined by $1 + \alpha + \alpha^3 = 0$.
n   ∆   Coefficients    R       Remark
2   6   0, 1, 4, 3      0.035   [11]

Table III. Table of bounds on $\Delta(2^4, n)$ for the field defined by $1 + \alpha + \alpha^4 = 0$. Please also see Example 3.
n   ∆   Coefficients     R       Remark
2   7   0, 1, 4, 3, 0    0.024
3   5   0 1, 4 0, 1 7    0.014   [2]
Example 3. According to Table III, for the finite field GF(2^4) defined by 1 + α + α^4 = 0, there exists a systematic code of rate 2/3
and with CDP = [2, 3, 4, 5]. An example of such a code is represented by (r1,2 , r1,1 ), (r2,2 , r2,1 ), (r3,2 , r3,1 ) = (α^0, α^1), (α^4, α^0), (α^1, α^7) =
(1, α), (α^4, 1), (α, α^7), and (implicitly) (r0,1 , . . . , r0,k ) = (1, . . . , 1). Thus the code has a polynomial parity check matrix

H(x) = (1 + αx + x^2 + α^7 x^3 , 1 + x + α^4 x^2 + αx^3 , 1)

and encoder/generator matrix

G(x) = ( 1  0  1 + αx + x^2 + α^7 x^3
         0  1  1 + x + α^4 x^2 + αx^3 ).

Obviously, G(x)H(x)^T = (0, 0). The absence of a ≥ symbol in the ∆ column in Table III indicates that a complete search of all
systematic MDS codes of rate 2/3 reveals that D = 5 is maximum. The R column, as will be explained later, indicates that one
in seventy random assignments of nonzero values for (r1,1 , r1,2 ), (r2,1 , r2,2 ), (r3,1 , r3,2 ) will give a code with the same CDP, i.e.
codes with these parameters are not very rare. A nonsystematic code over GF(2^4) with degree δ = 2 and CDP = [2, 3, 4, 5]
was presented in [2].
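As a sanity check of the notation, the short Python sketch below (ours, not part of the paper) builds GF(2^4) from the primitive polynomial 1 + α + α^4, forms H(x) and G(x) of Example 3 from the logarithmic coefficients of Table III, and verifies G(x)H(x)^T = (0, 0). All helper names are our own.

```python
# Sketch (ours): verify G(x)H(x)^T = (0,0) for the Example 3 encoder over GF(16).

def build_gf16():
    """exp/log tables for GF(2^4) with primitive polynomial x^4 + x + 1."""
    exp = [0] * 15
    val = 1
    for i in range(15):
        exp[i] = val
        val <<= 1
        if val & 0x10:            # reduce modulo x^4 + x + 1 (binary 10011 = 0x13)
            val ^= 0x13
    log = {v: i for i, v in enumerate(exp)}
    return exp, log

EXP, LOG = build_gf16()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def poly_add(p, q):               # coefficient lists over GF(16), lowest degree first
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [x ^ y for x, y in zip(p, q)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

alpha = EXP[1]
h1 = [1, alpha, 1, EXP[7]]        # 1 + a x + x^2 + a^7 x^3
h2 = [1, 1, EXP[4], alpha]        # 1 + x + a^4 x^2 + a x^3
H = [h1, h2, [1]]
G = [[[1], [0], h1],              # row (1, 0, h1)
     [[0], [1], h2]]              # row (0, 1, h2)

for g_row in G:
    acc = [0]
    for g_entry, h_entry in zip(g_row, H):
        acc = poly_add(acc, poly_mul(g_entry, h_entry))
    assert all(c == 0 for c in acc)
print("G(x) H(x)^T = (0, 0) verified over GF(16)")
```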
IV. U PPER BOUNDS AND CODE ASSESSMENT
It would be useful to determine upper bounds on ∆(qm , n) in order to assess how good the codes from random search are
with respect to optimum. The Heller bound [13] relates convolutional codes with a given free distance D to their truncated
block codes, and uses known bounds on block codes to determine convolutional code parameters that cannot be achieved.
Unfortunately the Heller bound is of limited use in our case, since the truncated code will actually have a much lower minimum
distance than D when viewed as a block code, and also since exact bounds on block codes in the range of parameters that
we are interested in here are not well known. Moreover, the approach of sphere packing for binary codes [14] cannot be
easily adapted to the current case, since the structure of optimum nonbinary codes turns out to be quite different from that of
optimum binary codes2 .
A simple bound is described in the next subsection. In Subsection IV-B we present an alternative way of describing how
good our codes are, through the concept of rareness.
Table IV. Table of bounds on ∆(2^5, n) for the field defined by 1 + α^2 + α^5 = 0.
n | ∆ | Coefficients | R
2 | 9 | 0, 1, 19, 5, 24, 15, 0 | 3.4 · 10^−8
3 | 6 | 0 1, 11 28, 21 6, 24 11 | 4.4 · 10^−5
5 | 5 | 0 1 18 2, 5 8 17 25, 3 2 13 18 | 5.2 · 10^−11

Table V. Table of bounds on ∆(2^6, n) for the field defined by 1 + α + α^6 = 0.
n | ∆ | Coefficients | R
2 | 10 | 0, 1, 6, 61, 60, 46, 28, 23 | 1.2 · 10^−10
3 | 7 | 0 1, 6 0, 2 37, 21 44, 55 28 | 4.1 · 10^−11
4 | ≥6 | 0 1 6, 2 6 26, 13 61 38, 30 33 60 | 1.4 · 10^−11
7 | ≥5 | 0 1 6 2 12 3, 14 36 26 25 51 13, 19 60 16 62 5 58 | 3.2 · 10^−20
² Optimum binary convolutional codes tend to require parity check matrices with many r1,j = 0, whereas we have seen that in the nonbinary case, all
degree one coefficients r1,j are nonzero. These differences impose different combinatorial constraints in the binary and the nonbinary case.
Table VI. Table of bounds on ∆(2^7, n) for the field defined by 1 + α^3 + α^7 = 0.
n | ∆ | Coefficients | R
5 | ≥6 | 0 1 31 2, 62 103 64 125, 51 57 19 110, 11 39 43 114 | 8 · 10^−18
8 | ≥5 | 0 1 31 2 62 32 103, 3 31 15 0 7 1 63, 8 94 119 51 41 10 17 | 6.4 · 10^−16

Table VII. Table of bounds on ∆(2^8, n) for the field defined by 1 + α^2 + α^3 + α^4 + α^8 = 0.
n | ∆ | Coefficients | R
2 | ≥11 | 0, 1, 25, 3, 0, 198, 152, 56, 68 | 2.2 · 10^−7
3 | ≥8 | 0 1, 25 0, 1 238, 100 106, 195 245, 37 33 | 2.0 · 10^−12
4 | ≥7 | 0 1 25, 2 25 198, 1 14 228, 113 74 214, 21 250 172 | ≈ 2 · 10^−17
11 | ≥5 | 0 96 95 176 156 169 160 81 11 245, 107 5 223 167 7 177 98 238 93 53, 37 208 233 89 75 74 184 31 119 100 | ≈ 3 · 10^−28

Table VIII. Table of bounds on ∆(2^9, n) for the field defined by 1 + α^4 + α^9 = 0.
n | ∆ | Coefficients | R
2 | ≥12 | 0, 54, 91, 181, 267, 291, 379, 28, 95, 143 | 1.4 · 10^−11
6 | ≥6 | 0 280 362 276 426, 206 155 326 324 360, 356 447 507 312 144, 224 375 236 55 448 | 3.9 · 10^−11
13 | ≥5 | 0 19 325 321 356 397 317 455 98 130 149 413, 48 101 120 272 209 188 405 352 46 343 289 152, 318 80 256 98 255 274 147 340 392 453 30 451 | 8.4 · 10^−27

Table IX. Table of bounds on ∆(2^10, n) for the field defined by 1 + α^3 + α^10 = 0.
n | ∆ | Coefficients | R
3 | ≥9 | 0 603, 246 106, 115 693, 483 544, 603 152, 815 788, 984 721 | ≈ 10^−15
5 | ≥7 | 0 498 997 964, 560 214 101 723, 453 111 370 54, 455 17 625 509, 904 431 926 856 | 5 · 10^−18
8 | ≥6 | 0 322 804 12 140 1004 384, 778 916 786 247 586 698 294, 379 7 784 239 817 284 398, 178 588 110 41 425 976 393 | 3 · 10^−24
17 | ≥5 | 0 1 77 2 154 78 956 3 10 155 325 79 618 957 231 4, 308 0 4 77 11 1 200 10 80 3 24 155 87 325 619 618, 958 768 255 404 577 976 368 374 709 33 530 109 677 594 652 226 | 4 · 10^−39

Table X. Table of bounds on ∆(2^11, n) for the field defined by 1 + α^2 + α^11 = 0.
n | ∆ | Coefficients | R
2 | ≥13 | 0, 1992, 813, 1890, 440, 630, 1947, 1574, 1356, 234, 1266 | 1.0 · 10^−9
4 | ≥8 | 0 1809 1118, 2027 1610 539, 1042 7 1730, 2020 591 1459, 902 899 1584, 172 1192 513 | 5.6 · 10^−15
9 | ≥6 | 0 1999 762 1845 1102 1115 1014 328, 1349 345 498 1561 27 987 1300 1793, 1728 562 488 304 43 71 1911 1140, 1524 660 465 327 322 748 1574 1414 | 2.0 · 10^−22

Table XI. Table of bounds on ∆(2^12, n) for the field defined by 1 + α^3 + α^4 + α^7 + α^12 = 0.
n | ∆ | Coefficients | R
2 | ≥14 | 0, 3294, 1040, 448, 3624, 2406, 826, 1122, 587, 1034, 342, 4037 | < 10^−15
6 | ≥7 | 0 3202 2711 92 2688, 3908 1649 1252 3897 1604, 3687 3602 1603 2339 1350, 1700 2969 104 3406 2679, 1345 919 3302 2116 810 | 1.2 · 10^−14
11 | ≥6 | 0 669 4050 4007 745 3863 324 1617 3951 1343, 703 1123 782 3343 1919 3177 1839 1006 2183 426, 2139 2050 1676 1187 3222 467 1764 2387 2868 641, 2564 2249 3187 3114 3228 743 443 1220 3540 2620 | 3 · 10^−31
Table XII. Table of bounds on ∆(2^13, n) for the field defined by 1 + α + α^3 + α^4 + α^13 = 0.
n | ∆ | Coefficients | R
3 | ≥10 | 0 337, 7672 6843, 3625 3361, 7970 7490, 5531 2322, 5227 5758, 133 2290, 1453 189 | 3.6 · 10^−11
5 | ≥8 | 0 441 2192 3413, 3222 7502 7405 4155, 88 5939 343 6171, 1082 8149 2823 7269, 8022 6454 4999 3373, 3518 442 710 6968 | ≈ 5 · 10^−21
7 | ≥7 | 0 5160 5711 7681 748 5319, 2131 6233 723 4539 7315 5654, 5126 7465 3577 6826 5553 1131, 4954 6763 6593 1568 7157 8112, 1961 4310 877 2927 7197 2672 | 2 · 10^−19
13 | ≥6 | 0 5645 7651 3109 2678 802 6934 1946 5589 2833 5821 38, 5394 2500 5877 3141 4724 3374 5191 7218 4844 423 822 6875, 5712 6619 3935 6414 8025 1422 4391 5698 5481 6850 2635 4786, 556 2558 1063 5172 566 7978 3664 5848 3859 6905 6434 71 | ≈ 8 · 10^−37

Table XIII. Table of bounds on ∆(2^14, n) for the field defined by 1 + α + α^11 + α^12 + α^14 = 0.
n | ∆ | Coefficients | R
4 | ≥9 | 0 61 9533, 1260 4487 6469, 3689 8777 4510, 11257 13252 1239, 15121 10306 11679, 9618 13110 4549, 12420 5210 13006 | 3 · 10^−14
8 | ≥7 | 0 14132 6404 8841 7620 6707 1150, 14939 8238 9174 9560 1677 4156 11112, 11424 2037 7827 4640 11071 14007 6628, 13374 10684 2080 14648 1097 14383 1198, 10966 15875 9746 9595 13007 4019 1354 | 1.4 · 10^−22
15 | ≥6 | 0 15439 10581 4136 503 11096 5590 8608 16006 8229 562 15423 14311 16137, 5899 1875 8985 16334 15293 13429 5172 5303 9128 109 10068 1358 7752 6288, 13251 13386 11513 2438 443 15582 4641 2845 3509 12593 6608 14686 11470 15578, 8683 12489 444 8891 4727 12844 12383 5530 4478 9079 9226 5886 6790 8363 | 2 · 10^−38
A. A simple bound
The following simple bound is tight for D ≤ 4.
Theorem 1. For rate (n − 1)/n codes over GF(qm ) with CDP = [2, 3, . . . , D], n − 1 ≤ (qm − 1)/(D − 2).
Proof. For D = 3 the result follows from Proposition 2. Assume that D = 4. Recall that all coefficients are nonzero. Consider
the 2 × 2 minors of the types

| 1 1 ; r1,s r1,t | = r1,s + r1,t ,   | r1,s r1,t ; r2,s r2,t | = r1,s r2,t + r1,t r2,s ,   and   | r1,s 1 ; r2,s r1,t | = r2,s + r1,s r1,t .   (8)

(Here | a b ; c d | denotes the determinant of the 2 × 2 matrix with rows (a, b) and (c, d).)
From the conditions on the 2 × 2 proper minors, since all those minors have to be nonzero, it follows that in order to have
D > 3, the values in the sets {r1,1 , . . . , r1,k } and {r2,1 /r1,1 , . . . , r2,k /r1,k } must be 2k distinct values in GF(qm ) \ {0}.
Now consider a code with D > 4. Then the minors

| r2,s r2,t ; r3,s r3,t | = r2,s r3,t + r2,t r3,s ,   | r2,s 1 ; r3,s r1,t | = r2,s r1,t + r3,s ,   and   | r2,s r1,t ; r3,s r2,t | = r2,s r2,t + r1,t r3,s .   (9)
Again they all have to be nonzero, and this implies that the set {r3,1 /r2,1 , . . . , r3,k /r2,k } is a new set of k different values,
and they are all different from the values in the sets {r1,1 , . . . , r1,k } and {r2,1 /r1,1 , . . . , r2,k /r1,k }. So in order to have D ≥ 5 we
need to have at least 3k different nonzero elements in the field.
Generalizing the argument, it follows that all ri,t /ri−1,t for 1 ≤ i ≤ D − 2, 1 ≤ t ≤ k (with r0,t = 1) are distinct nonzero values. Hence k(D − 2) ≤ q^m − 1, that is, n − 1 = k ≤ (q^m − 1)/(D − 2), as claimed.
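The necessary condition derived in this proof is easy to test mechanically. The following small Python sketch (ours, not from the paper) checks, for a candidate set of coefficient layers, that the ratios r_{i,t}/r_{i−1,t} (with r_{0,t} = 1) are pairwise distinct nonzero field elements; `gf_div` is an assumed division routine for GF(q^m).

```python
# Sketch (ours): check the necessary condition of Theorem 1.
# rows[i-1][t-1] holds r_{i,t} for 1 <= i <= D-2, 1 <= t <= k.

def ratios_are_distinct(rows, gf_div):
    seen = set()
    prev = [1] * len(rows[0])            # r_{0,t} = 1 for every t
    for row in rows:
        for r_prev, r_cur in zip(prev, row):
            q = gf_div(r_cur, r_prev)    # the ratio r_{i,t} / r_{i-1,t}
            if q == 0 or q in seen:
                return False
            seen.add(q)
        prev = row
    return True
```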
B. Rareness
In this section we address the probability that a randomly generated convolutional code over GF(2m ) of rate (n − 1)/n will
be an MDS code with CDP of [2, . . . , D]. By “randomly generated” code we will mean one generated by a random systematic
encoder, where each coding coefficient ri, j is selected independently and uniformly in GF(2m ) \ {0}. We define this probability
as the rareness of the parameter pair (n, D).
For small values of n and D, the exact value of the rareness can be determined as a by-product of a complete code search.
Since for large parameters it quickly becomes intractable to determine the best codes, it also quickly becomes difficult to compute
exact results for rareness. However, it is possible to obtain estimates of rareness, as described below.
First assume that a complete search is applied. This will determine the set G(ρ∗, n, m) of distinct sequences r1,k−1 , . . . , r1,1 ,
r2,k , . . . , rρ∗ over GF(2^m) for which all proper submatrices in Mρ′ , ρ′ ≤ ρ∗, are nonsingular. Thus the probability that a given
randomly selected sequence corresponds to a path in the search tree that satisfies the conditions at depth ρ∗ is

PR(ρ∗, n, m) = |G(ρ∗, n, m)| / (2^m − 1)^{|ρ∗|} .
For |ρ∗| > 1, define

PR(ρ∗, n, m | ρ∗ − 1) = PR(ρ∗, n, m) / PR(ρ∗ − 1, n, m) = Avg( |L(ρ∗)| / (2^m − 1) )   (10)
where Avg() is the average computed over the complete search. PR(ρ∗, n, m | ρ∗ − 1) is the average conditional probability that
a random generator which satisfies depth ρ∗ − 1 in the search tree also satisfies depth ρ∗. For large parameters we are not able
to carry out a complete search. However, we can perform deep but incomplete searches, which also provide estimates of the
conditional probabilities PR(ρ∗, n, m | ρ∗ − 1) in (10). These estimates will be quite accurate, especially for the first depths,
and hence they can be chained together to obtain an estimate for PR(ρ∗, n, m). As long as there is a substantial number of
different search tree paths leading to depth ρ∗ − 1, the estimate of PR(ρ∗, n, m | ρ∗ − 1) should be reasonably good. Hence we can
estimate PR(ρ∗, n, m | ρ∗ − 1) as
P̃R(ρ∗, n, m | ρ∗ − 1) = Ãvg( |L(ρ∗)| / (2^m − 1) )

where Ãvg() is the (weighted) average computed over the incomplete search, and we can then estimate PR(ρ∗, n, m) as

P̃R(ρ∗, n, m) = ∏_{ρ=(1,k−1)}^{ρ∗} P̃R(ρ, n, m | ρ − 1),

where for the first depth ρ = (1, k − 1) the conditioning term PR(ρ − 1, n, m) is taken to be 1.
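The chaining of the per-depth factors into P̃R(ρ∗, n, m) amounts to a running product. A minimal Python sketch (ours, with toy numbers only) is:

```python
# Sketch (ours): chain per-depth acceptance ratios, as in (10), into rareness estimates.
# legal_fraction[d] is assumed to hold the (weighted) average of |L(rho)|/(2^m - 1)
# observed at search depth d in a complete or incomplete search.

def chained_rareness(legal_fraction):
    estimates, p = [], 1.0
    for frac in legal_fraction:     # one conditional factor per depth
        p *= frac
        estimates.append(p)
    return estimates

# toy numbers, for illustration only
print(chained_rareness([1.0, 0.9, 0.8, 0.5, 0.2]))
```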
In Tables II–XIII, we include the exact rareness in cases where we can perform a complete search, and otherwise we include
the estimate. We concede that this approach is not foolproof. For example, the construction in Proposition 3 is unique, at least for
field sizes up to 32. For choices of the first layer of coefficients r1,1 , . . . , r1,2^m−1 other than the one indicated in the proof of Proposition 3,
it appears that the search tree ends up being considerably shallower. The rareness of the construction in Proposition 3, i.e. the
probability that a random sequence will match that construction exactly, is 2^{m−1} · (2^{m−1} − 1)! / (2^m − 1)^{2^m−3}. Already for m = 5
the rareness is about 10^−30, and for m = 8 it is less than 10^−393. Hence, if for an arbitrary set of search parameters there exists a very
rare construction that is not caught by the incomplete search, the estimates for the deepest values of ρ∗ may be imprecise.
However, we do believe that our estimates of PR(ρ∗, n, m) provide some intuition about the difficulty of reaching a certain
depth in the search tree with a random path, and in the cases where we are able to carry out a complete search, we also note
that the estimates as described here are quite accurate with a modest non-exhaustive search effort.
Figure 1 contains exact values (for n = 2, 3) and estimates (for n = 4, 7) of PR(ρ∗, n, 6). Please see the figure caption for
explanations. We have also included rareness estimates in Tables II–XIII.
V. C ONCLUSION AND OPEN PROBLEMS
Motivated by the practical problem of fast recovery of a coded packet-erasure channel, we have studied systematic MDS
convolutional codes over GF(2m ). We have characterized them in terms of k-superregularity of a certain matrix. We have
presented new optimum constructions for free distances D ≤ 4, tables of new codes found by computer search, and a
combinatorial upper bound which is tight in the case of small free distances. In order to assess how “good” a code is,
we have also introduced the concept of rareness.
It would be interesting to establish upper bounds that are tight also for larger free distances. Another issue would be to
study whether there exist general algebraic constructions, similar to the one in Proposition 3, for systematic MDS codes of
free distance D ≥ 5.
It would also be of some theoretical interest to optimize the CDP of strongly-MDS codes over GF(2m ) under an additional
constraint on the degree δ of their minimal encoders. We have not considered this problem since the complexity of Viterbi
decoding of such codes is prohibitive for all but small values of the product m · δ (and since it seems difficult).
R EFERENCES
[1] E. M. Gabidulin, “Convolutional codes over large alphabets,” in Proc. Int. Workshop on Algebraic Combinatorial and Coding Theory, Varna, Bulgaria,
1988, pp. 80—84.
[2] Heide Gluesing-Luerssen, Joachim Rosenthal, and Roxana Smarandache, “Strongly-MDS Convolutional Codes,” IEEE Transactions on Information
Theory, vol. 52, no. 2, Feb. 2006, pp. 584–598.
[3] Paulo Almeida, Diego Napp, and Raquel Pinto, “A new class of superregular matrices and MDP convolutional codes,”
[Figure 1: plot of log10 P(random code is MDS) against the number of coefficients ρ (starting with r1,k−1), with one curve each for rates 1/2, 2/3, 3/4, and 6/7; see the caption below.]
Figure 1. Rareness PR(ρ, n, 6) of codes for GF(64) for n ∈ {2, 3, 4, 7}: exact rareness PR(ρ, n, 6) for ρ ≤ 7, estimates P̃R(ρ, n, 6) for ρ > 7. In the figure, the
search depth ρ is measured in terms of the number of coefficients. In order to construct a rate 6/7 encoder of distance D = 5, it is necessary to find a sequence
of 17 coefficients r1,5 , . . . , r1,1 , r2,6 , . . . , r3,1 . To get an encoder with distance D = 4, 11 coefficients suffice. Similarly for the other cases.
[4] Pierre Ugo Tournoux, Emmanuel Lochin, Jérôme Lacan, Amine Bouabdallah, and Vincent Roca, “On-the-Fly Erasure Coding for Real-Time Video
Applications,” IEEE Transactions on Multimedia, vol. 17, no. 4, (2011), pp. 797–812.
[5] M. Kim, J. Cloud, A. Parandeh Gheibi, L. Urbina, K. Fouli, D. J. Leith, and M. Médard, “Network Coded TCP (CTCP),” http://arxiv.org/abs/1212.2291.
[6] A. D. Wyner and R. B. Ash, “Analysis of recurrent codes,” IEEE Transactions on Information Theory,Vol. 9, Issue: 3, Jul. 1963, pp. 143 – 156.
[7] Øyvind Ytrehus, “Ascetic convolutional codes,” Proc. 33rd. Allerton Conference on Communications, control, and Computing (October 1995), 382–390.
[8] Robert J. McEliece, The Algebraic Theory of Convolutional Codes, in: Handbook of Coding Theory, Eds. V. S. Pless and W. C. Huffman, pp. 1065–1138,
North-Holland, 1998.
[9] S. Lin and D. Costello, Error Control Coding, 2nd. Ed., Prentice-Hall, 2004.
[10] J. H. Stott, A. Oliphant, D. W. Osborne, “Digital video: Error correcting codes and a practical study of a Wyner-Ash error corrector,” Techn. Report,
British Broadcasting Corporation, December 1974.
[11] J. Justesen and L. Hughes, “On maximum-distance-separable convolutional codes (Corresp.),” in IEEE Transactions on Information Theory, vol. 20, no.
2, pp. 288–288, Mar 1974.
[12] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier, 1977.
[13] J. A. Heller, “Sequential decoding: Short constraint length convolutional codes,” Space Programs Summary JPL, Pasadena, CA, 1968.
[14] Eirik Rosnes and Øyvind Ytrehus, “Sphere-packing bounds for convolutional codes,” IEEE Transactions on Information Theory,Vol. 50, Issue: 11, Nov.
2004, pp. 2801 – 2809.
A PPENDIX A : P ROOF OF L EMMA 1
Proof. Before starting we will set some notation.
In H (D) for each s = 1, . . . , (D + 1) let Cs be the set of column indices Cs = {(s − 1)(k + 1) + 1, (s − 1)(k + 1) + 2, . . . , s(k + 1)}.
Taking into account the way H (D) is constructed it is clear that for any s ≤ D, the submatrix of H (D) formed by the first
s + 1 rows and the columns in C1 , . . . ,Cs+1 is H (s) (the parity-check matrix of the s-th truncation).
Also the submatrix formed by the last s + 1 rows and the columns in CD−s+1 , . . . ,CD+1 is H (s) .
For each set of column indices Cs the last index is s(k + 1) and the corresponding column in H (D) is the s-th column of the
identity ID+1 .
In an analogous way we will call C′s the set of column indices C′s = {(s − 1)k + 1, (s − 1)k + 2, . . . , sk} in the matrix H′(D).
In what follows we will use the same name for a square submatrix and for the corresponding minor since it will create no
confusion.
Now we start the proof.
1) Assume that the CDP is (d0 = 2, d1 = 3, . . . , dD = D + 2).
In particular d0 = 2 implies that all the entries r0, j for j ∈ C1 are non zero.
Let M′ be a proper minor of H′(D) of size p × p formed by the entries of H′(D) in rows with indices 1 ≤ i′1 < i′2 < · · · <
i′p ≤ D + 1 and columns with indices 1 ≤ j′1 < j′2 < · · · < j′p ≤ k(D + 1).
Since M′ is proper we have j′l ≤ k i′l for l = 1, . . . , p.
Let F′ = {i′1 , . . . , i′p } be the set of row indices in M′. From M′ we construct a (D + 1) × (D + 1) minor M in H(D) by
doing the following:
• The row indices are {1, . . . , D + 1}.
• For each s ∈ {1, . . . , D + 1} we define the column index js as follows:
– If s ∈ F′ then there exists a unique l(s) ∈ {1, . . . , p} such that i′l(s) = s (note that l(s) ≤ s). Considering the
corresponding column index in M′ we have j′l(s) = ql(s) k + rl(s) with 0 ≤ ql(s) ≤ D and 1 ≤ rl(s) ≤ k unique. (We
note that l is an increasing function of s and also that j′l(s) ≤ k i′l(s) = ks, which implies ql(s) < s.)
Then define js = ql(s) (k + 1) + rl(s) . Clearly j′l(s) ∈ C′ql(s)+1 and js ∈ Cql(s)+1 , and actually the corresponding columns
are identical.
– If s ∉ F′ then js = s(k + 1), so the corresponding column is the last in the block with column indices Cs .
Let us note that 1 ≤ j1 , . . . , jD+1 ≤ (D + 1)(k + 1), but those column indices are not guaranteed to be ordered in increasing
order as the j′s were.
The added columns will form a submatrix which is ID+1−p in the rows that were not in F′, and we have

M = ( ID+1−p  ? ;  0  M′ ),

therefore the value of the minor M′ is the same as the value of M, and in order to see that H′(D) is k-superregular we just
need to check that M ≠ 0. We will proceed in a recursive way using that each s-th truncation will provide minimum
distance ds = s + 2 for each s ≤ D.
• M has at least one column index in C1 .
Proof: If there are no columns in C1 it means that 1 ∈ F′ (otherwise j1 = 1 · (k + 1), which is in C1 , would be in M).
1 ∈ F′ implies i′1 = 1, hence l(1) = 1 and j′1 ≤ k i′1 = k, that is, j′1 = 0 · k + r1 , and j1 = 0 · (k + 1) + r1 < (k + 1). This
means j1 ∈ C1 , which would contradict the assumption.
• If M has exactly 1 column in C1 then the other D columns have indices in C2 ∪ · · · ∪ CD+1 and all have 0 in the first
position, so we have

M = ( r0,j1  0 · · · 0 ;  ∗  M2...D+1 ),

where M2...D+1 is the submatrix of M formed by the last D rows and the last D columns. Since r0,j1 ≠ 0 we have
M ≠ 0 if and only if M2...D+1 ≠ 0, and we can proceed working with M2...D+1 in H(D−1) in the same way.
• If M has at least two columns in C1 , suppose that s is the first index for which we have that at least two columns
of M are in C1 , at least 3 are in C1 ∪ C2 , . . . , at least s + 1 columns are in C1 ∪ C2 ∪ · · · ∪ Cs , but there are no s + 2
columns in C1 ∪ C2 ∪ · · · ∪ Cs ∪ Cs+1 .
This clearly implies that there are no columns of M in Cs+1 .
Now let us consider each t ∈ {1, . . . , s + 1}.
– If t ∈ F′ there exists l(t) ≤ t ≤ s + 1 such that i′l(t) = t. Then j′l(t) = ql(t) k + rl(t) ≤ k i′l(t) = kt, therefore ql(t) ≤ t − 1, and from here jt = ql(t) (k + 1) + rl(t) ≤ (t − 1)(k + 1) + rl(t) .
So jt ∈ C1 ∪ C2 ∪ · · · ∪ Ct ⊆ C1 ∪ C2 ∪ · · · ∪ Cs+1 , but it cannot be in Cs+1 , so jt ∈ C1 ∪ C2 ∪ · · · ∪ Cs .
– If t ∉ F′, note that in this case t ≤ s, since s + 1 ∉ F′ would imply that column (s + 1)(k + 1) is in M and in Cs+1 , contradicting
that there were no columns in Cs+1 . Then jt = t(k + 1) ∈ C1 ∪ C2 ∪ · · · ∪ Cs .
We have proven that even though the indices j1 , . . . , js+1 are not ordered in increasing order, they are all in
C1 ∪ C2 ∪ · · · ∪ Cs . On the other hand, index js+2 ∉ C1 ∪ C2 ∪ · · · ∪ Cs+1 .
Hence, M can be decomposed as

M = ( M1...s+1  0 ;  ∗  Ms+2...D+1 ),
where M1...s+1 is the part of M corresponding to the first s + 1 rows and columns, and we have proven it is contained
in the submatrix of H(D) formed by the first s + 1 rows and the first (s + 1)(k + 1) columns, which actually is H(s) , and
it is guaranteed to be nonzero because ds = s + 2 and the minor satisfies the condition that it has at least 2 columns
among the first k + 1 columns of H(s) , at least three among the first 2(k + 1), . . ., and at least s + 1 among the first
s(k + 1).
The minor Ms+2...D+1 is formed by the last D − s rows and columns of M and it is contained in the submatrix of H(D)
formed by the last D − s rows and the last (D − s)(k + 1) columns, which is H(D−s−1) , and the same argument used
so far can be used to prove that it is nonzero by decomposing it further into blocks, each of them nonzero.
Finally, we note that M will have at most one column index in CD+1 . Having at least two would imply that M′ also has
at least two columns in C′D+1 , and this would contradict the condition of M′ being proper, since i′p−1 ≤ D and j′p−1 ∈
C′D+1 implies j′p−1 ≥ kD + 1 > kD = k i′p−1 .
2) Suppose now that H′(D) is k-superregular.
Consider a minor M of size (D + 1) in H(D) formed by the columns in positions 1 ≤ j1 < j2 < · · · < jD+1 ≤ (D + 1)(k + 1)
and assume that j2 ≤ (k + 1), j3 ≤ 2(k + 1), . . . , jD+1 ≤ D(k + 1).
We construct a minor M′ by removing from M any column which is in position s(k + 1) and the corresponding row s.
As before, it is clear that

M = ( ID+1−p  ? ;  0  M′ ),

where D + 1 − p is the number of removed columns and p is the size of the remaining minor M′.
With a careful analysis similar to the one done in the reciprocal part of the proof, one can prove that M′ is a proper minor
in H′(D) and hence nonzero. For this we will continue using the same notations as in the proof of the reverse direction.
Consider that the rows remaining in M′ are 1 ≤ i′1 < i′2 < · · · < i′p = D + 1. We call this set of indices F′ as before. The other
rows correspond to the identity columns that have been suppressed.
The corresponding column indices in M′ are 1 ≤ j′1 < j′2 < · · · < j′p , and each of those columns j′t is a copy of a column
jf(t) in M. It is clear that jf(t) ∈ Cb(t) for some b(t) ≤ D + 1 implies j′t ∈ C′b(t) . Using the same notations as in the
other part of the proof, if jf(t) = qf(t) (k + 1) + rf(t) with 0 ≤ qf(t) ≤ D and 1 ≤ rf(t) ≤ k, then j′t = qf(t) k + rf(t) , so they
will be in the same block of column indices; b(t) = qf(t) + 1. Note that rf(t) ≤ k since column indices that are multiples
of k + 1 will be removed and will never turn into columns in M′.
• First we observe that i′p = D + 1, because there were no columns of M in the block with indices in CD+1 , hence the
last column of ID+1 cannot be removed (it was never there) and row D + 1 remains in F′.
The corresponding column j′p will be a copy of some column jf(p) ≤ jD+1 ∈ C1 ∪ · · · ∪ CD , so j′p ∈ C′1 ∪ · · · ∪ C′D , hence
j′p ≤ Dk < (D + 1)k ≤ i′p k. So the proper condition is satisfied for the last index.
• In general, when we consider the row index in position p − s we have i′p−s = D + 1 − s − r, where r is the number of
removed identity columns after it.
Column j′p−s is a copy of column jf(p−s) and f(p − s) ≤ D + 1 − s − r (s columns after it have already been considered and
r have been removed). From here we have jf(p−s) ≤ jD+1−s−r ∈ C1 ∪ · · · ∪ CD−s−r and this implies j′p−s ≤ (D − s − r)k <
i′p−s k.
• A final observation is that j′1 is always in block C′1 (because block C1 contained at least two columns of M, so even
if one is removed there will always be at least one column remaining in that first block). On the other hand, i′1 ≥ 1
and we have j′1 ≤ k ≤ k i′1 .
The first and last observations are not necessary, but they help to understand the general case.
We have proven that the minor M′ is proper and therefore cannot be singular.
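The "proper" condition j′_l ≤ k·i′_l that drives both directions of this proof is straightforward to test; a tiny illustrative Python helper (ours, with hypothetical names) is:

```python
# Sketch (ours): test whether a p x p minor of H'(D), given by increasing 1-based
# row indices I and column indices J, is proper, i.e. j'_l <= k * i'_l for all l.

def is_proper(I, J, k):
    return all(j <= k * i for i, j in zip(I, J))
```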
FUSION SYSTEMS WITH SOME SPORADIC J-COMPONENTS
arXiv:1605.04615v3 [math.GR] 7 Jun 2017
JUSTIN LYND AND JULIANNE RAINBOLT
Abstract. Aschbacher’s program for the classification of simple fusion systems of “odd”
type at the prime 2 has two main stages: the classification of 2-fusion systems of subintrinsic component type and the classification of 2-fusion systems of J-component type. We
make a contribution to the latter stage by classifying 2-fusion systems with a J-component
isomorphic to the 2-fusion systems of several sporadic groups under the assumption that
the centralizer of this component is cyclic.
1. Introduction
The Dichotomy Theorem for saturated fusion systems [AKO11, II 14.3] partitions the
class of saturated 2-fusion systems into the fusion systems of characteristic 2-type and the
fusion systems of component type. This is a much cleaner statement than the corresponding
statement for finite simple groups, and it has a much shorter proof. In the last few years, M.
Aschbacher has begun work on a program to give a classification of a large subclass of the
2-fusion systems of component type. A memoir setting down the outline and first steps of
such a program is forthcoming [Asc16], but see [Asc15] for a survey of some of its contents.
The immediate goal is to give a simpler proof of roughly half of the classification of the
finite simple groups by carrying out most of the work in the category of saturated 2-fusion
systems.
Let F be a saturated fusion system over a finite 2-group S, of which the standard example
is the fusion system FS (G), where G is a finite group and S is a Sylow 2-subgroup of G. A
component is a subnormal, quasisimple subsystem. The system is said to be of component
type if some involution centralizer in F has a component. The 2-fusion systems of odd type
consist of those of subintrinsic component type and those of J-component type. This is a
proper subclass of the 2-fusion systems of component type. In focusing attention on this
restricted class, one is expected to avoid several difficulties in the treatment of standard form
problems like the ones considered in this paper. By carrying out the work in fusion systems,
it is expected that certain difficulties within the classification of simple groups of component
type can be avoided, including the necessity of proving Thompson’s B-conjecture.
We refer to [Asc16] for the definition of a fusion system of subintrinsic component type,
as it is not needed in this paper. The fusion system F is said to be of J-component type if
it is not of subintrinsic component type, and there is a (fully centralized) involution x ∈ S
such that the 2-rank of CS (x) is equal to the 2-rank of S, and CF (x) has a component. We
shall call such a component in an involution centralizer a J-component.
Date: March 13, 2018.
Key words and phrases. fusion systems, sporadic groups, involution centralizer, components.
The research of the first author was partially supported by NSA Young Investigator Grant H98230-14-10312 and was supported by an AMS-Simons grant which allowed for travel related to this work.
In this paper, we classify saturated 2-fusion systems having a J-component isomorphic to
the 2-fusion system of M23 , J3 , McL, or Ly under the assumption that the centralizer of the
component is a cyclic 2-group. A similar problem for the fusion system of L2 (q), q ≡ ±1
(mod 8) was treated in [Lyn15] under stronger hypotheses.
Theorem 1.1. Let F be a saturated fusion system over the finite 2-group S. Suppose that
x ∈ S is a fully centralized involution such that F ∗ (CF (x)) ∼
= Q × K, where K is the 2-fusion
system of M23 , J3 , McL, or Ly, and where Q is a cyclic 2-group. Assume further that
m(CS (x)) = m(S). Then K is a component of F .
Here F ∗ (CF (x)) is the generalized Fitting subsystem of the centralizer CF (x) [Asc11],
and m(S) := m2 (S) is the 2-rank of S – that is, the largest rank of an elementary abelian
2-subgroup of S. We mention that any fusion system having an involution centralizer with a
component isomorphic to McL or Ly is necessarily of subintrinsic component type by [Asc16,
6.3.5]. This means that, when restricted to those components, Theorem 1.1 gives a result
weaker than is needed to fit into the subintrinsic type portion of Aschbacher’s program.
However, we have included McL and Ly here because our arguments apply equally well in
each of the four cases.
There is no almost simple group with an involution centralizer having any of these simple
groups as a component, but the wreath product G = (K1 × K2 )hxi with K1x = K2 always
has CG (x) = hxi × K with K a component that is diagonally embedded in K1 × K2 . The
strategy for the proof of Theorem 1.1 is to locate a suitable elementary abelian subgroup F
in the Sylow 2-subgroup of K, and then to show that the normalizer in S of E := hxiF has
at least twice the rank as that of F . Thus, the aim is to force a resemblance with the wreath
product, in which NG (hxiF ) modulo core is an extension of F1 × F2 (with Fi the projection
of F onto the ith factor) by hxi × AutK (F ). Lemma 3.2 is important for getting control of
the extension of E determined by NF (E) in order to carry out this argument.
Acknowledgements. We would like to thank the Department of Mathematics and Statistics at Saint Louis University and the Departments of Mathematics at Rutgers University
and Ohio State University for their hospitality and support during mutual visits of the authors. We would also like to thank R. Solomon and R. Lyons for helpful discussions, and an
anonymous referee for their comments and suggestions.
2. Background on fusion systems
We assume some familiarity with notions regarding saturated fusion systems as can be
found in [AKO11] or [Cra11], although some items are recalled below. Most of our notation
is standard.
Whenever G is a group, we write G# for the set of nonidentity elements of G. If we wish
to indicate that G is a split extension of a group A P G by a group B, then we will write
G = A · B. For g ∈ G, denote by cg the conjugation homomorphism cg : x 7→ xg and its
restrictions. Morphisms in fusion systems are written on the right and in the exponent.
That is, we write xϕ (or P ϕ ) for the image of an element x (or subgroup P ) of S under a
morphism ϕ in a fusion system, by analogy with the more standard exponential notation for
conjugation in a group.
2.1. Terminology and basic properties. Throughout this section, fix a saturated fusion
system F over the p-group S. We will sometimes refer to S as the Sylow subgroup of F . For
a subgroup P ≤ S, we write AutF (P ) for HomF (P, P ), and OutF (P ) for AutF (P )/ Inn(P ).
Whenever two subgroups or elements of S are isomorphic in F , we say that they are F conjugate. Write P F for the set of F -conjugates of P . If E is a subsystem of F on the
subgroup T ≤ S and α : T → S is a morphism in F , the conjugate of E by α is the
subsystem E α over T α with morphisms ϕα := α−1 ϕα for ϕ a morphism in E.
We first recall some of the terminology for subgroups and common subsystems in a fusion
system.
Definition 2.1. Fix a saturated fusion system over the p-group S, and let P ≤ S. Then
• P is fully F -centralized if |CS (P )| ≥ |CS (Q)| for all Q ∈ P F ,
• P is fully F -normalized if |NS (P )| ≥ |NS (Q)| for all Q ∈ P F ,
• P is F -centric if CS (Q) ≤ Q for all Q ∈ P F ,
• P is F -radical if Op (OutF (P )) = 1,
• P is weakly F -closed if P F = {P },
• the centralizer of P in F is the fusion system CF (P ) over CS (P ) with morphisms
those ϕ ∈ HomF (Q, R) such that there is an extension ϕ̃ ∈ HomF (P Q, P R) that
restricts to the identity on P ,
• the normalizer of P in F is the fusion system NF (P ) over NS (P ) with morphisms
those ϕ ∈ HomF (Q, R) such that there is an extension ϕ̃ : P Q → P R in F such that
P ϕ̃ = P .
We write F f and F c for the collections of fully F -normalized and F -centric, respectively,
and we write F f c for the intersection of these two collections.
Sometimes we refer to an element x of S as being fully F -centralized, when we actually
mean that the group hxi is fully F -centralized, especially when x is an involution. For
example, this was done in the the statement of the theorem in the introduction.
Whenever P ≤ S, we write A(P ) for the set of α ∈ HomF (NS (P ), S) such that P α is fully
F -normalized.
Lemma 2.2. For each P ≤ S, A(P ) is not empty. Moreover, for each Q ∈ P F ∩ F f , there
is α ∈ A(P ) with P α = Q.
Proof. This is [BLO03, A.2(b)], applied with K = Aut(P ).
By a result of Puig, the centralizer CF (P ) is saturated if P is fully F -centralized, and the
normalizer NF (P ) is saturated if P is fully F -normalized. We write Op (F ) for the (unique)
largest subgroup P of S satisfying NF (P ) = F , and Z(F ) for the (unique) largest subgroup
P of S satisfying CF (P ) = F . We note that if F = FS (G) for some finite group G with
Sylow p-subgroup S, then Op (G) ≤ S is normal in F so that Op (G) ≤ Op (F ), but the
converse does not hold in general.
2.2. The Model Theorem. A subgroup P ≤ S is F -centric if and only if CS (P ) ≤ P and
P is fully F -centralized [AKO11, I.3.1]. If P is F -centric and fully F -normalized, then the
normalizer fusion system M := NF (P ) is constrained – that is, Op (M) is M-centric. By the
Model Theorem [AKO11, Proposition III.5.10], there is then a unique finite group M up to
isomorphism having Sylow p-subgroup NS (P ) and such that Op′ (M) = 1, Op (M) = Op (M),
and FNS (P ) (M) ∼
= M. Then M is said to be a model for M in this case.
2.3. Tame fusion systems. The main hypothesis of Theorem 1.1 is that the generalized
Fitting subsystem of the involution centralizer C is the fusion system of a finite group Q×K,
where Q is a cyclic 2-group, and K is simple. In this situation, CF (x) is itself the fusion
system of a finite group C with F ∗ (C) = Q × K, where K ∼
= M23 , McL, J3 , or Ly, since
each of these simple groups tamely realizes its 2-fusion system [AOV12, Oli16b]. Roughly, a
finite group tamely realizes its fusion system if every automorphism of its fusion system is
induced by an automorphism of the group. Moreover, a fusion system is said to be tame if
there is some finite group that tamely realizes it. We refer to [AOV12] for more details.
The importance of tameness in the context of standard form problems was pointed out in
[Lyn15, §§1.5]. The discussion there is centered around the notion of strong tameness, which
was needed for proofs of the results of [AOV12], but the contents of [Oli13, GL16] imply that
a fusion system is tame if and only if it is strongly tame. Recently, Oliver has established
the following useful corollary of the results in [AOV12], which we state for our setup here.
Theorem 2.3 ([Oli16a, Corollary 2.5]). Let C be a saturated fusion system over a 2-group.
Assume that F ∗ (C) = O2 (C)K, where K is simple and tamely realized by a finite simple group
K. Then C is tamely realized by a finite group C such that F ∗ (C) = O2 (C)K.
Note that, upon application of Theorem 2.3 to the involution centralizer C = CF (x) in
Theorem 1.1, we have O2 (C) = Q = O2 (C). Indeed, O2 (C) ≤ Q since O2 (C) is normal in C,
and one sees that Q = CS (K) by combining Lemma 1.12(c) of [Lyn15] with Lemma 3.7(c)
below. However, O2 (C) is normal and self-centralizing in CC (K) by properties of the generalized Fitting subgroup, so that CC (K)/O2 (C) is a group of outer automorphisms of the
cyclic 2-group O2 (C), and so is itself a 2-group. It follows that CC (K) = CS (K) is a normal
2-subgroup of C (since K E C), and hence Q = CS (K) ≤ O2 (C).
Thus, the effect of Theorem 2.3 for our purposes is that we may work in the group C,
where Q is a normal subgroup. In particular, in the setup of the Theorem 1.1, the quotient
C/Q is isomorphic to a subgroup of Aut(K) containing Inn(K), where K is one of the simple
groups appearing in Theorem 1.1.
3. Structure of the components
In this section, we recall some properties of the simple systems appearing in Theorem 1.1
that are required for the remainder.
Lemma 3.1. Let G be A7 or GL2 (4), and V a faithful F2 [G]-module of dimension 4. Then
(a) G acts transitively on the nonzero vectors of V ,
(b) CGL(V ) (G) ≤ G,
(c) H 1 (G, V ) = 0, and
(d) if G acts on a homocyclic 2-group Y with Ω1 (Y ) = V , then Y = V .
Proof. In each case, V is irreducible. There is a unique such module for GL2 (4) ∼
= C3 × A5 ,
namely the natural F4 [G]-module considered as a module over F2 , and thus (a) holds in
this case. The module for A7 is unique up to taking duals; clearly points (a) and (b) are
independent of the choice between these two modules, and (c) is independent of such a choice
by [Asc00, §17]. Note that A7 acts transitively on V # , which can be seen by noting that
a Sylow 7-subgroup acts with exactly one fixed point on V # , and a Sylow 5-subgroup acts
with no fixed points. Point (b) holds for G = A7 by absolute irreducibility. Similarly for
G = GL2 (4), one has that CGL(V ) (G) = EndF2 [G] (V )× = F×4 , and so (b) follows in this case as Z(G) ∼= C3 . Point (c) for G = GL2 (4) holds because, by coprime action, Z(G) (and so
G) has a fixed point on any 5-dimensional module containing V as a submodule (see [Asc00,
§17]). Point (c) for G = A7 holds, for example, by applying [AG72] with L = {L0 , L1 , L2 },
where L1 = CG ((1, 2, 3)), L2 = CG ((4, 5, 6)), and L0 = L1 ∩ L2 . The Li indeed satisfy the
hypotheses of that theorem: H 0 (Li , V ) = 0 because each Sylow 3-subgroup of GL4 (2) ≥ G
has no nontrivial fixed point on V , and H 1 (Li , V ) = 0 using a similar argument via coprime
action as above.
We now turn to (d), which follows from a special case of a result of G. Higman [Hig68,
Theorem 8.2]. This says that if SL2 (4) acts faithfully on a homocyclic 2-group Y in which
an element of order 3 acts without fixed points on Y # , then Y is elementary abelian. In case
G = GL2 (4), V is the natural module for G and certainly SL2 (4) ≤ G has (with respect to
an appropriate basis) a diagonal element of order 3 acting without fixed points. In the case
G = A7 , we have G ≤ A8 ∼
= GL4 (2), and the action of G on V is the restriction of the natural
(or dual) action of GL4 (2). Restriction of either one to GL2 (4) ∼
= C3 × SL2 (4) ∼
= C3 × A5
shows that SL2 (4) is embedded in G as an A5 moving 5 points in the natural permutation
action. So SL2 (4) is contained in G up to conjugacy, and as before, it has an element of
order 3 acting without fixed points on V # . Hence, (d) holds in this case as well by Higman’s
Theorem.
For a vector space E over the field with two elements, the next lemma examines under rather strong hypotheses the structure of extensions of E by certain subgroups of the
stabilizer in GL(E) of a hyperplane.
Lemma 3.2. Let E ∼= E2n+1 (n ≥ 3), A = Aut(E), V ≤ E with V ∼= E2n , P = NA (V ), and
U = O2 (P ). Let L be a complement to U in P acting decomposably on E, x ∈ E − V the
fixed point for the action of L, and G ≤ L. Let H be an extension of E by UG with the given
action and let X be the preimage in H of U under the quotient map H → UG. Assume that
(a) G acts transitively on V # ;
(b) CGL(V ) (G) ≤ G; and
(c) H 1 (G, V ) = 0.
Then there is a subgroup Y of X that is elementary abelian or homocyclic of order 22n and
a G-invariant complement to hxi in X.
Proof. Let X̄ = X/V . Since the commutator map
(3.3)
[x, −] determines a G-equivariant linear isomorphism X/E → V ,
G is transitive on the nonzero vectors of X/E by (a), and hx̄i ≤ Z(X̄). Hence if X̄ is not
elementary abelian, then it is extraspecial with center hx̄i, and G preserves the squaring map
X̄ → Z(X̄). This is not the case, because G is transitive on the nonzero vectors of X/E and
n ≥ 3. Therefore,
(3.4)
X̄ is elementary abelian.
Assumption (c) now yields that there is a G-invariant complement Ȳ to hx̄i. Let Y be
the preimage of Ȳ in X. We claim that Y is abelian; assume on the contrary. Then [Y, Y ]
and ✵1 (Y ) are contained in V since Ȳ is elementary abelian, and by assumption, neither of
these are trivial. Similarly, V is contained in Z(Y ), which is not Y . Therefore,
V = [Y, Y ] = Φ(Y ) = ✵1 (Y ) = Z(Y ),
(3.5)
by (a) and (3.3).
By (3.5), the squaring map Ȳ → V is a G-equivariant linear isomorphism; let √− be its inverse. Then the map ξ : V → V given by ξ(v) = [x, √v ] is a linear isomorphism commuting
with the action of G, and so ξ ∈ ρ(G) by (b), where ρ : G → GL(V ) is the structure map.
Let g ∈ G map to ξ −1 under ρ. Then y 7→ [x, y g ] is the squaring map. This means that for
each y ∈ Y , we have y gx = y 2 y g . Hence, for each pair w, y ∈ Y ,
w 2 y 2 w g y g = w gx y gx = (wy)gx = (wy)2w g y g
which gives w 2 y 2 = (wy)2 = w 2 y 2 [y, w]. Thus Y is abelian after all. It follows that Ω1 (Y ) =
V or Y by (a), and this completes the proof of the lemma.
We now examine the structure of the simple systems occupying the role of K in Theorem 1.1. Let T0 be a 2-group isomorphic to a Sylow 2-subgroup of L3 (4). This is generated
by involutions t1 , t2 , a1 , a2 , b1 , and b2 such that Z(T0 ) = ht1 , t2 i and with additional defining
relations:
[a1 , a2 ] = [b1 , b2 ] = 1,
[a1 , b1 ] = [a2 , b2 ] = t1 ,
[a2 , b1 ] = t2 ,
[a1 , b2 ] = t1 t2 .
A Sylow subgroup of M23 or McL is isomorphic to a Sylow 2-subgroup of an extension of
L3 (4) by a field automorphism; this is a semidirect product T0 hf i with
f 2 = 1,
[a1 , f ] = [a2 , f ] = a1 a2 ,
[b1 , f ] = [b2 , f ] = b1 b2 ,
[t2 , f ] = t1 .
A Sylow subgroup of J3 is isomorphic to a Sylow 2-subgroup of L3 (4) extended by a unitary
(i.e., graph-field) automorphism; this is a semidirect product T0 hui with
u2 = 1,
[a1 , u] = [u, b1 ] = a1 b1 ,
[a2 , u] = [u, b2 ] = a2 b2 ,
[t2 , u] = t1 .
A Sylow subgroup of Ly is isomorphic to a Sylow 2-subgroup of Aut(L3 (4)); this is a semidirect product T0 hf, ui with [f, u] = 1 and the relations above.
Denote by T1 a 2-group isomorphic to one of T0 hf i, T0 hui, or T0 hf, ui. Recall that the
Thompson subgroup J(P ) of a finite p-group P is the subgroup generated by the elementary
abelian subgroups of P of largest order.
Lemma 3.6. Let K be M23 , McL, J3 , or Ly, with Sylow 2-subgroup T1 as above. Then
(a) Z(T1 ) = ht1 i is of order 2;
(b) A(T1 ) = {F1 , F2 } where F1 = ht1 , t2 , a1 , a2 i and F2 = ht1 , t2 , b1 , b2 i, so that J(T1 ) =
T0 . Also, after suitable choice of notation, one of the following holds:
(i) K = M23 , McL, or Ly, and AutK (F1 ) ∼
= A7 , or
(ii) K = J3 and AutK (F1 ) ∼
= GL2 (4).
(c) There is F ∈ A(T1 ) such that the pair (AutK (F ), F ) satisfies assumptions (a)-(c) of
Lemma 3.2 in the role of (G, V ).
(d) All involutions in ht1 , t2 i are AutK (J(T1 ))-conjugate.
Proof. Point (a) holds by inspection of the relations above. Now F1 and F2 are the elementary
abelian subgroups of T0 of maximal rank, and so to prove (b), it suffices to show that each
elementary abelian subgroup of maximal rank in T1 is contained in T0 . Set L := L3 (4), and
identify Inn(L) with L. Write Inndiag(L) ≥ L for the group of inner-diagonal automorphisms
of L. Then Inndiag(L) contains L with index 3, corresponding to the size of the center of
the universal version SL3 (4) of L [GLS98, Theorem 2.5.12(c)]. Also, Aut(L) is a split
extension of Inndiag(L) by ΦL ΓL = ΦL × ΓL , where ΦL = hϕi ∼= C2 is generated by a field automorphism of L, and ΓL = hγi ∼= C2 is generated by a graph automorphism [GLS98,
Theorem 2.5.12]. By [GLS98, Theorems 4.9.1, 4.9.2], each involution of Aut(L) − L is Aut(L)-conjugate to ϕ, ϕγ, or γ, and the centralizers in L of these automorphisms are isomorphic
to L3 (2), U3 (2) ∼
= (C3 × C3 )Q8 , and Sp2 (4) ∼
= A5 , respectively, again by those theorems.
These centralizers have 2-ranks 2, 1, and 2, respectively. Since T0 has 2-rank 4, this shows
that J(T1 ) ≤ T0 . From the relations used in defining T0 , each involution in T0 is contained
in one of F1 or F2 , so we conclude that A(T1 ) = A(T0 ) = {F1 , F2 }. The description of the
automizers in (b) follows from [Fin76a, Table 1] for M23 and McL, [Fin76b, Lemma 3.7]
for J3 , and [Wil84] for Ly. Now point (c) follows from (b) and Lemma 3.1, and point (d)
follows from (c) and Burnside’s fusion theorem (i.e. the statement that the automizer of a
weakly K-closed subgroup of T1 (which is J(T1 ) in this case) controls the K-conjugacy in its
center).
Lemma 3.7. Let K be one of the sporadic groups M23 , McL, J3 , or Ly, and let T1 be a
Sylow 2-subgroup of K. Then
(a) Out(K) = 1 if K ∼= M23 or Ly, and Out(K) ∼= C2 otherwise; and
(b) for each involution α ∈ Aut(K) − Inn(K),
(i) CK (α) ∼= M11 if K ∼= McL, and
(ii) CK (α) ∼= L2 (17) if K ∼= J3 ; and
(c) each automorphism of K centralizing a member of A(T1 ) is inner.
Proof. Points (a) and (b) follow by inspection of Table 5.3 of [GLS98]. By Lemma 3.6(b),
the 2-rank of K is 4, while each of the centralizers in (b)(i-ii) is of 2-rank 2, so (c) holds.
4. Preliminary lemmas
We now begin in this section the proof of Theorem 1.1, and so we fix the notation and
hypotheses that will hold throughout the remainder of the paper.
Let F be a saturated fusion system over the 2-group S, and let x ∈ S be an involution.
Assume that hxi is fully F -centralized, that m(CS (x)) = m(S), and that F ∗ (CF (x)) = Q×K,
where Q is cyclic. Set C = CF (x) and T = CS (x), so that C is a saturated fusion system
over T by the remark just after Lemma 2.2. Let T1 be the Sylow subgroup of K, and set
R := Q × T1 ≤ T.
Assume that K is the fusion system of one of the sporadic groups K = M23 , J3 , McL, or
Ly. Since K tamely realizes K in each case, the quotient
(4.1)
T /R induces a 2-group of outer automorphisms of K
by Theorem 2.3. Arguing by contradiction, we assume
K is not a component of F.
We fix the presentation in Section 3 for T1 in whichever case is applicable, and we note that
Ω1 (Q) = hxi by assumption on Q.
Lemma 4.2. Notation may be chosen so that T and J(T ) are fully F -normalized.
Proof. We repeatedly use Lemma 2.2. Let α ∈ A(T ). Then as |CS (xα )| ≥ |T α | = |T | =
|CS (x)|, we have that xα is still fully F -centralized. Thus we may assume that T is fully F normalized after replacing x, T , and J(T ) by their conjugates under α. Now let β ∈ A(J(T )).
Then as NS (T ) = NNS (J(T )) (T ), it follows that
|NS (T β )| = |NNS (J(T )β ) (T β )| ≥ |NNS (J(T ))β (T β )| = |NNS (J(T )) (T )β | = |NS (T )β | = |NS (T )|
and so equality holds because T is fully F -normalized. Hence T β is still fully F -normalized.
As before, xβ is still fully F -centralized.
Lemma 4.3. hxi is not weakly F -closed in T .
Proof. Assume on the contrary, in which case T = S, otherwise NS (T ) contains T properly
and moves x. It follows that x ∈ Z(S), which is contained in the center of every F -centric
subgroup. Hence x ∈ Z(F ) by Alperin’s fusion theorem [BLO03, Theorem A.10], and we
conclude that K is a component of C = F , contrary to hypothesis.
Lemma 4.4. The following hold.
(a) J(T ) = J(R) = hxi × J(T1 ); and
(b) CT (T1 ) = Qht1 i.
Proof. Suppose (a) does not hold. Choose A ∈ A(T ) with A ≰ hxi × J(T1 ). Then A ≰ R
by the structure of R, and A acts nontrivially on K by (4.1). In particular, |Out(K)| = 2
and m(CR (A)) ≤ 3 by Lemma 3.7. Hence, m(A) ≤ 4 while m(R) = 5. This contradicts the
choice of A and establishes (a). By Lemma 3.6(a), CR (T1 ) = Qht1 i. Also, CT (T1 ) = CR (T1 )
by part (a), so (b) is also established.
Lemma 4.5. If T = S, then Ω1 Z(S) = hx, t1 i. If T < S, then Ω1 (Z(S)) = ht1 i.
Proof. Note first that Ω1 (Z(S)) ≤ T . By Lemma 3.7(c) and (4.1), Ω1 (Z(S)) ≤ R, and so
Ω1 Z(S) ≤ Ω1 (Z(R)) = hx, t1 i by Lemma 3.6(a). Thus, the lemma holds in case T = S. In
case T < S, let a ∈ NS (T )−T with a2 ∈ T . Note that [J(T ), J(T )] = [J(T1 ), J(T1 )] = ht1 , t2 i
by Lemma 4.4(a). So a normalizes Ω1 Z(T ) = hx, t1 i and Ω1 Z(T ) ∩ [J(T ), J(T )] = hx, t1 i ∩
ht1 , t2 i = ht1 i, but a does not centralize x. Thus, Ω1 Z(S) = ht1 i as claimed.
Lemma 4.6. The following hold.
(a) x is not F -conjugate to t1 ; and
(b) x is conjugate to xt1 if and only if T < S, and in this case x is NS (T )-conjugate to
xt1 .
Proof. For part (a), let ϕ ∈ F with xϕ ∈ Z(T ). Assume first that T = S. By the extension axiom, ϕ extends to ϕ̃ ∈ AutF (T ), which restricts to an automorphism of J(T ).
Lemma 4.4(a) shows that x ∈
/ [J(T ), J(T )], while t1 ∈ [J(T ), J(T )] from Lemma 3.6(d).
Hence, xϕ 6= t1 ; this shows x is not F -conjugate to t1 in this case.
Now assume that T < S. Then x ∈
/ Z(S), whereas Z(S) = ht1 i by Lemma 4.5. Since
hxi is fully F -centralized by assumption, we conclude that x is not F -conjugate to t1 in this
case either. This completes the proof of (a).
If T < S, then x is NS (T )-conjugate to xt1 by (a), while if T = S, then (a) and Burnside’s
fusion theorem imply that hxi is weakly F -closed in Ω1 (Z(S)) = hx, t1 i. Thus, (b) holds.
5. The 2-central case
In this section it is shown that T < S; that is, x is not 2-central. We continue the notation
set at the beginning of Section 4.
Lemma 5.1. If T = S, then hxi is weakly F -closed in R.
Proof. Assume T = S. Then Ω1 (Z(S)) = hx, t1 i by Lemma 4.5. Using Burnside’s fusion
theorem and assumption, we see from Lemma 4.6 that
(5.2)
hxi is weakly F -closed in hx, t1 i.
By inspection of [GLS98, Table 5.3], K has one class of involutions. Thus, there are exactly
three C-classes of involutions, namely {x}, (xt1 )C , and tC1 . The lemma therefore holds by
(5.2).
Lemma 5.3. If T = R, then T < S. In particular, T < S in case K is the fusion system of
M23 or Ly.
Proof. Assume T = R, and also to the contrary that T = S. By Lemma 5.1, hxi ≤
Z(S) is weakly F -closed, and so is fixed by each automorphism of each F -centric subgroup.
Therefore, hxi ≤ Z(F ), and so K is a component of C = F , contrary to assumption. The
last statement follows then follows from Lemma 3.7(a).
Lemma 5.4. Assume K is the fusion system of McL or J3 . Then T < S.
Proof. Assume T = S. Then R < T by Lemma 5.3. Fix f ∈ xF ∩ (T − R). The extension
Khf i := KT1 hf i of K is defined by [Asc11, §8], and Khf i/Q is the 2-fusion system of
Aut(McL) or Aut(J3 ) by Theorem 2.3. Thus, |T : R| = 2 by Lemma 3.7(b).
Conjugating in Khf i if necessary, we may assume hf i is fully Khf i-centralized. By
Lemma 3.7(b), all involutions of T1 hf i−T1 are Khf i-conjugate, and CKhf i (f ) ∼
= hf i×F2 (M11 )
or hf i × F2 (L2 (17)). In particular, CT1 (f ) is semidihedral or dihedral, respectively, of order
16 and with center ht1 i. Fix a four subgroup V ≤ CT1 (f ). Then f is conjugate to f t1 (for
example, by an element in the normalizer of CT1 hf i (f ) in T1 hf i), and hence is F -conjugate
to each element of f V by the structures of F2 (M11 ) and F2 (L2 (17)).
Fix α ∈ HomF (hf i, hxi). By the extension axiom, α extends to a morphism, which we
also call α, defined on hf iV ≤ CT (f ). Therefore, x is F -conjugate to each element in xV α .
Now the intersection xV α ∩ R = V α ∩ R is nontrivial because |T : R| = 2, so as x is not
itself in V α , we see that x has a distinct conjugate in R. This contradicts Lemma 5.1 and
completes the proof.
6. Proof of Theorem 1.1
Continue the notation and hypotheses set at the beginning of Section 4. In addition, we
fix F ∈ A(T1 ) satisfying assumptions (a)-(c) of Lemma 3.2, as guaranteed by Lemma 3.6(c),
and set E := hxiF . Then E ∈ A(T ) by Lemma 4.4(a), and so
(6.1)
m(T ) = 5.
In this section, we finish the proof of Theorem 1.1, by showing that the hypotheses of
Lemma 3.2 hold for a model of the normalizer in F of an appropriate F -conjugate of E. Via
Lemma 3.1(d), this forces the 2-rank of S to be at least 8, contrary to the hypothesis that
m(S) = m(T ).
By Lemmas 5.3 and 5.4,
T < S.
Lemma 6.2. | AutF (T ) : AutC (T )| = 2.
Proof. Represent AutF (T ) on Ω1 Z(T ) = hx, t1 i and apply Lemma 4.6.
Lemma 6.3. The following hold.
(a) | AutF (J(T )) : AutC (J(T ))| = 4 and
(b) xAutF (E) = xF , and so | AutF (E) : AutC (E)| = 16.
Proof. Represent AutF (J(T )) on Ω1 Z(J(T )) = hx, t1 , t2 i. Now AutC (J(T )) = CAutF (J(T )) (x),
and the former is transitive on Ω1 Z(J(T1 ))# by Lemma 3.6(d). Also, since x is NS (T )-conjugate
(a) holds.
Similarly to (a), we have AutC (E) = CAutF (E) (x), and the former is transitive on F #
by choice of F . From Lemma 4.4(a) and Lemma 3.6(a), |A(T )| = 2. By part (a) and
Lemma 4.2, |NS (J(T )) : T | = 4. Representing NS (J(T )) on A(T ), we see that the kernel
has index at most 2, so there is an element of NS (J(T ))−T that normalizes E. In particular,
NT (E) < NS (E), and so x is AutF (E)-conjugate to a member of xF # . Now by choice of
F , another appeal to Lemma 4.6 yields that xAutF (E) = F x has size 16, which establishes
(b).
Lemma 6.4. The following hold:
(a) Q = hxi.
(b) E is F -centric.
Proof. Suppose on the contrary that Q > hxi and choose w ∈ Q with w 2 = x. Fix a ∈
NS (T ) − T such that a2 ∈ T . Then xa = xt1 and also (w a )2 = xt1 . Further hw a i is normal in
T since hwi is. Thus, [hw a i, T1 ] ≤ hw a i ∩ T1 = 1, whereas CT (T1 ) = Qht1 i by Lemma 4.4(b).
It follows that hxt1 i = ✵1 (hw a i) ≤ Ω1 ✵1 (Qht1 i) = hxi, a contradiction that establishes (a).
Let E0 be one of the two elementary abelian subgroups of rank 5 in T , and set F0 =
E0 ∩ J(T1 ). Then E0 = hxiF0 contains x, and so CS (E0 ) = CT (E0 ). By Lemma 3.7(c),
CT (E0 ) = CR (E0 ) = QF0 . Hence CS (E0 ) = QF0 = E0 by part (a).
We can now prove (b). Fix α ∈ A(E). Since hxi is fully centralized, the restriction of α−1
to hxα i has an extension β : CS (xα ) → CS (x) = T , which is defined on CS (E α ). Thus, setting
E0 := CS (E α )β ≤ T , we see from the previous paragraph that |CS (E)| = |CS (E0 )| = |E0 | =
|CS (E α )|, so that CS (E α ) = E α . As E α is fully F -normalized and contains its centralizer in
S, this means that E is F -centric, as claimed.
Since we will be working in NF (E) for the remainder, we may assume, after replacing E by
an F -conjugate if necessary, that E is fully F -normalized. Hence E ∈ F f c by Lemma 6.4(b).
Fix a model H for NF (E) (cf. §§2.2).
Lemma 6.5. H satisfies the hypotheses of Lemma 3.2.
Proof. Set G̃ = AutC (E) and observe that AutF (E) ∼
= H/E by Lemma 6.4(b). Thus G̃
contains G := A7 or GL2 (4) with index 1 or 2. As G acts transitively on F # and centralizes
x, it follows from Lemma 6.3(b) that xF and F # are the orbits of AutF (E) on E # . Hence
AutF (E) ≤ NAut(E) (F ), a nontrivial split extension of an elementary abelian 2-group U of
order 16 by GL(F ) with the standard action.
We claim that U ≤ AutF (E). Suppose that this is not the case. Now G acts transitively
on F # and the commutator map [x, −] defines an isomorphism of G-modules from U to F , so
G acts transitively on U # . Since U is normalized by AutF (E), we see that AutF (E) ∩ U = 1.
In particular, AutF (E) embeds into GL(F ). Now G ∼
= A7 or GL2 (4) in the cases under
consideration, and by Lemma 6.3(b), AutF (E) is therefore a subgroup of GL(F ) containing
G with index 16 or 32. However, A7 has index 8 in GL4 (2), and GL2 (4) is contained with
index 2 in a unique maximal subgroup of GL4 (2), a contradiction. Therefore, U ≤ AutF (E)
as claimed.
It has thus been shown that AutF (E) contains a subgroup with index 1 or 2 that is a split
extension of U = O2 (NAut(E) (F )) by G. Thus, H has a subgroup of index 1 or 2 that is an
extension of E by UG. Assumptions (a)-(c) of Lemma 3.2 hold via Lemma 3.6(c) by the
choice of F .
Proof of Theorem 1.1. Keep the notation of the proof of Lemma 6.5. By that lemma and
Lemma 3.2, there is a G-complement Y to hxi in O2 (H) that is homocyclic of order 28 with
Ω1 (Y ) = F , or elementary abelian of order 28 . Now G is isomorphic to A7 or GL2 (4) with
faithful action on F , so the former case is impossible by Lemma 3.1. Hence, m2 (T ) = 5 <
8 ≤ m2 (S), contrary to hypothesis.
Institute of Mathematics, University of Aberdeen, Fraser Noble Building, Aberdeen AB24 3UE
E-mail address: [email protected]
Department of Mathematics and Statistics, Saint Louis University, 220 North Grand Blvd., Saint Louis, MO 63103
E-mail address: [email protected]
Prediction with a Short Memory
arXiv:1612.02526v3 [cs.LG] 9 Nov 2017
Sham Kakade
University of Washington
[email protected]
Percy Liang
Stanford University
[email protected]
Vatsal Sharan
Stanford University
[email protected]
Gregory Valiant
Stanford University
[email protected]
Abstract
We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/ε observations obtains expected KL error ε (and hence ℓ1 error √ε) with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length O(log n/ε) windows of observations achieves this error, provided the length of the sequence is d^{Ω(log n/ε)}, where d is the size of the observation alphabet.
We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses: First, for HMMs with n hidden states, a window length of log n/ε is information-theoretically necessary to achieve expected KL error ε, or ℓ1 error √ε. Second, the d^{Θ(log n/ε)} samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d is necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of CSPs.
1 Memory, Modeling, and Prediction
We consider the problem of predicting the next observation xt given a sequence of past observations,
x1 , x2 , . . . , xt−1 , which could have complex and long-range dependencies. This sequential prediction
problem is one of the most basic learning tasks and is encountered throughout natural language
modeling, speech synthesis, financial forecasting, and a number of other domains that have a
sequential or chronological element. The abstract problem has received much attention over the
last half century from multiple communities including TCS, machine learning, and coding theory.
The fundamental question is: How do we consolidate and reference memories about the past in
order to effectively predict the future?
Given the immense practical importance of this prediction problem, there has been an enormous
effort to explore different algorithms for storing and referencing information about the sequence.
These efforts have led to recurrent neural networks [1]—which encode the past as a real vector of
fixed length that is updated after every observation—and specific classes of such networks, such
as Long Short-Term Memory (LSTM) networks [2, 3]. Other recently popular models that have
explicit notions of memory include neural Turing machines [4], memory networks [5], differentiable
neural computer [6], etc. These models have been quite successful (see e.g. [7, 8]); nevertheless,
they seem largely unable to consistently learn long-range dependencies, which are crucial in many
settings including language.
In parallel to these efforts to design systems that explicitly use memory, there has been much
effort from the neuroscience community to understand how humans and animals are able to make
accurate predictions about their environment. Many of these efforts also attempt to understand
the computational mechanisms behind the formation of memories (memory “consolidation”) and
retrieval [9, 10, 11].
Despite the long history of studying sequential prediction, many fundamental questions remain:
• How much memory is necessary to accurately predict future observations, and what properties
of the underlying sequence determine this requirement?
• Must one remember significant information about the distant past or is a short-term memory
sufficient?
• What is the computational complexity of accurate prediction?
• How do answers to the above questions depend on the metric that is used to evaluate prediction accuracy?
Aside from the intrinsic theoretical value of these questions, their answers could serve to guide
the construction of effective practical prediction systems, as well as informing the discussion of the
computational machinery of cognition and prediction/learning in nature.
In this work, we provide insights into the first three questions. We begin by establishing the
following proposition, which addresses the first two questions with respect to the pervasively used
metric of average prediction error:
Proposition 1. Let M be any distribution over sequences with mutual information I(M) between the past observations . . . , x_{t−2}, x_{t−1} and future observations x_t, x_{t+1}, . . .. The best ℓ-th order Markov model, which makes predictions based only on the most recent ℓ observations, predicts the distribution of the next observation with average KL error I(M)/ℓ or average ℓ1 error √(I(M)/ℓ), with respect to the actual conditional distribution of x_t given all past observations.
The intuition behind the statement and proof of this general proposition is the following: at time
t, we either predict accurately and are unsurprised when xt is revealed to us, or if we predict poorly
and are surprised by the value of xt , then xt must contain a significant amount of information about
the history of the sequence, which can then be leveraged in our subsequent predictions of xt+1 , xt+2 ,
etc. In this sense, every timestep in which our prediction is ‘bad’, we learn some information about
the past. Because the mutual information between the history of the sequence and the future is
bounded by I(M), if we were to make I(M) consecutive bad predictions, we have captured nearly
this amount of information about the history, and hence going forward, as long as the window we
are using spans these observations, we should expect to predict well.
This general proposition, framed in terms of the mutual information of the past and future,
has immediate implications for a number of well-studied models of sequential data, such as Hidden
Markov Models (HMMs). For a HMM with n hidden states, the mutual information of the generated
sequence is trivially bounded by log n, which yields the following corollary to the above proposition.
We state this proposition now, as it provides a helpful reference point in our discussion of the more
general proposition.
Corollary 1. Suppose observations are generated by a Hidden Markov Model with at most n hidden states. The best (log n)/ε-th order Markov model, which makes predictions based only on the most recent (log n)/ε observations, predicts the distribution of the next observation with average KL error ≤ ε or ℓ1 error ≤ √ε, with respect to the optimal predictor that knows the underlying HMM and has access to all past observations.
In the setting where the observations are generated according to an HMM with at most n hidden
states, this “best” `th order Markov model is easy to learn given sufficient data, and corresponds
to the naive “empirical” `-gram model based on the previous observations. Specifically, this is
the model that, given xt−` , xt−`+1 , . . . , xt−1 , outputs the observed (empirical) distribution of the
observation that has followed this length ` sequence. (To predict what comes next in the phrase
“. . . defer the details to the ” we look at the previous occurrences of this subsequence, and predict
according to the empirical frequency of the subsequent word.) The following theorem makes this
claim precise.
Theorem 1. Suppose observations are generated by a Hidden Markov Model with at most n hidden states, and output alphabet of size d. For ε > 1/n there exists a window length ℓ = O(log n/ε) and absolute constant c such that for any T ≥ d^{cℓ}, if t ∈ {1, 2, . . . , T} is chosen uniformly at random, then the expected ℓ1 distance between the true distribution of x_t given the entire history (and knowledge of the HMM), and the distribution predicted by the naive “empirical” ℓ-th order Markov model based on x_0, . . . , x_t, is bounded by √ε.
The above theorem states that the window length necessary to predict well is independent of
the mixing time of the HMM in question, and holds even if the model does not mix. While the
amount of data required to make accurate predictions using length-ℓ windows scales exponentially in ℓ (corresponding to the condition in the above theorem that t is chosen uniformly between 0 and T = d^{O(ℓ)}), our lower bounds, discussed in Section 1.3, argue that this exponential dependency
is unavoidable.
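To make the naive “empirical” ℓ-gram model concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the class name and the fall-back-to-uniform rule for unseen windows are our choices). It keeps counts of which symbol followed each length-ℓ window seen so far and predicts the empirical conditional distribution.

from collections import Counter, defaultdict

class EmpiricalMarkov:
    """Naive empirical l-gram predictor: predict x_t from counts of the
    symbols that followed the window (x_{t-l}, ..., x_{t-1}) so far."""

    def __init__(self, window, alphabet):
        self.window = window
        self.alphabet = list(alphabet)
        self.counts = defaultdict(Counter)   # window tuple -> Counter of next symbols

    def update(self, history, next_symbol):
        if len(history) >= self.window:
            key = tuple(history[-self.window:])
            self.counts[key][next_symbol] += 1

    def predict(self, history):
        """Return a dict mapping each symbol to its predicted probability."""
        key = tuple(history[-self.window:]) if len(history) >= self.window else None
        c = self.counts.get(key, Counter())
        total = sum(c.values())
        if total == 0:   # unseen window: fall back to the uniform distribution
            return {a: 1.0 / len(self.alphabet) for a in self.alphabet}
        return {a: c[a] / total for a in self.alphabet}

# Streaming use: predict, then observe and update.
model = EmpiricalMarkov(window=3, alphabet="ab")
history = []
for x in "abababababab":
    dist = model.predict(history)
    model.update(history, x)
    history.append(x)
print(dist)   # after enough data, the mass concentrates on the correct next symbol

With window length ℓ and alphabet size d this table has at most d^ℓ rows, which is exactly the d^{Θ(ℓ)} space and sample cost discussed around Theorem 1.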
1.1 Interpretation of mutual information of past and future
While the mutual information between the past observations and the future observations is an
intuitive parameterization of the complexity of a distribution over sequences, the fact that it is the
right quantity is a bit subtle. It is tempting to hope that this mutual information is a bound on
the amount of memory that would be required to store all the information about past observations
that is relevant to the distribution of future observations. Consider the following setting: Given a joint distribution over random variables PAST and FUT, suppose we wish to define a function f that maps PAST to a binary “advice”/memory string f(PAST), possibly of variable length, such that FUT is independent of PAST, given f(PAST). As is shown in Harsha et al. [12], there are joint distributions over (PAST, FUT) such that even on average, the minimum length of the advice/memory string necessary for the above task is exponential in the mutual information I(PAST; FUT). This setting can also be interpreted as a two-player communication game where one player generates PAST and the other generates FUT given limited communication (i.e. the ability to communicate f(PAST)). 1
Given the fact that this mutual information is not even an upper bound on the amount of
memory that an optimal algorithm (computationally unbounded, and with complete knowledge of
the distribution) would require, Proposition 1 might be surprising.
1.2 Implications of Proposition 1 and Corollary 1.
These results show that a Markov model—a model that cannot capture long-range dependencies
or structure of the data—can predict accurately on any data-generating distribution, provided the
order of the Markov model scales with the complexity of the distribution, as parameterized by the
mutual information between the past and future. Strikingly, this parameterization is indifferent
to whether the dependencies in the sequence are relatively short-range as in an HMM that mixes
quickly, or very long-range as in an HMM that mixes slowly or does not mix at all. Independent
of the nature of these dependencies, provided the mutual information is small, accurate prediction
is possible based only on the most recent few observations. (See Figure 1 for a concrete illustration
of this result in the setting of an HMM that does not mix and has long-range dependencies.)
Figure 1: A depiction of an HMM on n states that repeats a given length-n binary sequence of outputs, and
hence does not mix. Corollary 1 and Theorem 1 imply that accurate prediction is possible based only on
short sequences of O(log n) observations.
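The construction in Figure 1 is easy to simulate. The sketch below (our own illustration; n = 512 and the window of 2·log₂ n bits are arbitrary choices) draws a random length-n binary sequence, treats it as the cyclic output of the n-state HMM, and checks that length-O(log n) windows essentially always determine the next output, so the empirical ℓ-gram predictor succeeds even though the chain never mixes.

import random
from collections import defaultdict

random.seed(0)
n = 512                       # number of hidden states = period of the cycle
seq = [random.randint(0, 1) for _ in range(n)]
window = 2 * n.bit_length()   # an O(log n) window, as in Corollary 1

# For each length-`window` pattern occurring in the cyclic sequence, record the
# set of bits that can follow it.
followers = defaultdict(set)
for t in range(n):
    pattern = tuple(seq[(t + i) % n] for i in range(window))
    followers[pattern].add(seq[(t + window) % n])

ambiguous = sum(1 for s in followers.values() if len(s) > 1)
print(f"{len(followers)} distinct windows, {ambiguous} ambiguous")
# For a random sequence, `ambiguous` is typically 0: the short window pins down
# the position on the cycle, so the empirical predictor is eventually optimal.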
At a time where increasingly complex models such as recurrent neural networks and neural
Turing machines are in vogue, these results serve as a baseline theoretical result. They also help
explain the practical success of simple Markov models, such as “Kneser-Ney” smoothing [13, 14],
which are crucial components in state-of-the-art machine translation and speech recognition systems. Although recent recurrent neural networks have yielded empirical gains (see e.g. [7, 8]),
current models still seem largely incapable of successfully capturing long-range dependencies.2 In
some settings, such as natural language, capturing such long-range dependencies seems crucial for achieving human-level results. Indeed, the main message of a narrative is not conveyed in any single short segment. More generally, higher-level intelligence seems to be about the ability to judiciously decide what aspects of the observation sequence are worth remembering and updating a model of the world based on these aspects.
1 It is worth noting that if the advice/memory string s is sampled first, and then PAST and FUT are defined to be random functions of s, then the length of s can be related to I(PAST; FUT) (see [12]). This latter setting where s is generated first corresponds to allowing shared randomness in the two-player communication game; however, this is not relevant to the sequential prediction problem.
2 One amusing example is the recent sci-fi short film Sunspring, whose script was automatically generated by an LSTM. Locally, each sentence of the dialogue (mostly) makes sense, though there is no cohesion over longer time frames, and no overarching plot trajectory (despite the brilliant acting).
Thus, for such settings, Proposition 1 can be interpreted as a negative result: that average error is not a good metric for training and evaluating models. It is important to note that average prediction error is the metric that is ubiquitously used in practice, both in the natural language
processing domain and elsewhere. Our results suggest that a different metric might be essential
to driving progress towards systems that attempt to capture long-range dependencies and leverage
memory in meaningful ways. We discuss this possibility of alternate prediction metrics more in
Section 1.4.
For many other settings, such as financial prediction and lower level language prediction tasks
such as those used in OCR or speech recognition, average prediction error is the most meaningful
metric. For these settings, the result of Proposition 1 is extremely positive: no matter the nature of
the dependencies in the financial markets, it is sufficient to learn a Markov model. As one obtains
more and more data, one can learn a higher and higher order Markov model, and average prediction
accuracy should continue to improve.
For these applications, the question now becomes a computational question: the naive approach
to learning an ℓ-th order Markov model in a domain with an alphabet of size d might require Ω(d^ℓ)
space to store, and data to learn. From a computational standpoint, is there a better algorithm?
What properties of the underlying sequence imply that such models can be learned, or approximated
more efficiently or with less data?
Our computational lower bounds, described below, provide some perspective on these computational considerations.
1.3 Lower bounds
Our positive results show that accurate prediction is possible via an algorithmically simple model—
a Markov model that only depends on the most recent observations—which can be learned in an
algorithmically straightforward fashion by simply using the empirical statistics of short sequences
of examples, compiled over a sufficient amount of data. Nevertheless, the Markov model has d^ℓ parameters, and hence requires an amount of data that scales as Ω(d^ℓ) to learn, where d is a bound
on the size of the observation alphabet. This prompts the question of whether it is possible to learn
a successful predictor based on significantly less data.
We show that, even for the special case where the data sequence is generated from an HMM over
n hidden states, this is not possible in general, assuming a natural complexity-theoretic assumption.
An HMM with n hidden states and an output alphabet of size d is defined via only O(n^2 + nd) parameters, and O(n^2 + nd) samples are sufficient, from an information theoretic standpoint, to learn a model that will predict accurately. While learning an HMM is computationally hard (see e.g. [15]), this begs the question of whether accurate (average) prediction can be achieved via a computationally efficient algorithm and an amount of data significantly less than the d^{Θ(log n)} that the naive Markov model would require.
Our main lower bound shows that there exists a family of HMMs such that the d^{Ω(log n/ε)} sample complexity requirement is necessary for any computationally efficient algorithm that predicts accurately on average, assuming a natural complexity-theoretic assumption. Specifically, we show that this hardness holds, provided that the problem of strongly refuting a certain class of CSPs is hard, which was conjectured in [16] and studied in related works [17] and [18]. See Section 5 for a description of this class and discussion of the conjectured hardness.
Theorem 2. Assuming the hardness of strongly refuting a certain class of CSPs, for all sufficiently large n and any ε ∈ (1/n^c, 0.1) for some fixed constant c, there exists a family of HMMs with n hidden states and an output alphabet of size d such that any polynomial time algorithm that achieves average error ε (with respect to the optimal predictor) for a random HMM in the family must observe d^{Θ(log n/ε)} observations from the HMM.
As the mutual information of the generated sequence of an HMM with n hidden states is bounded by log n, Theorem 2 directly implies that there are families of data-generating distributions M with mutual information I(M) and observations drawn from an alphabet of size d such that any computationally efficient algorithm requires d^{Ω(I(M)/ε)} samples from M to achieve average error ε. The above bound holds when d is large compared to log n or I(M), but a different but equally
relevant regime is where the alphabet size d is small compared to the scale of dependencies in the
sequence (for example, when predicting characters [19]). We show lower bounds in this regime
of the same flavor as those of Theorem 2 except based on the problem of learning a noisy parity
function; the (very slightly) subexponential algorithm of Blum et al. [20] for this task means that
we lose at least a superconstant factor in the exponent in comparison to the positive results of
Proposition 1.
Proposition 2. Let f(k) denote a lower bound on the amount of time and samples required to learn parity with noise on uniformly random k-bit inputs. For all sufficiently large n and ε ∈ (1/n^c, 0.1) for some fixed constant c, there exists a family of HMMs with n hidden states such that any algorithm that achieves average prediction error ε (with respect to the optimal predictor) for a random HMM in the family requires at least f(log n/ε) time or samples.
Finally, we also establish the information theoretic optimality of the results of Proposition 1, in the sense that among (even computationally unbounded) prediction algorithms that predict based only on the most recent ℓ observations, an average KL prediction error of Ω(I(M)/ℓ), and ℓ1 error Ω(√(I(M)/ℓ)), with respect to the optimal predictor, is necessary.
Proposition 3. There is an absolute constant c < 1 such that for all 0 < ε < 1/4 and sufficiently large n, there exists an HMM with n hidden states such that it is not information-theoretically possible to obtain average KL prediction error less than ε or ℓ1 error less than √ε (with respect to the optimal predictor) while using only the most recent c log n/ε observations to make each prediction.
1.4 Future Directions
As mentioned above, for the settings in which capturing long-range dependencies seems essential,
it is worth re-examining the choice of “average prediction error” as the metric used to train and
evaluate models. One possibility, that has a more worst-case flavor, is to only evaluate the algorithm
at a chosen set of time steps instead of all time steps. Hence the naive Markov model can no longer
do well just by predicting well on the time steps when prediction is easy. In the context of natural
language processing, learning with respect to such a metric intuitively corresponds to training a
model to do well with respect to a question answering task instead of a language modeling task.
A fertile middle ground between average error (which gives too much reward for correctly guessing
common words like “a” and “the”), and worst-case error might be a re-weighted prediction error that
provides more reward for correctly guessing less common observations. It seems possible, however,
that the techniques used to prove Proposition 1 can be extended to yield analogous statements for
such error metrics.
Given the many settings for which average error is the most natural metric of prediction accuracy, and the upper bounds of Proposition 1, it is natural to consider what additional structure
might be present that avoids the (conditional) computational lower bounds of Theorem 2. One possibility is a robustness property—for example the property that a Markov model would continue to
predict well even when each observation were obscured or corrupted with some small probability.
The lower bound instances in Theorem 2 and Proposition 2 rely on parity based constructions
and hence are very sensitive to noise and corruptions. For learning over product distributions,
there are well known connections between noise stability and approximation by low-degree polynomials [21, 22]. Additionally, low-degree polynomials can be learned agnostically over arbitrary
distributions via polynomial regression [23]. It is tempting to hope that this thread could be made
rigorous, by establishing a connection between natural notions of noise stability over arbitrary distributions, and accurate low-degree polynomial approximations. Such a connection could lead to
significantly better sample complexity requirements for prediction on such “robust” distributions of
sequences, perhaps requiring only poly(d, I(M), 1/ε) data. Additionally, such sample-efficient approaches to learning succinct representations of large Markov models may inform the many practical
prediction systems that currently rely on Markov models.
1.5 Related Work
Parameter Estimation. It is interesting to compare using a Markov model for prediction with
methods that attempt to properly learn an underlying model. For example, method of moments
algorithms [24, 25] allow one to estimate a certain class of Hidden Markov model with polynomial
sample and computational complexity. These ideas have been extended to learning neural networks
[26] and input-output RNNs [27]. Using different methods, Arora et al. [28] showed how to learn
certain random deep neural networks. Learning the model directly can result in better sample
efficiency, and also provide insights into the structure of the data. The major drawback of these
approaches is that they usually require the true data-generating distribution to be in (or extremely
close to) the model family that we are learning. This is a very strong assumption that often does
not hold in practice.
Universal Prediction and Coding Theory. On the other end of the spectrum is the class
of no-regret online learning methods which assume that the data generating distribution can even
be adversarial [29]. However, the nature of these results are fundamentally different from ours:
whereas we are comparing to the perfect model that can look at the infinite past, online learning
methods typically compare to a fixed set of experts, which is much weaker.
There is much work on sequential prediction based on KL-error from the information theory
and statistics communities. The philosophy of these approaches are often more adversarial, with
perspectives ranging from minimum description length [30, 31] and individual sequence settings [32],
where no model of the data distribution process is assumed. With regards to worst case guarantees
(where there is no data generation process), and regret as the notion of optimality, there is a line of
work on both minimax rates and the performance of Bayesian algorithms, the latter of which has
favorable guarantees in a sequential setting. With regards to minimax rates, [33] provides an exact
characterization of the minimax strategy, though the applicability of this approach is often limited
to settings where the strategies available to the learner is relatively small (i.e., the normalizing
constant in [33] must exist). More generally, there has been considerable work on the regret in
information-theoretic and statistical settings, such as the works in [32, 34, 35, 36, 37, 38, 39, 40].
With regards to log-loss more broadly, there is considerable work on information consistency
(convergence in distribution) and minimax rates with regards to statistical estimation in parametric
and non-parametric families [41, 42, 43, 44, 45, 46]. In some of these settings, e.g. minimax risk in
parametric, i.i.d. settings, there are characterizations in terms of mutual information [42].
There is also work on universal lossless data compression algorithm, such as the celebrated
Lempel-Ziv algorithm [47]. Here, the setting is rather different as it is one of coding the entire
sequence (in a block setting) rather than prediction loss.
Sequential Prediction in Practice. Our work was initiated by the desire to understand
the role of memory in sequential prediction, and the belief that modeling long-range dependencies
is important for complex tasks such as understanding natural language. There have been many
proposed models with explicit notions of memory, including recurrent neural networks [48], Long
Short-Term Memory (LSTM) networks[2, 3], attention-based models[49], neural Turing machines
[4], memory networks [5], differentiable neural computers [6], etc. While some of these models have
been quite successful in practice (see e.g. [7, 8]), they still largely fail to capture many long-range
dependencies–in the case of LSTMs, for example, it is not difficult to show that they forget the past
exponentially quickly if they are “stable” [1]. To gain more insight into this problem, we began
by analyzing the simplest Markov predictor, and found to our surprise that it performed nearly as
well as one could hope.
2 Proof Sketch of Theorem 1
We provide a sketch of the proof of Theorem 1, which is stronger than Proposition 1 but applies
specifically to sequences generated from a Hidden Markov Model. The core of this proof is the
following lemma that guarantees that the Markov model that knows the true marginal probabilities of all short sequences, will end up predicting well. Additionally, this good expected prediction
will hold with respect to only the randomness of the HMM during the short window, as opposed
to over the randomness of when the window begins (as in our more general results). For settings
such as financial forecasting, this additional guarantee is particularly pertinent; you do not need
to worry about the possibility of choosing an “unlucky” time to begin your trading regime, as long
as you plan to trade for a duration that spans an entire short window. Beyond the extra strength
of this result for HMMs, the proof approach is intuitive and pleasing, in comparison to the more
direct proof of Proposition 1. We first state the lemma and sketch its proof, and then conclude the
section by describing how this yields Theorem 1.
Lemma 4. Consider an HMM with n hidden states, let the hidden state at time s = 0 be chosen according to an arbitrary distribution π, and denote the observation at time s by x_s. Let OPT_s denote the conditional distribution of x_s given observations x_0, . . . , x_{s−1}, and knowledge of the hidden state at time s = 0. Let M_s denote the conditional distribution of x_s given only x_0, . . . , x_{s−1}, which corresponds to the naive s-th order Markov model that knows only the joint probabilities of sequences of the first s observations. Then with probability at least 1 − 1/n^{c−1} over the choice of initial state, for ℓ = c log n/ε²,

E[ Σ_{s=0}^{ℓ−1} ‖OPT_s − M_s‖_1 ] ≤ 4εℓ,

where the expectation is with respect to the randomness in the outputs x_0, . . . , x_{ℓ−1}.
The proof of this lemma will hinge on establishing a connection between OPT_s (the Bayes optimal model that knows the HMM and the initial hidden state h_0, and at time s predicts the true distribution of x_s given h_0, x_0, . . . , x_{s−1}) and the naive order-s Markov model M_s that knows the joint probabilities of sequences of s observations (given that the initial state is drawn according to π), and predicts accordingly. This latter model is precisely the same as the model that knows the HMM and distribution π (but not h_0), and outputs the conditional distribution of x_s given the observations.
To relate these two models, we proceed via a martingale argument that leverages the intuition that, at each time step, either OPT_s ≈ M_s, or, if they differ significantly, we expect the s-th observation x_s to contain a significant amount of information about the hidden state at time zero, h_0, which will then improve M_{s+1}. Our submartingale will precisely capture the sense that for any s where there is a significant deviation between OPT_s and M_s, we expect the probability of the initial state being h_0 conditioned on x_0, . . . , x_s to be significantly more than the probability of h_0 conditioned on x_0, . . . , x_{s−1}.
More formally, let H_0^s denote the distribution of the hidden state at time 0 conditioned on x_0, . . . , x_s and let h_0 denote the true hidden state at time 0. We show that the following expression is a submartingale:

log( Pr[H_0^s = h_0] / (1 − Pr[H_0^s = h_0]) ) − (1/2) Σ_{i=0}^{s} ‖OPT_i − M_i‖_1².
The fact that this is a submartingale is not difficult: Define R_s as the conditional distribution of x_s given observations x_0, · · · , x_{s−1} and initial state drawn according to π but not being at hidden state h_0 at time 0. Note that M_s is a convex combination of OPT_s and R_s, hence ‖OPT_s − M_s‖_1 ≤ ‖OPT_s − R_s‖_1. To verify the submartingale property, note that by Bayes' rule, the change in the LHS at any time step s is the log of the ratio of the probability of observing the output x_s according to the distribution OPT_s and the probability of x_s according to the distribution R_s. The expectation of this is the KL-divergence between OPT_s and R_s, which can be related to the ℓ1 error using Pinsker's inequality.
At a high level, the proof will then proceed via concentration bounds (Azuma's inequality), to show that, with high probability, if the error from the first ℓ = c log n/ε² timesteps is large, then log( Pr[H_0^{ℓ−1} = h_0] / (1 − Pr[H_0^{ℓ−1} = h_0]) ) is also likely to be large, in which case the posterior distribution of the hidden state, H_0^{ℓ−1}, will be sharply peaked at the true hidden state h_0, unless h_0 had negligible mass (less than n^{−c}) in distribution π.
There are several slight complications to this approach, including the fact that the submartingale
we construct does not necessarily have nicely concentrated or bounded differences, as the first term
in the submartingale could change arbitrarily. We address this by noting that the first term
should not decrease too much except with tiny probability, as this corresponds to the posterior
probability of the true hidden state sharply dropping. For the other direction, we can simply
“clip” the deviations to prevent them from exceeding log n in any timestep, and then show that the
submartingale property continues to hold despite this clipping by proving the following modified
version of Pinsker’s inequality:
Lemma 1 (Modified Pinsker's inequality). For any two distributions µ(x) and ν(x) defined on x ∈ X, define the C-truncated KL divergence as D̃_C(µ ‖ ν) = E_µ[ log min{ µ(x)/ν(x), C } ] for some fixed C such that log C ≥ 8. Then D̃_C(µ ‖ ν) ≥ (1/2) ‖µ − ν‖_1².
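The following numerical spot-check (our own illustration, not part of the proof) samples random distribution pairs and verifies the truncated-KL inequality of Lemma 1 on them; it is a sanity check of the statement, not a proof.

import numpy as np

rng = np.random.default_rng(0)

def truncated_kl(mu, nu, C):
    # D~_C(mu || nu) = E_mu[ log min(mu(x)/nu(x), C) ]
    ratio = np.minimum(mu / nu, C)
    return float(np.sum(mu * np.log(ratio)))

C = np.e ** 8   # log C >= 8, as required by the lemma
for _ in range(2000):
    k = int(rng.integers(2, 8))
    mu = rng.dirichlet(np.ones(k))
    nu = rng.dirichlet(np.ones(k))
    lhs = truncated_kl(mu, nu, C)
    rhs = 0.5 * np.sum(np.abs(mu - nu)) ** 2
    assert lhs >= rhs - 1e-9
print("D~_C(mu || nu) >= 0.5 * ||mu - nu||_1^2 held on all sampled pairs")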
Given Lemma 4, the proof of Theorem 1 follows relatively easily. Recall that Theorem 1 concerns the expected prediction error at a timestep t ← {0, 1, . . . , d^{cℓ}}, based on the model M_emp corresponding to the empirical distribution of length-ℓ windows that have occurred in x_0, . . . , x_t. The connection between the lemma and theorem is established by showing that, with high probability, M_emp is close to M_π̂, where π̂ denotes the empirical distribution of (unobserved) hidden states h_0, . . . , h_t, and M_π̂ is the distribution corresponding to drawing the hidden state h_0 ← π̂ and then generating x_0, x_1, . . . , x_ℓ. We provide the full proof in Appendix A.
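The objects OPT_s and M_s can both be computed exactly by the standard forward recursion, which makes the lemma easy to probe empirically. The sketch below (our own illustration; the small random HMM and all parameter values are arbitrary) compares the predictions made with knowledge of the true initial state against those made knowing only the prior π, and reports the summed ℓ1 gap over a short window.

import numpy as np

rng = np.random.default_rng(1)
n_states, n_obs, ell = 5, 4, 60

# A small random HMM: T[i, j] = P(h_{s+1} = j | h_s = i), O[i, x] = P(x_s = x | h_s = i).
T = rng.dirichlet(np.ones(n_states), size=n_states)
O = rng.dirichlet(np.ones(n_obs), size=n_states)
pi = rng.dirichlet(np.ones(n_states))

# Sample hidden states h_0, ..., h_{ell-1} and observations x_0, ..., x_{ell-1}.
h0 = int(rng.choice(n_states, p=pi))
h, obs = h0, []
for _ in range(ell):
    obs.append(int(rng.choice(n_obs, p=O[h])))
    h = int(rng.choice(n_states, p=T[h]))

def run_forward(initial_belief, observations):
    """At each step s, return the predicted distribution of x_s given the initial
    belief over h_0 and the observations x_0, ..., x_{s-1} (forward recursion)."""
    belief, preds = initial_belief.copy(), []
    for x in observations:
        preds.append(belief @ O)        # predicted distribution of x_s
        belief = belief * O[:, x]       # condition on the observed x_s
        belief = belief / belief.sum()
        belief = belief @ T             # advance to the distribution of h_{s+1}
    return np.array(preds)

opt = run_forward(np.eye(n_states)[h0], obs)   # OPT_s: knows the true initial state
mkv = run_forward(pi, obs)                     # M_s:   knows only the prior pi

gaps = np.abs(opt - mkv).sum(axis=1)           # l1 distance at each step
print("summed l1 gap over the window:", gaps.sum())
# The gap is sizable only at steps where x_s reveals information about h_0;
# summed over the window it stays modest, in the spirit of Lemma 4.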
3 Definitions and Notation
Before proving our general Proposition 1, we first introduce the necessary notation. For any random
variable X, we denote its distribution as P r(X). The mutual information between two random
variables X and Y is defined as I(X; Y ) = H(Y ) − H(Y |X) where H(Y ) is the entropy of Y and
H(Y |X) is the conditional entropy of Y given X. The conditional mutual information I(X; Y |Z)
is defined as:

I(X; Y |Z) = H(X|Z) − H(X|Y, Z) = E_{x,y,z}[ log ( Pr(X|Y, Z) / Pr(X|Z) ) ] = E_{y,z}[ D_KL( Pr(X|Y, Z) ‖ Pr(X|Z) ) ],

where D_KL(p ‖ q) = Σ_x p(x) log( p(x)/q(x) ) is the KL divergence between the distributions p and q. Note that we are slightly abusing notation here as D_KL( Pr(X|Y, Z) ‖ Pr(X|Z) ) should technically be D_KL( Pr(X|Y = y, Z = z) ‖ Pr(X|Z = z) ). But we will ignore the assignment in the conditioning when it is clear from the context. Mutual information obeys the following chain rule:

I(X_1, X_2; Y) = I(X_1; Y) + I(X_2; Y |X_1).
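As a quick sanity check of these definitions (our own illustration, not from the paper), the following snippet computes I(X_1, X_2; Y), I(X_1; Y) and I(X_2; Y | X_1) from an explicit joint table over three binary variables and confirms the chain rule numerically.

import numpy as np

rng = np.random.default_rng(2)
P = rng.random((2, 2, 2))    # joint table P[x1, x2, y] over three binary variables
P /= P.sum()

def mi_12_y(P):
    """I(X1, X2 ; Y)"""
    Pxx = P.sum(axis=2, keepdims=True)       # marginal of (X1, X2)
    Py = P.sum(axis=(0, 1), keepdims=True)   # marginal of Y
    return float(np.sum(P * np.log(P / (Pxx * Py))))

def mi_1_y(P):
    """I(X1 ; Y)"""
    Q = P.sum(axis=1)                        # joint of (X1, Y)
    Px = Q.sum(axis=1, keepdims=True)
    Py = Q.sum(axis=0, keepdims=True)
    return float(np.sum(Q * np.log(Q / (Px * Py))))

def cmi_2_y_given_1(P):
    """I(X2 ; Y | X1) = sum_{x1} P(x1) * I(X2 ; Y | X1 = x1)"""
    total = 0.0
    for x1 in range(P.shape[0]):
        Q = P[x1] / P[x1].sum()              # conditional joint of (X2, Y) given X1 = x1
        Px = Q.sum(axis=1, keepdims=True)
        Py = Q.sum(axis=0, keepdims=True)
        total += P[x1].sum() * float(np.sum(Q * np.log(Q / (Px * Py))))
    return total

# Chain rule: I(X1, X2 ; Y) = I(X1 ; Y) + I(X2 ; Y | X1)
print(mi_12_y(P), mi_1_y(P) + cmi_2_y_given_1(P))   # the two numbers agree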
Given a distribution over infinite sequences {x_t} generated by some model M, where x_t is a random variable denoting the output at time t, we will use the shorthand x_i^j to denote the collection of random variables for the subsequence of outputs {x_i, · · · , x_j}. The distribution of {x_t} is stationary if the joint distribution of any subset of the sequence of random variables {x_t} is invariant with respect to shifts in the time index. Hence Pr(x_{i_1}, x_{i_2}, · · · , x_{i_n}) = Pr(x_{i_1+l}, x_{i_2+l}, · · · , x_{i_n+l}) for any l if the process is stationary.
We are interested in studying how well the output x_t can be predicted by an algorithm which only looks at the past ℓ outputs. The predictor A_ℓ maps a sequence of ℓ observations to a predicted distribution of the next observation. We denote the predictive distribution of A_ℓ at time t as Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1}). We refer to the Bayes optimal predictor using only windows of length ℓ as P_ℓ; hence the prediction of P_ℓ at time t is Pr(x_t | x_{t−ℓ}^{t−1}). P_ℓ is just the naive ℓ-th order Markov predictor provided with the true distribution of the data. Let the Bayes optimal predictor looking at the entire history of the model be P_∞; the prediction of P_∞ at time t is Pr(x_t | x_{−∞}^{t−1}). We will evaluate the predictions of A_ℓ and P_ℓ with respect to P_∞ over a long time window [0 : T − 1].
The crucial property of the distribution that is relevant to our results is the mutual information between past and future observations. For a stochastic process {x_t} generated by some model M, we define the mutual information I(M) of the model M as the mutual information between the past and future, averaged over the window [0 : T − 1]:

I(M) = (1/T) Σ_{t=0}^{T−1} I(x_{−∞}^{t−1} ; x_t^∞)    (3.1)

If the process {x_t} is stationary, then I(x_{−∞}^{t−1} ; x_t^∞) is the same for all time steps, hence I(M) = I(x_{−∞}^{−1} ; x_0^∞).
We compare the predictions of the predictors P_ℓ and A_ℓ with respect to P_∞. Let F(P, Q) be some measure of distance between two predictive distributions. In this work, we consider the KL-divergence, ℓ1 distance and the relative zero-one loss between the two distributions. The KL-divergence and ℓ1 distance between two distributions are defined in the standard way. We define the relative zero-one loss as the difference between the zero-one loss of the optimal predictor P_∞ and the algorithm A_ℓ. We define the expected loss of any predictor A_ℓ with respect to the optimal predictor P_∞ and a loss function F as follows:

δ_F^{(t)}(A_ℓ) = E_{x_{−∞}^{t−1}}[ F( Pr(x_t | x_{−∞}^{t−1}), Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1}) ) ],    δ_F(A_ℓ) = (1/T) Σ_{t=0}^{T−1} δ_F^{(t)}(A_ℓ).

We also define δ̂_F^{(t)}(A_ℓ) and δ̂_F(A_ℓ) for the algorithm A_ℓ in the same fashion as the error in estimating Pr(x_t | x_{t−ℓ}^{t−1}), the true conditional distribution of the model M:

δ̂_F^{(t)}(A_ℓ) = E_{x_{t−ℓ}^{t−1}}[ F( Pr(x_t | x_{t−ℓ}^{t−1}), Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1}) ) ],    δ̂_F(A_ℓ) = (1/T) Σ_{t=0}^{T−1} δ̂_F^{(t)}(A_ℓ).

4 Predicting Well with Short Windows
To establish our general proposition, which applies beyond the HMM setting, we provide an elementary and purely information theoretic proof.
Proposition 1. For any data-generating distribution M with mutual information I(M) between past and future observations, the best ℓ-th order Markov model P_ℓ obtains average KL-error δ_KL(P_ℓ) ≤ I(M)/ℓ with respect to the optimal predictor with access to the infinite history. Also, any predictor A_ℓ with δ̂_KL(A_ℓ) average KL-error in estimating the joint probabilities over windows of length ℓ gets average error δ_KL(A_ℓ) ≤ I(M)/ℓ + δ̂_KL(A_ℓ).
Proof. We bound the expected error by splitting the time interval 0 to T − 1 into blocks of length ℓ. Consider any block starting at time τ. We find the average error of the predictor from time τ to τ + ℓ − 1 and then average across all blocks.
To begin, note that we can decompose the error as the sum of the error due to not knowing the past history beyond the most recent ℓ observations and the error in estimating the true joint distribution of the data over a length-ℓ block. Consider any time t. Recall the definition of δ_KL^{(t)}(A_ℓ):

δ_KL^{(t)}(A_ℓ) = E_{x_{−∞}^{t−1}}[ D_KL( Pr(x_t | x_{−∞}^{t−1}) ‖ Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1}) ) ]
= E_{x_{−∞}^{t−1}}[ D_KL( Pr(x_t | x_{−∞}^{t−1}) ‖ Pr(x_t | x_{t−ℓ}^{t−1}) ) ] + E_{x_{−∞}^{t−1}}[ D_KL( Pr(x_t | x_{t−ℓ}^{t−1}) ‖ Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1}) ) ]
= δ_KL^{(t)}(P_ℓ) + δ̂_KL^{(t)}(A_ℓ).

Therefore, δ_KL(A_ℓ) = δ_KL(P_ℓ) + δ̂_KL(A_ℓ). It's easy to verify that δ_KL^{(t)}(P_ℓ) = I(x_{−∞}^{t−ℓ−1} ; x_t | x_{t−ℓ}^{t−1}). This relation expresses the intuition that the current output (x_t) has a lot of extra information about the past (x_{−∞}^{t−ℓ−1}) if we cannot predict it as well using the ℓ most recent observations (x_{t−ℓ}^{t−1}) as can be done by using the entire past (x_{−∞}^{t−1}). We will now upper bound the total error for the window [τ, τ + ℓ − 1]. We expand I(x_{−∞}^{τ−1} ; x_τ^∞) using the chain rule,

I(x_{−∞}^{τ−1} ; x_τ^∞) = Σ_{t=τ}^{∞} I(x_{−∞}^{τ−1} ; x_t | x_τ^{t−1}) ≥ Σ_{t=τ}^{τ+ℓ−1} I(x_{−∞}^{τ−1} ; x_t | x_τ^{t−1}).

Note that I(x_{−∞}^{τ−1} ; x_t | x_τ^{t−1}) ≥ I(x_{−∞}^{t−ℓ−1} ; x_t | x_{t−ℓ}^{t−1}) = δ_KL^{(t)}(P_ℓ) as t − ℓ ≤ τ and I(X, Y ; Z) ≥ I(X; Z|Y). The proposition now follows from averaging the error across the ℓ time steps and using Eq. 3.1 to average over all blocks of length ℓ in the window [0, T − 1]:

(1/ℓ) Σ_{t=τ}^{τ+ℓ−1} δ_KL^{(t)}(P_ℓ) ≤ (1/ℓ) I(x_{−∞}^{τ−1} ; x_τ^∞)  =⇒  δ_KL(P_ℓ) ≤ I(M)/ℓ.
Proposition 1 also directly gives guarantees for the scenario where the task is to predict the distribution of the next block of outputs instead of just the next immediate output, because KL-divergence obeys the chain rule. The following easy corollary relates KL error to ℓ1 error; it also trivially applies to zero/one loss relative to that of the optimal predictor, as the expected relative zero/one loss at any time step is at most the ℓ1 loss at that time step.
Corollary 2. For any data-generating distribution M with mutual information I(M) between past and future observations, the best ℓ-th order Markov model P_ℓ obtains average ℓ1-error δ_{ℓ1}(P_ℓ) ≤ √(I(M)/2ℓ) with respect to the optimal predictor with access to the infinite history. Also, any predictor A_ℓ with δ̂_{ℓ1}(A_ℓ) average ℓ1-error in estimating the joint probabilities gets average error δ_{ℓ1}(A_ℓ) ≤ √(I(M)/2ℓ) + δ̂_{ℓ1}(A_ℓ).
Proof. We again decompose the error as the sum of the error in estimating P̂ and the error due to not knowing the past history, using the triangle inequality:

δ_{ℓ1}^{(t)}(A_ℓ) = E_{x_{−∞}^{t−1}}[ ‖Pr(x_t | x_{−∞}^{t−1}) − Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1})‖_1 ]
≤ E_{x_{−∞}^{t−1}}[ ‖Pr(x_t | x_{−∞}^{t−1}) − Pr(x_t | x_{t−ℓ}^{t−1})‖_1 ] + E_{x_{−∞}^{t−1}}[ ‖Pr(x_t | x_{t−ℓ}^{t−1}) − Q_{A_ℓ}(x_t | x_{t−ℓ}^{t−1})‖_1 ]
= δ_{ℓ1}^{(t)}(P_ℓ) + δ̂_{ℓ1}^{(t)}(A_ℓ).

Therefore, δ_{ℓ1}^{(t)}(A_ℓ) ≤ δ_{ℓ1}^{(t)}(P_ℓ) + δ̂_{ℓ1}^{(t)}(A_ℓ). By Pinsker's inequality and Jensen's inequality, δ_{ℓ1}^{(t)}(A_ℓ)² ≤ δ_KL^{(t)}(A_ℓ)/2. Using Proposition 1,

δ_KL(A_ℓ) = (1/T) Σ_{t=0}^{T−1} δ_KL^{(t)}(A_ℓ) ≤ I(M)/ℓ.

Therefore, using Jensen's inequality again, δ_{ℓ1}(A_ℓ) ≤ √(I(M)/2ℓ).

5 Lower Bound for Large Alphabets
Our lower bounds for the sample complexity in the large alphabet case leverage a class of Constraint
Satisfaction Problems (CSPs) with high complexity. A class of (Boolean) k-CSPs is defined via a
predicate—a function P : {0, 1}k → {0, 1}. An instance of such a k-CSP on n variables {x1 , · · · , xn }
is a collection of sets (clauses) of size k whose k elements consist of k variables or their negations.
Such an instance is satisfiable if there exists an assignment to the variables x1 , . . . , xn such that
the predicate P evaluates to 1 for every clause. More generally, the value of an instance is the
maximum, over all 2n assignments, of the ratio of number of satisfied clauses to the total number
of clauses.
Our lower bounds are based on the presumed hardness of distinguishing random instances of a certain class of CSPs versus instances of the CSP with high value. There has been much
work attempting to characterize the difficulty of CSPs—one notion which we will leverage is the
complexity of a class of CSPs, first defined in [16] and studied in [17]:
Definition 1. The complexity of a class of k-CSPs defined by predicate P : {0, 1}k → {0, 1} is the
largest r such that there exists a distribution supported on the support of P that is (r − 1)-wise
independent (i.e. “uniform”), and no such r-wise independent distribution exists.
Example 1. Both k-XOR and k-SAT are well-studied classes of k-CSPs, corresponding, respectively,
to the predicates PXOR that is the XOR of the k Boolean inputs, and PSAT that is the OR of the
inputs. These predicates both support (k − 1)-wise uniform distributions, but not k-wise uniform
distributions, hence their complexity is k. In the case of k-XOR, the uniform distribution over
{0, 1}k restricted to the support of PXOR is (k − 1)-wise uniform. The same distribution is also
supported by k-SAT.
A random instance of a CSP with predicate P is an instance such that all the clauses are
chosen uniformly at random (by selecting the k variables uniformly, and independently negating
each variable with probability 1/2). A random instance will have value close to E[P ], where E[P ]
is the expectation of P under the uniform distribution. In contrast, a planted instance is generated
by first fixing a satisfying assignment σ and then sampling clauses that are satisfied, by uniformly
choosing k variables, and picking their negations according to a (r−1)-wise independent distribution
associated to the predicate. Hence a planted instance always has value 1. A noisy planted instance
with planted assignment σ and noise level η is generated by sampling consistent clauses (as above)
with probability 1 − η and random clauses with probability η, hence with high probability it has
value 1 − η + ηE[P ]. Our hardness results are based on distinguishing whether a CSP instance is
random or has a high value.
As one would expect, the difficulty of distinguishing random instances from noisy planted instances, decreases as the number of sampled clauses grows. The following conjecture of Feldman
et al. [16] asserts a sharp boundary on the number of clauses, below which this problem becomes
computationally intractable, while remaining information theoretically easy. The notation is made
more explicit in Appendix B.
Conjectured CSP Hardness [Conjecture 1] [16]: Let Q be any distribution over k-clauses and n variables of complexity r and 0 < η < 1. Any polynomial-time (randomized) algorithm that, given access to a distribution D that equals either the uniform distribution over k-clauses U_k or a (noisy) planted distribution Q_σ^η = (1 − η)Q_σ + ηU_k for some σ ∈ {0, 1}^n and planted distribution Q_σ, decides correctly whether D = Q_σ^η or D = U_k with probability at least 2/3 needs Ω̃(n^{r/2}) clauses.
Feldman et al. [16] proved the conjecture for the class of statistical algorithms 3 . Recently,
Kothari et al. [18] showed that the polynomial time Sum-of-Squares (SOS) algorithm requires Ω̃(n^{r/2}) clauses to refute random instances of a CSP with complexity r, hence proving Conjecture 1 for any polynomial-size semidefinite programming relaxation for refutation. Note that Ω̃(n^{r/2}) is
tight, as Allen et al. [17] give a SOS algorithm for refuting random CSPs beyond this regime. Other
recent papers such as Daniely and Shalev-Shwartz [51] and Daniely [52] have also used presumed
hardness of strongly refuting random k-SAT and random k-XOR instances with a small number of
clauses to derive conditional hardness of learning results.
A first attempt to encode a k-CSP as a sequential model is to construct a model which outputs
k randomly chosen literals for the first k time steps 0 to k −1, and then their (noisy) predicate value
for the final time step k. Clauses from the CSP correspond to samples from the model, and the
algorithm would need to solve the CSP to predict the final time step k. However, as all the outputs
up to the final time step are random, the trivial prediction algorithm that guesses randomly and
does not try to predict the output at time k, would be near optimal. To get strong lower bounds,
we will output m > 1 functions of the k literals after k time steps, while still ensuring that all the functions remain collectively hard to invert without a large number of samples.
3 Statistical algorithms are an extension of the statistical query model; these are algorithms that do not access samples from the distribution but instead have access to estimates of the expectation of any bounded function of a sample through an oracle. Feldman et al. [50] point out that almost all algorithms that work on random data also work with this limited access to samples; refer to Feldman et al. [50] for more details and examples.
We use elementary results from the theory of error correcting codes to achieve this, and prove
hardness due to a reduction from a specific family of CSPs to which Conjecture 1 applies. By
choosing k and m carefully, we obtain the near-optimal dependence on the mutual information and
error ε, matching the upper bounds implied by Proposition 1. We provide a short outline of the
argument, followed by the detailed proof in the appendix.
5.1 Sketch of construction and proof
We construct a sequential model M such that making good predictions on the model requires distinguishing random instances of a k-CSP C on n variables from instances of C with a high value. The output alphabet of M is {a_i} of size 2n. We choose a mapping from the 2n characters {a_i} to the n variables {x_i} and their n negations {x̄_i}. For any clause C and planted assignment σ to the CSP C, let σ(C) be the k-bit string of values assigned by σ to the literals in C. The model M uniformly randomly outputs k characters from time 0 to k − 1, which correspond to literals in the CSP C; hence the k outputs correspond to a clause C of the CSP. For some m to be specified later, we will construct a binary matrix A ∈ {0, 1}^{m×k}, which will correspond to a good error-correcting code. For the time steps k to k + m − 1, with probability 1 − η the model outputs y ∈ {0, 1}^m where y = Av mod 2 and v = σ(C), with C being the clause associated with the outputs of the first k time steps. With the remaining probability, η, the model outputs m uniformly random bits. Note that the mutual information I(M) is at most m, as only the outputs from time k to k + m − 1 can be predicted. We claim that M can be simulated by an HMM with 2^m(2k + m) + m hidden states. This can be done as follows. For every time step i from 0 to k − 1, we maintain 2^m hidden states corresponding to v_i = 0 and 2^m hidden states corresponding to v_i = 1. Each of these 2^m states stores the current value of the m bits of y. This takes a total of k·2^{m+1} hidden states. We use 2^m hidden states for each time step k through k + m − 1 for the output bits. Finally, we need an additional m hidden states to output m uniform random bits from time k to k + m − 1 with probability η. This accounts for a total of k·2^{m+1} + m·2^m + m hidden states. Note that the larger m is with respect to k, the higher the cost (in terms of average prediction error) of failing to correctly predict the outputs from time k to k + m − 1. Tuning k and m allows us to control the number of hidden states or the mutual information, and the average error incurred by a computationally constrained predictor.
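For concreteness, the following sketch (our own illustration) samples output sequences of the model M just described. The matrix A here is a uniformly random placeholder rather than the error-correcting-code construction discussed below, and the clause sampling is simplified to choosing k distinct variables.

import numpy as np

rng = np.random.default_rng(3)
n, k, m, eta = 50, 6, 3, 0.1            # variables, clause width, label bits, noise level

sigma = rng.integers(0, 2, size=n)      # planted assignment
A = rng.integers(0, 2, size=(m, k))     # placeholder matrix (the construction below
                                        # instead takes A from a good binary linear code)

def sample_sequence():
    """One length-(k + m) output sequence: k literal symbols, then the m label bits."""
    variables = rng.choice(n, size=k, replace=False)
    negated = rng.integers(0, 2, size=k)             # 1 means the literal is negated
    # Encode literal (i, negated) as the symbol 2*i + negated: alphabet size 2n.
    symbols = [int(2 * i + s) for i, s in zip(variables, negated)]
    v = (sigma[variables] + negated) % 2             # value sigma assigns to each literal
    if rng.random() < eta:
        y = rng.integers(0, 2, size=m)               # noisy case: uniform label bits
    else:
        y = (A @ v) % 2                              # consistent case: y = A sigma(C) mod 2
    return symbols + [int(b) for b in y]

print(sample_sequence())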
We define the CSP C in terms of a collection of predicates P(y) for each y ∈ {0, 1}^m. While Conjecture 1 does not directly apply to C, as it is defined by a collection of predicates instead of a single one, we will later show a reduction from a related CSP C0 defined by a single predicate for which Conjecture 1 holds. For each y, the predicate P(y) of C is the set of v ∈ {0, 1}^k which satisfy y = Av mod 2. Hence each clause has an additional label y which determines the satisfying assignments; this label is just the output of our sequential model M from time k to k + m − 1. Hence for any planted assignment σ, the set of satisfying clauses C of the CSP C are all clauses such that Av = y mod 2, where y is the label of the clause and v = σ(C). We define a (noisy) planted distribution over clauses Q_σ^η by first uniformly randomly sampling a label y, and then sampling a consistent clause with probability (1 − η); otherwise, with probability η, we sample a uniformly random clause. Let U_k be the uniform distribution over all k-clauses with uniformly chosen labels y. We will show that Conjecture 1 implies that distinguishing between the distributions Q_σ^η and U_k is hard without sufficiently many clauses. This gives us the hardness results we desire for our sequential model M: if an algorithm obtains low prediction error on the outputs from time k through (k + m − 1), then it can be used to distinguish between instances of the CSP C with a high value and random instances, as no algorithm obtains low prediction error on random instances. Hence hardness of strongly refuting the CSP C implies hardness of making good predictions on M.
We now sketch the argument for why Conjecture 1 implies the hardness of strongly refuting the CSP C. We define another CSP C0 which we show reduces to C. The predicate P of the CSP C0 is the set of all v ∈ {0, 1}^k such that Av = 0 mod 2. Hence for any planted assignment σ, the set of satisfying clauses of the CSP C0 are all clauses such that v = σ(C) is in the nullspace of A. As before, the planted distribution over clauses is uniform on all satisfying clauses with probability (1 − η); with probability η we add a uniformly random k-clause. For some γ ≥ 1/10, if we can construct A such that the set of satisfying assignments v (which are the vectors in the nullspace of A) supports a (γk − 1)-wise uniform distribution, then by Conjecture 1 any polynomial time algorithm cannot distinguish between the planted distribution and uniformly randomly chosen clauses with less than Ω̃(n^{γk/2}) clauses. We show that choosing a matrix A whose null space is (γk − 1)-wise uniform corresponds to finding a binary linear code with rate at least 1/2 and relative distance γ, the existence of which is guaranteed by the Gilbert-Varshamov bound.
We next sketch the reduction from C0 to C. The key idea is that the CSPs C0 and C are defined by linear equations. If a clause C = (x_1, x_2, · · · , x_k) in C0 is satisfied with some assignment t ∈ {0, 1}^k to the variables in the clause, then At = 0 mod 2. Therefore, for some w ∈ {0, 1}^k such that Aw = y mod 2, t + w mod 2 satisfies A(t + w) = y mod 2. A clause C′ = (x′_1, x′_2, · · · , x′_k) with assignment t + w mod 2 to the variables can be obtained from the clause C by switching the literal x′_i = x̄_i if w_i = 1 and retaining x′_i = x_i if w_i = 0. Hence for any label y, we can efficiently convert a clause C in C0 to a clause C′ in C which has the desired label y and is only satisfied with a particular assignment to the variables if C in C0 is satisfied with the same assignment to the variables. It is also not hard to ensure that we uniformly sample the consistent clause C′ in C if the original clause C was a uniformly sampled consistent clause in C0.
We provide a small example to illustrate the sequential model constructed above. Let k = 3, m = 1 and n = 3. Let A ∈ {0, 1}^{1×3}. The output alphabet of the model M is {a_i, 1 ≤ i ≤ 6}. The letter a_1 maps to the variable x_1, a_2 maps to x̄_1; similarly a_3 → x_2, a_4 → x̄_2, a_5 → x_3, a_6 → x̄_3. Let σ be some planted assignment to {x_1, x_2, x_3}, which defines a particular model M. If the output of the model M is a_1, a_3, a_6 for the first three time steps, then this corresponds to the clause with literals (x_1, x_2, x̄_3). For the final time step, with probability (1 − η) the model outputs y = Av mod 2, with v = σ(C) for the clause C = (x_1, x_2, x̄_3) and planted assignment σ, and with probability η it outputs a uniform random bit. For an algorithm to make a good prediction at the final time step, it needs to be able to distinguish whether the output at the final time step is always a random bit or whether it depends on the clause; hence it needs to distinguish random instances of the CSP from planted instances.
We re-state Theorem 2 below, deferring its proof to Appendix B.
Theorem 2. Assuming Conjecture 1, for all sufficiently large T and 1/T^c < ε ≤ 0.1 for some fixed constant c, there exists a family of HMMs with T hidden states and an output alphabet of size n such that any polynomial time prediction algorithm that achieves average KL-error, ℓ1 error or relative zero-one error less than ε with probability greater than 2/3 for a randomly chosen HMM in the family requires n^{Θ(log T/ε)} samples from the HMM over any window length which the algorithm uses for prediction.
6 Lower Bound for Small Alphabets
Our lower bounds for the sample complexity in the binary alphabet case are based on the average case hardness of the decision version of the parity with noise problem, and the reduction is straightforward.
In the parity with noise problem on n bit inputs we are given examples v ∈ {0, 1}^n drawn uniformly from {0, 1}^n along with their noisy labels ⟨s, v⟩ + ε mod 2, where s ∈ {0, 1}^n is the (unknown) support of the parity function and ε ∈ {0, 1} is the classification noise, with Pr[ε = 1] = η, where η < 0.05 is the noise level.
Let Qηs be the distribution over examples of the parity with noise instance with s as the support
of the parity function and η as the noise level. Let Un be the distribution over examples and labels
where each label is chosen uniformly from {0, 1} independent of the example. The strength of our lower bounds depends on the level of hardness of parity with noise. Currently, the fastest algorithm for the problem, due to Blum et al. [20], runs in time and sample complexity 2^{n/log n}. We define the function f(n) as follows.
Definition 2. Define f (n) to be the function such that for a uniformly random support s ∈ {0, 1}n ,
with probability at least (1 − 1/n2 ) over the choice of s, any (randomized) algorithm that can
distinguish between Qηs and Un with success probability greater than 2/3 over the randomness of
the examples and the algorithm, requires f (n) time or samples.
Our model will be the natural sequential version of the parity with noise problem, where
each example is coupled with several parity bits. We denote the model as M(Am×n ) for some
A ∈ {0, 1}m×n , m ≤ n/2. From time 0 through (n − 1) the outputs of the model are i.i.d. and
uniform on {0, 1}. Let v ∈ {0, 1}n be the vector of outputs from time 0 to (n − 1). The outputs
for the next m time steps are given by y = Av + ε mod 2, where ε ∈ {0, 1}^m is the random noise and each entry ε_i of ε is an i.i.d. random variable with Pr[ε_i = 1] = η, where η is the noise level. Note that if A is full row-rank, and v is chosen uniformly at random from {0, 1}^n, the
distribution of y is uniform on {0, 1}m . Also I(M(A)) ≤ m as at most the binary bits from time
n to n + m − 1 can be predicted using the past inputs. As for the higher alphabet case, M(Am×n )
can be simulated by an HMM with 2m (2n + m) + m hidden states (see Section 5.1).
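As an illustration of this construction, the following is a minimal Python sketch of a sampler for one block of outputs from M(A) as described above (the function name and the use of numpy are our own choices and are not part of the construction):

    import numpy as np

    def sample_sequence(A, eta, rng):
        """Sample one block of outputs from the sequential model M(A).

        A   : binary matrix of shape (m, n) over GF(2)
        eta : noise level, Pr[noise bit = 1]
        Returns the n uniform bits followed by the m noisy parity bits.
        """
        m, n = A.shape
        v = rng.integers(0, 2, size=n)             # outputs for times 0 .. n-1: i.i.d. uniform bits
        noise = (rng.random(m) < eta).astype(int)  # each noise bit is 1 with probability eta
        y = (A @ v + noise) % 2                    # outputs for times n .. n+m-1
        return np.concatenate([v, y])

    # Example usage: m = 2 parity bits over n = 8 input bits.
    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(2, 8))
    print(sample_sequence(A, eta=0.05, rng=rng))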
We define a set of A matrices, which specifies a family of sequential models. Let S be the set
of all (m × n) matrices A such that the sub-matrix of A corresponding to all rows but only the
first 2n/3 columns is full row rank. We need this restriction to lower bound I(M(A)), as otherwise
there could be small or no dependence of the parity bits on the inputs from time 0 to 2n/3 − 1. We
denote R as the family of models M(A) for A ∈ S. Lemma 2 shows that with high probability over
the choice of A, distinguishing outputs from the model M(A) from random examples Un requires
f (n) time or examples.
Lemma 2. Let A be chosen uniformly at random from the set S. Then, with probability at least
(1 − 1/n) over the choice A ∈ S, any (randomized) algorithm that can distinguish the outputs from
the model M(A) from the distribution over random examples Un with success probability greater
than 2/3 over the randomness of the examples and the algorithm needs f (n) time or examples.
The proof of Proposition 2 follows from Lemma 2 and is similar to the proof of Theorem 2.
Proposition 2. With f(T) as defined in Definition 2, for all sufficiently large T and 1/T^c < ε ≤ 0.1 for some fixed constant c, there exists a family of HMMs with T hidden states such that any algorithm that achieves average relative zero-one loss, average ℓ1 loss, or average KL loss less than ε with probability greater than 2/3 for a randomly chosen HMM in the family requires f(log T/ε) time or samples from the HMM, over any window length which the algorithm uses for prediction.
7 Information Theoretic Lower Bounds
We show that information theoretically, windows of length cI(M)/ε² are necessary to get expected relative zero-one loss less than ε. As the expected relative zero-one loss is at most the ℓ1 loss, which in turn is bounded by the square root of the KL-divergence, this automatically implies that our window length requirement is also tight for ℓ1 loss and KL loss. In fact, it is easy to show tightness for the KL loss: choose the simple model which emits uniform random bits from time 0 to n − 1 and repeats the bits from time 0 to m − 1 for time n through n + m − 1. One can then choose n, m to get the desired error and mutual information I(M). To get a lower bound for the zero-one loss
we use the probabilistic method to argue that there exists an HMM such that long windows are
required to perform optimally with respect to the zero-one loss for that HMM. We state the lower
bound and a rough proof idea, deferring the details to Appendix D.
Proposition 3. There is an absolute constant c such that for all 0 < ε < 0.5 and sufficiently large n, there exists an HMM with n states such that it is not information theoretically possible to get average relative zero-one loss or ℓ1 loss less than ε using windows of length smaller than c log n/ε², or KL loss less than ε using windows of length smaller than c log n/ε.
We illustrate the construction in Fig. 2 and provide the high-level proof idea with respect to
Fig. 2 below.
Figure 2: Lower bound construction, n = 16
We want to show that any predictor P using windows of length ℓ = 3 cannot make a good
prediction. The transition matrix of the HMM is a permutation and the output alphabet is binary.
Each state is assigned a label which determines its output distribution. The states labeled 0 emit 0
with probability 0.5 + and the states labeled 1 emit 1 with probability 0.5 + . We will randomly
and uniformly choose the labels for the hidden states. Over the randomness in choosing the labels
for the permutation, we will show that the expected error of the predictor P is large, which means
that there must exist some permutation such that the predictor P incurs a high error. The rough
proof idea is as follows. Say the Markov model is at hidden state h_2 at time 2; this is unknown to the predictor P. The outputs for the first three time steps are (x_0, x_1, x_2). The predictor P
only looks at the outputs from time 0 to 2 for making the prediction for time 3. We show that
with high probability over the choice of labels to the hidden states and the outputs (x0 , x1 , x2 ),
the output (x0 , x1 , x2 ) from the hidden states (h0 , h1 , h2 ) is close in Hamming distance to the
label of some other segment of hidden states, say (h4 , h5 , h6 ). Hence any predictor using only the
past 3 outputs cannot distinguish whether the string (x0 , x1 , x2 ) was emitted by (h0 , h1 , h2 ) or
(h4 , h5 , h6 ), and hence cannot make a good prediction for time 3 (we actually need to show that
there are many segments like (h4 , h5 , h6 ) whose label is close to (x0 , x1 , x2 )). The proof proceeds
via simple concentration bounds.
A Proof of Theorem 1
Theorem 1. Suppose observations are generated by a Hidden Markov Model with at most n hidden states, and output alphabet of size d. For ε > 1/n there exists a window length ℓ = O(log n/ε) and an absolute constant c such that for any T ≥ d^{cℓ}, if t ∈ {1, 2, . . . , T} is chosen uniformly at random, then the expected ℓ1 distance between the true distribution of x_t given the entire history (and knowledge of the HMM), and the distribution predicted by the naive "empirical" ℓ-th order Markov model based on x_0, . . . , x_t, is bounded by √ε.
Proof. Let πt be a distribution over hidden states such that the probability of the ith hidden state
under πt is the empirical frequency of the ith hidden state from time 1 to t − 1 normalized by
t − 1. For 0 ≤ s ≤ ℓ − 1, consider the predictor P_t which makes a prediction for the distribution of observation x_{t+s} given observations x_t, . . . , x_{t+s−1}, based on the true distribution of x_{t+s} under the HMM conditioned on the observations x_t, . . . , x_{t+s−1} and the distribution of the hidden state at
time t being πt . We will show that in expectation over t, Pt gets small error averaged across the
time steps 0 ≤ s ≤ ` − 1, with respect to the optimal prediction of the distribution of xt+s which
knows the hidden state ht at time t. In order to show this, we need to first establish that the true
hidden state ht at time t does not have very small probability under πt , with high probability over
the choice of t.
Lemma 3. With probability 1 − 2/n over the choice of t ∈ {0, . . . , T }, the hidden state ht at time
t has probability at least 1/n3 under πt .
Proof. Consider the ordered set Si of time indices t where the hidden state ht = i. For the sets
corresponding to hidden states j which have probability less than 1/n2 under πT , the cardinality
|Sj | ≤ T /n2 . The sum of the cardinality of all such small sets is at most T /n, and hence the
probability that a uniformly random t ∈ {1, . . . , T } lies in one of these sets is at most 1/n. Now
consider the set of time indices Si corresponding to some hidden state i which has probability at
least 1/n2 under πT . For all t which are not among the first T /n3 time indices in this set, the
hidden state i has probability at least 1/n3 under πt . As the fraction of the “bad” time steps t
corresponding to any hidden state which has probability at least 1/n2 under πT is at most 1/n,
the total fraction of these “bad” time steps t is at most 1/n. Therefore using a union bound, with
failure probability 2/n, the hidden state ht at time t has probability at least 1/n3 under πt .
Consider any time index t, for simplicity assume t = 0, and let OP Ts denote the conditional
distribution of xs given observations x0 , . . . , xs−1 , and knowledge of the hidden state at time s = 0.
Let Ms denote the conditional distribution of xs given only x0 , . . . , xs−1 , given that the hidden
state at time 0 has the distribution π0 .
Lemma 4. For ε > 1/n, if the true hidden state at time 0 has probability at least 1/n^c under π_0, then for ℓ = c log n/ε²,
E[ (1/ℓ) Σ_{s=0}^{ℓ−1} ||OPT_s − M_s||_1 ] ≤ 4ε,
where the expectation is with respect to the randomness in the outputs from time 0 to ℓ − 1.
By Lemma 3, for a randomly chosen t ∈ {1, . . . , T} the probability that the hidden state at time t has probability less than 1/n³ in the prior distribution π_t is at most 2/n. Hence using Lemma 4, the expected average error of the predictor P_t across all t is at most 4ε + 2/n ≤ 6ε for ℓ = 3 log n/ε².
Now consider the predictor P̂t which for 0 ≤ s ≤ ` − 1 predicts xt+s given xt , . . . , xt+s−1
according to the empirical distribution of xt+s given xt , . . . , xt+s−1 , based on the observations up
to time t. We will now argue that the predictions of P̂t are close in expectation to the predictions
of P_t. Recall that the prediction of P_t at time t + s is the true distribution of x_{t+s} under the HMM conditioned on the observations x_t, . . . , x_{t+s−1} and the distribution of the hidden state at time t
being drawn from πt . For any s < `, let P1 refer to the prediction of P̂t at time t + s and P2 refer
to the prediction of Pt at time t + s. We will show that kP1 − P2 k1 is small in expectation.
We do this using a martingale concentration argument. Consider any string r of length s. Let
Q1 (r) be the empirical probability of the string r up to time t and Q2 (r) be the true probability
of the string r given that the hidden state at time t is distributed as πt . Our aim is to show that
|Q_1(r) − Q_2(r)| is small. Define the random variable
Y_τ = Pr[[x_τ : x_{τ+s−1}] = r | h_τ] − I([x_τ : x_{τ+s−1}] = r),
where I denotes the indicator function and Y_0 is defined to be 0. We claim that Z_τ = Σ_{i=0}^{τ} Y_i is a martingale with respect to the filtration {φ}, {h_1}, {h_2, x_1}, {h_3, x_2}, . . . , {h_{t+1}, x_t}. To verify, note that
E[Y_τ | {h_1}, {h_2, x_1}, . . . , {h_τ, x_{τ−1}}] = Pr[[x_τ : x_{τ+s−1}] = r | h_τ] − E[I([x_τ : x_{τ+s−1}] = r) | {h_1}, {h_2, x_1}, . . . , {h_τ, x_{τ−1}}]
= Pr[[x_τ : x_{τ+s−1}] = r | h_τ] − E[I([x_τ : x_{τ+s−1}] = r) | h_τ] = 0.
Therefore E[Z_τ | {h_1}, {h_2, x_1}, . . . , {h_τ, x_{τ−1}}] = Z_{τ−1}, and hence Z_τ is a martingale. Also, note that |Z_τ − Z_{τ−1}| ≤ 1 as 0 ≤ Pr[[x_τ : x_{τ+s−1}] = r | h_τ] ≤ 1 and 0 ≤ I([x_τ : x_{τ+s−1}] = r) ≤ 1. Hence using Azuma's inequality (Lemma 8),
Pr[|Z_{t−s}| ≥ K] ≤ 2e^{−K²/(2t)}.
Note that Z_{t−s}/(t − s) = Q_2(r) − Q_1(r). By Azuma's inequality and a union bound over all d^s ≤ d^ℓ strings r of length s, for c ≥ 4 and t ≥ T/n² = d^{cℓ}/n² ≥ d^{cℓ/2}, we have ||Q_1 − Q_2||_1 ≤ 1/d^{cℓ/20} with failure probability at most 2d^ℓ e^{−√t/2} ≤ 1/n². Similarly, for all strings of length s + 1, the estimated probability of the string has error at most 1/d^{cℓ/20} with failure probability 1/n². As the conditional distribution of x_{t+s} given observations x_t, . . . , x_{t+s−1} is the ratio of the joint distributions of {x_t, . . . , x_{t+s−1}, x_{t+s}} and {x_t, . . . , x_{t+s−1}}, as long as the empirical distributions of the length s and length s + 1 strings are estimated with error at most 1/d^{cℓ/20} and the string {x_t, . . . , x_{t+s−1}} has probability at least 1/d^{cℓ/40}, the conditional distributions P_1 and P_2 satisfy ||P_1 − P_2||_1 ≤ 1/n². By a union bound over all d^s ≤ d^ℓ strings and for c ≥ 100, the total probability mass on strings which occur with probability less than 1/d^{cℓ/40} is at most 1/d^{cℓ/50} ≤ 1/n². Therefore ||P_1 − P_2||_1 ≤ 1/n² with overall failure probability 3/n², hence the expected ℓ1 distance between P_1 and P_2 is at most 1/n.
By the triangle inequality and the fact that the expected average error of P_t is at most 6ε for ℓ = 3 log n/ε², it follows that the expected average error of P̂_t is at most 6ε + 1/n ≤ 7ε. Note that the expected average error of P̂_t is the average of the expected errors of the empirical s-gram Markov models for 0 ≤ s ≤ ℓ − 1. Hence for ℓ = 3 log n/ε² there must exist at least some s < ℓ such that the s-gram Markov model gets expected ℓ1 error at most 7ε.
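For concreteness, the following is a minimal Python sketch of the naive empirical ℓ-th order Markov predictor analyzed above; the dictionary-based counting and the uniform fallback for unseen contexts are our own simplifications and are not prescribed by the proof.

    from collections import Counter, defaultdict

    def empirical_markov_predictor(history, ell, alphabet):
        """Return a function mapping a length-ell context to the empirical
        distribution of the next symbol, estimated from `history`.
        Unseen contexts fall back to the uniform distribution."""
        counts = defaultdict(Counter)
        for i in range(len(history) - ell):
            context = tuple(history[i:i + ell])
            counts[context][history[i + ell]] += 1
        uniform = {a: 1.0 / len(alphabet) for a in alphabet}

        def predict(context):
            c = counts.get(tuple(context))
            if not c:
                return dict(uniform)
            total = sum(c.values())
            return {a: c.get(a, 0) / total for a in alphabet}

        return predict

    # Example: predict the next bit from the last ell = 2 observations.
    history = [0, 1, 1, 0, 1, 1, 0, 1]
    predict = empirical_markov_predictor(history, ell=2, alphabet=[0, 1])
    print(predict([0, 1]))   # empirical distribution of the symbol following "0 1"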
A.1 Proof of Lemma 4
Let the prior for the distribution of the hidden states at time 0 be π_0. Let the true hidden state h_0 at time 0 be 1 without loss of generality. We refer to the output at time s by x_s. Let H_0^s(i) = Pr[h_0 = i | x_0^s] be the posterior probability of the i-th hidden state at time 0 after seeing the observations x_0^s up to time s, with the prior π_0 on the distribution of the hidden states at time 0. For convenience, denote u_s = H_0^s(1) and v_s = 1 − u_s. Define P_i^s(j) = Pr[x_s = j | x_0^{s−1}, h_0 = i] as the distribution of the output at time s conditioned on the hidden state at time 0 being i and observations x_0^{s−1}. Note that OPT_s = P_1^s. As before, define R_s as the conditional distribution of x_s given observations x_0, · · · , x_{s−1} and initial distribution π_0 but not being at hidden state h_0 at time 0, i.e. R_s = (1/v_s) Σ_{i=2}^{n} H_0^s(i) P_i^s. Note that M_s is a convex combination of OPT_s and R_s, i.e. M_s = u_s OPT_s + v_s R_s. Hence ||OPT_s − M_s||_1 ≤ ||OPT_s − R_s||_1. Define δ_s = ||OPT_s − M_s||_1.
Our proof relies on a martingale concentration argument, and in order to ensure that our martingale has bounded differences we will ignore outputs which cause a significant drop in the posterior of the true hidden state at time 0. Let B be the set of all outputs j at some time s such that OPT_s(j)/R_s(j) ≤ ε⁴/(c log n). Note that
Σ_{j∈B} OPT_s(j) ≤ (ε⁴/(c log n)) Σ_{j∈B} R_s(j) ≤ ε⁴/(c log n).
Hence by a union bound, with failure probability at most ε², no output j with OPT_s(j)/R_s(j) ≤ ε⁴/(c log n) is emitted in a window of length c log n/ε². Hence we will only concern ourselves with sequences of outputs such that the output j emitted at each step satisfies OPT_s(j)/R_s(j) > ε⁴/(c log n); let the set of all such output sequences be S_1, and note that Pr(x_0^s ∉ S_1) ≤ ε². Let E_{S_1}[X] be the expectation of any random variable X conditioned on the output sequence being in the set S_1.
Consider the sequence of random variables X_s = log u_s − log v_s for s ∈ [−1, ℓ − 1], defining X_{−1} = log(π_1) − log(1 − π_1). Let ∆_{s+1} = X_{s+1} − X_s be the change in X_s on seeing the output x_{s+1} at time s + 1. Let the output at time s + 1 be j. We will first find an expression for ∆_{s+1}. The posterior probabilities after seeing the (s + 1)-th output get updated according to Bayes rule:
H_0^{s+1}(1) = Pr[h_0 = 1 | x_0^s, x_{s+1} = j] = Pr[h_0 = 1 | x_0^s] Pr[x_{s+1} = j | h_0 = 1, x_0^s] / Pr[x_{s+1} = j | x_0^s]
=⇒ u_{s+1} = u_s OPT_{s+1}(j) / Pr[x_{s+1} = j | x_0^s].
Let Pr[x_{s+1} = j | x_0^s] = d_j. Note that H_0^{s+1}(i) = H_0^s(i) P_i^{s+1}(j)/d_j if the output at time s + 1 is j. We can write
R_{s+1} = Σ_{i=2}^{n} H_0^s(i) P_i^{s+1} / v_s,
v_{s+1} = Σ_{i=2}^{n} H_0^{s+1}(i) = Σ_{i=2}^{n} H_0^s(i) P_i^{s+1}(j) / d_j = v_s R_{s+1}(j)/d_j.
Therefore we can write ∆_{s+1} and its expectation E[∆_{s+1}] as
∆_{s+1} = log( OPT_{s+1}(j) / R_{s+1}(j) )
=⇒ E[∆_{s+1}] = Σ_j OPT_{s+1}(j) log( OPT_{s+1}(j) / R_{s+1}(j) ) = D(OPT_{s+1} ∥ R_{s+1}).
We define ∆̃_{s+1} := min{∆_{s+1}, log log n} to keep the martingale differences bounded. E[∆̃_{s+1}] then equals a truncated version of the KL-divergence, which we define as follows.
Definition 3. For any two distributions µ(x) and ν(x), define the truncated KL-divergence as D̃_C(µ ∥ ν) = E_µ[ log min{µ(x)/ν(x), C} ] for some fixed C.
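As a small illustration of Definition 3 (our own example; the value C = 2 is chosen only for readability and does not satisfy the log C ≥ 8 requirement of Lemma 6 below):

    % For \mu = \mathrm{Bernoulli}(3/4), \nu = \mathrm{Bernoulli}(1/4) and C = 2,
    % the likelihood ratios are \mu(1)/\nu(1) = 3 and \mu(0)/\nu(0) = 1/3, so
    \tilde{D}_2(\mu \| \nu)
      = \tfrac{3}{4}\log\min\{3,2\} + \tfrac{1}{4}\log\min\{\tfrac{1}{3},2\}
      = \tfrac{3}{4}\log 2 + \tfrac{1}{4}\log\tfrac{1}{3},
    % whereas the untruncated KL-divergence is
    D(\mu \| \nu) = \tfrac{3}{4}\log 3 + \tfrac{1}{4}\log\tfrac{1}{3}.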
We are now ready to define our martingale. Consider the sequence of random variables X̃_s := X̃_{s−1} + ∆̃_s for s ∈ [0, ℓ − 1], with X̃_{−1} := X_{−1}. Define Z̃_s := Σ_{i=0}^{s} (X̃_i − X̃_{i−1} − δ_i²/2). Note that ∆_s ≥ ∆̃_s =⇒ X_s ≥ X̃_s.
Lemma 5. E_{S_1}[X̃_s − X̃_{s−1}] ≥ δ_s²/2, where the expectation is with respect to the output at time s. Hence the sequence of random variables Z̃_s := Σ_{i=0}^{s} (X̃_i − X̃_{i−1} − δ_i²/2) is a submartingale with respect to the outputs.
Proof. By definition X̃_s − X̃_{s−1} = ∆̃_s and E[∆̃_s] = D̃_C(OPT_s ∥ R_s) with C = log n. By taking an expectation with respect to only sequences in S_1 instead of all possible sequences, we are removing events which have a negative contribution to E[∆̃_s], hence
E_{S_1}[∆̃_s] ≥ E[∆̃_s] = D̃_C(OPT_s ∥ R_s).
We can now apply Lemma 6.
Lemma 6. (Modified Pinsker's inequality) For any two distributions µ(x) and ν(x) defined on x ∈ X, define the C-truncated KL divergence as D̃_C(µ ∥ ν) = E_µ[ log min{µ(x)/ν(x), C} ] for some fixed C such that log C ≥ 8. Then D̃_C(µ ∥ ν) ≥ (1/2)||µ − ν||_1².
Hence E_{S_1}[∆̃_s] ≥ (1/2)||OPT_s − R_s||_1² ≥ δ_s²/2, and therefore E_{S_1}[X̃_s − X̃_{s−1}] ≥ δ_s²/2.
We now claim that our submartingale has bounded differences.
Lemma 7. |Z̃_s − Z̃_{s−1}| ≤ 2 log(c log n/ε⁴).
Proof. Note that δ_s²/2 can be at most 2 and Z̃_s − Z̃_{s−1} = ∆̃_s − δ_s²/2. By definition ∆̃_s ≤ log(log n). Also, ∆̃_s ≥ −log(c log n/ε⁴) as we restrict ourselves to sequences in the set S_1. Hence |Z̃_s − Z̃_{s−1}| ≤ log(c log n/ε⁴) + 2 ≤ 2 log(c log n/ε⁴).
We now apply the Azuma–Hoeffding inequality.
Lemma 8. (Azuma–Hoeffding inequality) Let Z_i be a submartingale with |Z_i − Z_{i−1}| ≤ C. Then
Pr[Z_s − Z_0 ≤ −λ] ≤ exp( −λ² / (2sC²) ).
Applying Lemma 8 we can show
Pr[Z̃_{ℓ−1} − Z̃_0 ≤ −ε log n] ≤ exp( −ε² log n / (4c(1/ε²) log²(c log n/ε⁴)) ) ≤ ε².   (A.1)
We now bound the average error in the window 0 to ℓ − 1. With failure probability at most ε² over the randomness in the outputs, Z̃_{ℓ−1} − Z̃_0 ≥ −ε log n by Eq. A.1. Let S_2 be the set of all sequences in S_1 which satisfy Z̃_{ℓ−1} − Z̃_0 ≥ −ε log n. Note that X_{−1} = X̃_{−1} ≥ −log(1/π_1). Consider the last point after which v_s decreases below ε² and remains below that for every subsequent step in the window. Let this point be τ; if there is no such point, define τ to be ℓ − 1. The total contribution of the error at every step after the τ-th step to the average error is an ε² term, as the error after this step is ε². Note that X_τ ≤ log(1/ε²) =⇒ X̃_τ ≤ log(1/ε²) as X̃_s ≤ X_s. Hence for all sequences in S_2,
X̃_τ ≤ log(1/ε²)
=⇒ X̃_τ − X̃_{−1} ≤ log(1/ε²) + log(1/π_1)
=⇒ 0.5 Σ_{s=0}^{τ} δ_s² ≤ log(1/ε²) + log(1/π_1) + ε log n   (by Eq. A.1)
=⇒ (Σ_{s=0}^{ℓ−1} δ_s²) / (c log n/ε²) ≤ 6ε²   (as log(1/π_1) ≤ c log n)
=⇒ (Σ_{s=0}^{ℓ−1} δ_s) / (c log n/ε²) ≤ 3ε   (by Jensen's inequality).
As the total probability of sequences outside S_2 is at most 2ε², E[(1/ℓ) Σ_{s=0}^{ℓ−1} δ_s] ≤ 4ε whenever the hidden state at time 0 has probability at least 1/n^c in the prior distribution π_0.
A.2 Proof of modified Pinsker’s inequality (Lemma 6)
Lemma 6. (Modified Pinsker's inequality) For any two distributions µ(x) and ν(x) defined on x ∈ X, define the C-truncated KL divergence as D̃_C(µ ∥ ν) = E_µ[ log min{µ(x)/ν(x), C} ] for some fixed C such that log C ≥ 8. Then D̃_C(µ ∥ ν) ≥ (1/2)||µ − ν||_1².
Proof. We rely on the following lemma, which bounds the KL-divergence for binary distributions.
Lemma 9. For every 0 ≤ q ≤ p ≤ 1, we have
1. p log(p/q) + (1 − p) log((1 − p)/(1 − q)) ≥ 2(p − q)²
2. 3p + (1 − p) log((1 − p)/(1 − q)) ≥ 2(p − q)²
Proof. For the second result, first observe that log(1/(1 − q)) ≥ 0 and (p − q) ≤ p as q ≤ p. Both results then follow from standard calculus.
Let A := {x ∈ X : µ(x) ≥ ν(x)} and B := {x ∈ X : µ(x) ≥ Cν(x)}. Let µ(A) = p, µ(B) = δ, ν(A) = q and ν(B) = ε. Note that ||µ − ν||_1 = 2(µ(A) − ν(A)). By the log-sum inequality,
D̃_C(µ ∥ ν) = Σ_{x∈B} µ(x) log C + Σ_{x∈A−B} µ(x) log(µ(x)/ν(x)) + Σ_{x∈X−A} µ(x) log(µ(x)/ν(x))
≥ δ log C + (p − δ) log((p − δ)/(q − ε)) + (1 − p) log((1 − p)/(1 − q)).
1. Case 1: δ/p ≥ 0.5.
D̃_C(µ ∥ ν) ≥ (p/2) log C + (1 − p) log((1 − p)/(1 − q)) ≥ 2(p − q)² = (1/2)||µ − ν||_1².
2. Case 2: δ/p < 0.5.
D̃_C(µ ∥ ν) ≥ δ log C + (p − δ) log(p/q) + (p − δ) log(1 − δ/p) + (1 − p) log((1 − p)/(1 − q))
≥ δ log C + (p − δ) log(p/q) − (p − δ)(2δ/p) + (1 − p) log((1 − p)/(1 − q))
≥ δ(log C − 2) + (p − δ) log(p/q) + (1 − p) log((1 − p)/(1 − q)).
(a) Sub-case 1: log(p/q) ≥ 6.
D̃_C(µ ∥ ν) ≥ (p − δ) log(p/q) + (1 − p) log((1 − p)/(1 − q)) ≥ 3p + (1 − p) log((1 − p)/(1 − q)) ≥ 2(p − q)² = (1/2)||µ − ν||_1².
(b) Sub-case 2: log(p/q) < 6.
D̃_C(µ ∥ ν) ≥ δ(log C − 2 − log(p/q)) + p log(p/q) + (1 − p) log((1 − p)/(1 − q)) ≥ 2(p − q)² = (1/2)||µ − ν||_1².
B Proof of Lower Bound for Large Alphabets
B.1 CSP formulation
We first go over some notation that we will use for CSP problems; we follow the same notation and setup as in Feldman et al. [16]. Consider the following model for generating a random CSP instance
on n variables with a satisfying assignment σ. The k-CSP is defined by the predicate P : {0, 1}k →
{0, 1}. We represent a k-clause by an ordered k-tuple of literals from {x1 , · · · , xn , x̄1 , · · · , x̄n }
with no repetition of variables and let Xk be the set of all such k-clauses. For a k-clause C =
(l1 , · · · , lk ) let σ(C) ∈ {0, 1}k be the k-bit string of values assigned by σ to literals in C, that
is {σ(l1 ), · · · , σ(lk )} where σ(li ) is the value of the literal li in assignment σ. In the planted
model we draw clauses with probabilities that depend on the value of σ(C). Let Q : {0, 1}^k → R_+ with Σ_{t∈{0,1}^k} Q(t) = 1 be some distribution over satisfying assignments to P. The distribution Q_σ is then defined as follows:
Q_σ(C) = Q(σ(C)) / Σ_{C′∈X_k} Q(σ(C′)).   (B.1)
Recall that for any distribution Q over satisfying assignments we define its complexity r as the
largest r such that the distribution Q is (r − 1)-wise uniform (also referred to as (r − 1)-wise independent in the literature) but not r-wise uniform.
Consider the CSP C defined by a collection of predicates P (y) for each y ∈ {0, 1}m for some
m ≤ k/2. Let A ∈ {0, 1}m×k be a matrix with full row rank over the binary field. We will later
choose A to ensure the CSP has high complexity. For each y, the predicate P (y) is the set of
solutions to the system y = Av mod 2 where v = σ(C). For all y we define Qy to be the uniform
distribution over all consistent assignments, i.e. all v ∈ {0, 1}k satisfying y = Av mod 2. The
planted distribution Qσ,y is defined based on Qy according to Eq. B.1. Each clause in C is chosen
by first picking a y uniformly at random and then a clause from the distribution Qσ,y . For any
planted σ we define Qσ to be the distribution over all consistent clauses along with their labels
y. Let Uk be the uniform distribution over k-clauses, with each clause assigned a uniformly chosen
label y. Define Qησ = (1 − η)Qσ + ηUk , for some fixed noise level η > 0. We consider η to be
a small constant less than 0.05. This corresponds to adding noise to the problem by mixing the
planted and the uniform clauses. The problem gets harder as η becomes larger, for η = 0 it can be
efficiently solved using Gaussian Elimination.
We will define another CSP C0 which we show reduces to C and for which we can obtain hardness
using Conjecture 1. The label y is fixed to be the all zero vector in C0 . Hence Q0 , the distribution
over satisfying assignments for C0 , is the uniform distribution over all vectors in the null space of
A over the binary field. We refer to the planted distribution in this case as Qσ,0 . Let Uk,0 be
the uniform distribution over k-clauses, with each clause now having the label 0. For any planted
assignment σ, we denote the distribution of consistent clauses of C0 by Qσ,0 . As before define
Qησ,0 = (1 − η)Qσ,0 + ηUk,0 for the same η.
Let L be the problem of distinguishing between Uk and Qησ for some randomly and uniformly
chosen σ ∈ {0, 1}n with success probability at least 2/3. Similarly, let L0 be the problem of
distinguishing between Uk,0 and Qησ,0 for some randomly and uniformly chosen σ ∈ {0, 1}n with
success probability at least 2/3. L and L0 can be thought of as the problem of distinguishing
random instances of the CSPs from instances with a high value. Note that L and L0 are at least
as hard as the problem of refuting the random CSP instances Uk and Uk,0 , as this corresponds to
the case where η = 0. We claim that an algorithm for L implies an algorithm for L0 .
Lemma 10. If L can be solved in time t(n) with s(n) clauses, then L0 can be solved in time
O(t(n) + s(n)) and s(n) clauses.
Let the complexity of Q0 be γk, with γ ≥ 1/10 (we demonstrate how to achieve this next).
By Conjecture 1 distinguishing between Uk,0 and Qησ,0 requires at least Ω̃(nγk/2 ) clauses. We now
discuss how A can be chosen to ensure that the complexity of Q0 is γk.
B.2 Ensuring high complexity of the CSP
Let N be the null space of A. Note that the rank of N is (k − m). For any subspace D, let
w(D) = (w1 , w2 , · · · , wk ) be a randomly chosen vector from D. To ensure that Q0 has complexity
γk, it suffices to show that the random variables w(N ) = (w1 , w2 , · · · , wk ) are (γk − 1)-wise uniform. We use the theory of error correcting codes to find such a matrix A.
A binary linear code B of length k and rank m is a linear subspace of Fk2 (our notation is
different from the standard notation in the coding theory literature to suit our setting). The
rate of the code is defined to be m/k. The generator matrix of the code is the matrix G such
that B = {Gv, v ∈ {0, 1}m }. The parity check matrix of the code is the matrix H such that
B = {c ∈ {0, 1}k : Hc = 0}. The distance d of a code is the weight of the minimum weight
codeword, and the relative distance δ is defined to be δ = d/k. For any code B we define its dual code B^T as the code with generator matrix H^T and parity check matrix G^T. Note that the rank of the dual code of a code with rank m is (k − m). We use the following standard
result about linear codes–
Fact 1. If B T has distance l, then w(B) is (l − 1)-wise uniform.
Hence, our job of finding A reduces to finding a dual code with distance γk and rank m, where
γ = 1/10 and m ≤ k/2. We use the Gilbert-Varshamov bound to argue for the existence of such a
code. Let H(p) be the binary entropy of p.
Lemma 11. (Gilbert–Varshamov bound) For every 0 ≤ δ < 1/2 and 0 < ε ≤ 1 − H(δ), there exists a code with rank m and relative distance δ if m/k = 1 − H(δ) − ε.
Taking δ = 1/10 gives H(δ) ≤ 0.5, hence there exists such a code B whenever m/k ≤ 0.5, which is the setting we are interested in. We choose A = G^T, where G is the generator matrix of B. The null space of A is then (k/10 − 1)-wise uniform, so the complexity of Q_0 is γk with γ ≥ 1/10. Hence for all k and m ≤ k/2 we can find an A ∈ {0, 1}^{m×k} which ensures that the complexity of Q_0 is γk.
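For small parameters, the (r − 1)-wise uniformity of the null space of A can be checked directly by brute force. The following Python sketch (our own illustration, not part of the construction) enumerates the null space over GF(2) and tests whether every projection onto r coordinates is uniform:

    import itertools
    import numpy as np

    def nullspace_gf2(A):
        """Enumerate all vectors v in {0,1}^k with A v = 0 mod 2 (brute force)."""
        m, k = A.shape
        return [np.array(v) for v in itertools.product([0, 1], repeat=k)
                if not ((A @ np.array(v)) % 2).any()]

    def is_r_wise_uniform(vectors, r):
        """Check that the projection onto every set of r coordinates is uniform."""
        k = len(vectors[0])
        for coords in itertools.combinations(range(k), r):
            counts = {}
            for v in vectors:
                key = tuple(v[list(coords)])
                counts[key] = counts.get(key, 0) + 1
            if len(counts) != 2 ** r or len(set(counts.values())) != 1:
                return False
        return True

    # Example: A is the parity-check matrix of the length-3 repetition code,
    # whose null space {000, 111} is 1-wise but not 2-wise uniform.
    A = np.array([[1, 1, 0], [0, 1, 1]])
    vs = nullspace_gf2(A)
    print(is_r_wise_uniform(vs, 1), is_r_wise_uniform(vs, 2))  # True False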
B.3 Sequential model of CSP and sample complexity lower bound
We now construct a sequential model which derives hardness from the hardness of L. Here we slightly differ from the outline presented at the beginning of Section 5: we cannot base our sequential model directly on L, as generating random k-tuples without repetition increases the mutual information, so we formulate a slight variation L′ of L which we show is at least as hard as L. We did not define our CSP instance allowing repetition as that is different from the setting examined in Feldman et al. [16], and hardness of the setting with repetition does not follow from hardness of the setting without repetition, though the converse is true.
B.3.1 Constructing sequential model
Consider the following family of sequential models R(n, Am×k ) where A ∈ {0, 1}m×k is chosen as
defined previously. The output alphabet of all models in the family is X = {ai , 1 ≤ i ≤ 2n} of
size 2n, with 2n/k even. We choose a subset S of X of size n, each choice of S corresponds to a
model M in the family. Each letter in the output alphabet is encoded as a 1 or 0 which represents
whether or not the letter is included in the set S, let u ∈ {0, 1}2n be the vector which stores this
encoding so ui = 1 whenever the letter ai is in S. Let σ ∈ {0, 1}n determine the subset S such
that entry u2i−1 is 1 and u2i is 0 when σ i is 1 and u2i−1 is 0 and u2i is 1 when σ i is 0, for all i.
We choose σ uniformly at random from {0, 1}n and each choice of σ represents some subset S, and
hence some model M. We partition the output alphabet X into k subsets of size 2n/k each so the
first 2n/k letters go to the first subset, the next 2n/k go to the next subset and so on. Let the ith
subset be Xi . Let Si be the set of elements in Xi which belong to the set S.
At time 0, M chooses v ∈ {0, 1}k uniformly at random from {0, 1}k . At time i, i ∈ {0, · · · , k−1},
if vi = 1, then the model chooses a letter uniformly at random from the set Si , otherwise if vi = 0
it chooses a letter uniformly at random from Xi − Si . With probability (1 − η) the outputs for the
next m time steps from k to (k + m − 1) are y = Av mod 2, with probability η they are m uniform
random bits. The model resets at time (k + m − 1) and repeats the process.
Recall that I(M) is at most m and M can be simulated by an HMM with 2m (2k + m) + m
hidden states (see Section 5.1).
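To make the construction concrete, the following is a minimal Python sketch of a sampler for one block of the model under the encoding described above (the function name, the numpy usage, and the 0-based letter indexing are our own choices and are not part of the construction):

    import numpy as np

    def sample_block(A, sigma, eta, rng):
        """Sample one block (k letters + m label bits) from the model defined by
        the secret assignment sigma in {0,1}^n and the public matrix A in {0,1}^(m x k).
        The alphabet {0,...,2n-1} is split into k consecutive groups of size 2n//k;
        letter 2j encodes x_{j+1} and letter 2j+1 encodes its negation."""
        m, k = A.shape
        n = len(sigma)
        group = 2 * n // k
        v = rng.integers(0, 2, size=k)              # hidden bits v_1..v_k
        letters = []
        for i in range(k):
            block = np.arange(i * group, (i + 1) * group)
            # S restricted to this block: letters whose literal evaluates to 1 under sigma
            in_S = np.array([sigma[a // 2] == (1 - a % 2) for a in block], dtype=bool)
            choices = block[in_S] if v[i] == 1 else block[~in_S]
            letters.append(int(rng.choice(choices)))
        if rng.random() < 1 - eta:
            y = (A @ v) % 2                          # consistent label
        else:
            y = rng.integers(0, 2, size=m)           # uniformly random label
        return letters, y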
B.3.2 Reducing sequential model to CSP instance
We reveal the matrix A to the algorithm (this corresponds to revealing the transition matrix of
the underlying HMM), but the encoding σ is kept secret. The task of finding the encoding σ
given samples from M can be naturally seen as a CSP. Each sample is a clause with the literal
corresponding to the output letter ai being x(i+1)/2 whenever i is odd and x̄i/2 when i is even. We
refer the reader to the outline at the beginning of the section for an example. We denote by C′ the CSP C with the modification that the i-th literal of each clause is the literal corresponding to a letter in X_i for all 1 ≤ i ≤ k. Define Q′_σ as the distribution of consistent clauses for the CSP C′. Define U′_k as the uniform distribution over k-clauses with the additional constraint that the i-th literal of each clause is the literal corresponding to a letter in X_i for all 1 ≤ i ≤ k. Define Q′^η_σ = (1 − η)Q′_σ + ηU′_k. Note that samples from the model M are equivalent to clauses from Q′^η_σ. Let L′ denote the problem of distinguishing between U′_k and Q′^η_σ for a randomly chosen σ, analogous to L. We show that hardness of L′ follows from hardness of L:
Lemma 12. If L′ can be solved in time t(n) with s(n) clauses, then L can be solved in time t(n) with O(s(n)) clauses. Hence if Conjecture 1 is true then L′ cannot be solved in polynomial time with fewer than Ω̃(n^{γk/2}) clauses.
We can now prove Theorem 2 using Lemma 12.
Theorem 2. Assuming Conjecture 1, for all sufficiently large T and 1/T^c < ε ≤ 0.1 for some fixed constant c, there exists a family of HMMs with T hidden states and an output alphabet of size n such that any polynomial time prediction algorithm that achieves average KL error, ℓ1 error or relative zero-one error less than ε with probability greater than 2/3 for a randomly chosen HMM in the family requires n^{Θ(log T/ε)} samples from the HMM, over any window length which the algorithm uses for prediction.
Proof. We describe how to choose the family of sequential models R(n, A_{m×k}) for each value of ε and T. Recall that the HMM has T = 2^m(2k + m) + m hidden states. Let T′ = 2^{m+2}(k + m). Note that T′ ≥ T. Let t = log T′. We choose m = t − log(1/ε) − log(t/5), and k to be the solution of t = m + log(k + m) + 2, hence k = t/(5ε) − m − 2. Note that for ε ≤ 0.1, k ≥ m. Let ε′ = (2/9)·m/(k + m). We claim ε ≤ ε′. To verify, note that k + m = t/(5ε) − 2. Therefore,
ε′ = 2m/(9(k + m)) = 10ε(t − log(1/ε) − log(t/5)) / (9t(1 − 10ε/t)) ≥ ε,
for sufficiently large t and ε ≥ 2^{−ct} for a fixed constant c. Hence proving hardness for obtaining error ε′ implies hardness for obtaining error ε. We choose the matrix A_{m×k} as outlined earlier. For each vector σ ∈ {0, 1}^n we define the family of sequential models R(n, A) as earlier. Let M be a randomly chosen model in the family.
We first show the result for the relative zero-one loss. The idea is that any algorithm which does
a good job of predicting the outputs from time k through (k + m − 1) can be used to distinguish
between instances of the CSP with a high value and uniformly random clauses. This is because it
is not possible to make good predictions on uniformly random clauses. We relate the zero-one error
from time k through (k + m − 1) with the relative zero-one error from time k through (k + m − 1)
and the average zero-one error for all time steps to get the required lower bounds.
Let ρ_01(A) be the average zero-one loss of some polynomial time algorithm A for the output time steps k through (k + m − 1), and let δ′_01(A) be the average relative zero-one loss of A for the output time steps k through (k + m − 1) with respect to the optimal predictions. For the distribution U′_k it is not possible to get ρ_01(A) < 0.5, as the clauses and the label y are independent and y is chosen uniformly at random from {0, 1}^m. For Q′^η_σ it is information theoretically possible to get ρ_01(A) = η/2. Hence any algorithm which gets error ρ_01(A) ≤ 2/5 can be used to distinguish between U′_k and Q′^η_σ. Therefore by Lemma 12 any polynomial time algorithm which gets ρ_01(A) ≤ 2/5 with probability greater than 2/3 over the choice of M needs at least Ω̃(n^{γk/2}) samples. Note that δ′_01(A) = ρ_01(A) − η/2. As the optimal predictor P_∞ gets ρ_01(P_∞) = η/2 < 0.05, therefore δ′_01(A) ≤ 1/3 =⇒ ρ_01(A) ≤ 2/5. Note that δ_01(A) ≥ δ′_01(A)·m/(k + m). This is because δ_01(A) is the average error for all (k + m) time steps, and the contribution to the error from time steps 0 to (k − 1) is non-negative. Also, (1/3)·m/(k + m) > ε′, therefore δ_01(A) < ε′ =⇒ δ′_01(A) < 1/3 =⇒ ρ_01(A) ≤ 2/5. Hence any polynomial time algorithm which gets average relative zero-one loss less than ε′ with probability greater than 2/3 needs at least Ω̃(n^{γk/2}) samples. The result for ℓ1 loss follows directly from the result for relative zero-one loss; we next consider the KL loss.
Let δ′_KL(A) be the average KL error of the algorithm A from time steps k through (k + m − 1). By an application of Jensen's inequality and Pinsker's inequality, δ′_KL(A) ≤ 2/9 =⇒ δ′_01(A) ≤ 1/3. Therefore, by our previous argument, any polynomial time algorithm which gets δ′_KL(A) < 2/9 needs Ω̃(n^{γk/2}) samples. But as before, δ_KL(A) ≤ ε′ =⇒ δ′_KL(A) ≤ 2/9. Hence any polynomial time algorithm which succeeds with probability greater than 2/3 and gets average KL loss less than ε′ needs at least Ω̃(n^{γk/2}) samples.
We lower bound k by a linear function of log T/ε to express the result directly in terms of log T/ε. We claim that log T/ε is at most 15k. This follows because
log T/ε ≤ t/ε = 5(k + m) + 10 ≤ 15k.
Hence any polynomial time algorithm needs n^{Θ(log T/ε)} samples to get average relative zero-one loss, ℓ1 loss, or KL loss less than ε on M.
B.4 Proof of Lemma 10
Lemma 10. If L can be solved in time t(n) with s(n) clauses, then L0 can be solved in time
O(t(n) + s(n)) and s(n) clauses.
Proof. We show that a random instance of C_0 can be transformed into a random instance of C in time s(n)·O(k), by independently transforming every clause C in C_0 to a clause C′ in C such that C is satisfied in the original CSP C_0 with some assignment t to x if and only if the corresponding clause C′ in C is satisfied with the same assignment t to x. For every y ∈ {0, 1}^m we pre-compute and store a random solution of the system y = Av mod 2; let the solution be v(y). Given any clause C = (x_1, x_2, · · · , x_k) in C_0, choose y ∈ {0, 1}^m uniformly at random. We generate a clause C′ = (x′_1, x′_2, · · · , x′_k) in C from the clause C in C_0 by choosing the literal x′_i = x̄_i if v_i(y) = 1 and x′_i = x_i if v_i(y) = 0. By the linearity of the system, the clause C′ is a consistent clause of C with some assignment x = t if and only if the clause C was a consistent clause of C_0 with the same assignment x = t.
We next claim that C′ is a randomly generated clause from the distribution U_k if C was drawn from U_{k,0}, and is a randomly generated clause from the distribution Q_σ if C was drawn from Q_{σ,0}. By our construction, the label y of the clause is chosen uniformly at random. Note that choosing a clause uniformly at random from U_{k,0} is equivalent to first uniformly choosing a k-tuple of unnegated literals and then choosing a negation pattern for the literals uniformly at random. It is clear that a clause is still uniformly random after adding another negation pattern if it was uniformly random before. Hence, if the original clause C was drawn from the uniform distribution U_{k,0}, then C′ is distributed according to U_k. Similarly, choosing a clause uniformly at random from Q_{σ,y} for some y is equivalent to first uniformly choosing a k-tuple of unnegated literals and then choosing a negation pattern uniformly at random which makes the clause consistent. As the original negation pattern corresponds to a v randomly chosen from the null space of A, the final negation pattern on adding v(y) corresponds to the negation pattern for a uniformly randomly chosen solution of y = Av mod 2 for the chosen y. Therefore, the clause C′ is a uniformly randomly chosen clause from Q_{σ,y} if C is a uniformly randomly chosen clause from Q_{σ,0}.
Hence if it is possible to distinguish Uk and Qησ for some randomly chosen σ ∈ {0, 1}n with
success probability at least 2/3 in time t(n) with s(n) clauses, then it is possible to distinguish
between Uk,0 and Qησ,0 for some randomly chosen σ ∈ {0, 1}n with success probability at least 2/3
in time t(n) + s(n)O(k) with s(n) clauses.
B.5 Proof of Lemma 12
Lemma 12. If L′ can be solved in time t(n) with s(n) clauses, then L can be solved in time t(n) with O(s(n)) clauses. Hence if Conjecture 1 is true then L′ cannot be solved in polynomial time with fewer than Ω̃(n^{γk/2}) clauses.
Proof. Define E to be the event that a clause generated from the distribution Q_σ of the CSP C has the property that for all i the i-th literal belongs to the set X_i; we also refer to this property of the clause as E for notational ease. It is easy to verify that the probability of the event E is 1/k^k. We claim that conditioned on the event E, the CSPs C and C′ are equivalent.
This is verified as follows. Note that for all y, Q_{σ,y} and Q′_{σ,y} are uniform on all consistent clauses. Let U be the set of all clauses with non-zero probability under Q_{σ,y} and U′ be the set of all clauses with non-zero probability under Q′_{σ,y}. Furthermore, for any v which satisfies the constraint that y = Av mod 2, let U(v) be the set of clauses C ∈ U such that σ(C) = v. Similarly, let U′(v) be the set of clauses C ∈ U′ such that σ(C) = v. Note that the subset of clauses in U(v) which satisfy E is the same as the set U′(v). As this holds for every consistent v and the distributions Q′_{σ,y} and Q_{σ,y} are uniform on all consistent clauses, the distribution of clauses from Q_σ is identical to the distribution of clauses from Q′_σ conditioned on the event E. The equivalence of U_k and U′_k conditioned on E also follows from the same argument.
Note that as the k-tuples in C are chosen uniformly at random from satisfying k-tuples, with high probability there are s(n) tuples having property E if there are O(k^k s(n)) clauses in C. As the problems L and L′ are equivalent conditioned on event E, if L′ can be solved in time t(n) with s(n) clauses, then L can be solved in time t(n) with O(k^k s(n)) clauses. From Lemma 10 and Conjecture 1, L cannot be solved in polynomial time with fewer than Ω̃(n^{γk/2}) clauses. Hence L′ cannot be solved in polynomial time with fewer than Ω̃(n^{γk/2}/k^k) clauses. As k is a constant with respect to n, L′ cannot be solved in polynomial time with fewer than Ω̃(n^{γk/2}) clauses.
C Proof of Lower Bound for Small Alphabets
C.1 Proof of Lemma 2
Lemma 2. Let A be chosen uniformly at random from the set S. Then, with probability at least
(1 − 1/n) over the choice A ∈ S, any (randomized) algorithm that can distinguish the outputs from
the model M(A) from the distribution over random examples Un with success probability greater
than 2/3 over the randomness of the examples and the algorithm needs f (n) time or examples.
Proof. Suppose A ∈ {0, 1}^{m×n} is chosen at random with each entry being i.i.d. and uniform on {0, 1}. Let A′ be the sub-matrix of A corresponding to the first 2n/3 columns and all the m rows. Recall that S is the set of all (m × n) matrices A such that the sub-matrix A′ is full row-rank. We claim that P(A ∈ S) ≥ 1 − m2^{−n/6}. To verify, consider the addition of each row one by one to A′. The probability of the i-th row being linearly dependent on the previous (i − 1) rows is 2^{i−1−2n/3}. Hence by a union bound, A′ is full row-rank with failure probability at most m2^{m−2n/3} ≤ m2^{−n/6}. From Definition 2 and a union bound over all the m ≤ n/2 parities, any algorithm that can distinguish the outputs from the model M(A), for uniformly chosen A, from the distribution over random examples U_n with probability at least (1 − 1/(2n)) over the choice of A needs f(n) time or examples. As P(A ∈ S) ≥ 1 − m2^{−n/6} for a uniformly randomly chosen A, with probability at least (1 − 1/(2n) − m2^{−n/6}) ≥ (1 − 1/n) over the choice of A ∈ S, any algorithm that can distinguish the outputs from the model M(A) from the distribution over random examples U_n with success probability greater than 2/3 over the randomness of the examples and the algorithm needs f(n) time or examples.
C.2 Proof of Proposition 2
Proposition 2. With f(T) as defined in Definition 2, for all sufficiently large T and 1/T^c < ε ≤ 0.1 for some fixed constant c, there exists a family of HMMs with T hidden states such that any algorithm that achieves average relative zero-one loss, average ℓ1 loss, or average KL loss less than ε with probability greater than 2/3 for a randomly chosen HMM in the family requires f(log T/ε) time or samples from the HMM, over any window length which the algorithm uses for prediction.
Proof. We describe how to choose the family of sequential models A_{m×n} for each value of ε and T. Recall that the HMM has T = 2^m(2n + m) + m hidden states. Let T′ = 2^{m+2}(n + m). Note that T′ ≥ T. Let t = log T′. We choose m = t − log(1/ε) − log(t/5), and n to be the solution of t = m + log(n + m) + 2, hence n = t/(5ε) − m − 2. Note that for ε ≤ 0.1, n ≥ m. Let ε′ = (2/9)·m/(n + m). We claim ε ≤ ε′. To verify, note that n + m = t/(5ε) − 2. Therefore,
ε′ = 2m/(9(n + m)) = 10ε(t − log(1/ε) − log(t/5)) / (9t(1 − 10ε/t)) ≥ ε,
for sufficiently large t and ε ≥ 2^{−ct} for a fixed constant c. Hence proving hardness for obtaining error ε′ implies hardness for obtaining error ε. We choose the matrix A_{m×n} as outlined earlier. The family is defined by the model M(A_{m×n}) defined previously, with the matrix A_{m×n} chosen uniformly at random from the set S.
Let ρ_01(A) be the average zero-one loss of some algorithm A for the output time steps n through (n + m − 1), and let δ′_01(A) be the average relative zero-one loss of A for the output time steps n through (n + m − 1) with respect to the optimal predictions. For the distribution U_n it is not possible to get ρ_01(A) < 0.5, as the examples and the label y are independent and y is chosen uniformly at random from {0, 1}^m. For Q^η_s it is information theoretically possible to get ρ_01(A) = η/2. Hence any algorithm which gets error ρ_01(A) ≤ 2/5 can be used to distinguish between U_n and Q^η_s. Therefore by Lemma 2 any algorithm which gets ρ_01(A) ≤ 2/5 with probability greater than 2/3 over the choice of M(A) needs at least f(n) time or samples. Note that δ′_01(A) = ρ_01(A) − η/2. As the optimal predictor P_∞ gets ρ_01(P_∞) = η/2 < 0.05, therefore δ′_01(A) ≤ 1/3 =⇒ ρ_01(A) ≤ 2/5. Note that δ_01(A) ≥ δ′_01(A)·m/(n + m). This is because δ_01(A) is the average error for all (n + m) time steps, and the contribution to the error from time steps 0 to (n − 1) is non-negative. Also, (1/3)·m/(n + m) > ε′, therefore δ_01(A) < ε′ =⇒ δ′_01(A) < 1/3 =⇒ ρ_01(A) ≤ 2/5. Hence any algorithm which gets average relative zero-one loss less than ε′ with probability greater than 2/3 over the choice of M(A) needs f(n) time or samples. The result for ℓ1 loss follows directly from the result for relative zero-one loss; we next consider the KL loss.
Let δ′_KL(A) be the average KL error of the algorithm A from time steps n through (n + m − 1). By an application of Jensen's inequality and Pinsker's inequality, δ′_KL(A) ≤ 2/9 =⇒ δ′_01(A) ≤ 1/3. Therefore, by our previous argument, any algorithm which gets δ′_KL(A) < 2/9 needs f(n) time or samples. But as before, δ_KL(A) ≤ ε′ =⇒ δ′_KL(A) ≤ 2/9. Hence any algorithm which gets average KL loss less than ε′ needs f(n) time or samples.
We lower bound n by a linear function of log T/ε to express the result directly in terms of log T/ε. We claim that log T/ε is at most 15n. This follows because
log T/ε ≤ t/ε = 5(n + m) + 10 ≤ 15n.
Hence any algorithm needs f(log T/ε) samples and time to get average relative zero-one loss, ℓ1 loss, or KL loss less than ε with probability greater than 2/3 over the choice of M(A).
D Proof of Information Theoretic Lower Bound
Proposition 3. There is an absolute constant c such that for all 0 < ε < 0.5 and sufficiently large n, there exists an HMM with n states such that it is not information theoretically possible to get average relative zero-one loss or ℓ1 loss less than ε using windows of length smaller than c log n/ε², or KL loss less than ε using windows of length smaller than c log n/ε.
Proof. Consider a Hidden Markov Model with the Markov chain being a permutation on n states.
The output alphabet of each hidden state is binary. Each state i is marked with a label l_i which is 0 or 1; let G(i) be the mapping from hidden state h_i to its label l_i. All the states labeled 1 emit 1 with probability (0.5 + ε) and 0 with probability (0.5 − ε). Similarly, all the states labeled 0 emit 0 with probability (0.5 + ε) and 1 with probability (0.5 − ε). Fig. 3 illustrates the construction and provides the high-level proof idea.
Figure 3: Lower bound construction, ` = 3, n = 16. A note on notation used in the rest of the
proof with respect to this example: r(0) corresponds to the label of h0 , h1 and h2 and is (0, 1, 0) in
this case. Similarly, r(1) = (1, 1, 0) in this case. The segments between the shaded nodes comprise
the set S1 and are the possible sequences of states from which the last ` = 3 outputs could have
come. The shaded nodes correspond to the states in S2 , and are the possible predictions for the
next time step. In this example S1 = {(0, 1, 0), (1, 1, 0), (0, 1, 0), (1, 1, 1)} and S2 = {1, 1, 0, 0}.
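For intuition, the following Python sketch (our own illustration; the cyclic order stands in for an arbitrary permutation and all names are hypothetical) generates a random instance of this construction, i.e. a permutation HMM with uniformly random binary labels whose states emit their label with probability 0.5 + ε:

    import numpy as np

    def sample_permutation_hmm(n, eps, steps, rng):
        """Sample labels and an output sequence from the lower-bound construction:
        a cyclic permutation over n hidden states with random binary labels,
        where a state labeled b emits b with probability 0.5 + eps."""
        labels = rng.integers(0, 2, size=n)   # random label for each hidden state
        state = rng.integers(0, n)            # random start state
        outputs = []
        for _ in range(steps):
            b = labels[state]
            flip = rng.random() < 0.5 - eps   # emit the wrong bit with prob 0.5 - eps
            outputs.append(int(b ^ flip))
            state = (state + 1) % n           # deterministic permutation transition
        return labels, outputs

    rng = np.random.default_rng(1)
    labels, outputs = sample_permutation_hmm(n=16, eps=0.1, steps=20, rng=rng)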
Assume n is a multiple of (ℓ + 1), where (ℓ + 1) = c log n/ε², for a constant c = 1/33. We will regard ε as a constant with respect to n. Let n/(ℓ + 1) = t. We refer to the hidden states by h_i, 0 ≤ i ≤ (n − 1); h_i^j refers to the sequence of hidden states i through j. We will show that a model looking at only the past ℓ outputs cannot get average zero-one loss less than 0.5 − o(1). As the optimal prediction looking at all past outputs gets average zero-one loss 0.5 − ε + o(1) (the hidden state at each time step can be determined to an arbitrarily high probability if we are allowed to look at an arbitrarily long past), this proves that windows of length ℓ do not suffice to get average zero-one error less than ε − o(1) with respect to the optimal predictions. Note that the Bayes optimal prediction at time (ℓ + 1) to minimize the expected zero-one loss given outputs from time 1 to ℓ is to predict the mode of the distribution Pr(x_{ℓ+1} | x_1^ℓ = s_1^ℓ), where s_1^ℓ is the sequence of outputs from time 1 to ℓ. Also, note that Pr(x_{ℓ+1} | x_1^ℓ = s_1^ℓ) = Σ_i Pr(h_{i_ℓ} = i | x_1^ℓ = s_1^ℓ) Pr(x_{ℓ+1} | h_{i_ℓ} = i), where h_{i_ℓ} is the hidden state at time ℓ. Hence the predictor is a weighted average of the prediction of each hidden state, with the weight being the probability of being at that hidden state.
We index each state h_i of the permutation by a tuple (f(i), g(i)) = (j, k) where j = i mod (ℓ + 1) and k = ⌊i/(ℓ + 1)⌋, hence 0 ≤ j ≤ ℓ, 0 ≤ k ≤ (t − 1) and i = k(ℓ + 1) + j. We help the predictor make the prediction at time (ℓ + 1) by providing it with the index f(i_ℓ) = i_ℓ mod (ℓ + 1) of the true hidden state h_{i_ℓ} at time ℓ. Hence this narrows down the set of possible hidden states at time ℓ (in Fig. 3, the set of possible states given this side information are all the hidden states before the shaded states). The Bayes optimal prediction at time (ℓ + 1) given outputs s_1^ℓ from time 1 to ℓ and index f(h_{i_ℓ}) = j is to predict the mode of Pr(x_{ℓ+1} | x_1^ℓ = s_1^ℓ, f(h_{i_ℓ}) = j). Note that by the definition of Bayes optimality, the average zero-one loss of the prediction using Pr(x_{ℓ+1} | x_1^ℓ = s_1^ℓ, f(h_{i_ℓ}) = j) cannot be worse than the average zero-one loss of the prediction using Pr(x_{ℓ+1} | x_1^ℓ = s_1^ℓ). Hence
we only need to show that the predictor with access to this side information is poor. We refer
to this predictor using P r(x`+1 |x`1 = s`1 , f (hi` ) = j) as P. We will now show that there exists
some permutation for which the average zero-one loss of the predictor P is 0.5 − o(1). We argue
this using the probabilistic method. We choose a permutation uniformly at random from the set
of all permutations. We show that the expected average zero-one loss of the predictor P over
the randomness in choosing the permutation is 0.5 − o(1). This means that there must exist some
permutation such that the average zero-one loss of the predictor P on that permutation is 0.5−o(1).
To find the expected average zero-one loss of the predictor P over the randomness in choosing the permutation, we will find the expected average zero-one loss of the predictor P given that we are in some state h_{i_ℓ} at time ℓ. Without loss of generality let f(i_ℓ) = 0 and g(i_ℓ) = (ℓ − 1), hence we were at the (ℓ − 1)-th hidden state at time ℓ. Fix any sequence of labels for the hidden states h_0^{ℓ−1}. For any string s_0^{ℓ−1} emitted by the hidden states h_0^{ℓ−1} from time 0 to ℓ − 1, let E[δ(s_0^{ℓ−1})] be the expected average zero-one error of the predictor P over the randomness in the rest of the permutation. Also, let E[δ(h_{ℓ−1})] = Σ_{s_0^{ℓ−1}} E[δ(s_0^{ℓ−1})] Pr[s_0^{ℓ−1}] be the expected error averaged across all outputs. We will argue that E[δ(h_{ℓ−1})] = 0.5 − o(1). The set of hidden states h_i with g(i) = k defines a segment of the permutation; let r(k) be the label G(h_{(k−1)(ℓ+1)}^{k(ℓ+1)−2}) of the segment k, excluding its last bit which corresponds to the predictions. Let S_1 = {r(k), ∀ k ≠ 0} be the set of all the labels excluding the first label r(0), and let S_2 = {G(h_{k(ℓ+1)+ℓ}), ∀ k} be the set of all the predicted bits (refer to Fig. 3 for an example). Consider any assignment of r(0). To begin, we show that with high probability over the output s_0^{ℓ−1}, the Hamming distance D(s_0^{ℓ−1}, r(0)) of the output s_0^{ℓ−1} from the label r(0) of the set of hidden states h_0^{ℓ−1} is at least ℓ/2 − 2εℓ. This follows directly from Hoeffding's inequality, as all the outputs are independent conditioned on the hidden states:
Pr[D(s_0^{ℓ−1}, r(0)) ≤ ℓ/2 − 2εℓ] ≤ e^{−2ε²ℓ} ≤ n^{−2c}.   (D.1)
(Hoeffding's inequality: for n independent random variables {X_i} lying in the interval [0, 1] with X̄ = (1/n) Σ_i X_i, Pr[X̄ ≤ E[X̄] − t] ≤ e^{−2nt²}; in our case t = ε and n = ℓ.)
We now show that for any k ≠ 0, with decent probability the label r(k) of the segment k is closer in Hamming distance to the output s_0^{ℓ−1} than r(0). Then we argue that with high probability there are many such segments which are closer to s_0^{ℓ−1} in Hamming distance than r(0). Hence these other segments are assigned as much weight in predicting the next output as r(0), which means that the output cannot be predicted with high accuracy, as the output bits corresponding to different segments are independent.
We first find the probability that the segment corresponding to some k with label r(k) has a Hamming distance less than ℓ/2 − √(ℓ log t/8) from any fixed binary string x of length ℓ. Let F(l, m, p) be the probability of getting at least l heads in m i.i.d. trials with each trial having probability p of giving a head. F(l, m, p) can be bounded below by the following standard inequality:
F(l, m, p) ≥ (1/√(2m)) exp( −m D_KL(l/m ∥ p) ),
where D_KL(q ∥ p) = q log(q/p) + (1 − q) log((1 − q)/(1 − p)). We can use this to lower bound Pr[D(r(k), x) ≤ ℓ/2 − √(ℓ log t/8)]:
Pr[D(r(k), x) ≤ ℓ/2 − √(ℓ log t/8)] = F(ℓ/2 + √(ℓ log t/8), ℓ, 1/2)
≥ (1/√(2ℓ)) exp( −ℓ D_KL(1/2 + √(log t/(8ℓ)) ∥ 1/2) ).
Note that D_KL(1/2 + v ∥ 1/2) ≤ 4v² by using the inequality log(1 + v) ≤ v. We can simplify the KL-divergence using this and write
Pr[D(r(k), x) ≤ ℓ/2 − √(ℓ log t/8)] ≥ 1/√(2ℓt).   (D.2)
Let D be the set of all k ≠ 0 such that D(r(k), x) ≤ ℓ/2 − √(ℓ log t/8) for some fixed x. We argue that with high probability over the randomness of the permutation, |D| is large. This follows from Eq. D.2 and the Chernoff bound, as the labels for all segments r(k) are chosen independently:
Pr[|D| ≤ √(t/(8ℓ))] ≤ e^{−(1/8)√(t/(2ℓ))}.
(Chernoff bound: for independent random variables {X_i} lying in the interval [0, 1] with X = Σ_i X_i and µ = E[X], Pr[X ≤ (1 − ε)µ] ≤ exp(−ε²µ/2); in our case ε = 1/2 and µ = √(t/(2ℓ)).)
Note that √(t/(8ℓ)) ≥ n^{0.25}. Therefore for any fixed x, with probability 1 − exp(−(1/8)√(t/(2ℓ))) ≥ 1 − n^{−0.25} there are √(t/(8ℓ)) ≥ n^{0.25} segments in a randomly chosen permutation which have Hamming distance less than ℓ/2 − √(ℓ log t/8) from x. Note that by our construction 2εℓ ≤ √(ℓ log t/8), because log(ℓ + 1) ≤ (1 − 32c) log n. Hence the segments in D are closer in Hamming distance to the output s_0^{ℓ−1} if D(s_0^{ℓ−1}, r(0)) > ℓ/2 − 2εℓ.
Therefore if D(s_0^{ℓ−1}, r(0)) > ℓ/2 − 2εℓ, then with high probability over randomly choosing the segments S_1 there is a subset D of segments in S_1 with |D| ≥ n^{0.25} such that all of the segments in D have Hamming distance less than D(s_0^{ℓ−1}, r(0)) from s_0^{ℓ−1}. Pick any s_0^{ℓ−1} such that D(s_0^{ℓ−1}, r(0)) > ℓ/2 − 2εℓ. Consider any set of segments S_1 which has such a subset D with respect to the string s_0^{ℓ−1}. For all such permutations, the predictor P places at least as much weight on the hidden states h_i with g(i) = k, for k such that r(k) ∈ D, as on the true hidden state h_{ℓ−1}. The prediction for any hidden state h_i is the corresponding bit in S_2. Notice that the bits in S_2 are independent and uniform, as we have not used them in any argument so far. The average correlation of an equally weighted average of m independent and uniform random bits with any one of the random bits is at most 1/√m. Hence over the randomness of S_2, the expected zero-one loss of the predictor is at least 0.5 − n^{−0.1}. Hence we can write
E[δ(s_0^{ℓ−1})] ≥ (0.5 − n^{−0.1}) Pr[|D| ≥ √(t/(8ℓ))]
≥ (0.5 − n^{−0.1})(1 − e^{−n^{0.25}})
≥ 0.5 − 2n^{−0.1}.
By using Equation D.1, for any assignment r(0) to h_0^{ℓ−1},
E[δ(h_{ℓ−1})] ≥ Pr[D(s_0^{ℓ−1}, r(0)) > ℓ/2 − 2εℓ] · E[δ(s_0^{ℓ−1}) | D(s_0^{ℓ−1}, r(0)) > ℓ/2 − 2εℓ]
≥ (1 − n^{−2c})(0.5 − 2n^{−0.1})
= 0.5 − o(1).
As this is true for all assignments r(0) to h_0^{ℓ−1} and for all choices of hidden states at time ℓ, using linearity of expectations and averaging over all hidden states, the expected average zero-one loss of
the predictor P over the randomness in choosing the permutation is 0.5 − o(1). This means that there must exist some permutation such that the average zero-one loss of the predictor P on that permutation is 0.5 − o(1). Hence there exists an HMM on n states such that it is not information theoretically possible to get average zero-one error with respect to the optimal predictions less than ε − o(1) using windows of length smaller than c log n/ε² for a fixed constant c.
Therefore, for all 0 < ε < 0.5 and sufficiently large n, there exists an HMM with n states such that it is not information theoretically possible to get average relative zero-one loss less than ε/2 < ε − o(1) using windows of length smaller than cε^{−2} log n. The result for relative zero-one loss follows on replacing ε/2 by ε′ and setting c′ = c/4. The result for ℓ1 loss follows immediately from this, as the expected relative zero-one loss is less than the expected ℓ1 loss. For the KL loss we use Pinsker's inequality and Jensen's inequality.
An Optimal Algorithm for Range Search on
Multidimensional Points
T. Hema∗ and K.S. Easwarakumar†
Department of Computer Science & Engineering
Anna University, Chennai 600 025, INDIA.
arXiv:1607.00208v1 [cs.CG] 1 Jul 2016
Abstract
This paper proposes an efficient and novel method to address range search on multidimensional points in θ(t) time, where t is the number of points reported in R^k space. This is accomplished by introducing a new data structure, called the BITS k d-tree. This structure also supports fast updates, taking θ(1) time for insertion and O(log n) time for deletion. The earlier best known algorithms for this problem take O(log^k n + t) time [5, 15] in the pointer machine model.
Keywords: BITS k d-tree, Threaded Trie, Range Search.
1 Introduction
k d-trees introduced by J.L.Bentley [4, 6] are multidimensional binary search trees commonly
used for storing k dimensional points. They are also used to perform search operations such
as exact match, partial match and range queries. Range queries are mostly used in GIS
applications to locate cities within a certain region in a map. Similarly, in the geometrical
view of a database, one can use orthogonal range search to perform a query. Generally,
k d-trees with n nodes can have height n in the worst case, and hence the complexity of insertion and search
are high. Although many multi-dimensional search structures are found in the literature
[2, 8, 20, 22, 23] they differ from the standard k d-trees mainly in the space-partitioning
methods used. Recall that a 2-d tree stores two-dimensional point data of the form (x, y).
A 2-d tree splits primarily on the x coordinate of a point at even level and then on the
corresponding y coordinate at the odd level, and so on. Hence, the trees are unbalanced and
are not efficient for search operations. Also, the worst case time complexity for range search on a 2-d tree is O(√n + t), where t is the number of points reported, and for k dimensions
it is O(n1−1/k + t)[4, 16]. In general, most of the k d-tree variants get unbalanced when the
data is clustered thereby affecting query operations.
P R k -d tree, Bucket P R k -d tree [19], P M R k -d trees [21] and Path level compressed
P R k -d trees[18] are some of the trie-based kd trees used to store point data. However,
these trees are not always balanced, especially when the data is clustered. One of the
dynamic versions of k -d tree is the divided k -d trees [25] for which the range query time is
O(n1−1/k log1/k n + t).
The best known dynamically balanced tree uses bitwise interlaced data [24] over k d-trees
mapping k dimensions to one dimension. Although their search time is O(k(log n + t)) for
reporting t points, bitwise interlacing leads to discarded areas during range search. In
the case of squarish k d-trees [12], an x, y discriminant is based on the longest side of
rectangle enclosing the problem space instead of alternating the keys. Recently, hybrid
versions of squarish k d-tree, relaxed k d-tree and median k d-trees [11] have overcome the
problem of height balancing. An amortized worst case efficiency of range search for the
∗ Email: [email protected]
† Corresponding author. Email: [email protected]
hybrid squarish k d-trees, relaxed and median trees for k -dimensional partial match queries
are 1.38628 log2 n, 1.38629 log2 n and 1.25766 log2 n respectively. Their experimental results
match the aforementioned theoretical results, where they show that the hybrid median
trees outperform the other variants. However, as far as query handling is concerned, these
structures perform only partial match queries for two dimensions efficiently. The most recent
work in the pointer machine model is an orthogonal range reporting data structure with
O(n(log n/ log log n)^d ) space that addresses range queries in O(log n · (log n/ log log n)) time,
where d ≥ 4 [1].
Range trees of Bentley and Maurer [6, 5] are yet another class of balanced binary search
trees used for rectangular range search which showed improvement in the query time of
O(logk n + t) over O(n1−1/k + t) of kd-trees, where k is the dimension for a set of n points
and t is the number of reported points. This was later improved to O(logk−1 n + t) using
fractional cascading in layered range trees[13] but the space requirements are relatively high
of O(n logk−1 n). A k d-Range DSL-tree performs k-dimensional range search in O(logk n+t)
time was proposed in [15].
Recently, Chan et.al [10] have proposed two data structures for 2d orthogonal range
search in the word RAM model. The first structure takes O(n lg lg n) space and O(lg lg n)
query time. They show improved performance over previous results [3] of which O(n lg n)
space and O(lg lg n) query time, or with O(n lg lg n) space and O(lg 2 lg n) query time.
The second data structure is based on O(n) space and answers queries in O(lg^ε n) time, which outperforms the previous O(n) space data structure [17] that answers queries in O(lg n/ lg lg n)
time.
Furthermore, they also propose an efficient data structure for 3-d orthogonal range reporting with O(n lg^(1+ε) n) space and O(lg lg n + k) query time for points in rank space, where ε > 0. This improves their previous results [9] with O(n lg^2 n) space and O(lg lg n + k) query time, or with O(n lg^(1+ε) n) space and O(lg^2 lg n + k) query time, where k points are reported. Finally, they have extended range search to higher dimensions as well.
Since such range queries are common among multi-dimensional queries in database applications, we have mainly considered an orthogonal range search on multi-dimensional points.
2 Our Contributions
In this work, we make use of the BIT S-tree [14], a segment tree variant that performs
stabbing and range queries on segments efficiently in logarithmic time. Most importantly,
the distribution of the data points (uniform or skewed) does not affect the height of the
BIT S-tree and in turn facilitates faster search time. Here, we actually use the BIT S-tree
structure to store points related to each dimension and thereby form a multi-level tree,
called BIT S k d-tree. In addition, certain nodes of the BIT S-tree associate to a variant of
the trie data structure, called threaded trie, to facilitate fetching a required node in constant
time. Unlike k -d trees, it does not associate co-ordinate axis, level wise, for comparison to
locate or insert a point. Instead, the tree at the first level has nodes with a key on only
distinct values of first co-ordinate of the points. Therefore, this tree corresponds to the one
dimensional data. This tree is then augmented with another tree at second level and there
in key values of the nodes associated with distinct first two co-ordinates of the points. In
general, ith tree corresponds to the distinct first i co-ordinates of the set of points given.
Moreover, in each tree, the inorder sequence provides the sorted sequence. That is, BIT S
k -d trees is a multi-level tree, and its construction is illustrated in the subsequent sections.
2.1 BITS-Trees
Originally, the BIT S-tree (Balanced Inorder Threaded Segment Tree) [14] is a dynamic
structure that stores segments, and also answers both stabbing and range queries efficiently.
Unlike segment trees, it also permits insertion of segment with any interval range.
Figure 1: (a) Set of segments.(b) BIT S-Tree for the given segments.
Definition 1 A BIT S-tree is a height balanced two-way inorder-threaded binary tree T that
satisfies the following properties.
1. Each node v of T is represented as v([a, b], L), where [a, b] is the range associated with
the node v, and L is the list of segments containing the range [a, b], i.e if [c, d] ∈ L then
[a, b] ⊆ [c, d].
2. Given v1 ([a1 , b1 ], L1 ) ≠ v2 ([a2 , b2 ], L2 ), then [a1 , b1 ] ∩ [a2 , b2 ] = [b1 ] if a2 = b1 ; [b2 ] if a1 = b2 ; and φ otherwise. That is, ranges can either overlap only at end points or do not overlap at all.
3. Suppose v1 ([a1 , b1 ], L1 ) appears before v2 ([a2 , b2 ], L2 ) in the inorder sequence, then b1 ≤
a2 .
4. It has a special node, called dummy node denoted by D, with range and list as φ(empty).
5. Suppose v1 ([a1 , b1 ], L1 ) and vn ([an , bn ], Ln ) are the first and last nodes of the inorder
sequence respectively, then InP red(v1 ) = InSucc(vn ) = D, and the range, say [a, b], of
any node contained in [a1 , bn ], i.e [a, b] ⊆ [a1 , bn ].
Here, the functions InPred() and InSucc() respectively returns inorder predecessor and successor. A sample BIT S-tree is shown in Figure 1.
Note that the dangling threads actually point to a dummy node, which is not shown in the
figure.
The BIT S-tree is originally developed for storing segments, but we use this for a different purpose of storing points. Thus, we modify this structure to suit our requirement as
described below.
1. Each node v([a, b], L) is replaced by v(p, L0 , T ), where p is a point in <k , k ≥ 1, and
L0 is a pointer to the list of collinear points in dimension k + 1, having p for the first
k co-ordinates. However, this list is maintained in the tree at the next level, which is
described in section 2.2. Now, T is either null or a pointer to a threaded trie, which is
elaborated in the next section.
2. For any two points p1 and p2 stored in a tree, p1 6= p2 .
3. Suppose v1 (p1 , L01 , T1 ) appears before v2 (p2 , L02 , T2 ) in the inorder sequence, then p1 < p2
as per the following definition.
Definition 2 Let p1 = (x11 , x12 , . . . x1k ) and p2 = (x21 , x22 , . . . x2k ) be two points in a kdimensional space, then
a. p1 = p2 implies x1j = x2j for each j=1, 2,. . . k.
b. p1 < (or >)p2 implies head(p1 , j) = head(p2 , j) and x1j+1 < (or >) x2j+1 for some j.
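As an aside, the ordering of Definition 2 is simply lexicographic comparison of the coordinate tuples. A minimal Python sketch (ours, not part of the paper; the helper names are assumptions):

```python
def head(p, l):
    """Return the first l coordinates of point p (its coordinate prefix)."""
    return tuple(p[:l])

def point_less(p1, p2):
    """Definition 2: p1 < p2 iff the points agree on some coordinate prefix
    and the next coordinate of p1 is smaller, i.e. lexicographic order."""
    assert len(p1) == len(p2)
    for j in range(len(p1)):
        if p1[j] != p2[j]:
            return p1[j] < p2[j]
    return False  # p1 == p2

# Example: (2, 6) < (6, 2) and (6, 2) < (6, 6)
assert point_less((2, 6), (6, 2)) and point_less((6, 2), (6, 6))
```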
In the subsequent sections, for better clarity, we use hyphen(−) for a certain parameter
of a node to denote that the particular parameter is irrelevant with respect to the context.
For instance, (p, −, T ) denotes that the list contents are irrelevant for that point p at this
time.
2.2 Threaded Trie
Figure 2: A sample threaded trie
Threaded tries are variants of tries that consist of two types of nodes, viz. trie nodes and data nodes. For instance, in Figure 2, A, B and C are trie nodes and the rest are data nodes. Unlike in tries, a trie node here does not have a field for blank. However, each of these trie nodes contains two fields: one is the index pointer and the other is a tag value, which is either 0 or 1, where 0 denotes that the corresponding index pointer is a thread, and 1 otherwise. Here, all null pointers are replaced by thread pointers, which point to the next valid node, if one exists. For instance, the thread pointers of 1, 2 and 3 of node A point to the node C, as this is the next valid node. Similarly, the thread pointers of 0 and 1 in C point to the data node 42. Note here that the ordering on the nodes provides the sorted sequence. Also, data nodes appear at the same level. This is accomplished by having uniform width for all data. For instance, the data 8 is treated as 08 in Figure 2.
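To make the role of the thread pointers concrete, the following illustrative Python sketch (ours, not the paper's) stores keys as fixed-width digit strings and answers "smallest stored key ≥ q", which is how the trie is used later; a real threaded trie follows one thread pointer where this sketch recurses to a sibling branch:

```python
WIDTH = 2  # fixed key width, e.g. 8 is stored as "08"

class TrieNode:
    def __init__(self):
        self.child = {}  # digit -> TrieNode, or the stored key at the last level

def insert(root, key):
    s = str(key).zfill(WIDTH)
    node = root
    for d in s[:-1]:
        node = node.child.setdefault(d, TrieNode())
    node.child[s[-1]] = key  # data "node" at the last level

def leftmost(node, depth):
    d = min(node.child)
    return node.child[d] if depth == WIDTH - 1 else leftmost(node.child[d], depth + 1)

def successor(node, q, depth=0):
    """Smallest stored key >= q; a threaded trie reaches the same key by
    following a single thread pointer instead of this recursive scan."""
    s = str(q).zfill(WIDTH)
    for d in sorted(node.child):
        if d < s[depth]:
            continue
        sub = node.child[d]
        if depth == WIDTH - 1:
            return sub                       # data level: key >= q
        if d > s[depth]:
            return leftmost(sub, depth + 1)  # "thread" target: next valid node
        found = successor(sub, q, depth + 1)
        if found is not None:
            return found
    return None

root = TrieNode()
for k in (8, 42, 97):
    insert(root, k)
assert successor(root, 9) == 42 and successor(root, 42) == 42
```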
2.3 Construction of Multi-level BITS-Tree
Multi-level BIT S-trees are constructed using a collection of BIT S-trees one at each level,
and interlinking the trees of two consecutive levels in a specified manner, which are due to
the following definitions. These multi-level BIT S-trees are termed here as BIT S-k d trees.
Definition 3 Given a point p = (x1 , x2 , . . . xk ) and an integer l ≤ k, the head of p and tail of
p are defined respectively as head(p, l) = (x1 , x2 , . . . xl ) and tail(p, l) = (xk−l+1 , xk−l+2 , . . . xk ).
Also, having (head(p, l), y1 , y2 , . . . ym ) = (x1 , x2 , . . . xl , y1 , y2 , . . . ym ), leads to (head(p, l),
tail(p, k − l)) = p.
Definition 4 Given S as the set of points in R^k with |S| = n, the set S1 is defined as S1 = ∪_{i=1}^{n} {(x) | p ∈ S and head(p, 1) = (x)}. That is, S1 is the set of distinct x values of the points in S. In general, Sj = ∪_{i=1}^{n} {(x1 , x2 , . . . xj ) | p ∈ S and head(p, j) = (x1 , x2 , . . . xj )}, where 1 ≤ j ≤ k.
Definition 5 For a point p = (x1 , x2 , . . . xj ) in Sj , the term xj is said to be the dimensional
value of p as the set of points in Sj is used to construct j th level BIT S-tree.
Figure 3: A BIT S 2d-tree: (a) Spatial representation of points. (b) BIT S-2d tree for points
shown in (a).
Definition 6 A BIT S kd-tree is a multi-level tree, which is constructed as follows.
1. Create separate BIT S trees, Tj for each Sj , 1 ≤ j ≤ k.
2. Let Xj = (x1 , x2 , . . . xj ). Now, for each node, say vj = (Xj , L, −), of Tj , 1 ≤ j < k, the
list L points to the node vj+1 = ((Xj , x0j+1 ), −, −) of Tj+1 ,
where x0j+1 = min{xj+1 | head(p, j) = Xj and (head(p, j), xj+1 ) ∈ Sj+1 }. We term
these links as cross links, and the node vj+1 as a cross link node in Tj+1 .
3. In T1 , there is only one cross link node, which is the first node in the in order sequence,
and the tree pointer always points to this node.
4. For each node v = ((Xj−1 , x0j ), −, T ) in Tj , 1 ≤ j ≤ k, T is pointer to the threaded tree
if v is a cross link node, and otherwise T is set to be null.
5. For every cross link node vj = ((Xj−1 , x0j ), −, T ) in Tj , the data node of T for a key, say
k 0 , points to the node ((Xj−1 , k 0 ), −, −) in Tj . That is, T provides links to the nodes in
{((Xj−1 , xj ), −, −)|(Xj−1 , xj ) ∈ Sj } and these links are termed as trie links.
A BIT S 2d-tree for the sample points in Figure 3(a) is shown in Figure 3(b). Since, BIT S
k d-trees are multi-level trees with binary inorder threaded search trees at each level, the
height of the trees at each level is O(log n). Also, each node in Ti−1 has a cross link to a
node in Ti , which has the least value for the ith co-ordinate with respect to the head value of
the node in Ti−1 . Note here that at least one such point exists. This link is useful to locate
a list of collinear points in the ith dimension, associated with a point in Ti−1 . Also, the trie
links are useful to locate a point in a given range window in constant time. The cross link
and trie links also make the structure much suitable to address range queries efficiently.
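For concreteness, the prefix sets S_j of Definition 4 and the cross-link targets of Definition 6 (item 2) can be computed as in the following sketch (ours); the point set is hypothetical and only loosely modelled on Example 1 below:

```python
def head(p, j):
    """First j coordinates of point p (Definition 3)."""
    return tuple(p[:j])

def prefix_sets(points, k):
    """S_j of Definition 4: the distinct j-coordinate prefixes of the points."""
    return {j: sorted({head(p, j) for p in points}) for j in range(1, k + 1)}

def cross_link_targets(points, k):
    """Definition 6, item 2: a node X_j of T_j cross-links to the node
    (X_j, x'_{j+1}) of T_{j+1} with the minimum possible (j+1)-st coordinate."""
    links = {}
    for j in range(1, k):
        for xj in {head(p, j) for p in points}:
            nxt = min(p[j] for p in points if head(p, j) == xj)
            links[xj] = xj + (nxt,)
    return links

# Hypothetical 2-d point set
pts = [(2, 2), (2, 6), (6, 2), (6, 6), (8, 10)]
print(prefix_sets(pts, 2)[1])      # [(2,), (6,), (8,)]
print(cross_link_targets(pts, 2))  # {(2,): (2, 2), (6,): (6, 2), (8,): (8, 10)} (order may vary)
```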
Normally, k d-trees perform insertion by a simple comparison between the respective coordinates at each level. However, deletion is tedious due to candidate replacement. This is
because, candidate for replacement can be anywhere in the subtree. Also, it requires a little
more work when the right subtree is empty. Now, to find a candidate for replacement, it
is required to find the smallest element from the left subtree to avoid violation of the basic
rules of k d-trees and then it is required to perform a swap of left and right subtrees, as many
possible candidate keys exist in the left subtree. To handle such a situation, we make use of
a collection of BIT S-trees, one for each dimension. Here, deleting a point may or may not
require a replacement, but if so, it is only the inorder successor and that can be located in
θ(1) time as inorder links exist for each node. Also, the cross links that exist between two
consecutive levels, practically provide a faster search on next level trees.
Another advantage of this structure is that when a node is pruned out at a particular
level, it need not be considered in the subsequent levels. That is, nodes that have head
values as these will be ignored in the subsequent levels. To the best of our knowledge, there
is no such structure using multi-levels of balanced binary search trees, with two-way threads
introduced in this work, for storing point data and to perform range search efficiently.
2.4 2d-Range Search for Window Query
Given a rectangular range in the form of a window, a range query finds all points lying within
this window. Let [x1 : x2 ] × [y1 : y2 ] be a given query range. First, we use a trie stored in
the first node, which is the only cross link node, in the tree at level 1 to find the smallest
point larger than or equal to x1 in S1 . The non-existence of such a point is determined from
the trie itself. On the other hand, once such a point p is located, subsequent points that fall
within [x1 : x2 ] can be determined using the inorder threads as the inorder sequence is in
sorted order. Let us say that the reported set of points as S 0 . However, if the dimensional
value of the point p is greater than x2 , it implies absence of required candidates.
Now, using cross links of the node in T1 , that corresponds to each point in S 0 , further
search is performed at T2 in a similar fashion. Note that each cross link node in T2 has a
trie structure that supports quick access to a node in T2 , where the dimensional value is
in [y1 : y2 ]. In case, the dimensional value of the cross link node is within [y1 : y2 ], the
respective trie structure need not be looked into, instead the inorder threads are used to
find the remaining candidates.
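The two-level search just described can be summarised by the following illustrative sketch (ours); sorted arrays with binary search stand in for the constant-time trie lookups and the inorder threads, and the point set is a hypothetical one consistent with Example 1:

```python
from bisect import bisect_left

def range_query_2d(points, x1, x2, y1, y2):
    """Report all points in [x1:x2] x [y1:y2], mimicking the two-level search:
    first the candidate x-values (tree T1), then, for each of them, the
    collinear points ordered by y (tree T2)."""
    xs = sorted({x for x, _ in points})                  # keys of T1
    by_x = {x: sorted(y for px, y in points if px == x)  # y-lists reached via cross links
            for x in xs}
    out = []
    i = bisect_left(xs, x1)                 # trie lookup: smallest x >= x1
    while i < len(xs) and xs[i] <= x2:      # inorder threads over T1
        ys = by_x[xs[i]]
        j = bisect_left(ys, y1)             # trie lookup at the cross-link node of T2
        while j < len(ys) and ys[j] <= y2:  # inorder threads over T2
            out.append((xs[i], ys[j]))
            j += 1
        i += 1
    return out

pts = [(2, 2), (2, 6), (6, 2), (6, 6), (8, 10)]
assert range_query_2d(pts, 1, 8, 5, 7) == [(2, 6), (6, 6)]  # matches query Q1 in Example 1
```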
Example: 1 For instance, let us consider Figure 3 with search range [1 : 8] × [5 : 7]. First,
we use the only cross link node present in T1 . As its dimension value, ie. 2 lies within
the range [1 : 8], we do not use the respective trie. Instead, we use the inorder threads
to identify the candidate points, which are 2,6 and 8. Now, for each of these candidates,
further search is continued respectively from (2, 2),(6, 2) and (8, 10) in T2 , as these are the
corresponding cross link nodes. Now, by looking at the tries of these cross link nodes we find
a point whose dimensional value is the smallest one is [5 : 7]. Thus, tries of (2, 2) yields
(2, 6), (6, 2) yields (6, 6) and (8, 10) yields nothing. Further, by performing inorder traversal
from (2, 6) and (6, 6), the final reported points for Q1 are E(2, 6) and C(6, 6). Also, for Q2
i.e, ([5 : 8] × [12 : 14]), no points will be reported.
Notice that one can stop the search at T1 without traversing T2 if there is no candidate
node in T1 within the given range. This is also applicable in k-d trees because if there is
no candidate node in the higher tree, the lower level trees need not be searched. Thus,
this structure prunes the search in some cases and thereby practically reduces the time for
reporting a query.
2.5 k-d Range Search
A range search on k-dimensional points can be performed by extending the search on
T3 , T4 , . . . Tk , similar to that of T2 as in the case of 2d range search. However in T1 and T2 ,
we need to perform the search as described for 2d range search. That is, when we take the
query range as [x1 : x01 ] × [x2 : x02 ] × . . . [xk : x0k ], the search is performed to find candidates
within the range of [x1 : x01 ] in T1 , [x2 : x02 ] in T2 , [x3 : x03 ] in T3 , and so on. Finally,
the points reported from Tk will be in Q. It is important to note that the search requires
comparison of keys within the given range of the particular co-ordinate dimension in each
of T1 , T2 , . . . Tk . This simplifies subsequent searches at the next level.
3 Implementation Details
3.1 Two Dimensions
Given a set of two dimensional points in <2 , a two-level tree (BIT S2d-tree) is constructed
in O(n) time as a point may require at most two insertions, one at T1 and the other at
T2 . But the position at which insertion is to be made in T1 and T2 could be determined
in constant time as described in the proof of Lemma 5. Thus, to insert n nodes requires
O(n) time. Also, it may be required to create a cross link for each node of T1 in the case of
BIT S 2d-tree. Since, T1 cannot have more than n points, the number of cross links created
cannot exceed n. Also, the number of trie links created cannot exceed the number of nodes
in T1 and T2 , which is O(n). Moreover, construction of a trie requires only constant time
as the height of the trie is constant due to fixed size of the key. Thus, all these factors lie
within O(log n) for each insertion.
Regarding space requirements in a BIT S 2d-tree, it is O(n), as the second tree is the
one that contains all the n points, and fewer or equal number of points in the first tree.
Also, the number of trie nodes is O(n) as the height of a trie is constant which is due to the
size of(number of digits) of the key. Thus, we obtain the following lemma.
Lemma: 1 Construction of BIT S 2d-tree for n points requires O(n) time and O(n) space.
Now, searching a candidate node in T1 is done through the trie in T1 and that requires only
constant time as the height of the trie is fixed. Once such a point is identified, subsequent
points are identified through inorder threads. Thus. for identifying candidate points, it
takes only θ(t1 ) time, if there are t1 candidate points in T1 . Now, using cross links of each
of these nodes, we can locate the required tries in constant time and further search is to be
done in a similar fashion as described earlier. Thus, it leads to the following lemma.
Lemma: 2 Range search for window query using BIT S 2d-tree can be addressed in θ(t)
time, where t stands for the number of points reported.
3.2 Higher Dimensions
A straight forward extension of BIT S 2d-tree to k dimensions is made easy by connecting
(cross links) to the corresponding nodes in the tree at next level. Unlike range trees [7]
which build another range tree at a given node from the main tree, we maintain the trees
T1 , T2 , . . . Tk , dimension-wise such that the inorder traversal provides an ordered sequence
of points stored in the tree. This definitely reduces the overall time taken for range search
across k dimensions. As described in the previous section, the time required to find a
candidate point in any Ti , 1 ≤ i ≤ k, is only a constant. Thus, it leads to the following
lemma.
Lemma: 3 Let S be a set of points in k-dimensional space, k ≥ 1. A range search on
BIT S kd-tree reports all points that lie within the rectangular query range in θ(t) time,
where t is the number of points reported.
Lemma: 4 Given a set of n points, a BIT S kd-tree can be constructed in O(n) time and
O(n) space.
Proof: Since we construct T1 , T2 . . . Tk , such that Tk at level k has at most n nodes, it follows
that N (Ti ) ≤ N (Ti+1 ), 1 ≤ i < k and N (Tk )=n, where N (Ti ) is the number of nodes in Ti .
Note that levels correspond to dimensions and hence may be used interchangeably. Also,
the number of trie nodes is O(n) as its height is constant. Therefore for k levels, a BIT S
k d-tree uses O(n) storage in the worst case as k is a constant. Now, construction of BIT S
k d-tree is considered as a sequence of insertions. Each insertion, may or may not alter Ti ,
1 ≤ i ≤ k, a BIT S tree of a particular level. However, if a BIT S-tree Tj is altered, due
to insertion, all trees Tj+1 , Tj+2 , . . . Tk will be altered. Let j be the least index such that
Table 1: Theoretical comparison of k d-trees, divided k -d trees, range trees, k -d Range DSL-trees, layered range trees and the proposed BITS k d-trees.

Description              | Storage           | Construction      | Update                   | Range Search
k d-Trees [4]            | O(n)              | O(n log n)        | O(log^k n)               | O(n^(1−1/k) + t)
Divided k -d trees [25]  | O(n)              | O(n log n)        | O(log^(k−1) n)           | O(n^(1−1/k) log^(1/k) n + t)
Range Trees [7]          | O(n log^(k−1) n)  | O(n log^(k−1) n)  | O(log^k n)               | O(log^k n + t)
k d-Range DSL-Trees [15] | O(n log^(k−1) n)  | O(n log^k n)      | O(n log^(k−1) n)         | O(log^k n + t)
Layered Range Trees [13] | O(n log^(k−1) n)  | O(n log^(k−1) n)  | O(log^k n)               | O(log^(k−1) n + t)
BITS k d-Trees           | O(n)              | O(n)              | Ins. θ(1), Del. O(log n) | θ(t)

n – number of points, k – dimensions, t – number of points reported.
the tree Tj is altered. Thus, for T1 , T2 , . . . Tj−1 , with trie links and cross links, one can
determine that the required values are already stored in those trees within constant time.
Now, from a particular cross link in Tj−1 followed by a trie link in Tj , one can find a position
for the new value in Tj . This requires only constant time. Then, while inserting the value
if the tree is unbalanced, at most one rotation is required to balance the tree. So, for Tj
too, it requires constant time. Let nj be the new node inserted in Tj . Now, by taking cross
link of inorder successor of nj , one can determine the position of the new node in Tj+1 , and
that as inorder predecessor of cross link node of inorder successor of nj . This new node in
Tj+1 need to have a trie, which again be created in constant time. Then, the process is to
be continued for Tj+2 . . . Tk . Here, updation in each Ti , 1 ≤ i ≤ k, takes only constant time
and hence each insertion takes θ(1) time. So, construction of BIT S k d-tree for n points
requires O(n) time.
Lemma: 5 Insertion and Deletion of a point in a BIT S kd-tree can be respectively done
in θ(1) and O(log n) time.
Proof: As per the description given in the proof of Lemma 4, insertion of a point in BITS
k d-tree takes only θ(1) time. But for deletion, finding a node to be removed from a BIT Stree requires only constant time. However, if that node is not a leaf node a cascading
replacement with inorder successor is required until reaching a leaf node to be removed
physically. Certainly, the number of such replacements to be done cannot exceed O(log n).
After that it may require a sequence of rotations on the path from the physically removed
leaf to the root, and that too in at most O(log n) rotations. So, deletion of a point in BIT S
k d-tree requires O(log n) time.
4 Performance
Table 1 summarizes the performance of k d-trees, divided k -d trees, range trees, k d-range
DSL-trees and the BITS k d-tree proposed in this work. Furthermore, our theoretical
comparison of the BIT S k d-tree is made with k d-trees adapted for internal memory(pointer
machine model) and not with any of the other bulk loading k d-trees(RAM model). The
results give an θ(t) query time using the BIT S k d-tree that shows a reduction in time as
compared to the existing bounds. Since we try to capitalize on the efficiency of balanced
search trees at all the levels by using cross links and trie links, we ensure that the number
of nodes visited during a range query is considerably reduced in BIT S k d-tree. Observe
that the storage is increased from O(n) in k d-trees to O(n logk−1 n) in range trees while
BIT S k d-tree still maintains an O(n). Notice that the update time for BITSk d-tree has
been reduced considerably. To summarize, although the storage requirements of BIT S k dtree are comparable to k -d trees, divided k -d trees, the construction and update time are
improved considerably. Moreover, the overall query time is improved to θ(t) time where t is
the number of points reported as it prunes points falling outside the query region for each
dimension.
5 Conclusion
A BITS k d-tree for storing k-dimensional points, with more efficient update and query operations than k d-trees, is proposed. The main advantage of this tree is that it effectively handles collinear points. As a result, the number of nodes visited during search is much less
compared to other k d-tree variants that are either not height balanced or update operation
is complex. In the case of height balanced k d-trees, having better search efficiency, insertion
is tedious. A k -d range DSL tree gives a logarithmic amortized worst case search time with
efficient updates mainly for partial match queries and not for window queries. In BIT S
k d-tree, overall insertion time is θ(1). Moreover, points can be dynamically updated at each
level. Since co-ordinate dimensions at each level are distributed and using threaded tries,
we quickly find points falling within the query range. Also, points falling above and below
the search range are pruned efficiently using cross links to the next level and inorder threads
similar to the BIT S-tree. In addition, threaded tries introduced in this work link the node,
having cross link, by means of trie links to find the points within the given range in constant
time. Therefore, range search for points in a rectangular region using BIT S k -d tree takes
θ(t) time where t is the number of points reported, and therefore the logarithmic factor in
earlier worst case bounds is reduced. Hence it is definitely a remarkable improvement over
O(n1−1/k + t) of k d-trees and O(logk n + t) time of k -d range DSL trees.
References
[1] P. Afshani, L. Arge, and K.G. Larsen. Higher-dimensional orthogonal range reporting
and rectangle stabbing in the pointer machine model. In Proceedings of the Twenty-Eighth Annual Symposium on Computational Geometry, pages 323–332. ACM, 2012.
[2] P. K. Agarwal. Range searching. In J. E. Goodman and J. O'Rourke, editors, CRC
Handbook of Discrete and Computational Geometry. CRC Press, Inc, 2004.
[3] S. Alstrup, G. S. Brodal, and T. Rauhe. New data structures for orthogonal range
searching. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pages 198–207. IEEE, 2000.
[4] J.L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–516, 1975.
[5] J.L. Bentley. Decomposable search problems. Information Processing Letters, 8(5):5–9,
June 1979.
[6] J.L. Bentley. Multidimensional binary search trees in database applications. IEEE
Transactions on Software Engineering, SE-5(4):333–340, 1979.
[7] J.L. Bentley. Multidimensional divide and conquer. Communications of the ACM,
23(4):214–229, April 1980.
[8] M.D. Berg, O. Cheong, M.V. Kreveld, and M. Overmars. Computational Geometry:
algorithms and applications. Springer-Verlag, New York,USA, third edition, 2008.
[9] T.M. Chan. Persistent predecessor search and orthogonal point location on the word
ram. In Proceedings of the Twenty-second Annual ACM-SIAM Symposium on Discrete
Algorithms, SODA ’11, pages 1131–1145. SIAM, 2011.
[10] T.M. Chan, K.G. Larsen, and M. Pătraşcu. Orthogonal range searching on the ram,
revisited. In Proceedings of the Twenty-seventh Annual Symposium on Computational
Geometry, SoCG ’11, pages 1–10, New York, NY, USA, 2011. ACM.
9
[11] M.M.P. Crespo. Design, Analysis and Implementation of New Variants of Kd-trees.
Master Thesis, Universitat Politecnica de Catalunya,Departament de Llenguatges i
Sistemes Informatics, 2010.
[12] L. Devroye, J. Jabbour, and C. Zamora-Cura. Squarish kd-trees. SIAM Journal of
Computing, 30:1678–1700, 2000.
[13] D.Willard. New data structures for orthogonal queries. Harvard University, TR:22–78,
1978.
[14] K.S. Easwarakumar and T. Hema. BITS-Tree-An efficient data structure for segment
storage and query processing. International Journal of Computers and Technology,
11(10):3108–3116, December 2013.
[15] M.G. Lamoureux and B.G. Nicolson. Deterministic skip lists for k-dimensional range
search. Technical Report (TR95-098), pages 1–95, November 1995.
[16] D.T. Lee and C. K. Wong. Worst-case analysis for region and partial region searches in
multidimensional binary search trees and balanced quad trees. Acta Informatica, pages
23–29, 1977.
[17] Y. Nekrich. Orthogonal range searching in linear and almost-linear space. Computational Geometry, 42(4):342–351, 2009.
[18] S. Nilsson and M.Tikkanen. An experimental study of compression methods for dynamic
tries. Algorithmica, 33(1):19–33, 2002.
[19] J.A. Orenstein. Multidimensional tries used for associative searching. Information
Processing Letters, 14(4):150–157, June 1982.
[20] F.P. Preparata and M.L. Shamos. Computational Geometry: An Introduction. Springer-Verlag, New York, 1985.
[21] R.C. Nelson and H. Samet. A consistent hierarchical representation for vector data.
In Proceedings of the SIGGRAPH’86 Conference, Dallas, volume 20, pages 197–206,
August 1986.
[22] H. Samet. Fundamentals of Multi-dimensional and Metric Data Structures. Academic
Press, New York,USA, 1974.
[23] H. Samet. The Design and Analysis of Spatial Data Structures. Addison Wesley, 1990.
[24] H. Tropf and H.Herzog. Multidimensional range search in dynamically balanced trees.
Applied Informatics, Vieweg Verlag,Germany, 2:71–77, 1981.
[25] M.J. van Kreveld and M.H. Overmars. Divided k-d trees. Algorithmica, 6:840–858,
1991.
Slow links, fast links, and the cost of gossip
arXiv:1611.06343v2 [cs.DC] 14 Dec 2017
Suman Sourav
National University of Singapore
[email protected]
Peter Robinson
Royal Holloway, University of London
[email protected]
Seth Gilbert
National University of Singapore
[email protected]
Abstract
Consider the classical problem of information dissemination: one (or more) nodes in a network have
some information that they want to distribute to the remainder of the network. In this paper, we study the
cost of information dissemination in networks where edges have latencies, i.e., sending a message from
one node to another takes some amount of time. We first generalize the idea of conductance to weighted
graphs by defining φ∗ to be the “critical conductance” and `∗ to be the “critical latency”. One goal of this
paper is to argue that φ∗ characterizes the connectivity of a weighted graph with latencies in much the
same way that conductance characterizes the connectivity of unweighted graphs.
We give near tight lower and upper bounds on the problem of information dissemination, up to
polylogarithmic factors. Specifically, we show that in a graph with (weighted) diameter D (with latencies
as weights) and maximum degree ∆, any information dissemination algorithm requires at least Ω(min(D+
∆, `∗ /φ∗ )) time in the worst case. We show several variants of the lower bound (e.g., for graphs with
small diameter, graphs with small max-degree, etc.) by reduction to a simple combinatorial game.
We then give nearly matching algorithms, showing that information dissemination can be solved in
O(min((D + ∆) log3 n, (`∗ /φ∗ ) log n) time. This is achieved by combining two cases. We show that
the classical push-pull algorithm is (near) optimal when the diameter or the maximum degree is large.
For the case where the diameter and the maximum degree are small, we give an alternative strategy in
which we first discover the latencies and then use an algorithm for known latencies based on a weighted
spanner construction. (Our algorithms are within polylogarithmic factors of being tight both for known
and unknown latencies.)
While it is easiest to express our bounds in terms of φ∗ and `∗ , in some cases they do not provide
the most convenient definition of conductance in weighted graphs. Therefore we give a second (nearly)
equivalent characterization, namely the average conductance φavg .
1 Introduction
Consider the problem of disseminating information in a large-scale distributed system: nodes in the network
have information that they want to share/aggregate/reconcile with others. Real world network communication
often has a time delay, which we model here as edges with latencies. The latency of an edge captures how
long communication takes, i.e., how many rounds it takes for two neighbors to exchange information. Low
latency on links implies faster message transmission whereas higher latency implies longer delays.
In the case of unweighted graphs, all edges are considered the same and are said to have unit latencies.
However, this is not true in real life and link latencies can vary greatly. In fact, even if nodes are connected
directly it might not be the fastest route for communication due to large latency of the link (which might arise
due to poor connection quality, hardware or software restrictions etc.); often choosing a multi-hop lower
latency path leads to faster distribution of information.
For unweighted graphs (without latencies), there exists a significant amount of literature, characterizing
the connectivity of a graph (referred to as the conductance of a graph), which indicates exactly how efficient
information dissemination will be. We would like to do the same for graphs with latencies, however, due
to the presence of latencies, not all edges can be regarded as the same; and therefore connectivity alone
is no longer enough. Thus, we introduce a new notion of the critical conductance φ∗ that generalizes the
notion of classical conductance. Using φ∗ , we give nearly tight lower and upper bounds for information
dissemination. For some cases, φ∗ might not be the most convenient definition of conductance in weighted
graphs. Alternatively, we give a (nearly) equivalent characterization, namely the average conductance φavg .
Model. We model the network as a connected, undirected graph G = (V, E) with n = |V | nodes. Each
node knows the identities of its neighbors and a polynomial upper bound on the size of the network. Nodes
communicate bidirectionally over the graph edges, and communication proceeds in synchronous rounds. An
edge is said to be activated whenever a node sends any message over the edge.
Latencies occur in the communication channel and not on the nodes. For simplicity, we assume that each
edge latency is an integer. (If not, latencies can be scaled and rounded to the nearest integer.) Also, the edge
latencies here are symmetric. Problems for non-symmetric arbitrarily large latencies are at least as hard as
directed unweighted networks (for which many tasks are impossible to achieve efficiently). Let D be the
(weighted) diameter of the graph (with latencies as weights), and let `max be the maximum edge latency. We
consider both cases where nodes know the latencies of adjacent edges (Section 5) and cases where nodes do
not know the latencies of adjacent edges (the rest of the paper). Nodes do not know D or `max .1
In each round, each node can choose one neighbor to exchange information with: it sends a message
to that neighbor and (automatically) receives a response.2 If the edge has latency `, then this round-trip
exchange takes time `. This model is, within constant factors, equivalent to a more standard model in which a
round-trip involves first sending a message with latency `, receiving it at the other end, and then sending a
response at a cost of latency `. Notice that each node can initiate a new exchange in every round, even if
previous messages have not yet been delivered, i.e., communication is non-blocking.
Information dissemination. We focus in this paper on one-to-all information dissemination. A designated
source node begins with a message (the rumor) and, when the protocol completes, every node should have
received the message.
Classic examples include distributed database replication, sensor network data aggregation, and P2P
publish-subscribe systems. This fundamental problem has been widely studied under various names, e.g.,
information dissemination (e.g., [5]), rumor spreading (e.g., [8]), global broadcast (e.g., [18]), one-to-all
1 In real world settings, nodes are often aware of their neighbors. However, due to fluctuations in network quality (and hence latency), a node cannot necessarily predict the latency of a connection.
2 Notice that this model of communication is essentially equivalent to the traditional push-pull where each node can either push data to a neighbor or pull data from a neighbor; here we assume a node always does both simultaneously. Without the ability to pull data, it is easy to see that information exchange takes Ω(nD) time, e.g., in a star. Simple flooding matches this lower bound.
multicast, and information spreading (e.g., [6]). As a building block, we look at local broadcast, i.e., the
problem of every node distributing a message to all of its neighbors.
Conductance in weighted graphs. Our goal in this paper is to determine how long it takes to disseminate
information in a graph with latencies. Clearly the running time will depend on the (weighted) diameter D of
the graph. Typically, such algorithms also depend on how well connected the graph is, and this is normally
captured by the conductance φ. Unfortunately, conductance is no longer a good indicator of connectivity in a
graph with latencies, as slow edges (with large weights) are much worse than fast edges.3
We begin by generalizing the idea of conductance to weighted graphs. We give two (nearly) equivalent
definitions of conductance in weighted graphs, which we refer to as the critical conductance φ∗ (Definition 2)
and the average conductance φavg (Definition 4). While they give (approximately) the same value for every
graph, there are times when one definition is more convenient than the other. In fact, we show that the values
of φ∗ and φavg are closely related, namely φ∗ /(2`∗ ) < φavg < (φ∗ /`∗ ) · ⌈log(`max )⌉ (c.f. Theorem 5). We compare these
definitions further in Section 2.3. We use φ∗ in determining the lower and upper bounds for information
dissemination as it makes our analysis simpler and then use the above relation to determine the bounds for
φavg .
A core goal of this paper is to argue that the notion of φ∗ (and φavg ) defined herein well captures the
connectivity of weighted graphs, and may be useful for understanding the performance of other algorithms.
Lower bounds. These constitute some of the key technical contributions of this paper. For a graph G,
with diameter D, maximum degree ∆, critical conductance φ∗ , and critical latency `∗ , we show that any
information dissemination algorithm requires Ω(min(D +∆, `∗ /φ∗ )) rounds. That is, in the worst case it may
take time D + ∆ to distribute information. However, if the graph is well connected, then we may do better
and the time is characterized by the critical conductance. We show that this lower bound holds even in various
special cases, e.g., for graphs with small diameter, or with small max-degree, etc. By the relation provided in
Theorem 5, we determine the lower bound in terms of average conductance as Ω(min(D + ∆, 1/φavg )).
The main technique we use for showing our lower bounds is a reduction to a simpler combinatorial
guessing game. (See [25] for a demonstration of how other variants of guessing games can be used to prove
lower bounds for radio networks.) We first show that the guessing game itself takes a large number of rounds.
Thereafter we reduce the problem of solving the game to that of solving information dissemination via a
simulation.
Upper bounds. We then show nearly matching upper bounds, i.e., algorithms for solving information dissemination. In this regard, we differentiate our model into two cases. For the case where nodes are not aware of the
adjacent edge latencies, we show that the classical push-pull random phone call algorithm [22] in which each
node initiates a connection with a randomly chosen neighbor in each round, completes in O((`∗ /φ∗ ) log n)
rounds. By using the relationship between φ∗ and φavg , we give a O((log(`max )/φavg ) log n) upper bound
in terms of φavg .
For the case where nodes do know the latencies of the incident edges, we obtain nearly tight bounds
that are independent of ∆ and φ∗ : we give a O(D log3 n)-time algorithm (which is within polylogarithmic
factors of the trivial Ω(D) lower bound). The key idea of the algorithm is to build a (weighted) spanner
(based on that in [2]). This spanner is then used to distribute information. This algorithm, however, requires
knowledge of a polynomial upper bound on n; hence for completeness we also provide an alternate algorithm
in Appendix 5.4 that does not require the knowledge of n but takes an additional log D factor (instead of
log n), making it unsuitable for graphs with large diameters.
Finally, we observe that we can always discover the latencies of the “important” adjacent edges in
3 Notice that you might model an edge with weight w as a path of w edges with weight 1. If you calculate the conductance of the resulting graph, you do not get a good characterization of the connectivity of the original graph, for a few different reasons. For example, consider the ability of the imaginary nodes on the edge to pull data from the endpoints.
Õ(D + ∆) time4 , after which we can use the algorithm that works when latencies are known. Hence, even if
latencies are unknown, combining the various algorithms, we can always solve the information dissemination
in O(min((D + ∆) log3 n, (`∗ /φ∗ ) log n) time (or O(min((D + ∆) log3 n, (log(`max )/φavg ) log n) time),
matching the lower bounds up to polylogarithmic factors (with respect to the critical conductance).
Summary of our contributions. To the best of our knowledge, this work provides a first ever characterization
of conductance in graphs with latencies. In this regard, we provide two different parameters namely φ∗ and
φavg . Note that, we provide the summary here only in terms of φ∗ , however, for each case there exists an
alternate version in terms of φavg .
For lower bounds, we show that there exists graphs with
(a) O(log n) diameter with maximum degree ∆ where local broadcast requires Ω(∆) rounds;
(b) O(`∗ ) diameter with critical conductance φ∗ where local broadcast requires Ω(1/φ∗ + `∗ ) rounds;
(c) Θ(1/φ∗ ) diameter where information dissemination requires Ω(min(D + ∆, `∗ /φ∗ )) rounds; showing
the trade-off among the various parameters affecting information dissemination.
For upper bounds on information dissemination, we show that
(d) the push-pull algorithm takes O(`∗ log(n)/φ∗ ) rounds;
(e) a spanner-based algorithm takes O((D + ∆) log3 n) rounds.
We view our results as a step towards a more accurate characterization of connectivity in networks with
delays and we believe that the metrics φ∗ and φavg can prove useful in solving other graph problems.
Prior work. There is a long history studying the time and message complexity of disseminating information
when all the links have the same latency. It is interesting to contrast what can be achieved in the weighted
case with what can be achieved in the unweighted case.
The classic model for studying information dissemination is the random phone call model, introduced
by [10]: in each round, each node communicates with a single randomly selected neighbor; if it knows the
rumor, then it “pushes” the information to its neighbor; if it does not know the rumor, then it “pulls” it from
its neighbor (see, e.g., [13], [23], [16]).
An important special case is when the graph is a clique: any pair of nodes can communicate directly. In a
seminal paper, Karp et al. [22] show that a rumor can be disseminated in a complete graph in O(log n) rounds
with O(n log log n) message complexity. Fraigniaud and Giakkoupis [14] show how to simultaneously
achieve optimal communication complexity (except for extremely small rumor sizes).
When the graph is not a clique, the performance of the classical push-pull protocol, wherein a node
exchanges information with a random neighbor in each round, typically depends on the topology of the
graph, specifically, how well connected the graph is. An exciting sequence of papers (see [7, 8, 16, 24] and
references therein) eventually showed that rumor spreading in this manner takes time O( logφ n ), where φ is
the conductance of the graph.
The question that remained open was whether a more careful choice of neighbors lead to faster information
dissemination. In a breakthrough result, Censor-Hillel et al. [5] gave a randomized algorithm for solving
information dissemination in any (unweighted) graph in time O(D + polylogn), where D here is the nonweighted diameter of the graph. Of note, the protocol has no dependence on the conductance of the graph but
only on the diameter (which is unavoidable). There were two key ingredients to their solution: first, they gave
a “local broadcast” protocol where each node exchanges information with all its neighbors in O(log3 n) time;
second, as a by-product of this protocol they obtain a spanner which they use in conjunction with a simulator
(defined therein) to achieve information dissemination in O(D + polylogn) time. Haeupler [18] then showed
how local broadcast could be achieved in O(log2 n) time using a simple deterministic algorithm.
The conclusion, then, is that in an unweighted graph (with unit latency edges), information dissemination
can be achieved in time O(D + polylogn) or in time O(log(n)/φ).
4 The notation Õ hides polylogarithmic factors, which arise due to D and ∆ being unknown.
Other related works. The problem has been well researched in several other settings as well. For graphs
modeling social networks Doerr et al. [11, 12] show a Θ(log n) time bound for solving broadcast. For the
case of direct addressing, Haeupler and Malkhi [19] show that broadcast can be performed optimally in
O(log log n) rounds. Information dissemination in random geometric graphs has been studied by Bradonjić
et al. [4], in wireless sensor networks and adhoc networks by Boyd et al. [3], Sarwate and Dimakis [26] and
Gandhi et al. [15], Giakkoupis et al. [17] study the problem in dynamic graphs.
2 Conductance in Weighted Graphs
In this section, we provide two different approaches to characterize conductance in weighted graphs namely
the critical conductance and the average conductance and show the relationship between them. In the sections
that follow, where we determine the bounds on information dissemination, we use the critical conductance as
it makes our analysis simpler. Corresponding bounds for average conductance are obtained by the application
of the given relationship in Theorem 5.
2.1 Critical Conductance
We now define the critical conductance of a graph, generalizing the classical notion of conductance. For a
given graph G = (V, E), and for a set of edges S ⊆ E, we define E` (S) to be the subset of edges of S that
have latency ≤ `. For a set of nodes U ⊆ V and cut C = (U, V \ U ), we define E` (C) to be the subset of edges across the cut C with latency ≤ `, and we define the volume Vol(U ) = Σ_{v∈U} deg_v , where deg_v refers
to the degree of node v.
We first define the critical conductance of a cut for a given latency `, and then define the weight-`
conductance as the minimum critical conductance across all cuts.
Definition 1 (Weight-ℓ Conductance). Consider a graph G = (V, E). For any cut C in the set of all possible cuts C̃ of the graph G and an integer ℓ, we define

φ_ℓ(C) = |E_ℓ(C)| / min{Vol(U), Vol(V \ U)}.

The weight-ℓ conductance is given by φ_ℓ(G) = min{φ_ℓ(C) | C ∈ C̃}.
Definition 2 (Critical Conductance). We define the critical conductance φ*(G) as

φ*(G) = φ_ℓ(G) such that φ_ℓ(G)/ℓ is maximum over all ℓ ∈ (1, ℓ_max).

We call ℓ* the critical latency for G if ℓ* is this maximizing ℓ, i.e., φ*(G) = φ_{ℓ*}(G).
We simply write φ∗ (or φ` ) instead of φ∗ (G) (or φ` (G)) when graph G is clear from the context. If all
edges have latency 1, then φ∗ is exactly equal to the classical graph conductance [21].
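To make these definitions concrete, here is a minimal brute-force sketch in Python (illustrative only; the edge-list encoding and function names are ours, not the paper's) that computes φ_ℓ(G) and φ*(G) for a small weighted graph by enumerating all cuts:

from itertools import combinations

def weight_ell_conductance(n, edges, ell):
    """phi_ell(G): minimize |E_ell(C)| / min(Vol(U), Vol(V \\ U)) over all cuts C.
    `edges` is a list of (u, v, latency) with nodes 0..n-1; Vol counts all incident edges."""
    deg = [0] * n
    for u, v, _ in edges:
        deg[u] += 1
        deg[v] += 1
    best = float("inf")
    for size in range(1, n // 2 + 1):                 # enumerate the smaller side of each cut
        for U in combinations(range(n), size):
            U = set(U)
            fast_cut = sum(1 for u, v, lat in edges
                           if lat <= ell and ((u in U) != (v in U)))
            vol = min(sum(deg[u] for u in U),
                      sum(deg[v] for v in range(n) if v not in U))
            if vol > 0:
                best = min(best, fast_cut / vol)
    return best

def critical_conductance(n, edges):
    """phi*(G): the phi_ell(G) whose ratio phi_ell(G)/ell is maximal (scanning the
    distinct latency values present in the graph); returns (phi*, ell*)."""
    ell_star = max({lat for _, _, lat in edges},
                   key=lambda l: weight_ell_conductance(n, edges, l) / l)
    return weight_ell_conductance(n, edges, ell_star), ell_star

# toy example: a 4-cycle in which one edge is slow
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 5)]
print(critical_conductance(4, edges))

On this toy graph the latency 1 edges alone already determine the bottleneck, so the sketch reports ℓ* = 1 and φ* equal to the classical conductance of the fast subgraph.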
2.2 Average Conductance
For a given graph G = (V, E), we first define ⌈log(ℓ_max)⌉ different latency classes, where the first class contains all the edges of latency < 2 and the subsequent i-th latency class consists of all the edges in the latency range (2^{i−1}, 2^i]. For a set of nodes U ⊆ V and the cut C = (U, V \ U), we define k_i(C) to be the subset of edges across the cut C belonging to latency class i (i.e., all cut edges of latency > 2^{i−1} and ≤ 2^i).
For a cut C, we first define the average cut conductance as φavg (C), and then define the average
conductance as the minimum average cut conductance across all cuts.
Definition 3 (Average Cut Conductance). Consider a graph G = (V, E), a set of nodes U ⊆ V and the cut C = (U, V \ U). Let S be min{Vol(U), Vol(V \ U)}. Then

φ_avg(C) = (1/S) Σ_{i=1}^{⌈log(ℓ_max)⌉} |k_i(C)| / 2^i.
Definition 4 (Average Conductance). Let C̃ be the set of all possible cuts of the graph G. We define the
average conductance as φavg (G) = min{φavg (C) | C ∈ C̃}.
We simply write φavg instead of φavg (G) when graph G is clear from the context. If all edges have
latency 1, then φavg is exactly equal to the classical graph conductance [21].
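The same kind of brute force works for φ_avg; a short illustrative sketch (same edge-list encoding as in the sketch above, with the latency class of an edge of latency λ taken as max(1, ⌈log₂ λ⌉)):

import math
from itertools import combinations

def average_conductance(n, edges):
    """phi_avg(G): minimize (1/S) * sum_i |k_i(C)| / 2^i over all cuts, where S is the
    smaller volume of the two sides and k_i(C) holds the cut edges of latency class i."""
    deg = [0] * n
    for u, v, _ in edges:
        deg[u] += 1
        deg[v] += 1
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for U in combinations(range(n), size):
            U = set(U)
            S = min(sum(deg[u] for u in U),
                    sum(deg[v] for v in range(n) if v not in U))
            total = sum(1.0 / 2 ** max(1, math.ceil(math.log2(lat)))
                        for u, v, lat in edges if (u in U) != (v in U))
            if S > 0:
                best = min(best, total / S)
    return best

print(average_conductance(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 5)]))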
2.3 Comparing Critical and Average Conductances
Conductance, in general, is a characterization of the "bottleneck in communication" of a graph. For unweighted graphs, the only bottleneck in communication can be the connectivity of the graph; for weighted graphs, however, the bottleneck can arise either due to the graph connectivity or due to the edge latencies (even if two nodes are directly connected by a slow edge, there might exist a faster multi-hop path). Our aim is to capture both aspects of this bottleneck in communication.
Having good connectivity facilitates faster communication whereas large latencies result in slow-downs.
Ideally, we would want the best connectivity along with the least slowdown for faster communication.
We obtain the definition of φ∗ by directly optimizing these orthogonal parameters. The connectivity that
maximizes this ratio is defined as the critical conductance φ∗ and the corresponding latency is defined as the
critical latency `∗ . In other words, φ∗ captures the bottleneck due to connectivity whereas `∗ captures the
bottleneck due to latency.
The definition of the average conductance φavg is inspired by the classical notion of conductance. Each
cut edge’s contribution towards the overall connectivity is normalized by dividing it with its latency (rounded
to the upper bound of its latency class), so as to account for the slow-down.
Surprisingly, φ* and φ_avg turn out to be closely related. To show the relationship between them, we first define L as the number of non-empty latency classes in the given graph G. Latency class i is said to be non-empty if there is at least one edge in the graph G that has latency > 2^{i−1} and ≤ 2^i. The maximum value that L can take is ⌈log(ℓ_max)⌉, which is the total number of possible latency classes.
Theorem 5. φ*/(2ℓ*) ≤ φ_avg ≤ L·φ*/ℓ*.
Proof. Consider any weighted graph G that has critical conductance φ∗ and critical latency as `∗ . We first
show the upper bound. Let C be the cut from which φ∗ was obtained and let S be the minimum volume
among either side of the cut.
By the definition of weight-ℓ conductance,

φ_{2^i}(C) = (Σ_{j=1}^{i} |k_j(C)|) / S,

and from the definition of φ*, we know that φ*/ℓ* ≥ φ_ℓ/ℓ for any ℓ, which implies, for all i ∈ (1, ⌈log(ℓ_max)⌉),

φ*/ℓ* ≥ φ_{2^i}(C)/2^i = (Σ_{j=1}^{i} |k_j(C)|/2^i)/S ≥ |k_i(C)|/(2^i S).
Note that, in the definition of φ_avg, the terms corresponding to the empty latency classes are zero. We bound each remaining term in the definition of φ_avg(C) by φ*/ℓ* using the above inequality, and obtain φ_avg(C) ≤ (φ*/ℓ*)·L. Combining this with the fact that φ_avg is the minimum average cut conductance, we obtain
φ_avg ≤ φ_avg(C) ≤ (φ*/ℓ*)·L.   (1)
Next we show the lower bound. For this, we consider the cut C' that determines φ_avg and let S' be the minimum volume among either side of the cut. On this cut C', consider the latency class of the critical latency ℓ*; say ℓ* lies in latency class x, which implies that 2^{x−1} < ℓ* ≤ 2^x. From the definition of weight-ℓ conductance, we get

φ_{ℓ*}(C')/(2ℓ*) ≤ (|k_1(C')| + |k_2(C')| + · · · + |k_x(C')|)/(2ℓ* S').
Rewriting φ_avg (from its definition) as

φ_avg = |k_1(C')|/(2S') + |k_2(C')|/(2^2 S') + · · · + |k_{⌈log(ℓ_max)⌉}(C')|/(2^{⌈log(ℓ_max)⌉} S'),
and comparing the first x terms of φ_avg to those of φ_{ℓ*}(C')/(2ℓ*), we observe that each term in the expression of φ_avg is at least as large as the corresponding term in the above upper bound on φ_{ℓ*}(C')/(2ℓ*). Also, there are some additional positive terms in φ_avg. Combining this with the fact that φ_{ℓ*}/(2ℓ*) ≤ φ_{ℓ*}(C')/(2ℓ*) (as, by definition, φ_ℓ is the minimum value among all possible cuts), we obtain
φ_{ℓ*}/(2ℓ*) ≤ φ_{ℓ*}(C')/(2ℓ*) ≤ φ_avg.   (2)
This proves the lower bound and completes the proof.
3 Lower Bounds
We proceed to lower bound the time for completing information dissemination. The main goal of this section (as found in Theorems 9, 10, and 13) is to show that every gossip algorithm requires Ω(min{∆ + D, ℓ*/φ*}) rounds on graphs with diameter D, max-degree ∆, critical conductance φ*, and critical latency ℓ*. Throughout this
section, we assume that nodes do not know the latencies of their adjacent links (when nodes do know the
latencies, the trivial lower bound of Ω(D) is sufficient).
We begin by defining a combinatorial guessing game (a similar approach as in [25]) and show a lower
bound for it.5 We then construct several different worst-case graphs and reduce the guessing game to solving
information dissemination on these graphs, thereby showing our lower bound.
3.1 The Guessing Game
We define a guessing game played by Alice against an oracle. Conceptually, the game is played on a bipartite
graph of 2m nodes. The oracle selects a subset of the edges as the target. In each round, Alice guesses a set
of at most 2m edges, and the oracle reveals any target edges that have been hit. At the same time, if any edge
(u, v) in the target set is guessed by Alice, then all adjacent edges (x, v) in the target set are removed from
the target set.
Fix an integer m. Let A and B be two disjoint sets of m integers each, i.e., the left and right group of
nodes in the bipartite graph. The winning condition of the game depends on a predicate P , which returns
a subset of edges from A × B. For example, P = Randomp returns a subset T that contains elements of
A × B, where each element is chosen with probability p or discarded with probability 1 − p.
5 The results of [25] do not apply directly to our setting, as their "proposal set" of the player must intersect the target set in exactly 1 element. By contrast, the guessing game here requires us to discover sufficiently many target elements such that every element in the target set occurs at least once.
We now define the game Guessing(2m, P ), which begins when Alice receives two disjoint sets A and B.
The oracle chooses a target set T1 ⊆ A × B returned by the predicate. Throughout, we assume that Alice has
access to a source of unbiased random bits. Alice’s goal is to eliminate all the elements in the target set. In
each round r > 1, Alice submits a set Xr ⊆ A × B of size at most 2m as her round r guesses to the oracle.
The oracle replies by revealing the items she guessed correctly, i.e., Xr ∩ Tr . The oracle then computes the
round r + 1 target set Tr+1 by removing the items that Alice hit, i.e., all the items in Tr that have the same
B-component as an item in Xr ∩ Tr :
T_{r+1} = T_r \ (T_r^A × (X_r ∩ T_r)^B).   (3)
This concludes round r and the next round begins.
The game is solved in the first round r0 , where Alice’s guesses result in an empty target set; at this point,
the oracle answers halt. In other words, the game ends in round r0 if, for every b ∈ T1B , there was some
a0 ∈ A such that (a0 , b) ∈ Xr ∩ Tr , in some r ∈ [1, r0 ].
Alice’s aim is to minimize the number of rounds until the target set becomes empty. We say that a protocol
Π solves Guessing(2m, P ) with probability 1 − in r rounds, if Π always terminates within r rounds, and
Tr+1 = ∅ with probability > 1 − , for any target set T . In this case, we call Π an -error protocol.
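A minimal round-by-round simulation of this game (with the oracle update of (3)) might look as follows; the encoding and the uniform guessing strategy used in the example are ours and purely for illustration:

import random

def play_guessing_game(m, target, strategy, max_rounds=10**6):
    """Simulate Guessing(2m, P): `target` is T_1, a set of (a, b) pairs with a in
    A = range(m) and b in B = range(m, 2m); `strategy(r, hits)` returns at most 2m
    guesses for round r. Returns the first round after which the target set is empty."""
    T = set(target)
    hits = set()
    for r in range(1, max_rounds + 1):
        X = set(strategy(r, hits))                 # Alice's round-r guesses X_r
        new_hits = X & T                           # the oracle reveals X_r ∩ T_r
        hits |= new_hits
        hit_b = {b for (_, b) in new_hits}
        T = {(a, b) for (a, b) in T if b not in hit_b}   # update rule (3)
        if not T:
            return r                               # the oracle answers "halt"
    return None

# example run: a single-pair target and the naive uniform 2m-guess strategy
m = 16
A, B = range(m), range(m, 2 * m)
target = {(random.choice(A), random.choice(B))}
uniform = lambda r, hits: {(random.choice(A), random.choice(B)) for _ in range(2 * m)}
print(play_guessing_game(m, target, uniform))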
3.2 Guessing by Gossiping
Our lower bound results use variants of an n-node distributed network that has a guessing game gadget of 2m nodes embedded as a subgraph. In our gadget construction, we use the predicate P to specify a set of hidden low latency edges, which we call fast edges. We show that the execution of a gossip algorithm on an n-node network can be simulated by Alice when playing the guessing game Guessing(2m, P), where n ≥ 2m.
We use the notation id(v) to denote the ID of a vertex v, which, by construction, is unique. For a given instance of the guessing game, Alice creates a set of nodes L = {v_1, . . . , v_m} where id(v_i) = a_i ∈ A for i = 1, . . . , m and, similarly, maps the integers in B to the IDs of the vertex set R = {u_1, . . . , u_m} in a one-to-one fashion. Next, Alice creates a complete bipartite graph on the sets L and R by adding m² cross edges, and adds a clique on the vertices in L where all clique edges are considered to have latency 1.
For given integer parameters lo and hi, we construct the network in such a way that only the cross edges in the target set are useful to the algorithm, by giving them a low latency lo, whereas all other cross edges are assigned a large latency value hi. Formally, the latency of a cross edge e = (v_i, u_j) is lo iff (id(v_i), id(u_j)) ∈ P; otherwise e has latency hi. We denote this constructed gadget as G(2m, lo, hi, P), where the parameters refer to the size of the gadget (i.e., 2m), the low latency value lo, the high latency value hi, and the predicate P, respectively. We also consider a symmetric variant, called Gsym(2m, lo, hi, P), where Alice creates a clique on R in addition to the one on L. See Figure 1.
Since Alice does not know the target set T in advance, she also does not know when a cross edge should
have latency lo or latency hi. Nevertheless, implicitly these latency assignments are fixed a priori by the
target set (unknown to Alice) which in turn depends on the predicate P . Whenever a cross edge e is activated
in our simulation, Alice submits the ID pair of the vertices of e as a guess to the oracle, whose answer reveals
the target set membership and hence also the latency of e.
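Written out explicitly, the gadget is just two node sets, one or two cliques, and a complete bipartite graph whose cross-edge latencies are determined by the target set; a small illustrative constructor (our own encoding, not part of the paper) is:

def build_gadget(m, lo, hi, target, symmetric=False):
    """Return (L, R, edges) for G(2m, lo, hi, P) (or Gsym if symmetric=True).
    Nodes 0..m-1 form L and m..2m-1 form R; `target` is the set of index pairs
    (i, j) whose cross edge (v_i, u_j) receives the low latency lo."""
    L, R = list(range(m)), list(range(m, 2 * m))
    edges = []
    for side in ([L, R] if symmetric else [L]):        # latency-1 clique(s)
        for x in range(len(side)):
            for y in range(x + 1, len(side)):
                edges.append((side[x], side[y], 1))
    for i in range(m):                                 # complete bipartite cross edges
        for j in range(m):
            edges.append((L[i], R[j], lo if (i, j) in target else hi))
    return L, R, edges

# example: the symmetric gadget with a singleton target, as used later in Theorem 9
L, R, edges = build_gadget(4, lo=1, hi=16, target={(2, 1)}, symmetric=True)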
Lemma 6 (Gossip Protocol Simulation). Suppose that there is a t-round -error algorithm A that solves
local broadcast on a given n-node network H that contains G(2m, 1, h, P ) or Gsym (2m, 1, h, P ) such that
the cross edges of the gadget form a cut of H, for h > t, n > 2m, and a predicate P . Then there is an -error
protocol Π for Guessing(2m, P ) that terminates in 6 t rounds.
Proof. We argue that Alice can simulate the execution of A on network H and, in particular, on the subgraph
G(2m, 1, h, P ), until the gossip algorithm A terminates or the oracle answers halt. (It is straightforward to
[Figure 1: Guessing Game Gadgets. Red edges correspond to "fast" links whereas the blue edges are "slow" links with high latency.]
extend the argument to a subgraph Gsym (2m, 1, h, P ).) At the same time, Alice can use the behavior of A on
the subgraph G(2m, 1, h, P ) to derive a protocol for Guessing(2m, P ).
For a given instance of the guessing game, Alice creates the network H by first assigning all edges in the
subgraph H \ G(2m, 1, h, P ) a latency of 1. Moreover, she creates the edges of the subgraph G(2m, 1, h, P )
as described in Section 3.2; we will see below that the latency of a cross edges is only set when it is first
activated.
If a non-cross edge (v_i, v_j) (i.e., a clique edge on L or an edge in E(H \ G(2m, 1, h, P))) is activated by the algorithm, Alice locally simulates the bidirectional message exchange by updating the states of nodes v_i and v_j accordingly. In each round r of the gossip algorithm, a set of at most 2m cross edges is activated by the vertices simulated by Alice. For each activated cross edge (v_i, u_j), Alice uses (id(v_i), id(u_j)) as one of her round r guesses. Consider some round r ≥ 1, and suppose the oracle returns the empty set. For each of Alice's submitted round r guesses (a_i, b_j) that was not contained in the oracle's answer, Alice sets the latency of (a_i, b_j) to h by updating the local state of a_i; here a_i = id(v_i) and b_j = id(u_j) are chosen in round r, for some v_i ∈ L and u_j ∈ R. It follows by a simple inductive argument that the state of every vertex in the simulation is equivalent to executing the algorithm on the network.
We now argue that the above simulation of a t-round gossip algorithm for local broadcast solves the game
Guessing(2m, P ) in at most t rounds with probability > 1 − , for any predicate P . Recall that the guessing
game ends if T becomes empty, which happens when Alice’s correct guesses have included every b ∈ T B at
least once. By the premise of the lemma, the cross edges of G(2m, 1, h, P ) form a cut of H, which tells us
that A cannot solve local broadcast without using the cross edges between L × R. Since every such b ∈ R is
a neighbor of a node in L, the only way it can receive a local broadcast message is via a fast cross-edge in T .
Hence, if the local broadcast algorithm terminates, we know that b was hit by one of Alice’s guesses.
3.3 Guessing Game Lower Bounds
The following lemma is instrumental for showing the Ω(∆) lower bound of Theorem 9, which holds when
there are no other assumptions on the critical conductance of the graph.
Lemma 7. Let Guessing(2m, |T| = 1) be the guessing game where the target set is a single pair chosen uniformly at random from A × B. If protocol Π is an ε-error protocol for Guessing(2m, |T| = 1) where ε < 1, then the number of rounds until Π terminates is at least Ω(m).
Proof. For the sake of a contradiction, suppose that Π solves Guessing(2m, |T| = 1) in t < m/2 − 1 rounds.
We define Time to be the random variable of the number of rounds until termination in a given execution of
Π.
Consider a round r > 1 of the protocol and suppose that the game has not yet ended, i.e., Alice has not
yet guessed all of T correctly and has made at most 2m(r − 1) (incorrect) guesses in the previous rounds.
Let Xr denote the (at most 2m) pairs from A × B chosen by Alice in round r. Since from Alice’s point of
view, the adversary has chosen the single element of T uniformly at random from the m² elements in A × B, the probability that Alice guesses the element of T in round r is at most 2m/(m² − 2m(r−1)) ≤ 2/(m − 2r). Let Correct denote the event that protocol Π correctly solves the game. It follows that

Pr[Time = r | Correct] = Pr[T_r ⊆ X_r | Correct] ≤ 2/(m − 2r).   (4)
In the remainder of the proof, we will lower bound the probability of the event {Time > t}. Observe that

Pr[Time > t] ≥ Pr[Time > t | Correct] · Pr[Correct].

If Time > t, then none of Alice's guesses in rounds 1, . . . , t were successful, i.e.,

Pr[Time > t | Correct] ≥ ∏_{i=1}^{t} (1 − Pr[Time = i | Correct]).

Applying (4) to each round i ≤ t, we get

Pr[Time > t] ≥ ∏_{i=1}^{t} (1 − 2/(m − 2i)) · (1 − ε) ≥ (1 − 1/(m/2 − t))^t · (1 − ε).

Since the running time of Π never exceeds t rounds, i.e., Pr[Time > t] = 0, and ε < 1, we get a contradiction to t < m/2 − 1.
The next lemma bounds the number of guesses required when the target set is less restricted and its edges
form a random subset of the cross edges between A × B. This allows us to derive a lower bound on the local
broadcast time complexity in terms of the critical conductance in Theorem 10.
Lemma 8. For the guessing game input sets A and B, let Random_p be the predicate that defines the target set T by adding each element of A × B to T with probability p, for some p ≥ Ω(1/m). Then, any protocol that solves Guessing(2m, Random_p) requires Ω(1/p) rounds in expectation. On the other hand, if Alice uses the protocol where she submits her 2m guesses in each round by choosing, for each a ∈ A, an element b' ∈ B uniformly at random, and, for each b ∈ B, an a' ∈ A uniformly at random, then Ω(log(m)/p) rounds are required in expectation.
Proof. Recall that the game ends when the guesses of Alice have hit each element in T^B ⊆ B at least once, whereas T^B is itself a random variable. Let Y be the maximum number of guesses required by Alice's protocol Π. For the sake of our analysis, we will consider Alice's guesses as occurring sequentially and hence we can assume that elements of T^B are discovered one by one. For each j ≥ 1, we define Z_j to denote the number of guesses required to guess the j-th element of T^B, after having already guessed j − 1 elements.
We will first consider general protocols. Considering that each edge is in the target set with probability p,
we can assume that the target membership of an edge e is determined only at the point when Alice submits e
as a guess. Recalling that Alice has full knowledge of the remaining elements in T B that she still needs to
guess (cf. (3)), we can assume that her guess is successful with probability p (as she will only guess edges
that potentially discover a new element in T B ). For this guessing strategy, this remains true independently
of the current target set and the set of previously discovered elements (which we denote by Dj ). Formally,
Pr[Z_j | D_j, T] = Pr[Z_j] and hence E[Z_j | D_j, T] = E[Z_j] = ⌈1/p⌉. Note that any b ∈ B will be part of some target edge in T, i.e., b ∈ T^B, with probability ≥ 1 − (1 − p)^m = Ω(1), since p = Ω(1/m), and therefore E[|T|] = Ω(m). It follows that
E[Y] = E[E[Y | D_j, T]] = E[ Σ_{i=1}^{|T|} E[Z_i | D_i, T] ] = E[ Σ_{i=1}^{|T|} E[Z_i] ] = Ω(m/p).
Considering that Alice can guess up to 2m elements per round, it follows that the time is Ω(1/p), which completes the proof for general algorithms.
Now consider the case where Alice uses the protocol where she submits her 2m guesses in each round by
choosing, for each a ∈ A, an element b0 ∈ B uniformly at random, and, for each b ∈ B, an a0 ∈ A uniformly
at random. Note that this process of selecting her guesses is done obliviously of her (correct and incorrect)
guesses so far.
Observe that Zj depends on a random variable Fj , which is the size of T after the (j − 1)-th successful
guess. Since Zj is the number of times that the protocol needs to guess until a new element in T B is
discovered, the distribution of Z_j corresponds to a geometric distribution. According to Alice's protocol, the probability of guessing a new element is given by F_j/m² and hence E[Z_j | F_j] ≥ m²/F_j. Let U = |T_1^B|; i.e., U is the number of all elements in B that are part of an edge in T initially. We have
E[Y | U ≥ m/2] = E[ E[Y | F_i] | U ≥ m/2 ]
  = E[ Σ_{i=1}^{U} E[Z_i | F_i] | U ≥ m/2 ]
  ≥ Σ_{i=1}^{⌊m/2⌋} E[ E[Z_i | F_i] | U ≥ m/2 ]
  ≥ Σ_{i=1}^{⌊m/2⌋} m² / E[F_i | U ≥ m/2],
where the last inequality follows from E[1/X] > 1/E[X], for any positive random variable X, due to
Jensen’s Inequality. Since Alice has already correctly guessed i − 1 elements from T B , we discard all
elements that “intersect” with successful guesses when updating the target set at the end of each round,
according to (3). It can happen that the protocol discovers multiple elements of T B using the round r guesses
(which we have assumed to happen sequentially in this analysis). In that case, the target set is not updated
in-between guesses. However, it is easy to see that this does not increase the probability of guessing a new
element of T B . We get
E[F_i | U ≥ m/2] ≤ (m − i)·m·p,
and thus

E[Y | U ≥ m/2] ≥ (m/p) Σ_{i=1}^{⌊m/2⌋} 1/(m − i).
This sum is the harmonic number H_{⌊m/2⌋−1}, which is Θ(log m) for sufficiently large m, and hence

E[Y | U ≥ m/2] ≥ Ω(m log m / p).
By the law of total expectation it follows that

E[Y] ≥ E[Y | U ≥ m/2] · Pr[U ≥ m/2].
Finally, a standard probability calculation shows that U ≥ m/2 happens with large probability, assuming that p ≥ c/m for a sufficiently large constant c > 0. The time bound follows since Alice can submit 2m guesses per round.
3.4 Lower Bounds for Information Dissemination
In this section we show three different lower bounds. Together, these show what properties cause poor
performance in information dissemination protocols: in some graphs, high degree is the cause of poor
performance (Theorem 9); in other graphs, poor connectivity is the cause of poor performance (Theorem 10).
And finally, we give a family of graphs where we can see the trade-off between D, ∆, and φ∗ (Theorem 13).
We begin with a result showing that Ω(∆) is a lower bound:
Theorem 9. For any ∆ ∈ (Θ(1), dn/2e), there is an n-node network that has a weighted diameter of
O(log n), and a maximum node degree Θ(∆), where any algorithm requires Ω(∆) rounds to solve local
broadcast with constant probability.
Proof. Consider the network H of n nodes that consists of the guessing game gadget Gsym (2∆, 1, ∆, P ),
where predicate P returns an arbitrary singleton target set, combined with a constant degree regular expander
[20] of n − 2∆ vertices (if any) of which any one node is connected to all the vertices on the left side of
the gadget; all the edges of, and connected to the expander have latency 1 and the latencies of the edges in
the gadget are assigned as in Lemma 6. Clearly, the weighted diameter of H is O(log n) (diameter of the
expander [20]). By Lemma 7, we know that any guessing game protocol on Guessing(2∆, |T| = 1) requires Ω(∆) rounds for the predicate that returns exactly one pair as the target set. Lemma 6 then tells us that any gossip algorithm that solves local broadcast in H must require Ω(∆) rounds.
We next show that every local broadcast algorithm requires time at least Ω(1/φ* + ℓ*). Note that we get this Ω(1/φ*) lower bound just for local broadcast and not for information dissemination, which is in contrast
to the results in the unweighted case. The following result is given in terms of the weight-` conductance,
for any `, and thus also holds for φ∗ and `∗ . In the proof, we construct a network that corresponds to the
bipartite guessing game graph with a target set where each edge is fast with probability φ∗ . That way, we
obtain a network with critical conductance Θ(φ∗ ), hop diameter O(1), and a weighted diameter of O(`∗ ).
The guessing game lower bound of Lemma 8 tells us that the cost of information dissemination still depends
on φ∗ .
Theorem 10. For any ` ∈ [1, n] and φ` where Ω(log(n)/n) 6 φ` 6 1/2, there is a network of 2n nodes
that has a weighted diameter O(`) (w.h.p.), and critical conductance Θ(φ` ) (w.h.p.), such that any gossip
algorithm requires Ω((1/φ` ) + `) rounds for solving local broadcast in expectation. Also, solving local
broadcast using push-pull requires Ω((log n/φ` ) + `) rounds in expectation.
Proof. Our goal is to reduce the game Guessing(2n, Random_{φ_ℓ}) to local broadcast, hence we consider the 2n-node graph G(2n, ℓ, n², Random_{φ_ℓ}) as our guessing game gadget defined in Section 3.2. Since we want to show a time bound of t = Ω(log(n)/φ_ℓ + ℓ) rounds (for push-pull), for the high latency edges we can use the value n² ≥ log(n)/φ_ℓ + ℓ (as Ω(log(n)/n) ≤ φ_ℓ and ℓ ≤ n).
We assign each cross edge latency ℓ independently with probability φ_ℓ and latency n² with probability 1 − φ_ℓ. The fast cross edges have the same distribution as the target set implied by the predicate Random_{φ_ℓ}, which we have used to show a lower bound of Ω(1/φ_ℓ) for general protocols on Guessing(2n, Random_{φ_ℓ}) in Lemma 8, and also a stronger lower bound of Ω(log(n)/φ_ℓ) for "random guessing" protocols, which choose a random edge for each vertex as their guesses. It is straightforward to see that push-pull gossip corresponds exactly to this random guessing strategy. Applying Lemma 6, this means that local broadcast requires in expectation Ω(1/φ_ℓ) time for general algorithms and Ω(log(n)/φ_ℓ) time for push-pull. The additional term of Ω(ℓ) in the theorem statement is required to actually send the broadcast over the latency ℓ edge once it is discovered.
Since each edge of L × R is assigned latency ` with probability φ` = Ω(log(n)/n), it follows that each
u ∈ R is connected by a latency ` edge to some node in L with high probability. Hence, the weighted
diameter of G(2n, `, n2 , Randomφ` ) is O(`) with high probability.
In the remainder of the proof, we show that G(2n, `, n2 , Randomφ` ) has a conductance of Θ(φ` ) with
high probability. We point out that several previous works prove bounds on the network expansion (e.g., [27]
and [1]). However, as these results were shown for random graphs, we cannot employ these results directly
and thus need to adapt these proof techniques to show a conductance of Θ(φ` ) for our guessing game gadget.
We assume that there is an integer-valued function f = f(n) such that f/n = φ_ℓ, noting that this assumption does not change the asymptotic behavior of our bounds. For readability, we only consider ℓ = 1 and note that the extension to the general case is straightforward. By construction, G(2n, 1, n², Random_{f/n}) consists of edges with latencies 1 or n² and we have

φ_{n²}/n² ≤ 1/n² ≤ φ_1/1,

where the last inequality follows from the assumption φ_1 ≥ Ω(log n / n). Thus, we know that φ* = φ_1 and hence we need to prove φ_1 = Θ(f/n).
Consider a set S ⊆ L ∪ R of at most n vertices and let l = |S ∩ L| and r = |S ∩ R|. We first assume
that l > r, since the number of latency 1 cross edges is symmetric for vertices in L and R; subsequently, we
will remove this assumption by a union bound argument.
For vertex sets A and B, let E1 (A, B) be the set of the (randomly sampled) latency 1 edges in the cut
(A, B) and define e1 (A, B) = |E1 (A, B)|. Given the set S, our goal is to show that many latency 1 edges
originating in S ∩ L have their other endpoint in R \ S, assuming that there are sufficiently many latency 1 cross edges to begin with. In other words, we need to bound from above the probability of the event e_1(S ∩ L, S ∩ R) ≥ Ω(f·l), conditioned on there being sufficiently many latency 1 cross edges.
Claim 11 (Sufficiently many latency 1 cross edges). There exist constants c, c' > 0, such that the events

LR := {∀S, |S| ≤ n : (e_1(S ∩ L, R) ≥ c·f·l) ∧ (e_1(S ∩ R, L) ≥ c·f·r)},   (5)

\overline{LR} := {∀S, |S| ≤ n : (e_1(S ∩ L, R) ≤ c'·f·l) ∧ (e_1(S ∩ R, L) ≤ c'·f·r)}   (6)

occur with high probability.
Proof. According to the construction of G(2n, 1, n², Random_{f/n}), the latency 1 cross edges are chosen independently, each with probability f/n. Note that each cross edge is assigned latency 1 independently with probability f/n = Ω(log(n)/n). Thus, for each node v, the expected number of latency 1 cross edges is f = Ω(log n) and, by a standard Chernoff bound, we know that the number of latency 1 cross edges to v is in [c_1 f, c_2 f] with high probability, for suitable constants c_2 > c_1 > 0. After taking a union bound over all nodes in V(G), we can conclude that the claim holds for any set S ⊆ V(G).
Conditioning on LR is equivalent to choosing a subset of (at least) c·f·l edges among all possible edges in the cut E_1(S ∩ L, R) uniformly at random and assigning them latency 1. Consider an edge (v, u) ∈ E_1(S ∩ L, R). It follows that u ∈ S ∩ R (and hence (v, u) ∈ E_1(S ∩ L, S ∩ R)) with probability r/n, and we need to exclude the event

Bad(S) := {e_1(S ∩ L, S ∩ R) ≥ (4/5)·c·f·l},

for all \binom{cfl}{(4/5)cfl} subsets of latency 1 edges incident to vertices in S ∩ L. In addition, we need to bound the probability that Bad(S) happens, for S chosen in any of the \binom{n}{l} ways of choosing S that satisfy |S ∩ L| = l.
Claim 12. Pr[∃S : Bad(S) | LR] ≤ n^{−Ω(1)}.
Proof of claim. Combining the above observations, we get

Pr[∃S : Bad(S) | LR] ≤ \binom{n}{l} \binom{cfl}{(4/5)cfl} (r/n)^{(4/5)cfl}.   (7)
First, we assume that r and l are both large, i.e., l ≥ r ≥ c_0 n, for a sufficiently small positive constant c_0 < 4/(5e). Then, we can apply Stirling's approximation of the form \binom{m}{k} ≈ 2^{m·H_2(k/m)}, where H_2(x) = −x·log_2(x) − (1 − x)·log_2(1 − x) is the binary entropy function. Thus, for sufficiently large n, we get

Pr[∃S : Bad(S) | LR] ≤ 2^{n·H_2(l/n) + cfl·H_2(4/5)} · (r/n)^{(4/5)cfl} ≤ 2^{n + cfl·H_2(4/5) − (4/5)cfl},   (8)

where, to derive the second inequality, we have used the facts that H_2(l/n) ≤ 1 and r/n ≤ 1/2, since r + l ≤ n and r ≤ l. By the premise of the theorem, f = Ω(log n), which implies cfl = Ω(n log n). Together with the fact that H_2(4/5) < 4/5, this means that the term −(4/5)cfl dominates in the exponent of (8) and hence Pr[∃S : Bad(S) | LR] ≤ 2^{−Θ(n log n)}.
Next, we consider the case where r ≤ l < c_0 n. Applying the upper bound of the form \binom{m}{k} ≤ (em/k)^k to (7) tells us that

Pr[∃S : Bad(S) | LR] ≤ (en/l)^l (5er/(4n))^{(4/5)cfl} ≤ (en/l)^l ((5/4)c_0 e)^{(4/5)cfl},

since r < c_0 n. We get
Pr[∃S : Bad(S) | LR] ≤ exp( l(1 + log n − log l) + (4/5)cfl·log((5/4)c_0 e) ) ≤ exp( l(1 + log n) + (4/5)cfl·log((5/4)c_0 e) ).

By assumption, c_0 < 4/(5e) and hence the term (4/5)cfl·log((5/4)c_0 e) in the exponent is negative. Moreover, recall that f = Ω(log n) and thus we can assume that (4/5)cfl·log((5/4)c_0 e) ≤ −c''·l·log n, for a sufficiently large constant c'' > 0. This term dominates the other terms in the exponent, thereby completing the proof of the claim.
Considering that l ≥ |S|/2, the above bound implies that at least ⌊cf|S|/10⌋ latency 1 edges incident to S are connected to nodes outside of S, with probability at least 1 − n^{−Ω(1)}. Taking a union bound over all possible choices for the values of l and r adhering to r ≤ l and r + l ≤ n = |V(G)|/2 shows that

Pr[ ∀S, |S| ≤ n : e_1(S ∩ L, R \ S) ≥ cf|S|/10 | LR ] ≥ 1 − n^{−Ω(1)}.

Observe that the latency 1 cross edges are constructed symmetrically for the left and right side of the bipartite graph G and thus we can apply the above argument in a similar manner for a set S where r > l, conditioned on e_1(S ∩ R, L) ≥ c·f·r. Thus, we can conclude that

Pr[ ∀S, |S| ≤ n : e_1(S, V(G) \ S) ≥ cf|S|/10 | LR ] ≥ 1 − n^{−Ω(1)}.   (9)
We can remove the conditioning in (9) by virtue of Claim 11, since

Pr[ ∀S, |S| ≤ n : e_1(S, V(G) \ S) ≥ cf|S|/10 ] ≥ Pr[ ∀S, |S| ≤ n : e_1(S, V(G) \ S) ≥ cf|S|/10 | LR ] · Pr[LR] ≥ 1 − n^{−Ω(1)}.
To upper bound Vol(S) for any set S, we take into account the n cross edges of each node in S. Also, if v ∈ L, then we need to account for the n − 1 incident clique edges of v, yielding Vol(S) ≤ 2|S|n. Considering the upper bound on the number of latency 1 cross edges given by (6), we have

φ* = min_S φ_1(S) = min_S e_1(S, V(G) \ S) / Vol(S) ≥ min_S cf|S| / (20|S|n) = Ω(f/n),

where the inequality holds with high probability. To see that this bound is tight, observe that φ* ≤ φ_1(L). By (5) and (6), we know that e_1(L, R) = Θ(f·n) and hence φ_1(L) = Θ(f/n) with high probability, as required.
This completes the proof of Theorem 10.
Finally, we give a family of graphs that illustrate the trade-off among the parameters. Intuitively, when the
edge latencies are larger, it makes sense to search for the best possible path and the lower bound is Ω(D + ∆);
when the edge latencies are smaller, then we can simply rely on connectivity and the lower bound is Ω(`/φ` ).
Note that we can individually obtain a lower bound of Ω((ℓ/φ) log n) using the technique in [7], where we show that there exists a graph with diameter (ℓ/φ) log n. Unlike here, that lower bound is simply D.
Theorem 13. For a given α ∈ [Ω(1/n), O(1)] and any integer ` ∈ [1, O(n2 α2 )], there is a class of networks
of 2n nodes, critical conductance φ∗ = φ` = Θ(α), maximum degree ∆ = Θ(αn), and weighted diameter
D = Θ(1/φ` ), such that any gossip algorithm that solves broadcast with at least constant probability,
requires Ω(min{∆ + D, `/φ` }) rounds.
Proof. We create a network G consisting of a series of k node layers V_1, . . . , V_k that are wired together as a ring, using the guessing game gadgets introduced above. We define k = 2/(cα), where c = 3/4 + (1/4)·√(9 − 8/(nα)). This implies that 1 ≤ c < 3/2 as α ∈ [Ω(1/n), O(1)]. Each layer consists of s = cnα nodes. As it does not change our asymptotic bounds, we simplify the notation by assuming that 2/(cα) and cnα are integers.
Figure 2: Guessing Game Gadgets wired together as a ring (layers V_1, . . . , V_k arranged cyclically).
For each pair V_i and V_{(i+1) mod k} (0 ≤ i ≤ k − 1), we construct the symmetric guessing game gadget Gsym(2cnα, 1, ℓ, P) (from Section 3.2) for simulating a gossip algorithm to solve the game Guessing(2cnα, |T| = 1). That is, we create a complete bipartite graph on V_i and V_{(i+1) mod k} and form cliques on V_i and V_{(i+1) mod k}
(see Figure 2). We assign latency ` to every cross edge between Vi and V(i+1) mod k , except for a uniformly at
random chosen edge that forms the singleton target set, which we assign latency 1. Observe that the weight-j
conductance φj cannot be maximal for any j other than 1 or `.
Observation 14. Let s = cnα. Graph G is (3s − 1)-regular.
Proof. For a layer Vi , we call V(i−1) mod k the predecessor layer and V(i+1) mod k the successor layer. The
size of a layer is s = cnα. Each node has 2s edges to its neighbors in the predecessor resp. successor layer
and s − 1 edges to nodes in its own layer. This means that G is a (3s − 1)-regular graph.
We define a cut C that divides the ring into two equal halves such that none of the internal clique edges
are cut edges. By a slight abuse of notation, we also use C to denote the set of vertices present in the smaller
side of the partition created by the cut C (ties broken arbitrarily).
Lemma 15. φ` (C) = α.
Proof. Since C partitions G into two sets of identical size, the volume can be determined by considering
either partition of size n, thus we focus on the node set C. Also, by Observation 14 we know that G is
(3s − 1)-regular. The volume of C can be calculated to be n(3cnα − 1). The number of cut edges of latency ≤ ℓ is 2(cnα)² (by the construction of C). According to Definition 1, the weight-ℓ conductance is given by φ_ℓ(C) = 2(cnα)²/(n(3cnα − 1)). By plugging in the value of c, we can verify that φ_ℓ(C) is exactly equal to α.
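For completeness, the algebra behind that last verification step can be written out; solving φ_ℓ(C) = α for c recovers exactly the constant fixed at the start of the proof of Theorem 13:

\[
\frac{2(cn\alpha)^2}{n(3cn\alpha-1)} = \alpha
\iff 2c^2 n\alpha - 3cn\alpha + 1 = 0
\iff c = \frac{3}{4} \pm \frac{1}{4}\sqrt{9 - \frac{8}{n\alpha}},
\]

and taking the larger root gives c = 3/4 + (1/4)·√(9 − 8/(nα)), the value used in the construction.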
Using the conductance bound of Lemma 15 for cut C, we know that φ` 6 α. In the proof of the next
lemma, we show that φ` = Ω(α).
Lemma 16. The weight-` conductance of the constructed ring network is φ` = Θ(α).
Proof. By Lemma 15, we know that φ` 6 α as the actual graph conductance is always 6 to any cut
conductance. We will now show φ` = Ω(α) as well.
By Observation 14 we know that G is (3s − 1)-regular and therefore for a set of nodes U the volume
Vol(U ) is exactly equal to (3s − 1)|U |. This clearly implies that for any two sets U and V , Vol(U ) 6 Vol(V )
if and only if |U | 6 |V |.
Now, consider an arbitrary cut (U, V (G) \ U ) of G and suppose that U contains at most half of the nodes
of G, i.e., |U | 6 n, since G has 2n nodes. If there are at least Θ(s2 ) cut edges, then, using the fact that
|U | 6 n, we get
φ_ℓ(U) ≥ Θ(s²/(s|U|)) = Θ(s/|U|) ≥ Θ(s/n) ≥ Θ(α),
and we are done. In the remainder of the proof, we will show that there are Θ(s2 ) cut edges. We distinguish
two cases:
1. |U | > 3s/4:
We classify each node in U either as good if it has at least s/4 adjacent edges across the cut (U, V \ U ) and
as bad otherwise. Thus, our goal is to identify Θ(s) good nodes, which in turn implies Θ(s2 ) cut edges.
Let S be an arbitrary subset of 3s/4 nodes in U . If all nodes in S are good, we are done. Otherwise, let
x ∈ S be a bad node. It is important to note that the following properties are true for every bad node:
(a) Node x is in a layer in G which contains at least 3s/4 nodes inside U .
(b) The successor layer from x has at least 3s/4 nodes inside U .
To see why (a) holds, assume that it was not true. Then, x would have at least s/4 neighbors in its own layer
across the cut, contradicting the assumption that x is bad. Similarly, if (b) was false, x would be connected to
at least s/4 nodes in the successor layer outside U . (This is true of the predecessor layer too.)
Let A be the successor layer to the layer containing x. We now run the following procedure:
(1) Invariant: A contains at least 3s/4 nodes in U.
(2) If at least half of the nodes in A are good, we are done: terminate and claim Θ(s²) cut edges.
(3) Otherwise, let y be a bad node in A.
(4) Let A' be the successor layer of the layer A. Then, start again at Step (1) with A = A' and x = y. From assertion (b), A' contains at least 3s/4 nodes in U.
If this procedure ever terminates in Step (2), we are done. Otherwise, it continues around until every layer
has been explored. In that case, the invariant implies that every layer contains at least 3s/4 nodes in U . This
implies that > 1/2 of the nodes of G are in U , which contradicts the choice of U . Thus, the procedure does
terminate, which means there must be at least Θ(s2 ) cut edges, implying φ` > α.
2. |U | < 3s/4:
Let m be the number of nodes in U . Since G is (3s − 1)-regular, the volume of U is m(3s − 1). Each
node in U now contains at least s/4 neighbors outside of U (since it has > s neighbors and there are
only < 3s/4 other nodes in U), so the cut size is at least sm/4. Thus, the conductance of this graph satisfies φ_ℓ ≥ (sm/4)/(m(3s − 1)) = Ω(1) ≥ Θ(α).

Since φ_ℓ ≤ α and φ_ℓ ≥ Θ(α), it is clearly the case that φ_ℓ = Θ(α), which is what we wanted to prove.
Combining Lemmas 15 and 16 (and again using cut C), we argue that the critical latency is `.
Lemma 17. For any ℓ ≤ O((cnα)²), φ* = φ_ℓ = Θ(α).

Proof. To prove that φ* is in fact φ_ℓ, which by Lemma 16 is Θ(α), we need to show that φ_ℓ/ℓ ≥ φ_1/1 = φ_1. To this end, let us consider the cut C defined above. We will show that φ_ℓ/ℓ ≥ φ_1(C) ≥ φ_1, and since the weight-j conductance φ_j (cf. Definition 1) cannot be maximal for any j other than 1 or ℓ, we get φ* = φ_ℓ. There are two latency 1 cross edges in the cut C and the volume of C can be calculated as in the proof of Lemma 15 to be n(3cnα − 1). Thus, we need to show that

φ_ℓ/ℓ = Θ(α)/ℓ ≥ 2/((3cnα − 1)n).

As c is a constant, the above inequality is true as long as ℓ = O(α²n²), which is ensured by the premise of the theorem.
The weighted diameter of the network is D = Θ(k/2), since each pair of adjacent node layers is connected by a latency 1 edge and, internally, each layer forms a latency 1 clique. Using the fact that c ∈ [1, 3/2), it can be shown that 2/(3α) < D ≤ 1/α, implying that D = Θ(1/φ_ℓ) (by Lemma 16).
Now, consider a source node in layer V1 that initiates the broadcast of a rumor. Each node can either spend
time in finding the required fast edge (which we assume can be done in parallel) or, instead, it can instantly
use an edge of latency ` to forward the rumor. Lemma 7 tells us that finding the single latency 1 cross edge
with constant probability, for the guessing game gadget corresponding to any pair of node layers, requires
Ω(∆) rounds, and then forwarding the rumor takes Ω(D) additional rounds. Alternatively, the algorithm can
forward the rumor along latency ` edges across node layers and spread the rumor using the latency 1 edges
within each clique. It follows that the required time for broadcast is Ω(min{∆ + D, `/φ` }).
We obtain the following corollary that gives a lower bound on information dissemination in terms of
φavg , either by a similar analysis as above, or by the application of Theorem 5.
Corollary 18. For a given α ∈ [Ω(1/n), O(1)] and any integer ` ∈ [1, O(n2 α2 )], there is a class of networks
of 2n nodes, average conductance φavg = Θ(α/`), maximum degree ∆ = Θ(αn), and weighted diameter
D = Θ(1/`φavg ), such that any gossip algorithm that solves broadcast with at least constant probability,
requires Ω(min{∆ + D, 1/φavg }) rounds.
Proof. Observe that in the given graph there exist edges with latency either 1 or ℓ, and as such the number of non-empty latency classes here is 2. Now, Theorem 5 reduces to φ*/(2ℓ*) ≤ φ_avg ≤ 2φ*/ℓ*. This implies that in this case φ_avg = Θ(φ*/ℓ*); equivalently, φ* = Θ(ℓ·φ_avg) (as in this case ℓ = ℓ*). Replacing this value of φ* in Theorem 13 gives the required corollary.
4 Algorithms for Unknown Latencies
We divide the upper bounds on information dissemination into two sub-components and later combine them to obtain a unified result. First, we analyze classical push-pull, showing that it completes in time O(ℓ*·log(n)/φ*), which is optimal when D + ∆ is large. Alternatively, for graphs where D + ∆ is small, we give an algorithm wherein each node first spends Õ(D + ∆) time discovering the neighboring latencies, after which nodes use the local information to build a spanner, across which data can be distributed in Õ(D) time.
4.1 Push-Pull
To show the time required for information dissemination in a weighted graph G using push-pull, we define E_ℓ as the set of all edges of latency ≤ ℓ, E_u as the set of incident edges of vertex u, and E_{u,ℓ} := E_ℓ ∩ E_u.

Theorem 19. The push-pull protocol achieves broadcast w.h.p. in O(ℓ*·log(n)/φ*) rounds in a network G, where φ* is the critical conductance of G and ℓ* is the corresponding critical latency.
Proof. We construct a strongly edge-induced graph G` , which is a generalization of the strongly (vertex)
induced subgraph defined in [5] and which has the same vertex set as G. The edges of G` have a multiplicity6
defined by the edge multiplicity function µ, given by

µ(u, v) = 1 if (u, v) ∈ E_ℓ;   µ(u, u) = |E_u| − |E_{u,ℓ}|;   µ(u, v) = 0 otherwise.   (10)
It is easy to see that the (unweighted) conductance φ(G` ) corresponds to φ` (G), as a self-loop at node u is
counted as µ(u, u) edges when computing the volume. We also define another unweighted graph G0 that is
derived from G by dropping all edge latencies.
Now, we consider the Markov chain process describing the informed node set, i.e., the vertex set that is in
possession of some message m originating from a vertex v when running push-pull. Formally, the state space
of the Markov chain consists of all possible informed node sets. Only paths that correspond to monotonically
growing informed node sets have nonzero probability. We argue that this process on G0 (resp. G) dominates
the respective process in the graph G` . We observe that each node v selects an incident edge in E` from G`
in the push-pull protocol with the same probability
as in G_0. The probability of choosing an edge in E_u \ E_ℓ (i.e., a self loop in the case of G_ℓ) is µ(u, u)/Σ_{v∈V} µ(u, v) in both graphs. Clearly, choosing a self loop of
a node u cannot help in the propagation of the message in G` , but choosing the corresponding edge in G0
might. It follows that the Markov process of reaching any informed node set S in G0 dominates over the one
in G` , i.e., the probability of reaching any informed node set S by using the Markov chain in G0 is at least as
large as the probability of reaching the same set S by using the Markov chain for G` .
To translate this result back to our actual network G (with weighted edges), we charge each round of
push-pull in G` to ` rounds in G. With similar arguments, it follows that the Markov process of the informed
node set given by considering ` consecutive rounds of push-pull in G at a time, dominates the one in G` .
6 The "multiplicity of an edge" is called "edge weight" in [5]. We use a different terminology here to avoid confusion with the latencies of edges and consider "edge weight" as a synonym to edge latency instead.
From [16] and [5] it is known that O(log(n)/φ(G` )) rounds suffice w.h.p. to solve broadcast in G` . Hence,
achieving broadcast in G requires O(` log(n)/φ` (G)) rounds. Since the above analysis applies for any ` > 1,
and in particular for the critical latency `∗ , the theorem follows.
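The charging argument can also be checked empirically; the following is a rough, purely illustrative simulator of push-pull with latencies (an exchange started over an edge of latency λ is counted as completing λ rounds later; the simplified bookkeeping is ours, not the paper's):

import random
from collections import defaultdict

def push_pull_broadcast(n, edges, source=0, max_rounds=10**6):
    """Rounds until all nodes are informed when, in every round, every node starts a
    push-pull exchange over a uniformly random incident edge, and an exchange over an
    edge of latency lat completes (in both directions) lat rounds after it starts."""
    adj = defaultdict(list)
    for u, v, lat in edges:
        adj[u].append((v, lat))
        adj[v].append((u, lat))
    informed = {source}
    pending = defaultdict(list)                    # completion round -> list of exchanges
    for r in range(1, max_rounds + 1):
        for u in range(n):
            v, lat = random.choice(adj[u])         # u activates one random incident edge
            pending[r + lat].append((u, v))
        for u, v in pending.pop(r, []):            # exchanges completing in this round
            if u in informed or v in informed:     # simplified: state checked at completion
                informed.update((u, v))
        if len(informed) == n:
            return r
    return None

# toy example: a 32-node ring with a single slow edge
edges = [(i, i + 1, 1) for i in range(31)] + [(31, 0, 8)]
print(push_pull_broadcast(32, edges))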
We combine Theorem 19 with Theorem 5 to obtain the following corollary, which gives the upper bound on information dissemination using push-pull in terms of φ_avg.

Corollary 20. The push-pull protocol achieves broadcast w.h.p. in O(L·log(n)/φ_avg) rounds in a network G, where φ_avg is the average conductance of G and L is the number of non-empty latency classes in G.
4.2 An Õ(D + ∆) Algorithm
In Section 5.1 we provide an algorithm that solves all-to-all information dissemination when each node
knows the latencies of all its adjacent edges. The same algorithm can be naturally extended for the case
where nodes do not know the adjacent latencies by first discovering the edge latencies and then running the
algorithm as such. When both D and ∆ are known: for ∆ rounds, each node broadcasts a request to each
neighbor (sequentially) and then waits up to D rounds for a response to determine the adjacent edge’s latency.
If both or either values are unknown, the guess and double strategy (described in Section 5.3) can be used, as
we can efficiently detect when information dissemination has completed correctly. By similar arguments as
in Section 5.3 we obtain an algorithm that solves information dissemination in O((D + ∆) log3 n) time.
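A rough sketch of the discovery step, assuming D and ∆ are known (the `send_probe`/`wait_for_replies` primitives below are placeholders for the model's request–response exchanges, not an API defined in the paper):

def discover_latencies(neighbors, send_probe, wait_for_replies, D):
    """Each node initiates one probe per round towards each neighbor (at most Delta
    rounds of initiations) and then waits up to D further rounds; a reply over an edge
    of latency lat arrives lat rounds after the probe, so lat can be read off directly."""
    send_round = {}
    for i, v in enumerate(neighbors):              # sequential probing, one per round
        send_probe(v, round=i)
        send_round[v] = i
    latency = {}
    for v, reply_round in wait_for_replies(deadline=len(neighbors) + D):
        latency[v] = reply_round - send_round[v]
    return latency

If D or ∆ is unknown, the same routine is wrapped in the guess-and-double loop mentioned above: run it with a guessed bound, verify that dissemination completed correctly, and double the guess on failure.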
5 Algorithms for Known Latencies
In this section, we discuss the case where each node knows the latencies of the adjacent edges. We focus on
the problem of all-to-all information dissemination (instead of one-to-all information dissemination), as it will
simplify certain issues to solve the seemingly harder problem. (Of course, all-to-all information dissemination
also solves one-to-all information dissemination. And most one-to-all information dissemination algorithms
can be used to solve all-to-all information dissemination by using them to collect and disseminate data.)
In Section 5.1, we use the fact that nodes know a polynomial upper bound on the network size (and this is
the only place where we rely on that assumption). When edge latencies are known, the spanner algorithm
(described below) solves all-to-all information dissemination in O(D log3 n) which differs from the trivial
lower bound of Ω(D) by only polylog factors.
5.1 Spanner Algorithm Preliminaries
We initially assume that the weighted diameter (D) is known to all nodes; later (in Section 5.3), we do away
with the assumption via a guess-and-double technique. It is assumed w.l.o.g. that every edge has latency
6 D: clearly we do not want to use any edges with latency > D.
Local broadcast. An important building block of our algorithms is local broadcast. For unweighted graphs,
the (randomized) Superstep algorithm by Censor-Hillel et al. [5] and the Deterministic Tree Gossip (DTG)
algorithm by Haeupler [18] solve this problem. We make use of the DTG algorithm, which runs in O(log2 n)
rounds on unweighted graphs. See [18] and Appendix A.1 for details. Observe that for the unweighted
case, if any algorithm solves local broadcast in O(t) rounds, it obtains a t-spanner as a direct consequence,
which thereafter can be used for propagating information. However, for graphs with latencies, just solving
local broadcast might take O(D) time, resulting in an O(D)-spanner (and leading to an O(D²) solution for
information dissemination). Recall that a subgraph S = (V, E 0 ) of a graph G = (V, E) is called an α-spanner
if any two nodes u, v with distance ` in G have distance at most α` in S.
19
For weighted graphs, we are mainly interested in the `-local broadcast problem in which each node
disseminates some information to all its neighbors that are connected to it by edges of latency 6 `. While
DTG assumes edges to be unweighted (uniform weight), we can execute the same protocol in a graph with
non-uniform latencies simply by ignoring all edges with a latency larger than ` and simulating 1 round of
the DTG protocol as ` rounds in our network. We refer to this protocol as the `-DTG protocol. It follows
immediately that within O(ℓ·log² n) time, the ℓ-DTG protocol ensures that each node has disseminated the information to all its neighbors connected to it by edges of latency ≤ ℓ. Note that we can trivially solve the all-to-all information dissemination problem in O(D²·log² n) time using the ℓ-DTG protocol (if D were known) by simply repeating it D times with ℓ = D.
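Conceptually, the ℓ-DTG wrapper is just an edge filter plus a time rescaling. A minimal sketch (with `dtg_round` standing in for one round of the unweighted DTG protocol of [18], which we do not reimplement here):

def ell_dtg(weighted_edges, ell, dtg_round, num_dtg_rounds):
    """Run the l-DTG protocol: drop every edge of latency > ell and charge ell time
    units of the weighted network for each simulated round of the unweighted DTG
    protocol (O(log^2 n) such rounds suffice for local broadcast on the subgraph)."""
    subgraph = [(u, v) for (u, v, lat) in weighted_edges if lat <= ell]
    elapsed = 0
    for _ in range(num_dtg_rounds):
        dtg_round(subgraph)        # one round of DTG on the latency-<= ell subgraph
        elapsed += ell             # each simulated round costs ell rounds here
    return elapsed                 # total: O(ell * log^2 n)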
The challenge now, given the restriction that finding neighbors by a direct edge might be costly, is to
somehow find sufficiently short paths to all of them. We show here that with sufficient exploration of the local
neighborhood up to O(log n) steps and using only favorable weights, we are able to obtain a global spanner.
An intermediate goal of our algorithm is to construct an O(log n)-spanner and to obtain an orientation of the
edges such that each node has a small, i.e., O(log n), out-degree.7 Once we have such a structure, we achieve
all-to-all information dissemination by using a flooding algorithm that repeatedly activates the out-edges in
round-robin order.
5.2 Spanner Construction and Broadcast
In a seminal work, Baswana and Sen [2] provide a spanner construction algorithm for weighted graphs
(where weights did not correspond to latency) in the LOCAL model of communication. As our goal here
is to find a low stretch, low out-degree spanner, we modify the algorithm of [2] by carefully associating a
direction with every edge that is added to a spanner such that each node has w.h.p. O(log n) out-degree.
To deal with latencies, we choose to locally simulate the algorithm on individual nodes after obtaining the
log n-hop neighborhood information by using the `-DTG protocol. We show that this log n-hop neighborhood
information is sufficient for obtaining the required spanner. The algorithm in [2] also assumes distinct edge
weights. We can ensure this by using the unique node IDs to break ties. We first show that the size of the
obtained spanner does not increase significantly when running the algorithm of [2] with an estimate of n
(namely n̂).
5.2.1 Spanner Construction Algorithm
Each node v executes a set of rules for adding edges (explained below) and each time one of these rules is
triggered, v adds some of its incident edges to the spanner while assigning them as outgoing direction. This
way, we obtain a low stretch spanner (undirected stretch) where nodes also have a low out-degree, which we
leverage in the subsequent phases of our algorithm.
For a given parameter k, the algorithm computes a (2k − 1)-spanner by performing k iterations. At the
beginning of the i-th iteration, for 1 6 i 6 k − 1, every node that was a cluster center in the previous iteration,
chooses to become an active cluster with probability n̂−1/k , for some n 6 n̂ 6 poly(n); note that for i = 1,
every node counts as a previously active center. Then, every active center c broadcasts this information to all
cluster members. As a cluster grows by at most 1 hop in each round, this message needs to be disseminated
throughout the i-neighborhood of c.8 Then, every cluster member broadcasts its membership information to
all its neighbors to ensure that every node is aware of its adjacent active clusters. For adding edges to the
spanner, each node also remembers its set of incident clusters C_{i−1} that were active in iteration i − 1. With this
information in hand, every node u adds some of its incident edges to its set of spanner edges Hu , and also
(permanently) discards some edges, as follows:
7 It is clearly impossible to guarantee small degree in an undirected sense, for example, if the original graph is a star.
8 By slight abuse of notation, we use c to denote cluster centres and the cluster itself when the distinction is clear from the context.
(Rule 1) If none of u’s adjacent clusters in Ci−1 were sampled in iteration i, then u adds its least weight edge
to cluster c as an outgoing edge to Hu and discards all other edges to nodes in c, for every c ∈ Ci−1 .
(Rule 2) If u has active adjacent clusters, then u will add the edge ev to some cluster c with the minimum
weight among all these clusters and, for each adjacent cluster c0 ∈ Ci−1 that has a weight less than ev , node
u also adds one outgoing edge to the respective node in c0 . All other edges from v to nodes in clusters c and
c0 are discarded.
In the k-th iteration, every vertex v adds the least weight edge to each adjacent cluster in Ck−1 to Hv .
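To make the rule set above easier to follow, here is a compact centralized sketch of a single Phase 1 iteration in Python; it only mirrors the sampling, the Rule 1/Rule 2 edge additions, and the outgoing orientation — it omits the edge-discarding bookkeeping and is not the distributed algorithm itself:

import random

def phase1_iteration(nodes, cluster_of, incident, n_hat, k, spanner_out):
    """One Phase 1 iteration (centralized, illustrative). `cluster_of[v]` is v's current
    cluster center, `incident[v]` maps each adjacent previous-iteration cluster center to
    v's least-weight edge (weight, endpoint) into that cluster, and `spanner_out[v]`
    accumulates v's outgoing spanner edges."""
    prev_centers = set(cluster_of.values())
    active = {c for c in prev_centers if random.random() < n_hat ** (-1.0 / k)}
    for v in nodes:
        adjacent = incident[v]
        active_adj = {c: e for c, e in adjacent.items() if c in active}
        if not active_adj:
            # Rule 1: least-weight outgoing edge to every adjacent cluster of C_{i-1}
            for c, (w, u) in adjacent.items():
                spanner_out[v].add((v, u, w))
        else:
            # Rule 2: join the active cluster with the overall least-weight edge, and add
            # one outgoing edge to every adjacent cluster offering a lighter edge than it
            c_min, (w_min, u_min) = min(active_adj.items(), key=lambda item: item[1])
            spanner_out[v].add((v, u_min, w_min))
            cluster_of[v] = c_min
            for c, (w, u) in adjacent.items():
                if c != c_min and w < w_min:
                    spanner_out[v].add((v, u, w))
    return active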
Lemma 21. Consider a synchronous network of n nodes where nodes know only n̂, where n ≤ n̂ ≤ n^c, for some constant c ≥ 1. For any k ≥ c, there is a distributed algorithm (based on [2]) that computes a spanner and terminates in O(k) rounds in the LOCAL model, such that each node's out-degree is O(n^{c/k} log n) w.h.p.
Proof. Note that the running time of the algorithm is O(k 2 ) rounds if used with a restricted message size
of O(log n). Inspecting the algorithm reveals that the computation at each node only depends on its k-hop
neighborhood in the graph. Also, because the decision to remove an edge (u, v) can be taken by either node
u or v, each node needs to simulate the running of the algorithm at all its neighbors (to know when to remove
the edge (u, v) from consideration) and hence we can simulate the execution of the algorithm locally by first
collecting this information regarding (k + 1)-hop neighborhood in k + 1 rounds in the LOCAL model.
We now analyse the difference when running the algorithm with n̂ instead of n. First, we observe that sampling clusters with probability n̂^{−1/k} does not affect the stretch guarantee. For the sake of our analysis
we assume that the spanner is directed: we count every incident edge of v that it adds to its set of spanner
edges Hv as an outgoing edge of v. The degree bound will follow by showing an upper bound on the number
of outgoing edges of each node.
Consider any iteration i in Phase 1 of the algorithm, i.e., 1 6 i < k. We call a cluster sampled in iteration
i if it is among the sampled clusters in all iterations 1, . . . , i. Every cluster that was sampled in the previous
iteration is sampled again with probability n̂−1/k . (In the very first iteration, every node counts as a previously
sampled cluster.) To bound the number of edges that contribute to the out-degree of a node v, we consider the
clusters adjacent to v that were sampled in iteration i − 1 and order them as c1 , . . . , cq in increasing order of
the weight of their least weight edge incident to v.
Let A_i be the event that v adds at least l edges to its out-degree in iteration i. Note that A_i occurs if and only if (1) none of the clusters c_1, . . . , c_l is sampled in iteration i and (2) there are at least l active clusters in iteration i − 1. By the description of Phase 1 (the first k − 1 iterations) of the algorithm, we only add an edge from v to a node in cluster c_j in iteration i if A_i does not happen. We have Pr[A_i] ≤ (1 − n^{−c/k})^l and, taking a union bound over the first k − 1 iterations and over all n nodes, it follows that the probability of any node adding more than l edges to the spanner in any of the first k − 1 iterations is at most exp(−n^{−c/k}·l + log k + log n). By choosing l ≥ Ω(n^{c/k}(log n + log k)), this probability is ≤ n^{−Ω(1)}, as required.
In Phase 2 (the final iteration), every vertex u adds a least weight (outgoing) edge to every cluster that was sampled in iteration k − 1. Let X_v be the indicator random variable that vertex v is the center of a cluster sampled in iteration k − 1 that is incident to u. We have

Pr[X_v] ≤ n^{−c(k−1)/k} = n^{−c+c/k}.

Setting X = Σ_{v:(u,v)∈G} X_v, it follows that

E[X] ≤ n^{1−c+c/k} ≤ n^{c/k},

since c ≥ 1. Since each cluster is sampled independently, all X_v are independent, and we can apply a standard Chernoff bound to show that, for some sufficiently large constant c_1 depending on c, it holds that

Pr[X ≥ c_1 n^{c/k} log n] ≤ e^{−Θ(n^{c/k} log n)} ≤ n^{−Ω(1)}.
By taking a union bound over all vertices, we can see that the number of edges each vertex adds to the spanner in Phase 2 is at most O(n^{c/k} log n) with high probability. Combining this with the bound that we have derived for Phase 1 completes the proof.
Theorem 22. There is an O(D log3 n) time algorithm A in the gossip model that yields an O(log n)-spanner
that has O(n log n) edges (w.h.p.). Moreover, A also computes an orientation of the edges that guarantees
that each node has an out-degree of O(log n) (w.h.p.).
Proof. To convert the classic synchronous algorithm for the LOCAL model assumed in Lemma 21 to an algorithm that works in the gossip model with latencies, we use the `-DTG protocol and simulate each of the k = log n iterations of the spanner algorithm by first discovering the (log n)-hop neighborhood. The neighborhood discovery takes O(D log^3 n) rounds in our model, and all computations are then done locally.
To broadcast on this directed spanner we use the RR broadcast algorithm, which is a deterministic
round-robin-style exchange of information among nodes. Each node sends all the rumors known to it to all its
1-hop neighbors one by one in a round robin fashion. The algorithm with a parameter k is run on the directed
spanner of the graph Gk (G without edges of latency > k).
RR Broadcast (k)
1: for each vertex v in parallel do
2:     for iteration i = 1 to (kΔ_out + k) do
3:         propagate rumor set Rv along the out-edges of length ≤ k one by one in a round-robin fashion
4:         add all received rumors to Rv
Algorithm 1: RR Broadcast
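For illustration, the following is a minimal, centralized Python sketch of Algorithm 1 (a round-based simulation, not a distributed implementation); the graph representation out_edges, the latency map and the rumor sets are assumptions introduced only for this example.

    # Simulation sketch of RR Broadcast(k): every node repeatedly forwards its
    # rumor set over its spanner out-edges of latency <= k in round-robin order.
    def rr_broadcast(out_edges, latency, nodes, k, delta_out):
        # keep only out-edges of latency <= k (i.e., work on G_k)
        adj = {v: [u for u in out_edges[v] if latency[(v, u)] <= k] for v in nodes}
        rumors = {v: {v} for v in nodes}          # each node starts with its own rumor
        for i in range(k * delta_out + k):        # (k*Delta_out + k) round-robin iterations
            updates = {v: set() for v in nodes}
            for v in nodes:
                if adj[v]:
                    u = adj[v][i % len(adj[v])]   # next out-neighbor in round-robin order
                    updates[u] |= rumors[v]       # v sends its rumor set along the edge
                    updates[v] |= rumors[u]       # and receives u's rumors in the exchange
            for v in nodes:
                rumors[v] |= updates[v]
        return rumors

The sketch treats each exchange as one simulated step and ignores the per-edge latency wait, so it only illustrates the round-robin schedule, not the exact round count analyzed in Lemma 23.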
Figure 3: An example of message propagation from node u to node v.
Lemma 23. After the execution of the RR Broadcast algorithm with a parameter k on the directed spanner of graph Gk, any two nodes u and v at a distance ≤ k in G have exchanged rumors with one another in O(kΔ_out + k) rounds, where Δ_out is the maximum out-degree of any node in Gk.
Proof. Consider a path from a node u to another node v at a distance k or less from it. Clearly, all edges in
this path would have a weight of ≤ k. Therefore, we can work on Gk (G without edges of latency > k) as well without affecting the correctness of the algorithm. Also, let us assume that the number of hops between u and v is h, which again is ≤ k, since there are no fractional weights. Let the latency between each hop be denoted by ki as shown in Figure 3. Messages reach the next node when either of the nodes initiates a bidirectional exchange. For example, u's rumor could reach node u1 either by a request initiated by node u or by u1, depending upon the direction of the edge uu1. In the worst case, nodes have to try all other Δ_out − 1 links before initiating a connection along the required edge, where Δ_out is the maximum out-degree of any node. After a connection is initialized, it takes k1 time to exchange rumors. Generalizing, we observe that in the non-blocking model, the delay that can be incurred before rumor exchange between any two adjacent nodes ui and ui−1 can be Δ_out + ki in the worst case. In this way u's rumor proceeds towards v in individual steps, each step incurring a maximum cost of Δ_out + ki. A node might receive multiple rumors to propagate in the next round, which it adds to its rumor set and forwards to its neighbors in a round-robin fashion. As such, the total worst-case delay in rumor exchange between nodes u and v would be represented by

Σ_{i=1}^{h} (Δ_out + ki) = hΔ_out + Σ_{i=1}^{h} ki.

But we know that both h and Σ_{i=1}^{h} ki can have a maximum value equal to k. Therefore, we conclude that for any two nodes v and u in Gk, v's rumor would have reached u and u's rumor would have reached v if all nodes forward rumors in a round-robin fashion for (kΔ_out + k) rounds.
Here, on the created spanner with stretch of O(log n), the maximum distance between any two nodes can
be O(D log n). Since the maximum out-degree (∆out ) is O(log n) w.h.p., we get the following corollary.
Corollary 24. The RR Broadcast algorithm on the constructed spanner takes O(D log^2 n) time and solves all-to-all information dissemination w.h.p.
We combine all the previously defined techniques to a single algorithm called Efficient Information
Dissemination or EID.
EID (D)
1: for each vertex v in parallel do
2:     for iteration i = 1 to O(log n) do
3:         perform D-DTG                                /* to gain neighborhood information */
4:     call Spanner Construction algorithm              /* executed locally */
5:     call algorithm RR Broadcast (O(D log n))
Algorithm 2: Efficient information dissemination
Lemma 25. For a graph G with diameter D, the Efficient Information Dissemination (EID) algorithm takes O(D log^3 n) time for solving all-to-all information dissemination w.h.p. when D is known to all the nodes.
5.3 Unknown Diameter
For unknown diameter, we apply the standard guess-and-double strategy: begin with an initial guess of 1
for D. Try the algorithm and see if it succeeds. If so, we terminate. Otherwise, double the estimate and
repeat. The challenge here is to correctly determine the termination condition, i.e., how a particular node determines whether information dissemination has been achieved for all other nodes. Early termination might lead to partial dissemination, whereas late termination might cause the time complexity to increase.
The critical observation is as follows: if two nodes u and v cannot communicate in one execution of
all-to-all information dissemination (protocol RR Broadcast) for a given estimate of the diameter, then there
must be some edge (w, z) on the path from u to v where, in one execution: u is able to communicate with w
but not with z. There are two cases: If w is not able to communicate with z, then it is aware that it has an
unreachable neighbor and can flag the issue; the next time that u and w communicate, node u learns of the
problem. Otherwise, if w can communicate with z, then the next time that u and w communicate, node u
learns that there was a node it did not hear from previously. In either case, u knows that the estimate of D
was not correct and should continue. Each node also checks whether it has heard from all of its neighbors,
and raises an error flag if not. We then repeat all-to-all broadcast so that nodes can check whether everyone has the same “rumor set” and that no one has raised an error flag. In total, checking termination has an asymptotic complexity of O(D log^2 n).
The Termination_Check algorithm checks for every node that v contacts or is contacted by (either directly
or indirectly) whether that node has (i) exactly the same rumor set as v and (ii) the value 0 as its flag bit. The
flag bit of a node is set to 1 if a neighbor of that node is not present in its rumor set or if the node has not yet exchanged all the rumors currently known to it with all of its neighbors in G that are at a distance ≤ the current estimate of D (say k): this condition is easily checked by either doing an additional k-DTG (which does not affect the complexity) or can be checked in parallel with the execution of RR Broadcast. If these two conditions do not both hold, then node v sets its status to “failed” and v uses a broadcast algorithm for propagating the “failed” message. Any broadcast algorithm that, given a parameter k, is able to broadcast and collect back information from all nodes at a distance ≤ k from v, can be used. It is easily seen that RR Broadcast satisfies this criterion and can be used in this case. Note that broadcast is achieved
here (for General_EID algorithm) by execution of RR Broadcast, however when Path_Discovery algorithm
(described later) invokes Termination_Check, broadcast is achieved by execution of the sequence T (k) (also
described later). Here, the rumor set known to a particular vertex v is denoted by Rv , Γ(v) represents all its
neighbors in G whereas k-neighbors refers to only those nodes that are connected with v with an edge of
latency k or less. Also, initially node_status of all nodes is set to “default”.
Termination_Check (k)
1: if (node w ∈ Γ(v) and w ∉ Rv) or (node v has not exchanged rumors with all k-neighbors) then
2:     set flag bit, v_flag = 1
3: else set flag bit, v_flag = 0
4: broadcast and gather all responses from any node u in v's k-distance neighborhood
5: if ∃ any u such that (Rv ≠ Ru) or (u_flag = 1) then
6:     set node_status = “failed”
7:     broadcast “failed” message to the k-distance neighborhood
8: if received message = “failed” then
9:     set node_status = “failed”
Algorithm 3: Termination_Check
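A minimal Python sketch of the per-node decision in Algorithm 3 is given below; the arguments (the rumor set R_v, the neighbor set Gamma_v, the boolean exchanged_all_k_neighbors, and the list gathered of (rumor set, flag) pairs collected from the k-distance neighborhood) are placeholders assumed for this illustration.

    # Per-node termination check (Algorithm 3, lines 1-6), given the data that
    # node v has already gathered from its k-distance neighborhood.
    def termination_check(R_v, Gamma_v, exchanged_all_k_neighbors, gathered):
        v_flag = 0 if (Gamma_v <= R_v and exchanged_all_k_neighbors) else 1
        failed = any(R_u != R_v or u_flag == 1 for (R_u, u_flag) in gathered)
        return ("failed" if failed else "default"), v_flag

The propagation of the “failed” message through the k-distance neighborhood (lines 7-9) is left out of this sketch; in the algorithm it reuses the same broadcast primitive.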
We prove the following regarding the termination detection:
Lemma 26. No node terminates until it has exchanged rumors with all other nodes. Moreover, all nodes
terminate in the exact same round.
Proof. Suppose that a node v terminates without having exchanged rumors with some other node w. Considering any path from node v to node w, let u be the farthest node (in hop distance) with which v has exchanged
rumors with and let x be the next node in the path.
Case 1: u has exchanged rumors with x. This implies that v has also exchanged rumors with x, from the condition that all nodes that exchange rumors with one another have the same rumor set. This contradicts the fact that u is the farthest node on the path with which v has exchanged rumors.
Case 2: u has not exchanged rumors with x. Then u would have set its flag bit to 1, which would have been detected by v during the broadcast, and v would not have terminated. This also gives us a contradiction. Thus, no such node w exists and v terminates only after it has exchanged rumors with all the other nodes.
For the second part of the proof, let us consider nodes u and v such that v is set to terminate and has not set its status to “failed” in the Termination_Check algorithm, whereas, in the same iteration, node u has set its status to “failed” and hence is set to continue. We show that there cannot be two such nodes in the same round. Node v did not set its status to “failed”, implying that all the nodes it exchanged rumors with had exactly the same set of rumors, none of them had set its flag bit to 1, and in addition v did not receive a “failed” message from any other node. From the first part, we know that the set of nodes that v exchanged rumors with is the entire vertex set of the graph G. That implies that v has also exchanged rumors with u: node u also has the exact set of rumors (which essentially is all the rumors from all the nodes) and does not have a set flag bit. So in the current iteration, if any other node broadcasted a “failed” message, both v and u would have received it, resulting in both nodes setting their status to “failed”. Again, since the rumor sets of both nodes are identical, both nodes would observe the same flag bits of all the nodes. Then node u will also satisfy the termination condition and will not set its status to “failed”. This gives us a contradiction, which completes the proof.
General_EID (k)
1: k = 1
2: repeat
3:     call algorithm EID (k)
4:     call algorithm Termination_Check (k)
5:     if node_status = “failed” then
6:         k = 2k
7:         set node_status to “default”
8:     else terminate
Algorithm 4: General_EID; code for vertex v.
Combining the all-to-all dissemination protocol with the termination detection, we get the following:
Theorem 27. There exists a randomized gossip algorithm that solves the all-to-all information dissemination problem w.h.p. and terminates in O(D log^3 n) rounds.
5.4 An Alternative All-to-All Information Dissemination Algorithm
We propose an alternate algorithm to solve all-to-all information dissemination without any global knowledge (a polynomial upper bound on n need not be known) that takes O(D log^2 n log D) time. This algorithm works even when nodes cannot initiate a new exchange in every round and must wait for the acknowledgement of the previous message, i.e., communication is blocking.
The algorithm involves repeatedly invoking the `-DTG algorithm with different parameters determined
by a particular pattern. The intuition behind the choice of the pattern is to make minimal use of the heavier
latency edges by collecting as much information as possible near the heavier latencies before making use of
that edge. The pattern for k is derived according to a sequence T (k) that is recursively defined as follows:
T(1) = 1-DTG
T(2) = T(1) · 2-DTG · T(1)
T(4) = T(2) · 4-DTG · T(2)
...
T(k) = T(k/2) · k-DTG · T(k/2)
We show that, when the above sequence is run for the particular pattern for length k, it guarantees that any
node u and v in the graph G, at a distance of ≤ k, have exchanged their rumors with one another. Overall, the pattern of values of the parameter ` is
1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, . . . , k, . . . , 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1,
and, for each value `, we perform the `-DTG protocol. That is, T (k) is a sequence of calls to `-DTG with
varying parameters according to a known pattern.
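The parameter pattern above is simply the recursive expansion of T(k); a small Python sketch (an illustration, not part of the protocol) that generates the sequence of ` values for a power-of-two k could look as follows.

    # Expand T(k) = T(k/2) . k-DTG . T(k/2) into the sequence of l-DTG parameters.
    def dtg_parameter_sequence(k):
        if k == 1:
            return [1]
        half = dtg_parameter_sequence(k // 2)
        return half + [k] + half

    # Example: dtg_parameter_sequence(8) == [1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1]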
Lemma 28. After the execution of T(k), any node in the weighted graph G = (V, E) has exchanged rumors with all other nodes that are at distance k or less from it.
Proof. We proceed by induction over the path length k. For the base case, recall from [18] that, after running T(1) on G1, i.e., the subgraph of G induced by edges with latency ≤ 1, any node v has exchanged rumors with all its distance-1 neighbors.
For the inductive step, suppose that the claim is true for T(k), i.e., after running the sequence, any node v has exchanged rumors with all other nodes at a weighted distance ≤ k. To prove the claim for T(2k) (i.e.
T (k) · 2k-DTG · T (k)), we consider the various possibilities of forming a path of length 2k.
Case 1: The path consists only of edges with latencies ≤ k. Here we distinguish two sub-cases:
Case 1a: There exists a node m which is equidistant from both end points u and v (see Figure 4). By the
induction hypothesis, both nodes u and v would have exchanged rumors with node m in the initial
T (k). In the next T (k), node m propagates all rumors that it received from u to v and vice-versa.
Figure 4: Case 1a
Case 1b: No such middle node exists, as depicted in Figure 5. Then, after the initial T(k), node u must have
exchanged rumors with m1 and node v with m2 , due to the induction hypothesis. In the invocation of
the 2k-DTG, node m1 propagates all rumors gained from u to m2 , and m2 also propagates all rumors
gained from v to m1 . This information then travels from m1 to u and from m2 to v in the final T (k).
Figure 5: Case 1b
Case 2: The path contains an edge e with latency in [k + 1, 2k] (there can be at most one such edge). This situation can yield
one of the following two sub-cases:
Case 2a: Edge e is located at one end of the path (see Figure 6). By the induction hypothesis, node v would
have exchanged rumors with m in the initial T (k). In the 2k-DTG, u gets to know this (and other)
rumors from m and m also gets to know u’s rumors. In the next T (k), node m propagates all rumors
gained from u to v.
Figure 6: Case 2a
Case 2b: The edge is located between two inner nodes on the path (see Figure 7). In this case, by the
induction hypothesis, node u has exchanged rumors with m1 , whereas node v has exchanged rumors
with node m2 in the initial T (k). In the 2k-DTG, node m1 propagates all rumors gained from u to m2 .
Moreover, m2 propagates all rumors gained from v to m1 . These rumors then propagate from m1 to u
and from m2 to v in the final T (k).
Figure 7: Case 2b
Lemma 29. For known diameter, solving all-to-all information dissemination by executing the sequence T(D) takes O(D log^2 n log D) time.
Proof. From the way the sequence is constructed, the running time satisfies the recurrence T(k) = 2T(k/2) + O(k log^2 n), since the middle invocation of k-DTG takes O(k log^2 n) rounds. Using standard methods to solve the recurrence completes the proof.
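Unrolling the recurrence makes the bound explicit (a routine recursion-tree calculation):

$$T(k) = 2\,T(k/2) + O(k \log^2 n)
= \sum_{j=0}^{\log k} 2^{j}\cdot O\!\left(\frac{k}{2^{j}}\log^2 n\right)
= O(k \log^2 n \log k),$$

so executing $T(D)$ takes $O(D \log^2 n \log D)$ time.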
When the graph diameter is known to all nodes, nodes can just invoke T (D) to solve all-to-all information
dissemination. For completeness, we also present an algorithm called Path_Discovery that uses the sequence
of invocations of `-DTG to solve all-to-all information dissemination, when the graph diameter is unknown.
This algorithm is similar in flavour to that of the General_EID algorithm described in Section 5.3 and also
makes use of the Termination_Check algorithm, albeit with a different broadcasting technique (calling T (k)
rather than RR Broadcast).
Path_Discovery (k)
1: k = 1
2: repeat
3:     execute sequence T(k)
4:     call algorithm Termination_Check (k)
5:     if node_status = “failed” then
6:         k = 2k
7:         set node_status to “default”
8:     else terminate
Algorithm 5: Path_Discovery; code for vertex v.
Lemma 30. The Path_Discovery algorithm takes O(D log^2 n log D) time to solve all-to-all information dissemination.
Applying techniques similar to section 5.3, the complexity can be easily shown for the case with unknown
diameter as well.
6 Unified Upper Bounds
Combining the results, we can run both push-pull and the spanner algorithm in parallel to obtain unified upper
bounds for both the known and the unknown latencies cases. However, we point out that, for single source
broadcast, push-pull works with small message sizes whereas the spanner algorithm does not (because of its
reliance on DTG). Also, exchanging messages with the help of the spanner does not have good robustness
properties whereas push-pull is inherently quite robust.
Theorem 31. There exist randomized gossip algorithms that solve the all-to-all information dissemination problem in O(min((D + Δ) log^3 n, (ℓ*/φ*) log n)) time when latencies are unknown and in O(min(D log^3 n, (ℓ*/φ*) log n)) time when latencies are known.

Corollary 32. There exist randomized gossip algorithms that solve the all-to-all information dissemination problem in O(min((D + Δ) log^3 n, (L/φ_avg) log n)) time when latencies are unknown and in O(min(D log^3 n, (L/φ_avg) log n)) time when latencies are known.
7 Conclusion
We have presented two different new concepts, namely the critical conductance and the average conductance,
that characterize the bottlenecks in communication for weighted graphs. We believe that these parameters
will be useful for a variety of applications that depend on connectivity.
A question that remains is whether the running time of O(D log3 n) for information dissemination
can be improved, e.g., using better spanner constructions or more efficient local broadcast to save the
polylogarithmic factors. (Recall that in the unweighted case, there are information dissemination protocols
that run in O(D + polylog n) time.) Another interesting direction would be the development of reliable, robust,
fault-tolerant algorithms in this regard.
Another issue is whether we can reduce the number of incoming messages in a round; recently, Daum et
al. [9] have considered such a more restricted model, yielding interesting results. It would also be interesting
to look at the bounds where each node is only allowed O(1) connections per round, whether initiated by the
node itself or by its neighbor.
Acknowledgment
We thank George Giakkoupis for the helpful conversations and useful ideas.
A Appendix
A.1 The DTG Local Broadcast Protocol
In this section, we describe in more detail the DTG protocol that was originally developed in [18] as well as
the `-DTG algorithm.
It is clear that the algorithm solves local broadcast because it keeps on contacting new neighbors until it
has exchanged rumors with all of its neighbors. The author [18] makes use of binomial trees to derive the
time complexity and better explain the working of the algorithm.
The key idea used for deriving the time complexity is to show that when information is propagated in a pipelined manner along the binomial trees (created on-the-fly), then any node that is still active in the i-th iteration has a binomial tree of size 2^i (an i-tree of depth i: see Figure 8) rooted at it. Furthermore, it is shown that for any two different nodes that are still active in iteration i, their i-trees are vertex disjoint. Since an i-tree is formed by joining two (i − 1)-trees, the growth rate of an i-tree is exponential, which limits the number of iterations to O(log n). Also, each node on average needs to contact O(log n) nodes (O(i) nodes in the i-th round). Thus, the overall complexity of the algorithm becomes O(log^2 n). In our case, for `-DTG, the additional waiting time of ` increases the time complexity to O(` log^2 n).
Figure 8: i-trees for i ∈ {0, 1, 2, 3}
The i-tree can be seen as a witness structure that provides an explanation as to why a node was active in that particular iteration. The i-tree rooted at a particular node is built recursively as the rounds progress and essentially stores the information about which other nodes communicated with one another in which particular round, as viewed from the root node. For example, in Figure 9, the labels on the edges denote the round in which the node on the higher level contacted the node on the lower level (as observed by the root node). The root contacts the nodes in the first level in rounds according to their labels, the nodes on the first level similarly contact the nodes in the second level in rounds according to their labels, and so on. This observation also explains the key idea that a node active in the i-th round has an i-tree rooted at it: the nodes in the first level did not contact the root previously as they were busy contacting the nodes of the second level, the nodes of the second level did not contact nodes on the first level as they were busy contacting the nodes in the third level, and so on.
Figure 9: 5-tree with edge labels
As shown in the pseudo code, in the initial PUSH sequence, the message is propagated in decreasing order of connection round number (as observed by the root node: given by the labels on the edges of Figure 9), helping in pipelining the root's message to all other nodes of the i-tree. Similarly, during the initial PULL sequence the messages from the nodes are pipelined up to the root. The subsequent PULL-PUSH sequence helps in maintaining the symmetry of the algorithm, such that if node u learns about node v, then node v also learns about node u. Finally, the collection of rumors R is updated to the union of the rumors collected in the aforementioned sequences.
For ` being an integer ≥ 1, we run the modified DTG algorithm on the subgraph G` of G, rather than on G, where G` contains only the edges of latency up to `. Let us denote this algorithm as `-DTG. The algorithm is presented below and each node v belonging to G` runs it in parallel. Γ(v) can be considered as the neighborhood of v, comprising the set of nodes that are node v's 1-hop neighbors.
`-DTG (`)
1: R = {v}
2: for i = 1 until Γ(v) \ R = ∅ do
3:     link to any new neighbor ui ∈ Γ(v)
4:     R′ = {v}
5:     PUSH:
6:     for j = i downto 1 do
7:         send rumors in R′ to uj
8:         wait for ` time to receive uj's rumors
9:         add all received rumors to R′
10:    PULL:
11:    for j = 1 to i do
12:        send rumors in R′ to uj
13:        wait for ` time to receive uj's rumors
14:        add all received rumors to R′
15:    R′′ = {v}
16:    perform PULL, PUSH with R′′
17:    R = R′ ∪ R′′
Algorithm 6: `-DTG
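The following Python sketch transcribes the control flow of Algorithm 6 for a single node v; the helpers new_neighbor and send_and_wait (assumed to return the partner's current rumor set after ` time) and the neighbor set Gamma_v are stubs introduced only for illustration, not part of the protocol.

    # Control-flow sketch of l-DTG at one node v (not a full distributed protocol).
    def l_dtg(v, Gamma_v, new_neighbor, send_and_wait, l):
        R = {v}
        contacted = []                          # u_1, u_2, ..., u_i in order of first contact
        while Gamma_v - R:                      # until rumors of all neighbors are known
            contacted.append(new_neighbor(v, contacted))
            i = len(contacted)
            R1 = {v}
            for j in range(i, 0, -1):           # PUSH: j = i downto 1
                R1 |= send_and_wait(v, contacted[j - 1], R1, l)
            for j in range(1, i + 1):           # PULL: j = 1 to i
                R1 |= send_and_wait(v, contacted[j - 1], R1, l)
            R2 = {v}
            for j in range(1, i + 1):           # repeat PULL, then PUSH, with R''
                R2 |= send_and_wait(v, contacted[j - 1], R2, l)
            for j in range(i, 0, -1):
                R2 |= send_and_wait(v, contacted[j - 1], R2, l)
            R = R1 | R2                         # line 17 of Algorithm 6
        return R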
References
[1] John Augustine, Gopal Pandurangan, Peter Robinson, Scott Roche, and Eli Upfal. Enabling robust and
efficient distributed computation in dynamic peer-to-peer networks. In IEEE 56th Annual Symposium
on Foundations of Computer Science, FOCS Berkeley, USA, pages 350–369, 2015.
[2] Surender Baswana and Sandeep Sen. A simple and linear time randomized algorithm for computing
sparse spanners in weighted graphs. Random Structures and Algorithms, 30(4):532–563, 2007.
[3] Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms.
IEEE/ACM Trans. Netw., 14(SI):2508–2530, June 2006.
[4] Milan Bradonjić, Robert Elsässer, Tobias Friedrich, Thomas Sauerwald, and Alexandre Stauffer. Efficient broadcast on random geometric graphs. In Proceedings of the 21st Annual ACM-SIAM Symposium
on Discrete Algorithms, SODA ’10, pages 1412–1421, Philadelphia, PA, USA, 2010.
[5] Keren Censor-Hillel, Bernhard Haeupler, Jonathan Kelner, and Petar Maymounkov. Global computation
in a poorly connected world: Fast rumor spreading with no dependence on conductance. In Proceedings
of the 44th Annual ACM Symposium on Theory of Computing, STOC ’12, pages 961–970, New York,
NY, USA, 2012. ACM.
[6] Keren Censor-Hillel and Hadas Shachnai. Fast information spreading in graphs with large weak
conductance. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA ’11, pages 440–448. SIAM, 2011.
[7] Flavio Chierichetti, Silvio Lattanzi, and Alessandro Panconesi. Almost tight bounds for rumour
spreading with conductance. In Proceedings of the 42nd ACM Symposium on Theory of Computing,
STOC ’10, pages 399–408, NY, USA, 2010. ACM.
[8] Flavio Chierichetti, Silvio Lattanzi, and Alessandro Panconesi. Rumour spreading and graph conductance. In Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’10,
pages 1657–1663, PA, USA, 2010. SIAM.
[9] Sebastian Daum, Fabian Kuhn, and Yannic Maus. Rumor Spreading with Bounded In-Degree, pages
323–339. Springer International, 2016.
[10] Alan Demers, Dan Greene, Carl Hauser, Wes Irish, John Larson, Scott Shenker, Howard Sturgis, Dan
Swinehart, and Doug Terry. Epidemic algorithms for replicated database maintenance. In Proceedings
of the 6th Annual ACM Symposium on Principles of Distributed Computing, PODC ’87, pages 1–12,
New York, NY, USA, 1987. ACM.
[11] Benjamin Doerr, Mahmoud Fouz, and Tobias Friedrich. Social networks spread rumors in sublogarithmic
time. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC ’11, pages
21–30, New York, NY, USA, 2011. ACM.
[12] Benjamin Doerr, Mahmoud Fouz, and Tobias Friedrich. Why rumors spread so quickly in social
networks. Commun. ACM, 55(6):70–75, June 2012.
[13] Uriel Feige, David Peleg, Prabhakar Raghavan, and Eli Upfal. Randomized broadcast in networks.
In Algorithms, volume 450 of Lecture Notes in Computer Science, pages 128–137. Springer Berlin
Heidelberg, 1990.
[14] Pierre Fraigniaud and George Giakkoupis. On the bit communication complexity of randomized rumor
spreading. In Proceedings of the 22nd Annual ACM Symposium on Parallelism in Algorithms and
Architectures, SPAA ’10, pages 134–143, NY, USA, 2010. ACM.
[15] R. Gandhi, A. Mishra, and S. Parthasarathy. Minimizing broadcast latency and redundancy in ad hoc
networks. Networking, IEEE/ACM Transactions on, 16(4):840–851, Aug 2008.
[16] George Giakkoupis. Tight bounds for rumor spreading in graphs of a given conductance. In Proceedings
of the 28th International Symposium on Theoretical Aspects of Computer Science (STACS), pages 57–68,
March 10–12 2011.
[17] George Giakkoupis, Thomas Sauerwald, and Alexandre Stauffer. Randomized Rumor Spreading in
Dynamic Graphs, pages 495–507. Springer Berlin Heidelberg, Berlin, Heidelberg, 2014.
[18] Bernhard Haeupler. Simple, fast and deterministic gossip and rumor spreading. In Proceedings of the
24th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’13, pages 705–716. SIAM, 2013.
[19] Bernhard Haeupler and Dahlia Malkhi. Optimal gossip with direct addressing. In Proceedings of the
2014 ACM Symposium on Principles of Distributed Computing, PODC ’14, pages 176–185, New York,
NY, USA, 2014. ACM.
[20] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bull. Amer.
Math. Soc., 43(04):439–562, 2006.
[21] Mark Jerrum and Alistair Sinclair. Conductance and the rapid mixing property for markov chains: The
approximation of permanent resolved. In Proceedings of the 20th Annual ACM Symposium on Theory
of Computing, STOC ’88, pages 235–244, New York, NY, USA, 1988. ACM.
[22] R. Karp, C. Schindelhauer, S. Shenker, and B. Vocking. Randomized rumor spreading. In Foundations
of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pages 565–574, 2000.
[23] David Kempe, Jon Kleinberg, and Alan Demers. Spatial gossip and resource location protocols. In
Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, STOC ’01, pages 163–172,
New York, NY, USA, 2001. ACM.
[24] Damon Mosk-Aoyama and Devavrat Shah. Computing separable functions via gossip. In Proceedings of
the 25th Annual ACM Symposium on Principles of Distributed Computing, PODC ’06, pages 113–122,
New York, NY, USA, 2006. ACM.
[25] Calvin C. Newport. Radio network lower bounds made easy. In Distributed Computing - 28th
International Symposium, DISC 2014, Austin, TX, USA, October 12-15, 2014. Proceedings, pages
258–272, 2014.
[26] A.D. Sarwate and A.G. Dimakis. The impact of mobility on gossip algorithms. In INFOCOM 2009,
IEEE, pages 2088–2096, April 2009.
[27] Salil P. Vadhan. Pseudorandomness. Foundations and Trends in Theoretical Computer Science, 7(1–3):1–336, 2012.
arXiv:1711.03819v1 [cs.SY] 10 Nov 2017

Cooperative control of multi-agent systems to locate source of an odor

Abhinav Sinha, Rishemjit Kaur, Ritesh Kumar and Amol P. Bhondekar
Abstract—This work targets the problem of odor source localization by multi-agent systems. A hierarchical cooperative control has been put forward to solve the problem of locating the source of an odor by driving the agents to consensus when at least one agent obtains information about the location of the source. Synthesis of the proposed controller has been carried out in a
hierarchical manner of group decision making, path planning
and control. Decision making utilizes information of the agents
using conventional Particle Swarm Algorithm and information
of the movement of filaments to predict the location of the odor
source. The predicted source location in the decision level is
then utilized to map a trajectory and pass that information
to the control level. The distributed control layer uses sliding
mode controllers known for their inherent robustness and the
ability to reject matched disturbances completely. Two cases of
movement of agents towards the source, i.e., under consensus
and formation have been discussed herein. Finally, numerical
simulations demonstrate the efficacy of the proposed hierarchical
distributed control.
Index Terms—Odor source localization, multi-agent systems
(MAS), sliding mode control (SMC), homogeneous agents, cooperative control.
I. INTRODUCTION
A. Overview
Inspiration for the odor source localization problem stems from the behavior of biological entities such as mate seeking by moths, foraging by lobsters, and prey tracking by mosquitoes and blue crabs, and is aimed at locating the source of a volatile chemical. These behaviors have long been mimicked by autonomous robot(s). Chemical source tracking has attracted
researchers around the globe due to its applications in both
civilian and military domains. A plethora of applications are
possible, some of which include detection of forest fire, oil
spills, release of toxic gases in tunnels and mines, gas leaks
in industrial setup, search and rescue of victims and clearing
leftover mine after an armed conflict. A plume containing
filaments, or odor molecules, is generally referred to the
downwind trail formed as a consequence of mixing of contaminant molecules in any kind of movement of air. The
dynamical optimization problem of odor source localization
can be effectively solved using multiple robots working in
cooperation. The obvious advantages of leveraging multiagent systems (MAS) are increased probability of success,
A. Sinha is with School of Mechatronics & Robotics, Indian Institute
of Engineering Science and Technology; and Central Scientific Instruments
Organization (CSIR- CSIO), India.
email: [email protected]
R. Kaur, R. Kumar & A. P. Bhondekar are with CSIR- CSIO.
emails: [email protected],[email protected],
[email protected]
redundancy and improved overall operational efficiency and
spatial diversity in having distributed sensing and actuation.
B. Motivation
Odor source localization is a three stage problem– sensing,
maneuvering and control. Some of reported literature on odor
source localization date back to 1980s when Larcombe et al.
[1] discussed such applications in nuclear industry by considering a chemical gradient based approach. Other works in
1990s [2]–[6] relied heavily on sensing part using techniques
such as chemotaxis [7], infotaxis [8], anemotaxis [9], [10]
and fluxotaxis [11]. The efficiency of such algorithms was
limited by the quality of sensors and the manner in which they
were used. These techniques also failed to consider turbulence
dominated flow and resulted in poor tracking performance.
Bio-inspired algorithms have been reported to maneuver the
agents, some of which include Braitenberg style [12], E. coli
algorithm [13], Zigzag dung beetle approach [14], silkworm
moth style [15]–[17] and their variants. A tremendous growth
of research attention towards cooperative control has been
witnessed in the past decade [18], [19] but very few have
addressed the problem of locating source of an odor. Hayes et
al. [20] proposed a distributed cooperative algorithm based on
swarm intelligence for odor source localization and experimental results proved multiple robots perform more efficiently than
a single autonomous robot. A Particle Swarm Optimization
(PSO) algorithm [21] was proposed by Marques et al. [22],
[23] to tackle odor source localization problems. To avoid getting trapped at local maximum concentrations, Jatmiko et al. [23] proposed modified PSO algorithms based on electrical charge theory, where neutral and charged robots have been used. Lu et
al. [24] proposed a distributed coordination control protocol
based on PSO to address the problem. It should be noted
that simplified PSO controllers are a type of proportional-only
controller and the operating region gets limited between global
and local best. This needs complicated obstacle avoidance
algorithms and results in high energy expenditure. Lu et al.
[25] also proposed a cooperative control scheme to coordinate
multiple robots to locate odor source in which a particle filter
has been used to estimate the location of odor source based
on wind information, a movement trajectory has been planned,
and finally a cooperative control scheme has been proposed to
coordinate movement of robots towards the source.
Motivated by these studies, we have implemented a robust
and powerful hierarchical cooperative control strategy to tackle
the problem. First layer is the group level in which the
information about the source via instantaneous sensing and
swarm intelligence is obtained. Second layer is designed to
maneuver the agents via a simplified silkworm moth algorithm.
Third layer is based on cooperative sliding mode control and
the information obtained in the first layer is passed to the third
layer as a reference to the tracking controller.
C. Contributions
Major contributions of this paper are summarized below.
1) As opposed to existing works on cooperative control
to locate source of odor, we have considered a more
general formulation by taking nonlinear dynamics of
MAS into account. When the uncertain function is zero,
the problem reduces to stabilizing integrator dynamics.
2) The control layer is designed on the paradigms of sliding mode control, which offers inherent robustness and disturbance rejection capabilities. The reaching law, as well as the sliding manifold in this study, are nonlinear and novel, resulting in smoother control and faster reachability to the manifold. Use of a sliding mode controller also helps in achieving finite-time convergence, as opposed to asymptotic convergence, to the equilibrium point. The proposed control provides
stability and ensures robustness even in the presence of
bounded disturbances and matched uncertainties.
3) Odor propagation is non-trivial, i.e., odor arrives in
packets, leading to wide fluctuations in measured
concentrations. Plumes are also dynamic and turbulent.
As odor tends to travel downwind, direction of the wind
provides an effective information on relative position
of the source. Hence, we have used wind information
based on a measurement model describing movement
of filaments and concentration information from swarm
intelligence to locate the source of odor.
4) Formation keeping of agents to locate source of odor
has also been demonstrated in this work.
D. Paper Organization
After the introduction to the study in Section I, the remainder of this work is organized as follows. Section II provides
insights into preliminaries of spectral graph theory and sliding
mode control. Section III presents dynamics of MAS and
mathematical problem formulation, followed by hierarchical
distributed cooperative control scheme in section IV. Results
and discussions have been carried out in section V, followed
by concluding remarks in section VI.
II. PRELIMINARIES
A. Spectral Graph Theory for Multi-Agent Systems
A directed graph, also known as digraph is represented
throughout in this paper by G = (V, E, A). V is the nonempty
set in which finite number of vertices or nodes are contained
such that V = {1, 2, ..., N }. E denotes directed edge and is
represented as E = {(i, j) ∀ i, j ∈ V & i ≠ j}. A is the weighted adjacency matrix such that A = [a(i, j)] ∈ R^{N×N}.
The possibility of existence of an edge (i, j) occurs iff the
vertex i receives the information supplied by the vertex j, i.e.,
(i, j) ∈ E. Hence, i and j are termed neighbours. The set Ni
contains labels of vertices that are neighbours of the vertex i.
For the adjacency matrix A, a(i, j) ∈ R_0^+. If (i, j) ∈ E ⇒ a(i, j) > 0. If (i, j) ∉ E or i = j ⇒ a(i, j) = 0.
The Laplacian matrix L [26] is central to the consensus problem and is given by L = D − A, where the degree matrix D is a diagonal matrix, i.e., D = diag(d1, d2, ..., dn), whose entries are di = Σ_{j=1}^{n} a(i, j). A directed path from vertex j to vertex i defines a sequence comprising of edges
(i, i1 ), (i1 , i2 ), ..., (il , j) with distinct vertices ik ∈ V, k =
1, 2, 3, ..., l. Incidence matrix B is also a diagonal matrix
with entries 1 or 0. The entry is 1 if there exists an edge
between leader agent and any other agent, otherwise it is 0.
Furthermore, it can be inferred that the path between two
distinct vertices is not uniquely determined. However, if a
distinct node in V contains directed path to every other distinct
node in V, then the directed graph G is said to have a
spanning tree. Consequently, the matrix L + B has full rank
[26]. Physically, each agent has been modelled by a vertex or
node and the line of communication between any two agents
has been modelled as a directed edge.
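As an illustration of these constructions, the following NumPy sketch builds D, L = D − A and H = L + B for a small hypothetical digraph (the adjacency and incidence matrices below are made up for the example and are not the topology used in Section V) and checks that H has full rank.

    import numpy as np

    # Hypothetical weighted adjacency matrix: a(i, j) > 0 iff node i hears node j.
    A = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
    B = np.diag([1.0, 0.0, 0.0])       # only node 1 hears the (virtual) leader
    D = np.diag(A.sum(axis=1))         # d_i = sum_j a(i, j)
    L = D - A                          # graph Laplacian
    H = L + B
    print(np.linalg.matrix_rank(H))    # prints 3: full rank, since the leader roots a spanning tree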
B. Sliding Mode Control
Sliding Mode Control (SMC) [27] is known for its inherent robustness. The switching nature of the control is used
to nullify bounded disturbances and matched uncertainties.
Switching happens about a hypergeometric manifold in state
space known as sliding manifold, surface, or hyperplane.
The control drives the system monotonically towards the
sliding surface, i.e, trajectories emanate and move towards
the hyperplane (reaching phase). System trajectories, after
reaching the hyperplane, get constrained there for all future
time (sliding phase), thereby ensuring the system dynamics
remains independent of bounded disturbances and matched
uncertainties.
In order to push state trajectories onto the surface s(x),
a proper discontinuous control effort uSM (t, x) needs to be
synthesized satisfying the following inequality.
s^T(x) ṡ(x) ≤ −η‖s(x)‖,  (1)
with η being positive and referred to as the reachability constant.
∵ ṡ(x) = (∂s/∂x) ẋ = (∂s/∂x) f(t, x, uSM),  (2)
∴ s^T(x) (∂s/∂x) f(t, x, uSM) ≤ −η‖s(x)‖.  (3)
The motion of state trajectories confined on the manifold is
known as sliding. Sliding mode exists if the state velocity
vectors are directed towards the manifold in its neighbourhood.
Under such consideration, the manifold is called attractive,
i.e., trajectories starting on it remain there for all future time
and trajectories starting outside it tend to it in an asymptotic
manner. Hence, in sliding motion,
ṡ(x) = (∂s/∂x) f(t, x, uSM) = 0.  (4)
uSM = ueq, a solution of (4), is generally referred to as the equivalent control. It is not the actual control applied to the system, but can be thought of as the control that must be applied on average to maintain sliding motion, and it is mainly used for the analysis of sliding motion.
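As a simple illustration (a standard textbook example, not taken from the present setup), consider the double integrator ẋ1 = x2, ẋ2 = u with the linear manifold s = c x1 + x2, c > 0:

$$\dot{s} = c\,\dot{x}_1 + \dot{x}_2 = c\,x_2 + u = 0 \;\Longrightarrow\; u_{eq} = -c\,x_2,$$

and, once sliding motion on s = 0 is enforced, the reduced dynamics ẋ1 = −c x1 decays exponentially regardless of any matched disturbance.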
III. DYNAMICS OF MULTI-AGENT SYSTEMS & PROBLEM FORMULATION
Consider first order homogeneous MAS interacting among
themselves and their environment in a directed topology.
Under such interconnection, information about the predicted
location of source of the odor through instantaneous plume
sensing is not available globally. However, local information
is obtained by communication among agents whenever at least
one agent attains some information of interest. The governing
dynamics of first order homogeneous MAS consisting of N
agents is described by nonlinear differential equations as
ẋi(t) = f(xi(t)) + uSMi(t) + ςi;  i ∈ [1, N],  (5)
where f (·) : R+ × X → Rm is assumed to be locally Lipschitz
over some fairly large domain DL with Lipschitz constant L̄,
and denotes the uncertain nonlinear dynamics of each agent.
Also X ⊂ Rm is a domain in which origin is contained. xi and
uSMi are the state of the ith agent and the associated control, respectively. ςi represents bounded exogenous disturbances that enter the system through the input channel, i.e., ‖ςi‖ ≤ ςmax < ∞.
The problem of odor source localization can be viewed as a cooperative control problem in which control laws uSMi need to be designed such that the conditions lim_{t→∞} ‖xi − xj‖ = 0 and lim_{t→∞} ‖xi − xs‖ ≤ θ are satisfied. Here xs represents the probable location of the odor source and θ is an accuracy parameter.
IV. HIERARCHICAL DISTRIBUTED COOPERATIVE CONTROL SCHEME
In order to drive the agents towards consensus to locate the
source of odor, we propose the following hierarchy.
A. Group Decision Making
This layer utilizes both concentration and wind information
to predict the location of odor source. Then, the final probable
position of the source can be described as
ψ(tk ) = c1 pi (tk ) + (1 − c1 )qi (tk ),
(6)
with pi (tk ) as the oscillation centre according to a simple
Particle Swarm Optimization (PSO) algorithm and qi (tk )
captures the information of the wind. c1 ∈ (0, 1) denotes
additional weighting coefficient.
Remark 1. The arguments in (6) represent data captured at
t = tk instants (k = 1, 2, ...) as the sensors equipped with the
agents can only receive data at discrete instants.
It should be noted that ψ is the tracking reference that is
fed to the controller. Now, we present detailed description of
obtaining pi (tk ) and qi (tk ).
Simple PSO algorithm that is commonly used in practice
has the following form.
vi (tk+1 ) = ωvi (tk ) + uPSO (tk ),
(7)
xi (tk+1 ) = xi (tk ) + vi (tk+1 ).
(8)
Here ω is the inertia factor, vi (tk ) and xi (tk ) represent the
respective velocity and position of ith agent. This commonly
used form of PSO can also be used as a proportional-only type
controller, however for the disadvantages mentioned earlier,
we do not use PSO as our final controller. PSO control law
uPSO can be described as
uPSO = α1 (xl (tk ) − xi (tk )) + α2 (xg (tk ) − xi (tk )).
(9)
In (9), xl (tk ) denotes the previous best position and xg (tk )
denotes the global best position of neighbours of ith agent
at time t = tk , and α1 & α2 are acceleration coefficients.
Since, every agent in MAS can get some information about the
magnitude of concentration via local communication, position
of the agent with a global best can be easily known. By the
idea of PSO, we can compute the oscillation centre pi (tk ) as
pi(tk) = (α1 xl(tk) + α2 xg(tk)) / (α1 + α2),  (10)
where
xl(tk) = arg max_{0<t<tk−1} {g(xl(tk−1)), g(xi(tk))},  (11)
xg(tk) = arg max_{0<t<tk−1} {g(xg(tk−1)), max_{j∈N} aij g(xj(tk))}.  (12)
Thus, from (9) and (10),
uPSO(tk) = (α1 + α2){pi(tk) − xi(tk)},  (13)
which is clearly a proportional-only controller with proportional gain α1 + α2, as highlighted earlier.
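A minimal Python sketch of this decision-level computation (the inputs x_l, x_g, q_i and the coefficients are placeholders introduced only for illustration) could look as follows.

    import numpy as np

    def oscillation_centre(x_l, x_g, alpha1, alpha2):
        # p_i(t_k) from (10): weighted combination of local and global best positions
        return (alpha1 * x_l + alpha2 * x_g) / (alpha1 + alpha2)

    def u_pso(x_i, x_l, x_g, alpha1, alpha2):
        # (13): proportional-only control towards the oscillation centre
        p_i = oscillation_centre(x_l, x_g, alpha1, alpha2)
        return (alpha1 + alpha2) * (p_i - x_i)

    def predicted_source(p_i, q_i, c1):
        # (6): blend the swarm information with the wind-based estimate q_i(t_k)
        return c1 * p_i + (1.0 - c1) * q_i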
In order to compute qi(tk), the movement process of a single filament that consists of several odor molecules has been modelled. If xf(t) denotes the position of the filament at time t, v̄a(t) represents the mean airflow velocity and n(t) is some random process, then the model can be described as
ẋf(t) = v̄a(t) + n(t).  (14)
Without loss of generality, we shall regard the start time of
our experiment as t = 0. From (14), we have
xf(t) = ∫_0^t v̄a(τ) dτ + ∫_0^t n(τ) dτ + xs(0).  (15)
xs (0) denotes the real position of the odor source at t = 0.
Assumption IV.1. We assume the presence of a single, stationary odor source. Thus, xs (t) = xs (0).
Implications from remark 1 require (15) to be implemented
at t = tk instants. Hence,
xf(tk) = Σ_{m=0}^{t} v̄a(τm)Δt + Σ_{m=0}^{t} n(τm)Δt + xs(tk),  (16)
xf(tk) = xs(tk) + v̄a*(tk) + w*(tk).  (17)
In (17), Σ_{m=0}^{t} v̄a(τm)Δt = v̄a*(tk) and Σ_{m=0}^{t} n(τm)Δt = w*(tk).
Remark 2. In (17), the accumulated average of v̄a*(tk) and w*(tk) can also be considered ∀ possible filament releasing times.
From (17),
xf(tk) − v̄a*(tk) = xs(tk) + w*(tk).  (18)
The above relationship, (18), can be viewed as information about xs(tk) with some noise w*(tk). Hence,
qi(tk) = xs(tk) + w*(tk).  (19)
Therefore, ψ in (6) can now be constructed from (10) & (19).

B. Path Planning
Since detection of information of interest is tied to the threshold value defined for the sensors, the next state is updated taking this threshold value into account. Thus, the blueprints of path planning can be described in terms of three types of behavior.
1) Surging: If the ith agent receives data well above the threshold, we say that some clue about the location of the source has been detected. If the predicted position of the source at t = tk as seen by the ith agent is given as xsi(tk), then the next state of the agent is given mathematically as
xi(tk+1) = xsi(tk).  (20)
2) Casting: If the ith agent fails to detect information at any particular instant, then the next state is obtained using the following relation.
xi(tk+1) = ‖xi(tk) − xsi(tk)‖ / 2 + xsi(tk).  (21)
3) Search and exploration: If all the agents fail to detect odor clues for a time segment [tk, tk+l] > δ0, for some l ∈ N and δ0 ∈ R+ being the time interval for which no clues are detected or some constraint on wait time placed at the start of the experiment, then the next state is updated as
xi(tk+1) = xsi(tk) + zφσ.  (22)
In (22), zφσ is some random parameter with σ as its standard deviation and φ as its mean.

C. Distributed Control
In the control layer, we design a robust and powerful controller on the paradigms of sliding mode. It is worthy to mention that, based on instantaneous sensing and swarm information, at different times each agent can take up the role of a virtual leader whose opinion needs to be kept by other agents. ψ from (6) has been provided to the controller as the reference to be tracked. The tracking error is formulated as
ei(t) = xi(t) − ψ(tk);  t ∈ [tk, tk+1[.  (23)
In terms of graph theory, we can reformulate the error variable as
εi(t) = (L + B) ei(t) = (L + B)(xi(t) − ψ(tk)).  (24)
From this point onward, we shall denote L + B as H. Next, we formulate the sliding manifold
si(t) = λ1 tanh(λ2 εi(t)),  (25)
which is a nonlinear sliding manifold offering faster reachability to the surface. λ1 ∈ R+ represents the speed of convergence to the surface, and λ2 ∈ R+ denotes the slope of the nonlinear sliding manifold. These are coefficient weighting parameters that affect the system performance. The forcing function has been taken as
ṡi(t) = −µ sinh^{-1}(m + w|si(t)|) sign(si(t)).  (26)
In (26), m is a small offset such that the argument of the sinh^{-1} function remains non-zero, and w is the gain of the controller. The parameter µ facilitates additional gain tuning. In general, m << w. This novel reaching law contains a nonlinear gain and provides faster convergence towards the manifold. Moreover, this reaching law is smooth and chattering free, which is highly desirable in mechatronic systems to ensure safe operation.

Theorem IV.1. Given the dynamics of MAS (5) connected in a directed topology, error candidates (23), (24) and the sliding manifold (25), the stabilizing control law that ensures accurate reference tracking under consensus can be described as
uSMi(t) = −[ (ΛH)^{-1} µ sinh^{-1}(m + w|si(t)|) sign(si(t)) Γ^{-1} + (f(xi(t)) − ψ̇(tk)) ],  (27)
where Λ = λ1 λ2, Γ = 1 − tanh^2(λ2 εi(t)), w > sup_{t≥0}{‖ςi‖} & µ > sup{‖ΛHςiΓ‖}.

Remark 3. As mentioned earlier, λ1, λ2 ∈ R+. This ensures Λ ≠ 0 and hence its non-singularity. The argument of tanh is always finite and satisfies λ2 εi(t) ≠ πι(κ + 1/2) for κ ∈ Z, thus Γ is also invertible. Moreover, the non-singularity of H can be established directly if the digraph contains a spanning tree with the leader agent as a root.

Proof. From (24) and (25), we can write
ṡi(t) = λ1{λ2 ε̇i(t)(1 − tanh^2(λ2 εi(t)))}  (28)
= λ1 λ2 ε̇i(t) − λ1 λ2 ε̇i(t) tanh^2(λ2 εi(t))  (29)
= λ1 λ2 ε̇i(t){1 − tanh^2(λ2 εi(t))}  (30)
= ΛH(ẋi(t) − ψ̇(tk))Γ,  (31)
with Λ & Γ as defined in Theorem IV.1. From (5), (31) can be further simplified as
ṡi(t) = ΛH(f(xi(t)) + uSMi(t) + ςi − ψ̇(tk))Γ.  (32)
Using (26), the control that brings the state trajectories on to the sliding manifold can now be written as
uSMi(t) = −[ (ΛH)^{-1} µ sinh^{-1}(m + w|si(t)|) sign(si(t)) Γ^{-1} + (f(xi(t)) − ψ̇(tk)) ].  (33)
This concludes the proof.

Remark 4. The control (27) can be practically implemented as it does not contain the uncertainty term.

It is crucial to analyze the necessary and sufficient conditions for the existence of sliding mode when control protocol (27) is used. We regard the system to be in sliding mode if, for any time t1 ∈ [0, ∞[, system trajectories are brought upon the manifold si(t) = 0 and are constrained there for all time thereafter, i.e., for t ≥ t1, sliding motion occurs.

Theorem IV.2. Consider the system described by (5), error candidates (23), (24), sliding manifold (25) and the control protocol (27). Sliding mode is said to exist in the vicinity of the sliding manifold if the manifold is attractive, i.e., trajectories emanating outside it continuously decrease towards it. Stated alternatively, reachability to the surface is ensured for some reachability constant η > 0. Moreover, stability can be guaranteed in the sense of Lyapunov if the gain µ is designed as µ > sup{‖ΛHςiΓ‖}.

Proof. Let us take into account a Lyapunov function candidate
Vi = 0.5 si^2.  (34)
Taking the derivative of (34) along system trajectories yields
V̇i = si ṡi  (35)
= si [ΛH(f(xi(t)) + uSMi(t) + ςi − ψ̇(tk))Γ].  (36)
Substituting the control protocol (27) in (36), we have
V̇i = si [−µ sinh^{-1}(m + w|si|) sign(si) + ΛHςiΓ]  (37)
= −µ sinh^{-1}(m + w|si|)‖si‖ + ΛHςiΓ‖si‖
= −(µ sinh^{-1}(m + w|si|) − ΛHςiΓ)‖si‖
= −η‖si‖,  (38)
where η = µ sinh^{-1}(m + w|si|) − ΛHςiΓ > 0 is called the reachability constant. For µ > sup{‖ΛHςiΓ‖}, we have V̇i < 0. Thus, the derivative of the Lyapunov function candidate is negative definite, confirming stability in the sense of Lyapunov.
Since µ > 0, ‖si‖ > 0 and sinh^{-1}(·) > 0 due to the nature of its arguments, (37) and (26) together imply that ∀ si(0), si ṡi < 0 and the surface is globally attractive. This ends the proof.

V. RESULTS AND DISCUSSIONS
The interaction topology of the agents, represented as a digraph, has been shown in Figure 1. The associated graph matrices have been described below. The computer simulation has been performed assuming that agent 1 appears as a virtual leader to all other agents, making the topology fixed and directed for this study. It should be noted that the theory developed so far can be extended to the case of switching topologies and shall be dealt with in future work.

Fig. 1: Topology in which agents are connected

The adjacency matrix A, the incidence matrix B, the degree matrix D, the Laplacian L = D − A and the matrix L + B associated with this five-agent topology are given in (39) and (40).

Agents have the following dynamics.
ẋ1 = 0.1 sin(x1) + cos(2πt) + uSM1(t) + ς1,  (41)
ẋ2 = 0.1 sin(x2) + cos(2πt) + uSM2(t) + ς2,  (42)
ẋ3 = 0.1 sin(x3) + cos(2πt) + uSM3(t) + ς3,  (43)
ẋ4 = 0.1 sin(x4) + cos(2πt) + uSM4(t) + ς4,  (44)
ẋ5 = 0.1 sin(x5) + cos(2πt) + uSM5(t) + ς5.  (45)
In this study, the advection model given in [28] has been used to simulate the plume with both additive and multiplicative disturbances. The initial conditions for the simulation are taken to be large values, i.e., far away from the equilibrium point. The time-varying disturbance has been taken as ςi = 0.3 sin(π^2 t^2), the accuracy parameter θ = 0.001 and the maximum mean airflow velocity v̄a_max = 1 m/s. Other key design parameters are mentioned in Table I.
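For illustration, a minimal single-agent Python sketch of the control law (25)–(27) driving the dynamics above is given below; the scalar h = 1 standing in for the corresponding entry of H, the constant reference psi = 1.0 standing in for ψ(tk), and the initial state and step size are all assumptions made for this example (only the gains are taken from Table I).

    import numpy as np

    # Single-agent sketch: x_dot = 0.1 sin(x) + cos(2*pi*t) + u + disturbance, with
    # the sliding manifold (25), reaching law (26) and control law (27).
    lam1, lam2, mu, m, w = 1.774, 2.85, 5.0, 1e-3, 2.0   # gains from Table I
    h, psi, psi_dot = 1.0, 1.0, 0.0                      # illustrative assumptions
    Lam = lam1 * lam2
    x, dt, T = 0.0, 1e-4, 10.0

    for step in range(int(T / dt)):
        t = step * dt
        eps = h * (x - psi)                              # error variable (24)
        s = lam1 * np.tanh(lam2 * eps)                   # sliding manifold (25)
        Gam = 1.0 - np.tanh(lam2 * eps) ** 2             # Gamma of Theorem IV.1
        f = 0.1 * np.sin(x) + np.cos(2 * np.pi * t)      # nominal drift
        u = -((mu / (Lam * h)) * np.arcsinh(m + w * abs(s)) * np.sign(s) / Gam
              + (f - psi_dot))                           # control law (27)
        dist = 0.3 * np.sin((np.pi * t) ** 2)            # matched disturbance
        x += dt * (f + u + dist)                         # Euler step of dynamics (5)

    print(abs(x - psi))                                  # residual tracking error

The sketch only illustrates the structure of the controller; the multi-agent simulation in the figures below additionally couples the agents through H = L + B and the decision layer of Section IV-A.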
Fig. 2: Agents in consensus to locate source of odor
Fig. 3: Agents in formation to locate source of odor
Fig. 4: Norm of tracking errors

Fig. 5: Control signals during consensus
Fig. 6: Sliding manifolds during consensus
TABLE I: Values of the design parameters used in simulation
c1 = 0.5, ωmax = 2 rad/s, α1 = 0.25, α2 = 0.25, λ1 = 1.774, λ2 = 2.85, µ = 5, m = 10^{-3}, w = 2
Figure 2 shows the agents coming to consensus in finite time to locate the source of odor, and Figure 3 shows the agents moving in parallel formation to locate the odor source. The norm of the tracking errors has been depicted in Figure 4; it is evident that the magnitude of the error is very small. The plot of the control signals during consensus has been shown in Figure 5, and the plot of the sliding manifolds has been shown in Figure 6.
VI. CONCLUDING REMARKS
The problem of odor source localization by MAS has been
dealt with in a hierarchical manner in this work. The problem
translates into a cooperative control problem wherein agents
are driven towards consensus to locate the true odor source
in finite time. Through computer simulations, it has been confirmed that the proposed strategy is fast and provides accurate tracking even in the presence of time-varying disturbances.
REFERENCES
[1] M. H. E. Larcombe, Robotics in nuclear engineering: Computer assisted
teleoperation in hazardous environments with particular reference to
radiation fields. United States: Graham and Trotman, Inc, 1984.
[2] R. Rozas, J. Morales, and D. Vega, “Artificial smell detection for robotic
navigation,” in Advanced Robotics, 1991. ’Robots in Unstructured
Environments’, 91 ICAR., Fifth International Conference on, June 1991,
pp. 1730–1733 vol.2.
[3] V. Genovese, P. Dario, R. Magni, and L. Odetti, “Self organizing
behavior and swarm intelligence in a pack of mobile miniature robots
in search of pollutants,” in Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems, vol. 3, Jul 1992, pp.
1575–1582.
[4] L. Buscemi, M. Prati, and G. Sandini, “Cellular robotics: Behaviour
in polluted environments,” in Proceedings of the 2nd International
Symposium on Distributed Autonomous Robotic Systems, 1994.
[5] R. A. Russell, “Laying and sensing odor markings as a strategy for
assisting mobile robot navigation tasks,” IEEE Robotics Automation
Magazine, vol. 2, no. 3, pp. 3–9, Sep 1995.
[6] Russell, R. Andrew, Odour Detection by Mobile Robots. River Edge,
NJ, USA: World Scientific Publishing Co., Inc., 2000.
[7] R. Russell, A. Bab-Hadiashar, R. L. Shepherd, and G. G. Wallace, “A
comparison of reactive robot chemotaxis algorithms,” Robotics and Autonomous Systems, vol. 45, no. 2, pp. 83 – 97, 2003. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0921889003001209
[8] M. Vergassola, E. Villermaux, and B. Shraiman, “”Infotaxis” as a
strategy for searching without gradients,” Nature, vol. 445, no. 7126,
pp. 406–409, 2007. [Online]. Available: https://hal.archives-ouvertes.fr/
hal-00326807
[9] J. A. Farrell, S. Pang, and W. Li, “Plume mapping via hidden markov
methods,” IEEE Transactions on Systems, Man, and Cybernetics, Part
B (Cybernetics), vol. 33, no. 6, pp. 850–863, Dec 2003.
[10] S. Pang and J. A. Farrell, “Chemical plume source localization,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
vol. 36, no. 5, pp. 1068–1080, Oct 2006.
[11] D. Zarzhitsky, “Physics-based approach to chemical source localization
using mobile robotic swarms,” Ph.D. dissertation, 2008.
[12] V. Braitenberg, Vehicles: Experiments in Synthetic Psychology. Boston,
MA, USA: MIT Press, 1984.
[13] C. Lytridis, G. S. Virk, Y. Rebour, and E. E. Kadar, “Odorbased navigational strategies for mobile agents,” Adapt. Behav.,
vol. 9, no. 3-4, pp. 171–187, Apr. 2001. [Online]. Available:
http://dx.doi.org/10.1177/10597123010093004
[14] H. Ishida, K. Suetsugu, T. Nakamoto, and T. Moriizumi, “Study of
autonomous mobile sensing system for localization of odor source
using gas sensors and anemometric sensors,” Sensors and Actuators
A: Physical, vol. 45, no. 2, pp. 153 – 157, 1994. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/0924424794008299
[15] R. Russell, Chemical source location and the RoboMole project. Australian Robotics and Automation Association, 2003, pp. 1 – 6.
[16] L. Marques and A. T. D. Almeida, “Electronic nose-based odour
source localization,” in 6th International Workshop on Advanced Motion
Control. Proceedings (Cat. No.00TH8494), April 2000, pp. 36–40.
[17] L. Marques, U. Nunes, and A. T. de Almeida, “Olfaction-based
mobile robot navigation,” Thin Solid Films, vol. 418, no. 1, pp.
51 – 58, 2002, proceedings from the International School on Gas
Sensors in conjunction with the 3rd European School of the NOSE
Network. [Online]. Available: http://www.sciencedirect.com/science/
article/pii/S004060900200593X
[18] W. Ren and R. W. Beard, “Consensus seeking in multiagent systems
under dynamically changing interaction topologies,” IEEE Transactions
on Automatic Control, vol. 50, no. 5, pp. 655–661, May 2005.
[19] W. Yu, G. Chen, W. Ren, J. Kurths, and W. X. Zheng, “Distributed
higher order consensus protocols in multiagent dynamical systems,”
IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 58,
no. 8, pp. 1924–1932, Aug 2011.
[20] A. T. Hayes, A. Martinoli, and R. M. Goodman, “Swarm robotic odor
localization: Off-line optimization and validation with real robots,”
Robotica, vol. 21, no. 4, pp. 427–441, Aug. 2003. [Online]. Available:
http://dx.doi.org/10.1017/S0263574703004946
[21] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Neural
Networks, 1995. Proceedings., IEEE International Conference on, vol. 4,
Nov 1995, pp. 1942–1948 vol.4.
[22] L. Marques, U. Nunes, and A. T. de Almeida, “Particle swarmbased olfactory guided search,” Autonomous Robots, vol. 20, no. 3,
pp. 277–287, Jun 2006. [Online]. Available: https://doi.org/10.1007/
s10514-006-7567-0
[23] W. Jatmiko, K. Sekiyama, and T. Fukuda, “A pso-based mobile robot for
odor source localization in dynamic advection-diffusion with obstacles
environment: theory, simulation and measurement,” IEEE Computational
Intelligence Magazine, vol. 2, no. 2, pp. 37–51, May 2007.
[24] Q. Lu, S. r. Liu, and X. n. Qiu, “A distributed architecture with two layers
for odor source localization in multi-robot systems,” in IEEE Congress
on Evolutionary Computation, July 2010, pp. 1–7.
[25] Q. Lu and Q. L. Han, “Decision-making in a multi-robot system for
odor source localization,” in IECON 2011 - 37th Annual Conference of
the IEEE Industrial Electronics Society, Nov 2011, pp. 74–79.
[26] Fan R. K. Chung, Spectral Graph Theory, ser. CBMS Regional
Conference Series in Mathematics. AMS and CBMS, 1997, vol. 92.
[Online]. Available: http://bookstore.ams.org/cbms-92
[27] K. David Young, Vadim I. Utkin and Umit Ozguner, “A control
engineer’s guide to sliding mode control,” IEEE transactions on Control
Systems Technology, vol. 7, no. 3, pp. 328–342, May 1999.
[28] M. L. Cao, Q. H. Meng, Y. X. Wu, M. Zeng, and W. Li, “Consensus
based distributed concentration-weighted summation algorithm for gasleakage source localization using a wireless sensor network,” in Proceedings of the 32nd Chinese Control Conference, July 2013, pp. 7398–7403.
| 3 |
Coresets for Dependency Networks
Alejandro Molina (first.last@tu-dortmund.de), CS Department, TU Dortmund, Germany
Alexander Munteanu (first.last@tu-dortmund.de), CS Department, TU Dortmund, Germany
Kristian Kersting (last@cs.tu-darmstadt.de), CS Dept. and Centre for CogSci, TU Darmstadt, Germany
arXiv:1710.03285v2 [cs.AI] 16 Oct 2017
Abstract
Many applications infer the structure of a probabilistic graphical model from data to elucidate the
relationships between variables. But how can we train graphical models on a massive data set?
In this paper, we show how to construct coresets—compressed data sets which can be used as a
proxy for the original data and have provably bounded worst-case error—for Gaussian dependency
networks (DNs), i.e., cyclic directed graphical models over Gaussians, where the parents of each
variable are its Markov blanket. Specifically, we prove that Gaussian DNs admit coresets of size
independent of the size of the data set. Unfortunately, this does not extend to DNs over members
of the exponential family in general. As we will prove, Poisson DNs do not admit small coresets.
Despite this worst-case result, we will provide an argument why our coreset construction for DNs
can still work well in practice on count data. To corroborate our theoretical results, we empirically
evaluated the resulting Core DNs on real data sets. The results demonstrate significant gains over
no or naive sub-sampling, even in the case of count data.
1. Introduction
Artificial intelligence and machine learning have achieved considerable successes in recent years,
and an ever-growing number of disciplines rely on them. Data is now ubiquitous, and there is great
value in understanding the data, e.g. by building probabilistic graphical models to elucidate the
relationships between variables. In the big data era, however, scalability has become crucial for
any useful machine learning approach. In this paper, we consider the problem of training graphical
models, in particular Dependency Networks Heckerman et al. (2000), on massive data sets. They
are cyclic directed graphical models, where the parents of each variable are its Markov blanket,
and have been proven successful in various tasks, such as collaborative filtering Heckerman et al.
(2000), phylogenetic analysis Carlson et al. (2008), genetic analysis Dobra (2009); Phatak et al.
(2010), network inference from sequencing data Allen and Liu (2013), and traffic as well as topic
modeling Hadiji et al. (2015).
Specifically, we show that Dependency Networks over Gaussians—arguably one of the most
prominent type of distribution in statistical machine learning—admit coresets of size independent of
the size of the data set. Coresets are weighted subsets of the data, which guarantee that models fitting
1
them will also provide a good fit for the original data set, and have been studied before for clustering
Badoiu et al. (2002); Feldman et al. (2011, 2013); Lucic et al. (2016), classification Har-Peled et al.
(2007); Har-Peled (2015); Reddi et al. (2015), regression Drineas et al. (2006, 2008); Dasgupta et al.
(2009); Geppert et al. (2017), and the smallest enclosing ball problem Badoiu and Clarkson (2003,
2008); Feldman et al. (2014); Agarwal and Sharathkumar (2015); we refer to Phillips (2017) for a
recent extensive literature overview. Our contribution continues this line of research and generalizes
the use of coresets to probabilistic graphical modeling.
Unfortunately, this coreset result does not extend to Dependency Networks over members of
the exponential family in general. We prove that Dependency Networks over Poisson random variables Allen and Liu (2013); Hadiji et al. (2015) do not admit (sublinear size) coresets: every single
input point is important for the model and needs to appear in the coreset. This is an important
negative result, since count data—the primary target of Poisson distributions—is at the center of
many scientific endeavors from citation counts to web page hit counts, from counts of procedures
in medicine to the count of births and deaths in census, from counts of words in a document to the
count of gamma rays in physics. Here, modeling one event such as the number of times a certain
lab test yields a particular result can provide an idea of the number of potentially invasive procedures that need to be performed on a patient. Thus, elucidating the relationships between variables
can yield great insights into massive count data. Therefore, despite our worst-case result, we will
provide an argument why our coreset construction for Dependency Networks can still work well in
practice on count data. To corroborate our theoretical results, we empirically evaluated the resulting
Core Dependency Networks (CDNs) on several real data sets. The results demonstrate significant
gains over no or naive sub-sampling, even for count data.
We proceed as follows. We review Dependency Networks (DNs), prove that Gaussian DNs
admit sublinear size coresets, and discuss the possibility to generalize this result to count data.
Before concluding, we illustrate our theoretical results empirically.
2. Dependency Networks
Most of the existing AI and machine learning literature on graphical models is dedicated to binary, multinomial, or certain classes of continuous (e.g. Gaussian) random variables. Undirected
models, aka Markov Random Fields (MRFs), such as Ising (binary random variables) and Potts
(multinomial random variables) models have found a lot of applications in various fields such as
robotics, computer vision and statistical physics, among others. Whereas MRFs allow for cycles in
the structures, directed models aka Bayesian Networks (BNs) require acyclic directed relationships
among the random variables.
Dependency Networks (DNs)—the focus of the present paper—combine concepts from directed
and undirected worlds and are due to Heckerman et al. (2000). Specifically, like BNs, DNs have directed arcs but they allow for networks with cycles and bi-directional arcs, akin to MRFs. This
makes DNs quite appealing for many applications because we can build multivariate models from
univariate distributions Allen and Liu (2013); Yang et al. (2015); Hadiji et al. (2015), while still permitting efficient structure learning using local estimators or gradient tree boosting. Generally, if the
data are fully observed, learning is done locally on the level of the conditional probability distributions for each variable, mixing directed and undirected as needed. Based on these local distributions,
samples from the joint distribution are obtained via Gibbs sampling. Indeed, the Gibbs sampling
neglects the question of a consistent joint probability distribution and instead makes only use of
2
local distributions. The generated samples, however, are often sufficient to answer many probability
queries.
Formally, let X = (X (1) , . . . , X (d) ) denote a random vector and x its instantiation. A Dependency Network (DN) on X is a pair (G, Ψ) where G = (V, E) is a directed, possibly cyclic, graph
where each node in V = [d] = {1, . . . , d} corresponds to the random variable X (i) . In the set of
directed edges E ⊆ V × V \ {(i, i) | i ∈ [d]}, each edge models a dependency between variables, i.e.,
if there is no edge between i and j then the variables X (i) and X (j) are conditionally independent
given the other variables X \i,j indexed by [d] \ {i, j} in the network. We refer to the nodes that have
an edge pointing to X (i) as its parents, denoted by pai = {X (j) | (j, i) ∈ E}. Ψ = {pi | i ∈ [d]} is
a set of conditional probability distributions associated with each variable X (i) ∼ pi , where
pi = p(x(i) | pai ) = p(x(i) | x\i ) .
As an example of such a local model, consider Poisson conditional probability distributions, as illustrated in Fig. 1 (left):
p(x^{(i)} \mid \mathrm{pa}_i) = \frac{\lambda_i(x^{\setminus i})^{x^{(i)}}}{x^{(i)}!}\, e^{-\lambda_i(x^{\setminus i})} .
Here, λi (x\i ) highlights the fact that the mean can have a functional form that is dependent on X (i) ’s
parents. Often, we will refer to it simply as λi . The construction of the local conditional probability
distribution is similar to the (multinomial) Bayesian network case. However, in the case of DNs,
the graph is not necessarily acyclic and p(x(i) | x\i ) typically has an infinite range, and hence cannot
be represented using a finite table of probability values. Finally, the full joint distribution is simply
defined as the product of local distributions:
p(x) = \prod_{i \in [d]} p(x^{(i)} \mid x^{\setminus i}) ,
also called the pseudo-likelihood. For the Poisson case, this reads
p(x) = \prod_{i \in [d]} \frac{\lambda_i^{x^{(i)}}}{x^{(i)}!}\, e^{-\lambda_i} .
Note, however, that doing so does not guarantee the existence of a consistent joint distribution, i.e.,
a joint distribution of which they are the conditionals. Bengio et al. (2014), however, have recently
proven the existence of a consistent distribution per given evidence, which does not have to be
known in closed form, as long as an unordered Gibbs sampler converges.
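To make the sampling scheme concrete, the following minimal Python sketch (our own illustration, not code from the paper) runs an unordered pseudo-Gibbs sweep over local Poisson conditionals; the coefficient matrix W and intercepts b are assumed to be given, e.g. from locally fitted Poisson GLMs, with W[i, j] the weight of variable j in the local model of variable i.

import numpy as np

def pseudo_gibbs_poisson_dn(W, b, n_sweeps=200, rng=None):
    # One chain of pseudo-Gibbs sampling for a Poisson DN:
    # lambda_i = exp(b[i] + sum_{j != i} W[i, j] * x[j]).
    rng = np.random.default_rng(rng)
    d = len(b)
    x = rng.poisson(1.0, size=d).astype(float)      # arbitrary initial state
    for _ in range(n_sweeps):
        for i in rng.permutation(d):                # unordered (random-scan) sweep
            mask = np.ones(d, dtype=bool)
            mask[i] = False
            lam = np.exp(b[i] + W[i, mask] @ x[mask])
            x[i] = rng.poisson(lam)
    return x

# toy usage: three variables with mild negative coupling (keeps the chain stable)
W = -0.05 * (np.ones((3, 3)) - np.eye(3))
b = np.zeros(3)
print(pseudo_gibbs_poisson_dn(W, b, rng=0))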
3. Core Dependency Networks
As argued, learning Dependency Networks (DNs) amounts to determining the conditional probability distributions from a given set of n training instances xi ∈ Rd representing the rows of the
data matrix X ∈ Rn×d over d variables. Assuming that p(x(i) | pai ) is parametrized as a generalized linear model (GLM) McCullagh and Nelder (1989), this amounts to estimating the parameters
γ (i) of the GLM associated with each variable X (i) , since this completely determines the local distributions, but p(x(i) | pai ) will possibly depend on all other variables in the network, and these
dependencies define the structure of the network. This view of training DNs as fitting d GLMs to
the data allows us to develop Core Dependency Networks (CDNs): Sample a coreset and train a DN
over certain members of the GLM family on the sampled coreset.
[Figure 1 plots: relative frequency versus number of goals with data and Poisson fit (left); a Poisson DN over nodes X^(0), X^(1), X^(2) (right)]
Figure 1: Illustration of Dependency Networks (DNs) using Poissons. (left) The number of goals
scored in soccer games follows a Poisson distribution. The plot shows the distribution
of home goals in the season 2012/13 of the German Bundesliga by the home team. The
home team scored on average λ = 1.59 goals per game. (right) Example structure of a
Poisson DN. The conditional distribution of each count variable given its neighbors is a
Poisson distribution. Similar to a Bayesian network, a Poisson DN is directed; however, it
also contains cycles. (Best viewed in color)
A coreset is a (possibly) weighted and usually considerably smaller subset of the input data that
approximates a given objective function for all candidate solutions:
Definition 1 (ε-coreset) Let X be a set of points from a universe U and let Γ be a set of candidate
solutions. Let f : U × Γ → R≥0 be a non-negative measurable function. Then a set C ⊂ X is an
ε-coreset of X for f , if
∀γ ∈ Γ : |f (X, γ) − f (C, γ)| ≤ ε · f (X, γ).
We now introduce the formal framework that we need towards the design of coresets for learning
dependency networks. A very useful structural property for `2 based objective (or loss) functions is
the concept of an ε-subspace embedding.
Definition 2 (ε-subspace embedding) An ε-subspace embedding for the columnspace of X is a
matrix S such that
\forall \gamma \in \mathbb{R}^d : \ (1 - \varepsilon)\,\|X\gamma\|^2 \le \|SX\gamma\|^2 \le (1 + \varepsilon)\,\|X\gamma\|^2
We can construct a sampling matrix S which forms an ε-subspace embedding with constant probabilty in the following way: Let U be any orthonormal basis for the columnspace of X. This basis
can be obtained from the singular value decomposition (SVD) X = U ΣV T of the data matrix.
Now let ρ = rank(U ) = rank(X) and define the leverage scores li = kUi∗ k2 /kU k2F = kUi∗ k2 /ρ
for i ∈ [n]. Now we fix a sampling size parameter k = O(ρ log(ρ/ε)/ε2 ), sample the input points
one-by-one with probability q_i = min{1, k · l_i} and reweight their contribution to the loss function by w_i = 1/q_i. Note that, for the sum of squares loss, this corresponds to defining a diagonal (sampling) matrix S by S_{ii} = 1/\sqrt{q_i} with probability q_i and S_{ii} = 0 otherwise. Also note that the expected number of samples is k = O(ρ log(ρ/ε)/ε²), which also holds with constant probability by Markov's inequality. Moreover, to give an intuition why this works, note that for any fixed γ ∈ R^d, we have
\mathbb{E}\,\|SX\gamma\|^2 = \sum_i q_i \Big(\frac{x_i\gamma}{\sqrt{q_i}}\Big)^2 = \sum_i (x_i\gamma)^2 = \|X\gamma\|^2 .
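For concreteness, here is a small NumPy sketch of this construction (our own illustration, not code from the paper): it computes an orthonormal basis of the columnspace via a thin SVD, derives the leverage scores, samples rows independently with probabilities q_i = min{1, k · l_i}, and rescales the kept rows by 1/sqrt(q_i), i.e., it returns the nonzero rows of SX.

import numpy as np

def leverage_score_coreset(X, eps=0.1, rng=None):
    # Leverage-score row sampling as described above.
    rng = np.random.default_rng(rng)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    rho = int(np.sum(s > s.max() * 1e-12))          # numerical rank
    U = U[:, :rho]
    lev = np.sum(U**2, axis=1) / rho                # l_i = ||U_{i*}||^2 / rho
    k = int(np.ceil(rho * np.log(rho / eps) / eps**2))
    q = np.minimum(1.0, k * lev)
    keep = rng.random(X.shape[0]) < q               # independent coin flips
    return X[keep] / np.sqrt(q[keep])[:, None], keep

# sanity check: ||SXg||^2 should be within (1 +/- eps) of ||Xg||^2
rng = np.random.default_rng(1)
X = rng.standard_normal((20000, 5))
SX, _ = leverage_score_coreset(X, eps=0.2, rng=1)
g = rng.standard_normal(5)
print(np.linalg.norm(X @ g)**2, np.linalg.norm(SX @ g)**2, SX.shape[0])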
The significantly stronger property of forming an ε-subspace embedding, according to Definition 2,
follows from a matrix approximation bound given in Rudelson and Vershynin (2007); Drineas et al.
(2008).
Lemma 3 Let X be an input matrix with rank(X) = ρ. Let S be a sampling matrix constructed
as stated above with sampling size parameter k = O(ρ log(ρ/ε)/ε2 ). Then S forms an ε-subspace
embedding for the columnspace of X with constant probability.
Proof Let X = UΣV^T be the SVD of X. By Theorem 7 in Drineas et al. (2008) there exists an absolute constant C > 1 such that
\mathbb{E}\,\|U^T S^T S U - U^T U\| \le C \sqrt{\tfrac{\log k}{k}}\, \|U\|_F \|U\| \le C \sqrt{\tfrac{\log k}{k}}\, \sqrt{\rho} \le \varepsilon,
where we used the fact that ‖U‖_F = √ρ and ‖U‖ = 1 by orthonormality of U. The last inequality holds by choice of k = Dρ log(ρ/ε)/ε² for a large enough absolute constant D > 1 such that (1 + log D)/D < 1/(4C²), since
\frac{\log k}{k} = \frac{\log(D\rho \log(\rho/\varepsilon)/\varepsilon^2)}{D\rho \log(\rho/\varepsilon)/\varepsilon^2} \le \frac{2\varepsilon^2 \log(D\rho \log(\rho/\varepsilon)/\varepsilon)}{D\rho \log(\rho/\varepsilon)} \le \frac{4\varepsilon^2 (\log(\rho/\varepsilon) + \log D)}{D\rho \log(\rho/\varepsilon)} \le \frac{4\varepsilon^2}{\rho} \cdot \frac{1 + \log D}{D} < \frac{\varepsilon^2}{C^2 \rho}.
By an application of Markov's inequality and rescaling ε, we can assume with constant probability
\|U^T S^T S U - U^T U\| \le \varepsilon.    (1)
We show that this implies the ε-subspace embedding property. To this end, fix γ ∈ R^d.
| \|SX\gamma\|^2 - \|X\gamma\|^2 | = |\gamma^T X^T S^T S X \gamma - \gamma^T X^T X \gamma| = |\gamma^T V \Sigma U^T S^T S U \Sigma V^T \gamma - \gamma^T V \Sigma U^T U \Sigma V^T \gamma| = |\gamma^T V \Sigma\, (U^T S^T S U - U^T U)\, \Sigma V^T \gamma| \le \|U^T S^T S U - U^T U\| \cdot \|\Sigma V^T \gamma\|^2 \le \|U^T S^T S U - U^T U\| \cdot \|X\gamma\|^2 \le \varepsilon \|X\gamma\|^2 .
The first inequality follows by submultiplicativity, and the second from rotational invariance of the spectral norm. Finally we conclude the proof by Inequality (1).
The question arises whether we can do better than O(ρ log(ρ/ε)/ε2 ). One can show by reduction
from the coupon collectors theorem that there is a lower bound of Ω(ρ log ρ) matching the upper
bound up to its dependency on ε. The hard instance is a d^m × d, m ∈ N, orthonormal matrix in which
the scaled canonical basis I_d / \sqrt{d^{m-1}} is stacked d^{m−1} times. The leverage scores are all equal to
1/dm , implying a uniform sampling distribution with probability 1/d for each basis vector. Any rank
ρ = d preserving sample must comprise at least one of them. This is exactly the coupon collectors
theorem with d coupons which has a lower bound of Ω(d log d) Motwani and Raghavan (1995).
The fact that the sampling is without replacement does not change this, since the reduction holds for
arbitrarily large m, creating sufficiently many copies of each element to simulate the sampling with
replacement Tropp (2011).
Now we know that with constant probability over the randomness of the construction algorithm,
S satisfies the ε-subspace embedding property for a given input matrix X. This is the structural key
property to show that actually SX is a coreset for Gaussian linear regression models and dependency
networks. Consider (G, Ψ), a Gaussian dependency network (GDN), i.e., a collection of Gaussian
linear regression models
Ψ = {pi (X (i) |X \i , γ (i) ) = N (X \i γ (i) , σ 2 ) | i ∈ [d]}
on an arbitrary digraph structure G Heckerman et al. (2000). The logarithm of the (pseudo-)likelihood
Besag (1975) of the above model is given by
\ln \mathcal{L}(\Psi) = \ln \prod_{i \in [d]} p_i = \sum_{i \in [d]} \ln p_i .
A maximum likelihood estimate can be obtained by maximizing this function with respect to γ = (γ^{(1)}, . . . , γ^{(d)}), which is equivalent to minimizing the GDN loss function
f_G(X, \gamma) = \sum_{i \in [d]} \|X^{\setminus i} \gamma^{(i)} - X^{(i)}\|^2 .
Theorem 4 Given S, an ε-subspace embedding for the columnspace of X as constructed above,
SX is an ε-coreset of X for the GDN loss function.
Proof Fix an arbitrary γ = (γ^{(1)}, . . . , γ^{(d)}) ∈ R^{d(d−1)}. Consider the affine map Φ : R^{d−1} × [d] → R^d, defined by Φ(γ^{(i)}) = I_d^{\setminus i} γ^{(i)} − e_i. Clearly Φ extends its argument from d − 1 to d dimensions by inserting a −1 entry at position i and leaving the other entries in their original order. Let β^{(i)} = Φ(γ^{(i)}) ∈ R^d. Note that for each i ∈ [d] we have
Xβ^{(i)} = XΦ(γ^{(i)}) = X^{\setminus i} γ^{(i)} − X^{(i)} ,    (2)
and each β^{(i)} is a vector in R^d. Thus, the triangle inequality and the universal quantifier in Definition 2 guarantee that
\Big| \sum_{i} \|SX\beta^{(i)}\|^2 - \sum_{i} \|X\beta^{(i)}\|^2 \Big| = \Big| \sum_{i} \big(\|SX\beta^{(i)}\|^2 - \|X\beta^{(i)}\|^2\big) \Big| \le \sum_{i} \big| \|SX\beta^{(i)}\|^2 - \|X\beta^{(i)}\|^2 \big| \le \sum_{i} \varepsilon \|X\beta^{(i)}\|^2 = \varepsilon \sum_{i} \|X\beta^{(i)}\|^2 .
The claim follows by substituting Identity (2).
It is noteworthy that computing one single coreset for the columnspace of X is sufficient, rather
than computing d coresets for the d different subspaces spanned by X \i .
From Theorem 4 it is straightforward to show that the minimizer found for the coreset is a good
approximation of the minimizer for the original data.
Corollary 5 Given an ε-coreset C of X for the GDN loss function, let γ̃ ∈ argminγ∈Rd(d−1) fG (C, γ).
Then it holds that
f_G(X, \tilde{\gamma}) \le (1 + 4\varepsilon) \min_{\gamma \in \mathbb{R}^{d(d-1)}} f_G(X, \gamma).
Proof Let γ* ∈ argmin_{γ ∈ R^{d(d−1)}} f_G(X, γ). Then
f_G(X, \tilde{\gamma}) \le \frac{1}{1-\varepsilon} f_G(C, \tilde{\gamma}) \le \frac{1}{1-\varepsilon} f_G(C, \gamma^*) \le \frac{1+\varepsilon}{1-\varepsilon} f_G(X, \gamma^*) \le (1 + 4\varepsilon) f_G(X, \gamma^*).
The first and third inequalities are direct applications of the coreset property, the second holds by optimality of γ̃ for the coreset, and the last follows from ε < 1/2.
Moreover, the coreset does not affect inference within GDNs. Recently, it was shown for (Bayesian)
Gaussian linear regression models that the entire multivariate normal distribution over the parameter
space is approximately preserved by ε-subspace embeddings Geppert et al. (2017), which generalizes the above. This implies that the coreset yields a useful pointwise approximation in Markov
Chain Monte Carlo inference via random walks like the pseudo-Gibbs sampler in Heckerman et al.
(2000).
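As a rough numerical illustration of Theorem 4 and Corollary 5 (our own sketch, not from the paper), one can fit the d local least-squares regressions on the rows returned by the leverage_score_coreset function from the earlier sketch and compare the GDN loss, evaluated on the full data, against the full-data fit:

import numpy as np

def fit_gdn(Z):
    # Least-squares fit of each column of Z on the remaining columns.
    d = Z.shape[1]
    gammas = []
    for i in range(d):
        rest = np.delete(Z, i, axis=1)
        gamma_i, *_ = np.linalg.lstsq(rest, Z[:, i], rcond=None)
        gammas.append(gamma_i)
    return gammas

def gdn_loss(X, gammas):
    # f_G(X, gamma) = sum_i ||X^{\i} gamma^{(i)} - X^{(i)}||^2
    return sum(np.linalg.norm(np.delete(X, i, axis=1) @ gammas[i] - X[:, i])**2
               for i in range(X.shape[1]))

rng = np.random.default_rng(2)
X = rng.standard_normal((50000, 4)) @ rng.standard_normal((4, 4))  # correlated columns
C, _ = leverage_score_coreset(X, eps=0.2, rng=2)                   # from the sketch above
full = gdn_loss(X, fit_gdn(X))
core = gdn_loss(X, fit_gdn(C))
print(full, core, core / full)  # the ratio should stay close to 1 (Corollary 5)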
4. Negative Result on Coresets for Poisson DNs
Naturally, the following question arises: Do (sublinear size) coresets exist for dependency networks
over the exponential family in general? Unfortunately, the answer is no! Indeed, there is no (sublinear size) coreset for the simpler problem of Poisson regression, which implies the result for Poisson
DNs. We show this formally by reduction from the communication complexity problem known as
indexing.
To this end, recall that the negative log-likelihood for Poisson regression is McCullagh and
Nelder (1989); Winkelmann (2008)
\ell(\gamma) := \ell(\gamma \mid X, Y) = \sum_{i} \exp(x_i\gamma) - y_i \cdot x_i\gamma + \ln(y_i!) .
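In code, this objective is only a few lines (a sketch of our own for reference; SciPy's gammaln supplies the ln(y_i!) term):

import numpy as np
from scipy.special import gammaln

def poisson_nll(gamma, X, y):
    # Negative log-likelihood of Poisson regression with the log link.
    eta = X @ gamma                                   # linear predictors x_i * gamma
    return np.sum(np.exp(eta) - y * eta + gammaln(y + 1.0))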
Theorem 6 Let ΣD be a data structure for D = [X, Y ] that approximates likelihood queries ΣD (γ)
for Poisson regression, such that
\forall \gamma \in \mathbb{R}^d : \ \eta^{-1} \cdot \ell(\gamma \mid D) \le \Sigma_D(\gamma) \le \eta \cdot \ell(\gamma \mid D).
If \eta < \frac{\exp(n/4)}{2n^2}, then \Sigma_D requires \Omega(n) bits of storage.
Proof We reduce from the indexing problem which is known to have Ω(n) one-way randomized
communication complexity Jayram et al. (2008). Alice is given a vector b ∈ {0, 1}n . She produces
for every i with b_i = 1 the points x_i = (r · ω^i, −1) ∈ R^3, where ω^i, i ∈ {0, . . . , n−1}, denote the n-th unit roots in the plane, i.e., the vertices of a regular n-polygon of radius r = n/(1 − cos(2π/n)) ≤ n^3, in canonical order. The corresponding counts are set to y_i = 1. She builds and sends Σ_D of size s(n) to Bob, whose task is to guess the bit b_j. He chooses to query γ = (ω^j, r · cos(2π/n)) ∈ R^3. Note that this affine hyperplane separates r · ω^j from the other scaled unit roots since it passes exactly through r · ω^{(j−1) mod n} and r · ω^{(j+1) mod n}. Also, all points are within distance 2r from each other by construction and consequently from the hyperplane. Thus, −2r ≤ x_iγ ≤ 0 for all i ≠ j.
If b_j = 0, then x_j does not exist and the cost is at most
\ell(\gamma) = \sum_i \exp(x_i\gamma) - y_i \cdot x_i\gamma + \ln(y_i!) \le \sum_i 1 + 2r + 1 \le 2n + 2nr \le 4n^4 .
If b_j = 1 then x_j is in the expensive halfspace and at distance exactly
x_j\gamma = (r\omega^j)^T \omega^j - r \cdot \cos(2\pi/n) = r \cdot \big(1 - \cos(2\pi/n)\big) = n .
So the cost is bounded below by \ell(\gamma) \ge \exp(n) - n + 1 \ge \exp(n/2).
Given \eta < \frac{\exp(n/4)}{2n^2}, Bob can distinguish these two cases based on the data structure only, by deciding whether \Sigma_D(\gamma) is strictly smaller or larger than \exp(n/4) \cdot 2n^2. Consequently s(n) = \Omega(n), since this solves the indexing problem.
Note that the bound is given in bit complexity, but restricting the data structure to a sampling
based coreset and assuming every data point can be expressed in O(d log n) bits, this means we still
have a lower bound of k = Ω(n / log n) samples.
Corollary 7 Every sampling based coreset for Poisson regression with approximation factor η < \frac{\exp(n/4)}{2n^2} as in Theorem 6 requires at least k = Ω(n / log n) samples.
At this point it seems very likely that a similar argument can be used to rule out any o(n)-space
constant approximation algorithm. This remains an open problem for now.
5. Why Core DNs for Count Data can still work
So far, we have a quite pessimistic view on extending CDNs beyond Gaussians. In the Gaussian
setting, where the loss is measured in squared Euclidean distance, the number of important points,
i.e., having significantly large leverage scores, is bounded essentially by O(d). This is implicit in
the original early works Drineas et al. (2008) and has been explicitly formalized later Langberg
and Schulman (2010); Clarkson and Woodruff (2013). It is crucial to understand that this is an
inherent property of the norm function, and thus holds for arbitrary data. For the Poisson GLM, in
contrast, we have shown that its loss function does not come with such properties from scratch. We
constructed a worst case scenario, where basically every single input point is important for the model
and needs to appear in the coreset. Usually, this is not the case with statistical models, where the data
is assumed to be generated i.i.d. from some generating distribution that fits the model assumptions.
Consider for instance a data reduction for Gaussian linear regression via leverage score sampling
vs. uniform sampling. It was shown that given the data follows the model assumptions of a Gaussian
distribution, the two approaches behave very similarly. Or, to put it another way, the leverage scores
are quite uniform. In the presence of more and more outliers generated by the heavier tails of t-distributions, the leverage scores increasingly outperform uniform sampling Ma et al. (2015).
The Poisson model
yi ∼ Poi(λi ), λi = exp(xi γ).
(3)
though being the standard model for count data, suffers from its inherent limitation on equidispersed data since E [yi |xi ] = V [yi |xi ] = exp(xi γ). Count data, however, is often overdispersed
especially for large counts. This is due to unobserved variables or problem-specific heterogeneity
and contagion effects. The log-normal Poisson model is known to be inferior for data which specifically follows the Poisson model, but turns out to be more powerful in modeling the effects that can
not be captured by the simple Poisson model. It has wide applications for instance in econometric
elasticity problems. We review the log-normal Poisson model for count data Winkelmann (2008)
y_i \sim \mathrm{Poi}(\lambda_i), \quad \lambda_i = \exp(x_i\gamma)\, u_i = \exp(x_i\gamma + v_i), \quad v_i = \ln u_i \sim \mathcal{N}(\mu, \sigma^2) .
A natural choice for the parameters of the log-normal distribution is µ = −σ²/2, in which case we
have
E [yi |xi ] = exp(xi γ + µ + σ 2 /2) = exp(xi γ) ,
V [yi |xi ] = E [yi |xi ] + (exp(σ 2 ) − 1)E [yi |xi ]2 .
It follows that V [yi |xi ] = exp(xi γ) + Ω(exp(xi γ)2 ) > exp(xi γ), where a constant σ 2 that is
independent of xi , controls the amount of overdispersion. Taking the limit for σ → 0 we arrive at
the simple model (3), since the distribution of vi = ln ui tends to δ0 , the deterministic Dirac delta
distribution which puts all mass on 0. The inference might aim for the log-normal Poisson model
directly as in Zhou et al. (2012), or it can be performed by (pseudo-)maximum likelihood estimation
of the simple Poisson model. The latter provides a consistent estimator as long as the log-linear mean
function is correctly specified, even if higher moments do not possess the limitations inherent in the
simple Poisson model Winkelmann (2008).
Summing up our review on the count modeling perspective, we learn that preserving the log-linear mean function in a Poisson model is crucial towards consistency of the estimator. Moreover,
modeling counts in a log-normal model gives us intuition why leverage score sampling can capture
the underlying linear model accurately: In the log-normal Poisson model, u follows a log-normal
distribution. It thus holds for ln λ = Xγ + ln u = Xγ + v, that
v \sim \mathcal{N}\Big(-\frac{\sigma^2}{2} \cdot \mathbf{1},\ \sigma^2 I_n\Big)
[Figure 2 plots: negative Gaussian/Poisson pseudo log-likelihood, log RMSE, and log training time of CDN, Uniform, and Full versus the training-data sample size (10%–100%), for MNIST (upper row) and the traffic dataset (lower row)]
Figure 2: (Q1) Performance (the lower, the better) of Gaussian CDNs on MNIST (upper row) and
Poisson CNDs on the traffic dataset (lower row) 10-fold cross-validated. Shown are the
negative log pseudo likelihood (left), the squared error loss (middle, in log-space) as
well as the training time (right, in log-space) on the y-axis for different proportions of
the data sampled (x axis). Please note the jump in the x-axis after 40%. As one can see,
CDNs (blue) quickly approach the predictive performance of the full dataset (Full, black).
Uniform sampling (Uniform, red) does not perform as well as CDNs. Moreover, CDNs
can be orders of magnitude faster than DNs on the full dataset and scale similar to uniform
sampling. This is also supported by the vertical lines. They denote the mean performances
(the more to the left, the better) on the top axes. (Best viewed in color)
by independence of the observations, which implies
\ln \lambda \sim \mathcal{N}\Big(X\gamma - \frac{\sigma^2}{2} \cdot \mathbf{1},\ \sigma^2 I_n\Big).
Omitting the bias µ = −σ²/2 in each intercept term (which can be cast into X), we notice that this yields again an ordinary least squares problem ‖Xγ − ln(λ)‖² defined in the columnspace of X.
There is still a missing piece in our argumentation. In the previous section we have used that the
coreset construction is an ε-subspace embedding for the columnspace of the whole data set including
the dependent variable, i.e., for [X, ln(λ)]. We face two problems. First, λ is only implicitly given
in the data, but is not explicitly available. Second, λ is a vector derived from X \i in our setting
and might be different for any of the d instances. Fortunately, it was shown via more complicated
arguments Drineas et al. (2008), that it is sufficient for a good approximation, if the sampling is
done obliviously to the dependent variable. The intuition comes from the fact that the loss of any
point in the subspace can be expressed via the projection of ln(λ) onto the subspace spanned by X,
and the residual of its projection. A good approximation of the subspace implicitly approximates
the projection of any fixed vector, which is then applied to the residual vector of the orthogonal
projection. This solves the first problem, since it is only necessary to have a subspace embedding
for X. The second issue can be addressed by increasing the sample size by a factor of O(log d) for
boosting the error probability to O(1/d) and taking a union bound.
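This argument can be illustrated with a small simulation (our own sketch, not part of the paper): generate counts from the log-normal Poisson model above, use ln(y + 1) as a crude observable stand-in for ln λ (an assumption made only for this illustration, since λ is not observed), and compare the ordinary least squares fit on the full data with the fit on rows sampled by leverage scores of X alone, i.e., obliviously to the dependent variable.

import numpy as np

rng = np.random.default_rng(3)
n, sigma = 100000, 0.5
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
gamma_true = np.array([1.0, 0.4, -0.3, 0.2])
v = rng.normal(-sigma**2 / 2, sigma, size=n)         # log-normal heterogeneity term
y = rng.poisson(np.exp(X @ gamma_true + v))
z = np.log(y + 1.0)                                  # observable proxy for ln(lambda)

# leverage scores of X only -- the sampling never looks at z
U, _, _ = np.linalg.svd(X, full_matrices=False)
lev = np.sum(U**2, axis=1) / X.shape[1]
k = 2000
q = np.minimum(1.0, k * lev)
keep = rng.random(n) < q
w = 1.0 / np.sqrt(q[keep])                           # rescaling by 1/sqrt(q_i)

full_fit, *_ = np.linalg.lstsq(X, z, rcond=None)
core_fit, *_ = np.linalg.lstsq(X[keep] * w[:, None], z[keep] * w, rcond=None)
print(np.round(full_fit, 3), np.round(core_fit, 3))  # the two fits should nearly agree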
Sample portion   MNIST GCDN   MNIST GUDN   Traffic PCDN   Traffic PUDN
10%              18.03%       11162.01%    6.81%          9.6%
20%              0.57%        13.86%       2.9%           3.17%
30%              0.01%        13.33%       2.04%          1.68%
40%              0.01%        2.3%         1.59%          0.99%
Table 1: (Q1) Comparison of the empirical relative error (the lower, the better). Best results per
dataset are bold. Both Gaussian (GCDNs) and Poisson (PCDNs) CDNs recover the model
well, with a fraction of the training data. Uniformly sampled DNs (UDNs) lag behind as
the sample size drops.
6. Empirical Illustration
Our intention here is to corroborate our theoretical results by investigating empirically the following
questions: (Q1) How does the performance of CDNs compare to DNs with access to the full training
data set and to a uniform sample from the training data set? and how does the empirical error behave
according to the sample sizes? (Q2) Do coresets affect the structure recovered by the DN? To this
aim, we implemented (C)DNs in Python calling R. All experiments ran on a Linux machine (56
cores, 4 GPUs, and 512GB RAM).
Benchmarks on MNIST and Traffic Data (Q1): We considered two datasets. In a first experiment, we used the MNIST1 data set of handwritten labeled digits. We employed the training
set consisting of 55000 images, each with 784 pixels, for a total of 43,120,000 measurements, and
trained Gaussian DNs on it. The second data set we considered contains traffic count measurements
on selected roads around the city of Cologne in Germany Ide et al. (2015). It consists of 7994 timestamped measurements taken by 184 sensors for a total of 1,470,896 measurements. On this dataset
we trained Poisson DNs. For each dataset, we performed 10 fold cross-validation for training a full
DN (Full) using all the data, leverage score sampling coresets (CDNs), and uniform samples (Uniform), for different sample sizes. We then compared the predictions made by all the DNs and the
time taken to train them. For the predictions on the MNIST dataset, we clipped the predictions to
the range [0,1] for all the DNs. For the Traffic dataset, we computed the predictions bxc of every
measurement x rounded to the largest integer less than or equal to x.
Fig. 2 summarizes the results. As one can see, CDNs outperform DNs trained on full data and
are orders of magnitude faster. Compared to uniform sampling, coresets are competitive. Actually,
as seen on the traffic dataset, CDNs can have more predictive power than the “optimal” model
using the full data. This is in line with Mahoney (2011), who observed that coresets implicitly
introduce regularization and lead to more robust output. Table 1 summarizes the empirical relative
errors |f (X, γ̃) − f (X, γ ∗ )|/f (X, γ ∗ ) between (C/U)DNs γ̃ and DNs γ ∗ trained on all the data.
CDNs clearly recover the original model, at a fraction of training data. Overall, this answers (Q1)
affirmatively.
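Expressed with the helpers from the sketch in Section 3, the relative error reported in Table 1 is simply the following quantity (our own notation, reusing gdn_loss from that sketch):

def empirical_relative_error(X, gamma_tilde, gamma_star, loss=gdn_loss):
    # |f(X, gamma~) - f(X, gamma*)| / f(X, gamma*), as reported in Table 1
    full = loss(X, gamma_star)
    return abs(loss(X, gamma_tilde) - full) / full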
Relationship Elucidation (Q2): We investigated the performance of CDNs when recovering the
graph structure of word interactions from a text corpus. For this purpose, we used the NIPS bag-
1. http://yann.lecun.com/exdb/mnist/
2. https://archive.ics.uci.edu/ml/datasets/bag+of+words
[Figure 3 graphs: word-interaction networks over the 100 most frequent NIPS words, with panels for Gaussian CDN (top) and Poisson CDN (bottom) at 40%, 70%, and 100% sample sizes]
Figure 3: (Q2) Elucidating the relationships between random variables. Shown are the (positive)
dependency structures of Gaussian (top) and Poisson (bottom) CDNs on NIPS and different learning sampling sizes: using 40% (Left) , 70% (Middle) and 100% (Right). The
edges show the 70 top thresholded positive coefficients of the GLMs. The colors of the
edges represent modularity. As one can see, CDNs elucidate relationships among the
words that make semantic sense and approach the structure learned using the full
dataset. For a quantitative assessment, see Tab. 2. (Best viewed in color)
of-words dataset. It contains 1,500 documents with a vocabulary above 12k words. We considered
the 100 most frequent words.
Fig. 3 illustrates the results qualitatively. It shows three CDNs of sampling sizes 40%, 70% and
100% for Gaussians (top) after a log(x+1) transformation and for Poissons (bottom): CDNs capture
well the gist of the NIPS corpus. Table 2 confirms this quantitatively. It shows the Frobenius norms
between the DNs: CDNs capture the gist better than naive, i.e., uniform sampling. This answers
(Q2) affirmatively.
To summarize our empirical results, the answers to questions (Q1) and (Q2) show the benefits
of CDNs.
7. Conclusions
Inspired by the question of how we can train graphical models on a massive dataset, we have studied
coresets for estimating Dependency networks (DNs). We established the first rigorous guarantees
for obtaining compressed ε-approximations of Gaussian DNs for large data sets. We proved worst-case impossibility results on coresets for Poisson DNs. A review of log-normal Poisson modeling
of counts provided deep insights into why our coreset construction still performs well for count data
in practice.
Sample portion   UDN Gaussian   UDN Poisson   CDN Gaussian   CDN Poisson
40%              9.0676         6.4042        3.9135         0.6497
70%              4.8487         1.6262        2.6327         0.3821
Table 2: (Q2) Frobenius norm of the difference of the adjacency matrices (the lower, the better)
recovered by DNs trained on the full data and trained on a uniform subsample (UDN) resp.
coresets (CDNs) of the training data. The best results per statistical type (Gaussian/Poisson)
are bold. CDNs recover the structure better than UDNs.
Our experimental results demonstrate that the resulting Core Dependency Networks (CDNs) can
achieve significant gains over no or naive sub-sampling, even in the case of count data, making it
possible to learn models on much larger datasets using the same hardware.
CDNs provide several interesting avenues for future work. The conditional independence assumption opens the door to explore hybrid multivariate models, where each variable can potentially
come from a different GLM family or link function, on massive data sets. This can further be used
to hint at independencies among variables in the multivariate setting, making them useful in many
other large data applications. Generally, our results may pave the way to establish coresets for deep
models using the close connection between dependency networks and deep generative stochastic
networks Bengio et al. (2014), sum-product networks Poon and Domingos (2011); Molina et al.
(2017), as well as other statistical models that build multivariate distributions from univariate ones
Yang et al. (2015).
Acknowledgements: This work has been supported by Deutsche Forschungsgemeinschaft (DFG)
within the Collaborative Research Center SFB 876 ”Providing Information by Resource-Constrained
Analysis”, projects B4 and C4.
References
Pankaj K. Agarwal and R. Sharathkumar. Streaming algorithms for extent problems in high dimensions. Algorithmica, 72(1):83–98, 2015. doi: 10.1007/s00453-013-9846-4. URL https:
//doi.org/10.1007/s00453-013-9846-4.
Genevera I. Allen and Zhandong Liu. A local poisson graphical model for inferring networks from
sequencing data. IEEE Transactions on Nanobioscience, 12(3):189–198, 2013. ISSN 15361241.
Mihai Badoiu and Kenneth L. Clarkson. Smaller core-sets for balls. In Proc. of SODA, pages
801–802, 2003.
Mihai Badoiu and Kenneth L. Clarkson. Optimal core-sets for balls. Computational Geometry, 40
(1):14–22, 2008. doi: 10.1016/j.comgeo.2007.04.002. URL https://doi.org/10.1016/j.comgeo.2007.
04.002.
Mihai Badoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In Proceedings of STOC, pages 250–257, 2002.
Y. Bengio, E. Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by
backprop. In Proc. of ICML, pages 226–234, 2014.
Julian Besag. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society, Series
D, 24(3):179–195, 1975.
Jonathan M. Carlson, Zabrina L. Brumme, Christine M. Rousseau, Chanson J. Brumme, Philippa
Matthews, Carl Myers Kadie, James I. Mullins, Bruce D. Walker, P. Richard Harrigan, Philip
J. R. Goulder, and David Heckerman. Phylogenetic dependency networks: Inferring patterns of
CTL escape and codon covariation in HIV-1 gag. PLoS Computational Biology, 4(11), 2008.
Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input
sparsity time. In Proc. of STOC, pages 81–90, 2013.
Anirban Dasgupta, Petros Drineas, Boulos Harb, Ravi Kumar, and Michael W. Mahoney. Sampling
algorithms and coresets for `p regression. SIAM Journal on Computing, 38(5):2060–2078, 2009.
doi: 10.1137/070696507. URL https://doi.org/10.1137/070696507.
Adrian Dobra. Variable selection and dependency networks for genomewide data. Biostatistics, 10
(4):621–639, 2009.
Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Sampling algorithms for `2 regression
and applications. In Proc. of SODA, pages 1127–1136, 2006. URL http://dl.acm.org/citation.cfm?
id=1109557.1109682.
Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008. doi:
10.1137/07070471X. URL https://doi.org/10.1137/07070471X.
Dan Feldman, Matthew Faulkner, and Andreas Krause. Scalable training of mixture models via
coresets. In Proc. of NIPS, 2011.
Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size
coresets for k-means, PCA and projective clustering. In Proc. of SODA, pages 1434–1453, 2013.
Dan Feldman, Alexander Munteanu, and Christian Sohler. Smallest enclosing ball for probabilistic
data. In Proc. of SOCG, pages 214–223, 2014. doi: 10.1145/2582112.2582114. URL http:
//doi.acm.org/10.1145/2582112.2582114.
Leo N Geppert, Katja Ickstadt, Alexander Munteanu, Jens Quedenfeld, and Christian Sohler. Random projections for Bayesian regression. Statistics and Computing, 27(1):79–101, 2017.
Fabian Hadiji, Alejandro Molina, Sriraam Natarajan, and Kristian Kersting. Poisson dependency
networks: Gradient boosted models for multivariate count data. MLJ, 100(2-3):477–507, 2015.
Sariel Har-Peled. A simple algorithm for maximum margin classification, revisited. arXiv, 1507.01563, 2015. URL http://arxiv.org/abs/1507.01563.
Sariel Har-Peled, Dan Roth, and Dav Zimak. Maximum margin coresets for active and noise tolerant
learning. In Proc. of IJCAI, pages 836–841, 2007.
D. Heckerman, D. Chickering, C. Meek, R. Rounthwaite, and C. Kadie. Dependency networks for
density estimation, collaborative filtering, and data visualization. Journal of Machine Learning
Research, 1:49–76, 2000.
Christoph Ide, Fabian Hadiji, Lars Habel, Alejandro Molina, Thomas Zaksek, Michael Schreckenberg, Kristian Kersting, and Christian Wietfeld. LTE connectivity and vehicular traffic prediction
based on machine learning approaches. In Proc. of IEEE VTC Fall, 2015.
T. S. Jayram, Ravi Kumar, and D. Sivakumar. The one-way communication complexity of Hamming
distance. Theory of Computing, 4(1):129–135, 2008. doi: 10.4086/toc.2008.v004a006. URL
https://doi.org/10.4086/toc.2008.v004a006.
Michael Langberg and Leonard J. Schulman. Universal epsilon-approximators for integrals. In
Proc. of SODA, 2010.
Mario Lucic, Olivier Bachem, and Andreas Krause. Strong coresets for hard and soft bregman
clustering with applications to exponential family mixtures. In Proc. of AISTATS, pages 1–9,
2016.
Ping Ma, Michael W. Mahoney, and Bin Yu. A statistical perspective on algorithmic leveraging.
JMLR, 16:861–911, 2015. URL http://dl.acm.org/citation.cfm?id=2831141.
Michael W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in
Machine Learning, 3(2):123–224, 2011. doi: 10.1561/2200000035. URL https://doi.org/10.1561/
2200000035.
Peter McCullagh and John Nelder. Generalized Linear Models. Chapman and Hall, 1989.
Alejandro Molina, Sriraam Natarajan, and Kristian Kersting. Poisson sum-product networks: A
deep architecture for tractable multivariate poisson distributions. In Proc. of AAAI, 2017.
Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge Univ. Press, 1995.
ISBN 0-521-47465-5.
Aloke Phatak, Harri T. Kiiveri, Line Harder Clemmensen, and William J. Wilson. NetRaVE:
constructing dependency networks using sparse linear regression. Bioinformatics, 26(12):1576–
1577, 2010.
Jeff M Phillips. Coresets and sketches. In Handbook of Discrete and Computational Geometry.
2017.
Hoifung Poon and Pedro Domingos. Sum-Product Networks: A New Deep Architecture. Proc. of
UAI, 2011.
Sashank J. Reddi, Barnabás Póczos, and Alexander J. Smola. Communication efficient coresets for
empirical loss minimization. In Proc. of UAI, pages 752–761, 2015.
Mark Rudelson and Roman Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54(4):21, 2007. doi: 10.1145/1255443.1255449.
URL http://doi.acm.org/10.1145/1255443.1255449.
Joel A. Tropp. Improved analysis of the subsampled randomized hadamard transform. Advances
in Adaptive Data Analysis, 3(1-2):115–126, 2011. doi: 10.1142/S1793536911000787. URL
https://doi.org/10.1142/S1793536911000787.
Rainer Winkelmann. Econometric Analysis of Count Data. Springer, 5th edition, 2008. ISBN
3540776486, 9783540776482.
Eunho Yang, Pradeep Ravikumar, Genevera I. Allen, and Zhandong Liu. On graphical models via
univariate exponential family distributions. JMLR, 16:3813–3847, 2015.
Mingyuan Zhou, Lingbo Li, David B. Dunson, and Lawrence Carin. Lognormal and gamma mixed
negative binomial regression. In Proceedings of ICML, 2012. URL http://icml.cc/2012/papers/665.
pdf.
| 2 |
PROC. OF THE 6th EUR. CONF. ON PYTHON IN SCIENCE (EUROSCIPY 2013)
CATOS: Computer Aided Training/Observing System
Jinook Oh∗†
arXiv:1404.6384v1 [cs.CE] 25 Apr 2014
Abstract—In animal behavioral biology, there are several cases in which an
autonomous observing/training system would be useful. 1) Observation of
certain species continuously, or for documenting specific events, which happen irregularly; 2) Long-term intensive training of animals in preparation for
behavioral experiments; and 3) Training and testing of animals without human
interference, to eliminate potential cues and biases induced by humans. The
primary goal of this study is to build a system named CATOS (Computer Aided
Training/Observing System) that could be used in the above situations. As a
proof of concept, the system was built and tested in a pilot experiment, in which
cats were trained to press three buttons differently in response to three different
sounds (human speech) to receive food rewards. The system was built and used
for about 6 months, successfully training two cats. One cat learned to press a
particular button, out of three buttons, to obtain the food reward with over 70
percent correctness.
Index Terms—animal training, animal observing, automatic device
1 INTRODUCTION
It is often the case in animal behavioral biology that a large
amount of human resources, time, and data storage (such
as video recordings) are required in animal observation and
training. Some representative examples of these cases are:
• Observation of certain species continuously or monitoring
for specific events, which occur irregularly, when behavior of certain species during any time period or specific
time period, such as nocturnal behaviors, are investigated.
• Certain experiments require a prolonged training period,
sometimes over a year. This type of experiment requires
reliable responses, which may not correspond to usual
behavior patterns, from animals in tasks. Therefore, training may require a long period of time until the subject is
ready to be tested. Additionally, long periods of human
supervised training can introduce unintended cues and
biases for animals.
In the first case, an autonomous system for observing
animals can save human resources and reduce the amount of
data storage. The reduced amount of data can also conserve
other types of human resources such as investigation and
maintenance of large-scale data. There have been attempts
to build autonomous observing or surveillance systems in the
fields of biology, such as Kritzler et al. [Kri08]’s work, and
* Corresponding author: [email protected]
† Cognitive Biology Dept., University of Vienna
Copyright © 2014 Jinook Oh. This is an open-access article dis-
tributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original author and source are credited.
http://creativecommons.org/licenses/by/3.0/
security systems, such as Belloto et al. [Bel09], Vallejo et al.
[Val09], for instance. There are also commercial products for
surveillance systems with various degrees of automation, or
incorporating artificial intelligence. However, the intelligence
of each system is case-specific and it is difficult to apply
these specific systems to novel situations without considerable
adjustments. In the second case, an autonomous system for
prolonged, intensive training can also save human resources
and eliminate potential cues and biases caused by humans.
Training with an autonomous system is an extension of traditional operant conditioning chambers and many modern and
elaborated versions have been developed and used, such as in
Markham et al. [Mar96], Takemoto et al. [Tak11], Kangas
et al. [Kan12], Steurer et al. [Ste12], and Fagot & Bonte
[Fag09]. However, many of the previous devices use commercial software. Also, they do not possess the observational
features developed in the current project. It would be useful to
have an open-source, relatively low-budget, and modularized
system which could be customized for the observation, training
and the experimentation on animal subjects of various species.
CATOS, the system built in the present study, fulfills these
necessities. The difference between the previous systems and
CATOS (Computer Aided Training/Observing System) in the
present work is that the animals do not have to be captured
or transported to a separated space at a specific time in
order to be trained. The disadvantages of separating animals
(e.g., primates) are well-known, and include stress on animals
separated from their group or moved from their usual confines,
the risky catching procedure for both animal and human
(cf. Fagot & Bonte [Fag09]). Similar arguments apply to
most animal species, especially when they are social. The
automatic learning device for monkeys (ALDM) described in
Fagot & Bonte [Fag09] is very similar to the trainer aspect
of CATOS described in the present work, but CATOS is
different in the following features. First of all, it aims to be open-source and more modular, so that it can be more easily
adjusted and adapted to different species and experiments.
Another feature is that CATOS is equipped with various
observational features, including visual and auditory recording
and recognition through video camera and microphone, which
make the system able to interact with the subjects, such as
reacting immediately to a subject with a motion detection from
a camera or a sound recognition from a microphone. CATOS
should offer the following advantages.
•
The system should be flexible in terms of its adjustability
and the extendibility to various projects and species. The
•
•
•
•
software should be open-source, and both software and
hardware components should be modularized as much as
possible, thus the system reassembly for researchers in
animal behavioral biology is practical.
The system should have various observational features
applicable to a broad range of animal species and observational purposes.
The system should perform continuous monitoring, and
it should record video and/or sound only when a set of
particular conditions is fulfilled. This would reduce the
amount of data produced during the procedure.
The system should have actuators to react in certain
situations, which allows it to act as a trainer/experimenter.
The human trainer/experimenter designs the procedure by
adjusting parameters and modules, but the actual performance should be done by the system. In this way, the
system could help reducing the amount of time required
for training, and eliminating cues/biases which might be
induced by the human interferences.
With this system, the animal should not have to be
transported to a certain space, or separated from its group,
for training. The animals should be able to choose when
to start a trial on their own.
Two CATOS prototypes have been built during this study.
The first build of CATOS has 3 pushbuttons as a main input
device for cats and the second build has a touch-screen as
a main input device. The first build was an initial attempt
to build and test such a system. The second build is the
final product of the study. The basic structures of these two
builds are more or less the same. The differences are that the
second version has improved functions and it uses the touchscreen instead of pushbuttons. The first build of CATOS was
tested with domestic cats (Felis catus) to train them to press
three different buttons differently depending on the auditory
stimuli (three different human speech sounds). The final goal
of this training is to investigate human speech perception
in cats. There is no doubt that many animal species can
recognize some words in human speech. The examples of
speech perception in dogs and chimpanzees can be found in
the work of Kaminski et al. [Kam04] and Heimbauer et al.
[Hei11] respectively. In some cases, animals can even properly
produce words with specific purposes. An example of speech
perception and production in a parrot can be found in the work
of Pepperberg [Pep87]. Despite these findings, there is ongoing
debate about whether the same perceptual mechanisms are
used in speech recognition by humans and animals (Fitch
[Fit11]). To investigate this issue, animals have to be trained
to show different and reliable responses to different human
speech sounds. Then, we can test which features of human
speech are necessary for different animal species to understand
it. Thus, the final aim of the training in this study would be
to obtain cats showing different responses to different human
speech sounds with statistical significance (over 75 percent).
Before reaching this final goal, several smaller steps and goals
are required.
Fig. 1: Overall system diagram.
2 BRIEF DESCRIPTION OF CATOS (COMPUTER AIDED TRAINING/OBSERVING SYSTEM)
The overall system is composed of a combination of
software and hardware components. The software components mainly consist of a Python script named
'AA.<version>.py' and the program for the microcontroller.
The ’AA’ runs all of the necessary processes and communicates with the microcontroller program. The microcontroller
program operates sensors and actuators as it communicates
with the ’AA’ program. The hardware components are composed of various devices, some of which are directly connected
to the computer via USB cables. Some other devices only have
GPIO (General Purpose Input Output) pins; therefore they are
connected to the microcontroller. The microcontroller itself is
connected to the computer via a USB cable. The hardware
devices, which are directly connected via USB cables, can be
accessed using various software modules, which are imported
into the ’AA’ program. The access to other devices only
using GPIO pins is performed in the microcontroller and the
’AA’ program simply communicates with the microcontroller
program via a serial connection for sending commands to
actuators and receiving values from sensors.
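As an illustration only (the actual CATOS message format and port names are not described here), this kind of serial link between the 'AA' program and the microcontroller can be sketched with pySerial; the command strings below are hypothetical.

```python
import serial  # pySerial

# A simple line-based protocol is assumed here purely for illustration.
mcu = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_command(cmd):
    """Send a command string such as 'FEED' or 'LIGHT_ON' to the microcontroller."""
    mcu.write((cmd + "\n").encode("ascii"))

def read_sensors():
    """Read one comma-separated line of sensor values reported by the microcontroller."""
    line = mcu.readline().decode("ascii", errors="ignore").strip()
    return line.split(",") if line else []
```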
The software for this system is called AA (Agent for
Animals). This software was built with the help of many external
libraries such as OpenCV [Bra00] and NumPy/SciPy [Jon01].
Once it starts, seven processes are launched using the multiprocessing package of Python, and the program runs until the user terminates
it. Multiprocessing was used because of concerns about the heavy
image-processing load from multiple webcams.
The number of processes can be changed, as some
of them can be turned on or off. These processes include
a video-in process for each camera, a video-out process, an
audio-in process, an audio-out process, a schema process, and
a message-board process (Figure 1); a minimal sketch of this process
layout is given below, after the description of the output files. Even though some of
these processes have quite simple tasks, they were separated
in order to prevent them from interfering with each other
and/or becoming a bottleneck. The system has to process
visual, auditory, and other sensory and motor information
simultaneously in order to recognize changes in the environment and respond to them properly.
Fig. 2: AA_DataViewer
The output data, such as captured video
input images, recorded WAV files, movement-records, CSV
files for trial results, and the log file are temporarily stored in
the ’output’ folder. After the daily session is finished, all of
these output files go through an archiving process which can
include, but is not restricted to, generating movies, generating
images with the movement analysis, labeling sound files, and
moving different types of files into the categorized subfolders
of an archiving folder named with a timestamp.
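A minimal sketch of how the seven processes described above might be launched with Python's multiprocessing package follows; the worker bodies and the message-board channel are placeholders, not CATOS's actual implementation.

```python
import multiprocessing as mp

def run(name, board):
    # Placeholder worker: each real process (video-in, audio-in, schema, ...)
    # would loop here, posting events to the shared message-board queue.
    pass

if __name__ == "__main__":
    board = mp.Queue()  # stands in for the message-board communication channel
    names = ["video_in_0", "video_in_1", "video_out",
             "audio_in", "audio_out", "schema", "message_board"]
    workers = [mp.Process(target=run, args=(n, board), daemon=True) for n in names]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```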
Besides combining all the above modules and implementing
some common functions, one more Python program was
implemented to facilitate the process of analyzing the recorded
data. The program is called "AA DataViewer"; it is based
on the wxPython GUI toolkit and uses Matplotlib [Hun07] for drawing
graphs (Figure 2). It loads the log file, the result CSV (comma
separated values) file containing the results of the trial, the
movement-record CSV files, the MP4 movie files, and the
WAV files from one folder containing all data collected for
one session (day). For each video clip, there is a JPEG image
showing the movements of the blobs. The circles in the image
represent the positions of the blobs and their color represents
the time-flow, with the black corresponding to the beginning
of the movie, and the white to the end of the movie. A line
connecting multiple circles means that those blobs occurred at
the same time. Another feature of this program is its ability to
generate a graph with selected sessions. In the ’archive’ folder,
there are sub-folders, each of which contains all the data for a
session. When the ’select sessions’ button is clicked, a pop-up
window appears for selecting multiple folders. The result data
from these selected sub-folders of ’archive’ folder is drawn
as a graph using Matplotlib [Hun07]. By visualizing the data
for a certain period, this helps the trainer or experimenter quickly
assess the current status of the training procedure.
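The per-session graph described above might be produced along the following lines; the CSV column name 'result' is an assumption, since the exact file layout is not specified here.

```python
import csv
import matplotlib.pyplot as plt

def plot_sessions(result_csv_paths):
    """Plot the percentage of correct trials per selected session (CSV layout assumed)."""
    rates = []
    for path in result_csv_paths:
        with open(path) as fh:
            rows = list(csv.DictReader(fh))
        correct = sum(1 for r in rows if r.get("result") == "correct")
        rates.append(100.0 * correct / max(len(rows), 1))
    plt.plot(range(1, len(rates) + 1), rates, marker="o")
    plt.xlabel("Session")
    plt.ylabel("Correct trials (%)")
    plt.show()
```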
The two feeders used in this study are devices mainly
comprising an Arduino microcontroller (see http://www.
arduino.cc/), a motor shield for the microcontroller, a servomotor, and a frame encasing the whole feeder. Both feeder
variants work in a similar way, by rotating the servomotor
by a certain number of degrees, although the second feeder
releases more consistent amounts of food, due to the use of an Archimedes' screw.
Fig. 3: Automatic feeder
Fig. 4: Circuit with a microcontroller
Initially, an estimate of the amount of food left in the food
container was obtained using an IR distance sensor, but this
feature was discarded in the second build since the distance
information from the IR sensor was not accurate enough for
this application. The second feeder confirms the emission of a
food reward via a piezoelectric sensor, which is positioned
right below the Archimedes' screw (Figure 3).
Communication between the Arduino chip and the main
computer was accomplished by using the Arduino module of
the 'AA' program. In the circuit (Figure 4),
• The temperature sensor measures the temperature inside the protective wooden platform.
• The photocell sensor measures the ambient light level.
• The light bulb can be turned on when the photocell sensor indicates that the ambient light level is below a user-defined threshold.
• Two fans are turned on when the temperature sensor indicates that the temperature in the platform is too high.
• The piezoelectric sensor is read while the servomotor is actuating, in order to confirm the occurrence of the food reward. This sensor reading is required because occasionally the food dispensing fails due to the combination of the short motor activation time (<0.5 seconds) and the shape of the dry food pieces (which can fit into other pieces easily and then fail to emerge).
• The servomotor is responsible for dispensing food by turning the Archimedes' screw back and forth (see the sketch after this list).
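The dispense-and-confirm logic described in the last two items could look roughly as follows on the computer side; the 'FEED' and 'PIEZO_HIT' messages are hypothetical, since the real protocol is not documented here.

```python
import time

def dispense(mcu, timeout=2.0):
    """Ask the microcontroller to turn the Archimedes' screw, then wait for the
    piezo sensor to confirm that food actually dropped (message names assumed)."""
    mcu.write(b"FEED\n")
    t0 = time.time()
    while time.time() - t0 < timeout:
        line = mcu.readline().decode("ascii", errors="ignore").strip()
        if line == "PIEZO_HIT":
            return True
    return False  # no confirmation received: the caller may retry the motor movement
```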
3 RESULTS OF BUILDING CATOS AND ITS TESTING ON 2 DOMESTICATED CATS
The hardware and software were built and tested. The software
is available at https://github.com/jinook0707/CATOS_alpha
under the GNU General Public License, version 3. Both hardware
and software are currently in the alpha stage. Although their
potential for training and testing animal cognition was
demonstrated, and their use seemed promising for saving human resources
in certain situations, both hardware and software should be
developed further before they can be used routinely in
animal cognition experiments.
The two web-cams observed the experimental area for 8
to 12 hours per day for about 5 months (from the middle of
October 2012 to the middle of March 2013). The movement
records, MP4 movie files, JPEG image files, and WAV sound
files generated during this period took 37.35 gigabytes of
storage. To obtain a rough idea of the degree of reduction in
data storage that was achieved using the system, the number
of recorded frames in the video recording was assessed. Data
for 15 days were taken to calculate it. The total observation
period was 406138 seconds, corresponding to 112.8 hours.
The number of frames recorded was 206024 and the average
FPS (frames per second) was 7.5; therefore, approximately
27470 seconds (about 7.6 hours) of video recordings were stored,
which is about 6.7 percent of the entire observation period.
These specific numbers are not very meaningful in themselves, since they
fluctuate with the amount of the subject's
movements, but the point is that most of the meaningless
recordings were successfully filtered out by CATOS.
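The reduction figures quoted above can be reproduced with a few lines of Python:

```python
total_obs_s = 406138           # total observation time over 15 days, in seconds
frames, fps = 206024, 7.5
recorded_s = frames / fps      # ~27470 s of stored video
print(recorded_s / 3600)               # ~7.6 hours
print(100 * recorded_s / total_obs_s)  # roughly 6.7-6.8 percent of the observation time
```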
Human presence during sessions is not necessary. Data transfer from one computer to another, maintenance, or modification of the system requires human interaction, but no time and
effort is required for the training and testing sessions themselves.
Because no one attends the sessions, a periodic analysis of the
animal's performance with the system is required. A simple
assessment of how much food the animals took, or more
specifically, how many correct and incorrect trials occurred,
can be done quickly since this information is already stored
in the result CSV file, which lists the numbers of correct and
incorrect trials with timestamps at the end of each
session. Also, the data-viewer utility program displays all the
timestamps and their JPEG images, which give a brief report
on the movement detected in each recorded video clip. Thus,
simply browsing the JPEG images is often enough to assess
the session. If it is not enough, then one can obtain a more
detailed assessment by playing the video-clips recorded around
the trial times.
Fig. 5: Recent performance of the trained cat on the three-human-speech-sound
discrimination task.
Two domesticated cats were trained for testing the system.
Both cats learned that approaching the feeder on a playback
sound could lead to a food reward. Then one cat further learned
that pressing one out of three buttons could lead to a food
reward. The training of the association between three different
sound stimuli and three different buttons is an ongoing process.
The most recent performance data (Figure 5) show over 70
percent overall performance, and the performance on
each button is significantly higher than the 33.3 percent chance
level.
R EFERENCES
[Bel09] N. Bellotto, E. Sommerlade, B. Benfold, C. Bibby, I. Reid, D.
Roth, C. Fernandez, L.V. Gool and J. Gonzalez. A distributed
camera system for multi-resolution surveillance, Proc. of the
third ACM/IEEE Int. Conf. on Distributed Smart Cameras
(ICDSC), 2009.
[Bra00] G. Bradski. The OpenCV Library, Dr. Dobb’s Journal of Software
Tools, 25(11):122-125, Nov 2000.
[Jon01] E. Jones, T. Oliphant, P. Peterson, and others, SciPy: Open source
scientific tools for Python, 2001.
[Fag09] J. Fagot and D. Paleressompoulle. Automatic testing of cognitive
performance in baboons maintained in social groups, Behavior
Research Methods, 41(2):396-404, May 2009.
[Fag10] J. Fagot and E. Bonte. Automated testing of cognitive performance in monkeys: Use of a battery of computerized test
systems by a troop of semi-free-ranging baboons (Papio papio),
Behavior Research Methods, 42(2):507-516, May 2010.
[Fit11] T. Fitch. (2011). Speech perception: a language-trained chimpanzee weighs in, Current Biology, 21(14):R543-R546, July
2011.
[Hei11] L.A. Heimbauer, M.J. Beran and M.J. Owren. A chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to
phonetic content, Current Biology, 21(14):1210-1214, June 2011.
[Hun07] J. D. Hunter, Matplotlib: A 2D graphics environment,
Computing In Science & Engineering, 9(3):90-95,
2007.
[Kam04] J. Kaminski, J. Call and J. Fischer. Word learning in a domestic
dog: evidence for ’fast mapping’, Science, 304:1682-1683,
June 2004.
[Kan12] B.D. Kangas and J. Bergman. A novel touch-sensitive apparatus
for behavioral studies in unrestrained squirrel monkeys, Journal of
Neuroscience Methods, 209(2):331-336, August 2012.
[Kri08] M. Kritzler, S. Jabs, P. Kegel and A. Krüger. Indoor tracking of
laboratory mice via an RFID-tracking framework, Proc. of the
first ACM international workshop on Mobile entity localization
and tracking in GPS-less environments, 25-30, 2008.
[Mar96] M.R. Markham, A.E. Butt and M.J. Dougher. A computer touchscreen apparatus for training visual discriminations in rats, Journal
of the Experimental Analysis of Behavior, 65(1):173-182, 1996.
[Pep87] I.M. Pepperberg. Evidence for conceptual quantitative abilities in the
African grey parrot: labeling of cardinal sets, Ethology, 75(1):37-61,
1987.
[Ste12] M.M. Steurer, U. Aust and L. Huber. The Vienna comparative
cognition technology (VCCT): An innovative operant conditioning
system for various species and experimental procedures, Behavior
Research Methods, 44(4):909-918, December 2012.
[Tak11] A. Takemoto, A. Izumi, M. Miwa and K. Nakamura. Development of a compact and general-purpose experimental apparatus with a touch-sensitive screen for use in evaluating cognitive
functions in common marmosets, Journal of Neuroscience
Methods, 199(1):82-86, July 2011.
[Val09] D. Vallejo, J. Albusac, L. Jimenez, C. Gonzalez and J. Moreno.
A cognitive surveillance system for detecting incorrect traffic
behaviors, Expert Systems with Applications, 36(7):10503-10511, September 2009.
Graphical Nonconvex Optimization for Optimal Estimation
in Gaussian Graphical Models
Qiang Sun*, Kean Ming Tan†, Han Liu‡ and Tong Zhang§
arXiv:1706.01158v1 [stat.ML] 4 Jun 2017
Abstract
We consider the problem of learning high-dimensional Gaussian graphical models. The graphical lasso is one of the most popular methods for estimating Gaussian
graphical models. However, it does not achieve the oracle rate of convergence. In
this paper, we propose the graphical nonconvex optimization for optimal estimation
in Gaussian graphical models, which is then approximated by a sequence of convex
programs. Our proposal is computationally tractable and produces an estimator
that achieves the oracle rate of convergence. The statistical error introduced by the
sequential approximation using the convex programs is clearly demonstrated via
a contraction property. The rate of convergence can be further improved using the
notion of sparsity pattern. The proposed methodology is then extended to semiparametric graphical models. We show through numerical studies that the proposed
estimator outperforms other popular methods for estimating Gaussian graphical
models.
Keywords: Adaptivity, Graphical nonconvex optimization, Nonconvexity, Semiparametric, Sequential convex approximation.
1 Introduction
We consider the problem of learning an undirected graph $G = (V, E)$, where $V = \{1, \ldots, d\}$ contains nodes that represent $d$ random variables, and the edge set $E$ describes the pairwise conditional dependence relationships among the $d$ random variables. Gaussian graphical models have been widely used to represent pairwise conditional dependencies among a set of variables. Let $X$ be a $d$-dimensional random variable. Under the Gaussian assumption $X \sim N(0, \Sigma^*)$, the graph $G$ is encoded by the sparse concentration matrix $\Omega^* = (\Sigma^*)^{-1}$, or the sparse inverse correlation matrix $\Gamma^* = (C^*)^{-1}$. Here, $C^*$ is the correlation matrix such that $\Sigma^* = W C^* W$, and $W^2$ is a diagonal matrix with the diagonal elements of $\Sigma^*$. In particular, it is well known that the $j$th and $k$th variables are conditionally independent given all of the other variables if and only if the $(j,k)$-th element of $\Omega^*$ (or $\Gamma^*$) is equal to zero. Thus, inferring the conditional dependency structure of a Gaussian graphical model boils down to estimating a sparse inverse covariance (or correlation) matrix.

* Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544; e-mail: [email protected].
† Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544; e-mail: [email protected].
‡ Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; e-mail: [email protected].
§ Tencent AI Lab, Shen Zhen, Guangdong, China; e-mail: [email protected].
A number of methods have been proposed to estimate the sparse concentration matrix under the Gaussian assumption. For example, Meinshausen and Bühlmann (2006)
proposed a neighborhood selection approach for estimating Gaussian graphical models
by solving a collection of sparse linear regression problems using the lasso penalty. In
addition, Yuan (2010) and Cai et al. (2011) proposed the graphical Dantzig and CLIME,
both of which can be solved efficiently. From a different perspective, Yuan and Lin
(2007) and Friedman et al. (2008) proposed the graphical lasso methodology, a penalized likelihood based approach, to estimate the concentration matrix $\Omega^*$ directly. Various
extensions of the graphical lasso were proposed and the theoretical properties were also
studied (among others, Banerjee et al., 2008; Rothman et al., 2008; Ravikumar et al.,
2011). The Gaussian graphical models literature is vast and we refer the reader to Cai
et al. (2016a) and Drton and Maathuis (2016) for recent reviews on this topic.
Despite the large literature on using the graphical lasso to estimate concentration
matrices in Gaussian graphical models, the graphical lasso does not achieve the oracle
rate of convergence. More specifically, it is believed that the optimal rate of convergence
in spectral norm for the graphical lasso is of the order $\sqrt{s\log d/n}$ (Rothman et al.,
2008). Here, n is the sample size, d is the number of nodes, and s is the number of edges
in the true graph. In fact, the graphical lasso and all of the aforementioned methods are
based on the lasso penalty and it is well known that convex penalties usually introduce
non-negligible estimation bias. For example, in the linear regression setting, Fan and
Li (2001); Zhang (2010a,b); Fan et al. (2017) have shown that the nonconvex penalized
regression is able to eliminate the estimation bias and attain a more refined statistical
rate of convergence.
Based on these insights, we consider the following penalized maximum likelihood
estimation with nonconvex regularizers:
$$\widehat{\Theta} = \operatorname*{argmin}_{\Theta \in \mathcal{S}_+^d} \Big\{ \langle \Theta, \widehat{\Sigma} \rangle - \log\det(\Theta) + \sum_{i \neq j} p_\lambda\big(\Theta_{ij}\big) \Big\}, \qquad (1.1)$$
where $\mathcal{S}_+^d = \{A \in \mathbb{R}^{d\times d} : A = A^T,\ A \succ 0\}$ is the cone formed by all
symmetric positive definite matrices in $d\times d$ dimensions, $\widehat{\Sigma}$ is the sample covariance
matrix, and $p_\lambda(\cdot)$ is a nonconvex penalty. Here, $\langle A, B\rangle = \mathrm{tr}(A^T B)$ denotes the trace
of $A^T B$. However, from the computational perspective, minimizing a folded concave
penalized problem is very complicated due to its intrinsic nonconvex structure. Indeed,
Ge et al. (2015) have shown that solving (1.1) with a general concave penalty, such
as the SCAD (Fan and Li, 2001) or the MCP (Zhang, 2010a), is strongly NP-hard. In
other words, there does not exist a fully polynomial-time approximation scheme for
problem (1.1) unless more structures are assumed. Recently, Loh and Wainwright (2015)
proposed an algorithm to obtain a good local optimum for (1.1), but an additional convex
constraint that depends on the unknown true concentration matrix is imposed. Moreover,
they failed to provide a faster rate of convergence statistically due to not taking the signal
strength into account.
In this paper, instead of directly solving the nonconvex problem (1.1), we propose to
approximate it by a sequence of adaptive convex programs. Even though the proposed
approach is solving a sequence of convex programs, under some regularity conditions, we
show that the proposed estimator for estimating the sparse concentration matrix achieves
the oracle rate of convergence of $\sqrt{s/n}$, treating as if the locations of the nonzeros were
known a priori. This is achieved by a contraction property. Roughly speaking, each
convex program gradually contracts the initial estimator to the region of the oracle rate of
convergence even when a bad initial estimator is used in the first place:
$$\big\|\widehat{\Gamma}^{(\ell)} - \Gamma^*\big\|_F \;\le\; \underbrace{C\sqrt{\frac{s}{n}}}_{\text{Oracle Rate}} \;+\; \underbrace{\frac{1}{2}\big\|\widehat{\Gamma}^{(\ell-1)} - \Gamma^*\big\|_F}_{\text{Contraction}},$$
where $\widehat{\Gamma}^{(\ell)}$ is the inverse correlation matrix estimator after the $\ell$-th convex approximation, $\|\cdot\|_F$ denotes the Frobenius norm, $C$ is a constant, and $\sqrt{s/n}$ is referred to
as the oracle rate. Each iteration of the proposed method helps improve the accuracy
only when $\|\widehat{\Gamma}^{(\ell-1)} - \Gamma^*\|_F$ dominates the statistical error. The error caused by each
iteration is clearly demonstrated via the proven contraction property. By rescaling the
inverse correlation matrix using the estimated marginal variances, we obtain an estimator of the concentration matrix with spectral norm convergence rate of the order
$\sqrt{\log d/n} \vee \sqrt{s/n}$. Here, $a \vee b = \max\{a, b\}$ denotes the maximum of $a$ and
$b$. By exploiting a novel notion called the sparsity pattern, we further sharpen the rate of
convergence under the spectral norm.
The rest of this paper proceeds as follows. In Section 2, we propose the new methodology and its implementation. Section 3 is devoted to theoretical studies. We show that
the proposed methodology can be extended to the semiparametric graphical models in
Section 4. Numerical experiments are provided to support the proposed methodology in
Section 5. We conclude the paper in Section 6. All the proofs and technical details are
collected in the supplementary material.
Notation: We summarize the notation that will be used regularly throughout the paper.
Given a vector $u = (u_1, u_2, \ldots, u_d)^T \in \mathbb{R}^d$, we define the $\ell_q$-norm of $u$ by $\|u\|_q = \big(\sum_{j=1}^d |u_j|^q\big)^{1/q}$, where $q \in [1, \infty)$. For a set $\mathcal{A}$, let $|\mathcal{A}|$ denote its cardinality. For a
matrix $A = (a_{i,j}) \in \mathbb{R}^{d\times d}$, we use $A \succ 0$ to indicate that $A$ is positive definite. For
$q \ge 1$, we use $\|A\|_q = \max_u \|Au\|_q/\|u\|_q$ to denote the operator norm of $A$. For index
sets $\mathcal{I}, \mathcal{J} \subseteq \{1, \ldots, d\}$, we define $A_{\mathcal{I},\mathcal{J}} \in \mathbb{R}^{d\times d}$ to be the matrix whose $(i,j)$-th entry is
equal to $a_{i,j}$ if $i \in \mathcal{I}$ and $j \in \mathcal{J}$, and zero otherwise. We use $A \circ B = (a_{ij} b_{ij})$ to denote the
Hadamard product of two matrices $A$ and $B$. Let $\mathrm{diag}(A)$ denote the diagonal matrix
consisting of the diagonal elements of $A$. We use $\mathrm{sign}(x)$ to denote the sign of $x$: $\mathrm{sign}(x) = x/|x|$
if $x \neq 0$ and $\mathrm{sign}(x) = 0$ otherwise. For two scalars $f_n$ and $g_n$, we write $f_n \gtrsim g_n$ if
$f_n \ge c g_n$, and $f_n \lesssim g_n$ if $f_n \le C g_n$, for two positive constants $c$ and $C$.
We say $f_n \asymp g_n$ if $f_n \gtrsim g_n$ and $f_n \lesssim g_n$. $O_P(\cdot)$ is used to denote boundedness in probability.
We use $c$ and $C$ to denote constants that may vary from line to line.
2 A Sequential Convex Approximation
Let $X = (X_1, X_2, \ldots, X_d)^T$ be a zero-mean $d$-dimensional Gaussian random vector.
Then its density can be parameterized by the concentration matrix $\Theta^*$ or the inverse
correlation matrix $\Gamma^*$. The family of Gaussian distributions respects the edge structure
of a graph $G = (V, E)$ in the sense that $\Gamma^*_{ij} = 0$ if and only if $(i,j) \notin E$. This family is
known as the Gauss–Markov random field with respect to the graph $G$. The problem of
estimating the edges corresponds to parameter estimation, while the problem of identifying the edge set, i.e., the set $E \equiv \{(i,j) : i, j \in V,\ i \neq j,\ \Gamma^*_{ij} \neq 0\}$, corresponds to the problem
of model selection.

Given $n$ independent and identically distributed observations $\{X^{(i)}\}_{i=1}^n$ of a zero-mean $d$-dimensional random vector $X \in \mathbb{R}^d$, we are interested in estimating the inverse
correlation matrix $\Gamma^*$ and the concentration matrix $\Theta^*$. Let $\widehat{\Sigma} = n^{-1}\sum_{1 \le i \le n} X^{(i)}(X^{(i)})^T$
be the sample covariance matrix and let $\widehat{C} = \widehat{W}^{-1}\widehat{\Sigma}\widehat{W}^{-1}$, where $\widehat{W}^2 = \mathrm{diag}(\widehat{\Sigma})$. To
estimate $\Gamma^*$, we propose to adaptively solve the following sequence of convex programs:
$$\widehat{\Gamma}^{(\ell)} = \operatorname*{argmin}_{\Gamma \in \mathcal{S}_+^d} \Big\{ \langle \Gamma, \widehat{C}\rangle - \log\det(\Gamma) + \big\|\Lambda^{(\ell-1)} \circ \Gamma\big\|_{1,\mathrm{off}} \Big\}, \quad \text{for } \ell = 1, \ldots, T, \qquad (2.1)$$
where $\|\Theta\|_{1,\mathrm{off}} = \sum_{i \neq j} |\Theta_{ij}|$, $\Lambda^{(\ell-1)} = \big(\lambda \cdot w(|\widehat{\Gamma}^{(\ell-1)}_{ij}|)\big)$ is a $d \times d$ adaptive regularization
matrix for a given tuning parameter $\lambda$ and a weight function $w(\cdot)$, and $T$ indicates the
total number of convex programs needed. The weight function $w(\cdot)$ can be taken to be
$w(t) = p'_\lambda(t)/\lambda$, where $p_\lambda(t)$ is a folded concave penalty such as the SCAD or the MCP
proposed by Fan and Li (2001) and Zhang (2010a), respectively.

To obtain an estimate of the concentration matrix $\Theta^*$, we rescale $\widehat{\Gamma}^{(T)}$
back to $\widetilde{\Theta}^{(T)} = \widehat{W}^{-1}\widehat{\Gamma}^{(T)}\widehat{W}^{-1}$ after the $T$-th convex program. This rescaling helps
improve the rate of convergence for $\widetilde{\Theta}^{(T)}$ significantly by eliminating the effect introduced through the unpenalized diagonal terms. The detailed routine is summarized in
Algorithm 1.

Algorithm 1 A sequential convex approximation for the graphical nonconvex optimization.
Input: Sample covariance matrix $\widehat{\Sigma}$, regularization parameter $\lambda$.
Step 1: Obtain the sample correlation matrix $\widehat{C}$ by $\widehat{C} = \widehat{W}^{-1}\widehat{\Sigma}\widehat{W}^{-1}$, where $\widehat{W}^2$ is a
diagonal matrix with the diagonal elements of $\widehat{\Sigma}$.
Step 2: Solve a sequence of graphical lasso problems adaptively,
$$\widehat{\Gamma}^{(\ell)} = \operatorname*{argmin}_{\Gamma \in \mathcal{S}_+^d} \Big\{ \langle \Gamma, \widehat{C}\rangle - \log\det(\Gamma) + \big\|\Lambda^{(\ell-1)} \circ \Gamma\big\|_{1,\mathrm{off}} \Big\}
\quad\text{and}\quad \Lambda^{(\ell)}_{ij} = \lambda \cdot w\big(|\widehat{\Gamma}^{(\ell)}_{ij}|\big), \quad \text{for } \ell = 1, \ldots, T.$$
Step 3: Obtain an estimate of $\Theta^*$ by $\widetilde{\Theta}^{(T)} = \widehat{W}^{-1}\widehat{\Gamma}^{(T)}\widehat{W}^{-1}$.

The complexity of Step 2 in Algorithm 1 is $O(d^3)$ per iteration: this is the complexity
of the algorithm for solving the graphical lasso problem. We will show in a later section
that the number of iterations can be chosen to be $T \approx \log\log d$ based on our theoretical
analysis. Algorithm 1 can be implemented using existing R packages such as glasso.
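Although Algorithm 1 is stated in terms of the R package glasso, the same reweighting loop can be sketched in Python. The sketch below assumes a weighted graphical-lasso solver `weighted_glasso(C, Lam)` is available (scikit-learn's `graphical_lasso`, for instance, only accepts a scalar penalty, so this solver is a placeholder); the notation follows the inverse correlation matrix as reconstructed above.

```python
import numpy as np

def scad_weight(t, lam, gamma=2.1):
    """w(t) = p'_lam(t)/lam for the SCAD penalty (Fan and Li, 2001)."""
    t = np.abs(t)
    w = np.clip((gamma * lam - t) / ((gamma - 1.0) * lam), 0.0, None)
    return np.where(t <= lam, 1.0, np.minimum(w, 1.0))

def sequential_glasso(S_hat, lam, T, weighted_glasso):
    """Steps 1-3 of Algorithm 1; `weighted_glasso(C, Lam)` is an assumed solver for
    <Gamma, C> - log det(Gamma) + ||Lam * Gamma||_{1,off}."""
    w_hat = np.sqrt(np.diag(S_hat))
    C_hat = S_hat / np.outer(w_hat, w_hat)                       # sample correlation matrix
    Lam = lam * (np.ones_like(C_hat) - np.eye(C_hat.shape[0]))   # first program: plain glasso
    Gamma = np.eye(C_hat.shape[0])
    for _ in range(T):
        Gamma = weighted_glasso(C_hat, Lam)
        Lam = lam * scad_weight(Gamma, lam)                      # adaptive reweighting
        np.fill_diagonal(Lam, 0.0)                               # diagonal is not penalised
    Theta = Gamma / np.outer(w_hat, w_hat)                       # Step 3: rescale back
    return Theta, Gamma
```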
3 Theoretical Results
In this section, we study the theoretical properties of the proposed estimator. We start
with the assumptions needed for our theoretical analysis.
3.1 Assumptions
Let $S = \{(i,j) : \Theta^*_{ij} \neq 0,\ i \neq j\}$ be the support set of the off-diagonal elements in $\Theta^*$.
Thus, $S$ is also the support set of the off-diagonal elements in $\Gamma^*$. The first assumption
we need concerns the structure of the true concentration and covariance matrices.
Assumption 3.1 (Structural Assumption). We assume that $|S| \le s$, $\|\Sigma^*\|_1 \le M < \infty$,
$0 < \varepsilon_1 \le \sigma_{\min} \le \sigma_{\max} \le 1/\varepsilon_1 < \infty$, and $0 < \varepsilon_2 \le \lambda_{\min}(\Theta^*) \le \lambda_{\max}(\Theta^*) \le 1/\varepsilon_2 < \infty$.
Here, $\sigma^2_{\max} = \max_j \Sigma^*_{jj}$ and $\sigma^2_{\min} = \min_j \Sigma^*_{jj}$, where $\Sigma^* = (\Sigma^*_{ij})$.
Assumption 3.1 is standard in the existing literature for Gaussian graphical models
(see, for instance, Meinshausen and Bühlmann, 2006; Yuan, 2010; Cai et al., 2016b; Yuan
and Lin, 2007; Ravikumar et al., 2011). We need $\sigma_{\min}$ and $\sigma_{\max}$ to be bounded from above
and below to guarantee reasonable performance of the concentration matrix estimator
(Rothman et al., 2008). Throughout this section, we treat $M$, $\varepsilon_1$, $\varepsilon_2$ as constants to
simplify the presentation.
The second assumption we need in our analysis concerns the weight functions, which
are used to adaptively update the regularizers in Step 2 of Algorithm 1. Define the
following class of weight functions:
$$\mathcal{W} = \big\{ w(t) : w(t) \text{ is nonincreasing},\ 0 \le w(t) \le 1 \text{ if } t \ge 0,\ w(t) = 1 \text{ if } t \le 0 \big\}. \qquad (3.1)$$
Assumption 3.2 (Weight Function). There exists an $\alpha$ such that the weight function
$w(\cdot) \in \mathcal{W}$ satisfies $w(\alpha\lambda) = 0$ and $w(u) \ge 1/2$, where $u = c\lambda$ for some constant $c$.
The above assumption on the weight functions can be easily satisfied. For example, it
can be satisfied by simply taking $w(t) = p'_\lambda(t)/\lambda$, where $p_\lambda(t)$ is a folded concave penalty
such as the SCAD or the MCP (Fan and Li, 2001; Zhang, 2010a). Next, we impose an
assumption on the magnitude of the nonzero off-diagonal entries in the inverse correlation
matrix $\Gamma^*$.
Assumption 3.3 (Minimal Signal Strength). Recall that $S$ is the true support set.
The minimal signal satisfies $\min_{(i,j)\in S} |\Gamma^*_{ij}| \ge (\alpha + c)\lambda$, where $c > 0$ is the same
constant that appears in Assumption 3.2.
Assumption 3.3 is rather mild. In the sub-Gaussian design case, $\lambda$ can be taken to
be of the order $\sqrt{\log d/n}$, which diminishes quickly as $n$ increases. It is an analogue
to the minimal signal strength assumption frequently assumed in nonconvex penalized
regression problems (Fan and Li, 2001; Zhang, 2010a). Taking the signal strength into
account, we can then obtain the oracle rate of convergence.
3.2 Main Theory
We now present several main theorems concerning the rates of convergence of the proposed estimator for the sparse inverse correlation and the concentration matrices. The
following proposition concerns the rate of convergence for the one-step estimator $\widehat{\Gamma}^{(1)}$ obtained from Algorithm 1 when $\ell = 1$.
Proposition 3.4 (One-step Estimator). Let $\lambda \asymp \sqrt{\log d/n}$. Under Assumption 3.1, we
have
$$\big\|\widehat{\Gamma}^{(1)} - \Gamma^*\big\|_F \lesssim \sqrt{\frac{s\log d}{n}}$$
with probability at least $1 - 8/d$.
Proof of Proposition 3.4. We collect the proof of Proposition 3.4 in Appendix A in the
supplementary material.
The above proposition indicates that the statistical error under the Frobenius norm
for the one-step estimator is of the order $\sqrt{s\log d/n}$, which is believed to be unimprovable when one-step convex regularization is used (Rothman et al., 2008; Ravikumar et al.,
2011). However, when a sequence of convex programs is used as in our proposal, the
rate of convergence can be improved significantly. This is demonstrated in the following
theorem.
Theorem 3.5 (Contraction Property). Suppose that $n \gtrsim s\log d$ and take $\lambda$ such that
$\lambda \asymp \sqrt{\log d/n}$. Under Assumptions 3.1, 3.2 and 3.3, $\widehat{\Gamma}^{(\ell)}$ satisfies the following contraction property:
$$\big\|\widehat{\Gamma}^{(\ell)} - \Gamma^*\big\|_F \;\le\; \underbrace{8\|\Gamma^*\|_2^2\,\big\|\nabla L(\Gamma^*)_S\big\|_F}_{\text{Oracle Rate}} \;+\; \underbrace{\frac{1}{2}\big\|\widehat{\Gamma}^{(\ell-1)} - \Gamma^*\big\|_F}_{\text{Contraction}}, \quad 1 \le \ell \le T,$$
with probability at least $1 - 8/d$. Moreover, if $T \gtrsim \log(\lambda\sqrt{n}) \gtrsim \log\log d$, we have
$$\big\|\widehat{\Gamma}^{(T)} - \Gamma^*\big\|_F = O_P\Big(\sqrt{\frac{s}{n}}\Big).$$
Proof of Theorem 3.5. The proof is collected in Appendix A in the supplementary material.
Theorem 3.5 establishes a contraction property: each convex approximation contracts
the initial estimator towards the true sparse inverse correlation matrix until it reaches
the oracle rate of convergence, $\sqrt{s/n}$. To achieve the oracle rate, we need to solve
no more than approximately $\log\log d$ convex programs. Note that $\log\log d$ grows very
slowly as $d$ increases and thus, in practice, we only need to solve a few convex programs
to get a better estimator than existing methods such as the graphical lasso. The rate
of convergence $\sqrt{s/n}$ is better than the existing literature on likelihood-based methods
for estimating sparse inverse correlation matrices (Rothman et al., 2008; Lam and Fan,
2009a; Ravikumar et al., 2011). By rescaling, we obtain a concentration matrix estimator
with a faster rate of convergence.
Theorem 3.6 (Faster Rate in Spectral Norm). Under the same conditions as in Theorem
3.5, we have
$$\big\|\widetilde{\Theta}^{(T)} - \Theta^*\big\|_2 = O_P\Big(\sqrt{\frac{s}{n}} \vee \sqrt{\frac{\log d}{n}}\Big).$$
Proof of Theorem 3.6. The proof is deferred to Appendix A in the supplementary material.
The theorem above provides the optimal statistical rate for estimating sparse concentration matrices using likelihood-based methods (Rothman et al., 2008; Lam and Fan,
2009b; Ravikumar et al., 2011). The extra $\log d$ term is a consequence of estimating the
marginal variances. We further sharpen the obtained theory using a novel notion, called
the sparsity pattern, as defined below.
Definition 3.7 (Sparsity Pattern). For a matrix $A = (a_{ij})$, we say $A^{\mathrm{sp}} = (a^{\mathrm{sp}}_{ij})$ is the
corresponding sparsity pattern matrix if $a^{\mathrm{sp}}_{ij} = 1$ when $a_{ij} \neq 0$, and $a^{\mathrm{sp}}_{ij} = 0$ otherwise.
Let $M^*$ be the sparsity pattern matrix of $\Gamma^*$ or $\Theta^*$. Our next theorem provides an
improved rate of convergence using this newly defined notion of sparsity pattern.
Theorem 3.8 (Improved Convergence Rate using Sparsity Pattern). Suppose that $n \gtrsim (s + s^2_{\max})\log d$ and take $\lambda$ such that $\lambda \asymp \sqrt{\log d/n}$. Let $T \gtrsim \log s$. Under Assumptions
3.1, 3.2 and 3.3, we have
$$\big\|\widehat{\Gamma}^{(T)} - \Gamma^*\big\|_2 = O_P\Big(\|M^*\|_2\sqrt{\frac{1}{n}}\Big), \quad\text{and}\quad
\big\|\widetilde{\Theta}^{(T)} - \Theta^*\big\|_2 = O_P\Big(\|M^*\|_2\sqrt{\frac{1}{n}} \vee \sqrt{\frac{\log d}{n}}\Big).$$
Proof of Theorem 3.8. The proof is deferred to Appendix B in the supplementary material.
Theorem 3.8 suggests that the rates of convergence can be bounded using the spectral
norm of the sparsity pattern matrix $M^*$, which are sometimes much sharper than those
provided in Theorems 3.5 and 3.6. To demonstrate this observation, we consider a
sequence of chain graphs specified by the following sparsity pattern matrices:
$$M^c_k = \begin{bmatrix} A_k & 0 \\ 0 & I_{d-k-1} \end{bmatrix}, \quad \text{for } k = 4, \ldots, 50,$$
where $A_k \in \mathbb{R}^{(k+1)\times(k+1)}$ is such that the $(i,j)$-th entry $A_{k,ij} = 1$ if $|i-j| \le 1$, and $A_{k,ij} = 0$
otherwise, and $I_{d-k-1} \in \mathbb{R}^{(d-k-1)\times(d-k-1)}$ is the identity matrix. Let $s_k$ be the total sparsity
of $M^c_k$, that is, $s_k = 2k$. We plot the ratio of the two rates of convergence for estimating
$\Gamma^*$ in Theorems 3.5 and 3.8, $\|M^c_k\|_2^2/s_k$, versus $s_k$ in Figure 1. From Figure 1, we can
see that the ratio goes to 0 as the total sparsity increases. This demonstrates that the
convergence rate in Theorem 3.8 is indeed much sharper than that in Theorem 3.5, at
least for the chain graphs constructed above. We also observe a similar but less significant
improvement for star-shaped graphs. In Figure 2, we give a geometric illustration of the
star and chain graphs.
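The ratio plotted in Figure 1 can be checked numerically with a short sketch (assuming, as written above, that the diagonal of $A_k$ is included):

```python
import numpy as np

def chain_pattern(k, d=150):
    """Sparsity-pattern matrix M^c_k of the chain graph described above."""
    A = (np.abs(np.subtract.outer(np.arange(k + 1), np.arange(k + 1))) <= 1).astype(float)
    M = np.eye(d)
    M[:k + 1, :k + 1] = A
    return M

for k in (4, 10, 25, 50):
    M = chain_pattern(k)
    s_k = 2 * k                                   # total off-diagonal sparsity
    ratio = np.linalg.norm(M, 2) ** 2 / s_k       # ||M^c_k||_2^2 / s_k
    print(k, round(ratio, 3))                     # the ratio decreases as s_k grows
```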
Figure 1: Convergence rates using the sparsity pattern matrix $M^c_k$ and total sparsity $s_k$: the ratio $\|M^c_k\|_2^2/s_k$ is plotted against $s_k$ for the chain graphs.
Star Graph
Chain Graph
Figure 2: An illustration of the star and chain graphs.
4 Extension to Semiparametric Graphical Models
In this section, we extend the proposed method to modeling semiparametric graphical models. We focus on the nonparanormal family proposed by Liu et al. (2012),
which is a nonparametric extension of the normal family. More specifically, we replace the random variable $X = (X_1, \ldots, X_d)^T$ by the transformed variable $f(X) = (f_1(X_1), \ldots, f_d(X_d))^T$, and assume that $f(X)$ follows a multivariate Gaussian distribution.
Definition 4.1 (Nonparanormal). Let $f = \{f_1, \ldots, f_d\}^T$ be a set of monotone univariate functions and let $\Sigma^{\mathrm{npn}} \in \mathbb{R}^{d\times d}$ be a positive-definite correlation matrix with
$\mathrm{diag}(\Sigma^{\mathrm{npn}}) = 1$. A $d$-dimensional random variable $X = (X_1, \ldots, X_d)^T$ has a nonparanormal distribution $X \sim \mathrm{NPN}_d(f, \Sigma^{\mathrm{npn}})$ if $f(X) \equiv (f_1(X_1), \ldots, f_d(X_d))^T \sim N_d(0, \Sigma^{\mathrm{npn}})$.
We aim to recover the precision matrix $\Theta^{\mathrm{npn}} = (\Sigma^{\mathrm{npn}})^{-1}$. The main idea behind
this procedure is to exploit Kendall's tau statistics to directly estimate $\Theta^{\mathrm{npn}}$, without
explicitly calculating the marginal transformation functions $\{f_j\}_{j=1}^d$. We consider the
following Kendall's tau statistic:
$$\widehat{\tau}_{jk} = \frac{2}{n(n-1)} \sum_{1 \le i < i' \le n} \mathrm{sign}\Big( \big(X^{(i)}_j - X^{(i')}_j\big)\big(X^{(i)}_k - X^{(i')}_k\big) \Big).$$
The Kendall's tau statistic $\widehat{\tau}_{jk}$ represents the nonparametric correlation between the empirical realizations of the random variables $X_j$ and $X_k$ and is invariant to monotone transformations. Let $\widetilde{X}_j$ and $\widetilde{X}_k$ be two independent copies of $X_j$ and $X_k$. The population
version of Kendall's tau is given by $\tau_{jk} \equiv \mathrm{Corr}\big(\mathrm{sign}(X_j - \widetilde{X}_j), \mathrm{sign}(X_k - \widetilde{X}_k)\big)$. We need
the following lemma, which is taken from Liu et al. (2012). It connects the Kendall's tau
statistic to the underlying Pearson correlation coefficient $\Sigma^{\mathrm{npn}}$.
Lemma 4.2. Assuming $X \sim \mathrm{NPN}_d(f, \Sigma^{\mathrm{npn}})$, we have $\Sigma^{\mathrm{npn}}_{jk} = \sin\big(\pi\tau_{jk}/2\big)$.
Motivated by this lemma, we define the following estimator $\widehat{S} = [\widehat{S}_{jk}]$ of the
unknown correlation matrix $\Sigma^{\mathrm{npn}}$:
$$\widehat{S}_{jk} = \begin{cases} \sin\big(\pi\widehat{\tau}_{jk}/2\big), & j \neq k, \\ 1, & j = k. \end{cases}$$
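For illustration, this rank-based correlation estimate can be computed directly with SciPy's Kendall's tau implementation; the sketch below is not tied to the authors' code.

```python
import numpy as np
from scipy.stats import kendalltau

def npn_correlation(X):
    """Rank-based estimate S_hat of the nonparanormal correlation matrix:
    S_jk = sin(pi * tau_jk / 2) off the diagonal, S_jj = 1."""
    n, d = X.shape
    S = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            S[j, k] = S[k, j] = np.sin(np.pi * tau / 2.0)
    return S
```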
Now we are ready to prove the optimal spectral norm rate for the Gaussian copula
graphical model. The results are provided in the following theorem.
Theorem 4.3. Assume that $n \gtrsim s\log d$ and let $\lambda \asymp \sqrt{\log d/n}$. Under Assumptions 3.1,
3.2 and 3.3, $\widehat{\Theta}^{(\ell)}$ satisfies the following contraction property:
$$\big\|\widehat{\Theta}^{(\ell)} - \Theta^*\big\|_F \;\le\; \underbrace{4\|\Theta^*\|_2^2\,\big\|\nabla L(\Theta^*)_S\big\|_F}_{\text{Optimal Rate}} \;+\; \underbrace{\frac{1}{2}\big\|\widehat{\Theta}^{(\ell-1)} - \Theta^*\big\|_F}_{\text{Contraction}}, \quad 1 \le \ell \le T,$$
with probability at least $1 - 8/d$. If $T \gtrsim \log(\lambda\sqrt{n}) \gtrsim \log\log d$, we have
$$\big\|\widehat{\Theta}^{(T)} - \Theta^*\big\|_F = O_P\Big(\sqrt{\frac{s}{n}}\Big).$$
Proof of Theorem 4.3. The proof is deferred to Appendix C in the supplementary material.
5 Numerical Experiments
We compare our proposal to the graphical lasso (glasso) (Friedman et al., 2008) and
neighborhood selection (NS) (Meinshausen and Bühlmann, 2006). Each of these approaches learns a Gaussian graphical model via an $\ell_1$ penalty on each edge. To evaluate
the performance across different methods, we define the true positive rate as the proportion of correctly identified edges in the graph, and the false positive rate as the
proportion of incorrectly identified edges in the graph. In addition, we calculate the difference between the estimated and true concentration matrices under the Frobenius norm.
We do not compute this quantity for the NS approach since it does not estimate the
concentration matrix directly.
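The edge-recovery metrics just defined can be computed as in the following sketch (one reasonable reading of the definitions above; not the authors' evaluation code):

```python
import numpy as np

def tpr_fpr(Theta_hat, Theta_true, tol=1e-8):
    """True/false positive rates for edge recovery, based on off-diagonal supports."""
    off = ~np.eye(Theta_true.shape[0], dtype=bool)
    est = np.abs(Theta_hat[off]) > tol      # estimated edges
    true = np.abs(Theta_true[off]) > tol    # true edges
    tpr = est[true].mean() if true.any() else 0.0
    fpr = est[~true].mean() if (~true).any() else 0.0
    return tpr, fpr
```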
For our proposal, we consider $T = 4$ iterations with the SCAD penalty proposed by
Fan and Li (2001), whose derivative takes the following form:
$$p'_\lambda(t) = \begin{cases} \lambda & \text{if } |t| \le \lambda, \\[2pt] \dfrac{(\gamma\lambda - |t|)_+}{\gamma - 1} & \text{if } \lambda < |t| < \gamma\lambda, \\[2pt] 0 & \text{otherwise}, \end{cases}$$
where $\gamma > 2$. In all of our simulation studies, we pick $\gamma = 2.1$. Each of the methods
involves a sparsity tuning parameter: we applied a fine grid of tuning parameter values
to obtain the curves shown in Figure 3.
We consider cases with $n \in \{150, 200\}$ and $d = 150$, with two set-ups for a $p \times p$
adjacency matrix $A$: (i) a random graph with 2.5% of the elements of $A$ set to 1; (ii) a band graph
with $A_{i,i+1} = A_{i+1,i} = 1$ for $1 \le i \le d-1$. We then use the adjacency matrix $A$ to
create a matrix $E$ as
$$E_{ij} = \begin{cases} 0 & \text{if } A_{ij} = 0, \\ 0.4 & \text{otherwise}, \end{cases}$$
and set $E = \tfrac{1}{2}(E + E^T)$. Given the matrix $E$, we set $\Theta^{-1}$ equal to $E + (0.1 - e_{\min})I$, where
$e_{\min}$ is the smallest eigenvalue of $E$. We then standardize the matrix $\Theta^{-1}$ so that its
diagonals are equal to one. Finally, we generate the data according to $X^{(1)}, \ldots, X^{(n)} \overset{\text{i.i.d.}}{\sim} N(0, \Sigma)$. We present the results averaged over 100 data sets for each of the two simulation
settings with $n \in \{150, 200\}$ and $p = 150$ in Figure 3.
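A minimal sketch of this data-generating recipe is given below; it follows the description above, with the random seed and helper names chosen for illustration.

```python
import numpy as np

def simulate(n=150, d=150, graph="random", seed=0):
    """Generate data following the random/band graph recipe described above."""
    rng = np.random.default_rng(seed)
    A = np.zeros((d, d))
    if graph == "random":
        iu = np.triu_indices(d, 1)
        A[iu] = rng.random(len(iu[0])) < 0.025    # 2.5% of upper-triangular entries
        A = A + A.T
    else:                                         # band graph
        idx = np.arange(d - 1)
        A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    E = 0.4 * (A != 0)
    E = 0.5 * (E + E.T)
    e_min = np.linalg.eigvalsh(E).min()
    Sigma = E + (0.1 - e_min) * np.eye(d)         # this is Theta^{-1} before scaling
    w = np.sqrt(np.diag(Sigma))
    Sigma = Sigma / np.outer(w, w)                # standardise so diagonals equal one
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return X, np.linalg.inv(Sigma)
```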
[Figure 3: eight panels comparing Our Proposal, NS, and Glasso for random and band graphs with n = 150 and n = 200; Row I plots the true positive rate against the false positive rate, and Row II plots the Frobenius-norm error against the false positive rate.]
Figure 3: Row I: True and false positive rates, averaged over 100 data sets with p = 150,
for random and band graphs, respectively. Row II: Difference between the estimated and
the true inverse covariance matrices under the Frobenius norm. The different curves are
obtained by varying the sparsity tuning parameter for each of the methods.
From Row I of Figure 3, we see that our proposal is very competitive relative to the
existing proposals for estimating Gaussian graphical models in terms of true and false
positive rates across all simulation settings. Row II of Figure 3 contains the difference
between the estimated and the true inverse covariance matrices under the Frobenius
norm as a function of the false positive rate. For the random graph with n = 150, we see
that the minimum error under the Frobenius norm for our proposal is smaller than that of
the graphical lasso. As we increase the number of observations to n = 200, the difference
between the minimum errors for the two proposals is more apparent. More interestingly,
the region for which our proposal has a lower Frobenius norm than the graphical lasso is
the primary region of interest. This is because an ideal estimator is one that has a low
false positive rate while maintaining a high true positive rate with low error under the
Frobenius norm. In contrast, the region for which the graphical lasso does better under
the Frobenius norm is not the primary region of interest, due to the high false positive
rate. We see similar results for the band graph setting.
6 Conclusion and Discussions
We propose the graphical nonconvex optimization, which is then approximated by a
sequence of convex programs, for estimating the inverse correlation and concentration
matrices with better rates of convergence compared with existing approaches. The proposed methodology is sequentially convex in nature and thus is computationally tractable.
Yet, surprisingly, it produces estimators with the oracle rate of convergence as if the global
optimum of the penalized nonconvex problem could be obtained. Statistically, a contraction property is established: each convex program contracts the previous estimator
by a 0.5-fraction until the optimal statistical error is reached.
Our work can be applied to many different topics: low-rank matrix completion problems, high-dimensional quantile regression and many others. We conjecture that in all of
the aforementioned topics, a similar sequential convex approximation can be proposed
and can possibly give faster rates, with controlled computing resources. It is also interesting to see how our algorithm works in large-scale distributed systems. Are there
any fundamental tradeoffs between statistical efficiency, communication and algorithmic
complexity? We leave these as future research projects.
References
Banerjee, O., El Ghaoui, L., and d’Aspremont, A. (2008), “Model selection through
sparse maximum likelihood estimation for multivariate Gaussian or binary data,” The
Journal of Machine Learning Research, 9, 485–516.
Cai, T., Liu, W., and Luo, X. (2011), “A constrained $\ell_1$ minimization approach to
sparse precision matrix estimation,” Journal of the American Statistical Association,
106, 594–607.
Cai, T., Ren, Z., and Zhou, H. H. (2016a), “Estimating structured high-dimensional
covariance and precision matrices: Optimal rates and adaptive estimation,” Electronic
Journal of Statistics, 10, 1–59.
Cai, T. T., Liu, W., and Zhou, H. H. (2016b), “Estimating sparse precision matrix:
Optimal rates of convergence and adaptive estimation,” The Annals of Statistics, 44,
455–488.
Drton, M. and Maathuis, M. H. (2016), “Structure learning in graphical modeling,”
Annual Review of Statistics and Its Application, 4, 365–393.
Fan, J. and Li, R. (2001), “Variable selection via nonconcave penalized likelihood and
its oracle properties,” Journal of the American Statistical Association, 96, 1348–1360.
Fan, J., Liu, H., Sun, Q., and Zhang, T. (2017), “I-LAMM for Sparse Learning: Simultaneous Control of Algorithmic Complexity and Statistical Error,” The Annals of
Statistics, in press.
Friedman, J., Hastie, T., and Tibshirani, R. (2008), “Sparse inverse covariance estimation
with the graphical lasso,” Biostatistics, 9, 432–441.
Ge, D., Wang, Z., Ye, Y., and Yin, H. (2015), “Strong NP-Hardness Result for Regularized $L_q$-Minimization Problems with Concave Penalty Functions,” arXiv preprint
arXiv:1501.00622.
Lam, C. and Fan, J. (2009a), “Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.” The Annals of Statistics, 37, 4254–4278.
— (2009b), “Sparsistency and rates of convergence in large covariance matrix estimation,” The Annals of Statistics, 37, 4254.
Liu, H., Han, F., Yuan, M., Lafferty, J., Wasserman, L., et al. (2012), “High-dimensional
semiparametric Gaussian copula graphical models,” The Annals of Statistics, 40, 2293–
2326.
Loh, P.-L. and Wainwright, M. J. (2015), “Regularized M-estimators with Nonconvexity:
Statistical and Algorithmic Theory for Local Optima,” Journal of Machine Learning
Research, 16, 559–616.
Meinshausen, N. and Bühlmann, P. (2006), “High-dimensional graphs and variable selection with the lasso,” The Annals of Statistics, 1436–1462.
Ravikumar, P., Wainwright, M. J., Raskutti, G., Yu, B., et al. (2011), “High-dimensional
covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence,” Electronic Journal of Statistics, 5, 935–980.
Rothman, A. J., Bickel, P. J., Levina, E., Zhu, J., et al. (2008), “Sparse permutation
invariant covariance estimation,” Electronic Journal of Statistics, 2, 494–515.
Yuan, M. (2010), “High dimensional inverse covariance matrix estimation via linear
programming,” Journal of Machine Learning Research, 11, 2261–2286.
Yuan, M. and Lin, Y. (2007), “Model selection and estimation in the Gaussian graphical
model,” Biometrika, 94, 19–35.
Zhang, C.-H. (2010a), “Nearly unbiased variable selection under minimax concave
penalty,” The Annals of Statistics, 38, 894–942.
Zhang, T. (2010b), “Analysis of multi-stage convex relaxation for sparse regularization,”
The Journal of Machine Learning Research, 11, 1081–1107.
Supplementary Material to “Graphical Nonconvex
Optimization for Optimal Estimation in Gaussian Graphical
Models”
Qiang Sun, Kean Ming Tan, Han Liu and Tong Zhang
Abstract
This supplementary material collects proofs for the main theoretical results in the
main text and additional technical lemmas. The proofs of Proposition 3.4, Theorems
3.5 and 3.6 are collected in Section A. Section B provides the proof for Theorem 3.8.
Proofs related to semiparametric graphical models are given in Section C. Various
concentration inequalities and preliminary lemmas are postponed to Sections D and
E, respectively.
A Rate of Convergence in Frobenius Norm
This section presents an upper bound for the adaptive estimator $\widehat{\Gamma}^{(\ell)}$ in Frobenius norm,
which in turn helps establish the scaling conditions needed to achieve the optimal spectral
norm convergence rate.
A.1 Proofs of Proposition 3.4, Theorems 3.5 and 3.6
In this section, we collect the proofs for Proposition 3.4, Theorems 3.5 and 3.6.
In order to suppress the noise at the $\ell$-th step, it is necessary to control $\min_{(i,j)\in S}|\widehat{\Gamma}^{(\ell-1)}_{ij}|$
in high dimensions. For this, we construct an entropy set, $E_\ell$, of $S$ and analyze the magnitude of $\|\Lambda^{(\ell-1)}_{E_\ell^c}\|_{\min}$. The entropy set at the $\ell$-th stage, $E_\ell$, is defined as
$$E_\ell = \Big\{ (i,j) : (i,j) \in S \ \text{ or } \ \Lambda^{(\ell-1)}_{ij} < \lambda w(u), \ \text{ for } u = 2\big(32\|\Gamma^*\|_2^2 + \|\Sigma^*\|_1^2 \vee 1\big)\lambda \Big\}. \qquad (A.1)$$
Thus the constant in Assumption 3.3 is $c = 2\big(32\|\Gamma^*\|_2^2 + \|\Sigma^*\|_1^2 \vee 1\big)$. Then it can be
seen that $S \subseteq E_\ell$, and thus $E_\ell$ is an entropy set of $S$ for any $\ell \ge 1$. Proposition 3.4
follows from a slightly more general result below, which establishes the rate of convergence
for the one-step estimator of the sparse inverse correlation matrix, $\widehat{\Gamma}^{(1)}$.
Proposition A.1 (One-step Estimator). Assume that Assumption 3.1 holds. Suppose
$8\|\Gamma^*\|_2^2\,\lambda\sqrt{s} < 1$. Take $\lambda$ such that $\lambda \asymp \sqrt{(\log d)/n}$ and suppose $n \gtrsim \log d$. Then with
probability at least $1 - 8/d$, $\widehat{\Gamma}^{(1)}$ must satisfy
$$\big\|\widehat{\Gamma}^{(1)} - \Gamma^*\big\|_F \le C\|\Gamma^*\|_2^2\sqrt{\frac{s\log d}{n}}.$$
Proof of Proposition A.1. Define the event $\mathcal{J} = \big\{\|\widehat{C} - C^*\|_{\max} \le \lambda/2\big\}$. On the event $\mathcal{J}$,
by applying Lemma A.4 and taking $E = S$, we obtain $\|\widehat{\Gamma}^{(1)} - \Gamma^*\|_F \le 8\|\Gamma^*\|_2^2\,\lambda\sqrt{s}$. If we
further take $\lambda = 3c_2\sqrt{(\log d)/n} \asymp \sqrt{(\log d)/n}$, then by Lemma D.5, the event $\mathcal{J}$ holds
with probability at least $1 - 8d^{-1}$. The result follows by plugging in the choice of $\lambda$.
Theorems 3.5 and 3.6 follow from a slightly more general result below, which characterizes the rate of convergence of $\widehat{\Gamma}^{(\ell)}$ in Frobenius norm and that of $\widetilde{\Theta}^{(T)}$ in spectral
norm.
p
Theorem A.2. Assume that assumptions 3.1, 3.2 and 3.3. Suppose that 8k ⇤ k22 s <
p
1. Take such that ⇣ log d/n. Then with probability at least 1 8d 1 , b (`) satisfies
b (`)
⇤
F
8k
|
Moreover, if that T & log(
⇤ 2
k2 krL(
⇤
{z
Optimal Rate
p
1 b (` 1)
)S kF +
} |2
{z
, 1 ` T.
}
F
Contraction
p
s/n , and
F
r
r ◆
⇤k
log d _ k ⇤ k22 s
2
.
2
n
n
min
min
n), we have
✓ 3
k
⇤
⇥ 2 = OP max 3
e (T )
⇥
⇤
b (T )
⇤
= OP k
⇤ k2
2
Proof of Theorem A.2. Under the conditions of theorem, combining Proposition A.7 and
Lemma D.5, we obtain the following contraction property of the solutions, { b (`) }T
`=1 ,
1 b (` 1)
⇤
.
F
2
Next, we introduce an inequality by induction analysis. Specifically, if an a0 +
↵an 1 , 8 n 2 and 0 ↵ < 1, then
b (`)
⇤
F
⇤ 2
k2 krL(
4k
an a0
1
⇤
↵n
1 ↵
1
) S kF +
+ ↵n
1
a1 .
⇤ k2 krL( ⇤ ) k , we obtain that b (`)
⇤
8k ⇤ k22 krL( ⇤ )S kF +
S F
2
F
⇤
⇤ k respec. In the sequel, we bound krL( ⇤ )S kF and k b (1)
F
F
p
⇤ k . 8k ⇤ k2
tively. By Proposition A.1, we have k b (1)
s.
Moreover,
if we
F
2
p
p
p
T
1
(1)
⇤
⇤
2
b
let T
log( n) log 2 & log( n), then (1/2)
k
kF 16k k2 · s/n.
p
⇤
⇤
2
On the other side, we have krL( )S kF = OP (k k2 · s/n), which follows from
⇤k =
Lemma D.4. Therefore, combining the above results obtains us that k b (T )
F
p
OP k ⇤ k22 s/n .
e (T ) ⇥⇤ k2 , we apply Lemma E.3 and obtain
To achieve the statistical rate for k⇥
Taking a0 = 4k
` 1 b (1)
1/2
that
e (T )
k⇥
⇥ ⇤ k2 =
c
W
+
c
kW
|
1
c
W
1
c
+kW
|
W
1
W
b (T )
1
1
W
⇤
⇤c 1
1 2 b (T )
k2 k
{z
W
⇤
(R1)
1
W
1
k2 k
{z
(R3)
c
W
⇤
W
c
+ W
2
c
k2 +kW
} |
c
k2 kW
2
1
1
1
+
2
1 b (T )
1
c
k2 +kW
} |
W
c
W
⇤
1
1
W
W
1
2
k2 k b (T ) k2 kW
{z
(R2)
1
1
k2 kW
k k b (T )
{z2
1
(R4)
b (T ) W
1
k2
}
⇤
k2 .
}
1
2
We now bound terms (R1) to (R4) respectively. Before we proceed, we apply Lemma
D.2 and the union sum bound to obtain that, for any " 0,
⇣
⌘
n
o
n
o
c 2 W2 k2 > " max ⌃⇤ d · exp
P kW
n
·
C(")
=
exp
n
·
C(")+log
d
,
ii
i
1 ("
where C(") = 2
log(1 + ")). Suppose that 0 " 1/2, then we have n · C(")
p
2
n · " /3. Further suppose that n 36 log d and take " = 3 (log d)/n, we obtain that
n · C(")+log d 2 log d and
r
✓
◆
log d
1
2
2
2
c
P kW
W k2 > 3 max ·
2,
n
d
2 . Therefore, we have W
c 2 W2 =
where we use the assumption that maxi ⌃⇤ii max
2
⇣
⌘
p
2
2
2
c
OP max · log d/n . Since W and W are diagonal and thus commutative. We note
that, for any two event A and B, P(A) = P(A \ B) + P(A \ B c ) holds. Therefore, for any
M > 0, we have
r
✓
◆
log
d
1
1
2
c
P W
W
> M max
2
n
r
✓
log d
c 1 W 1 >M 2
P W
,
max
2
n
◆
p
2
1
1
2
2
2
c
c
W
W
2( 2+1) W 2 min W W W 2
2
✓
◆
p
2
2
c 1 W 1 > 2( 2+1) W
c 2 W2
+P W
W
W
.
2
2 min
2
Further using Lemma E.7 yields that
r
✓
◆
log
d
1
1
2
c
P W
W
> M max
2
n
✓
p
2
c 2 W2
P 2( 2+1) W 2 min
W2 W
|
{z
+P
|
✓
(T1)
c2
W
W2
>2
2
{z
1
min
W2
◆
>M
2
2
max
r
log d
n
.
◆
}
}
(T2)
2
4
By taking M = M1 · kWk2 min
(W2 ) = M1 · max / min
and letting M1 ! 0, we
2
2
get (T1) ! 0. Under the assumption that max / min = O (n/ log d)1/3 , we have
p
2
2
c 1
n/ log d , and thus (T2) ! 0. Therefore we obtain that W
max / min = o
p
4 3
W 1 2 = OP min
max (log d)/n . Similarly, we have the following facts:
b (T )
2
= OP k
⇤
k2 ,
c
W
1
2
=
1
min
c = OP (
W
1
min ),
and W
1
2
=
Applying the above results to the terms (R1)-(R4). we obtain that
r
r ◆
✓
◆
✓
6
s
s
max log d
2
2
⇤ 2
⇤ 2
(R1) = OP min k k2
· 6
= OP min k k2
,
n
n
min n
r
r ◆
✓ 3
◆
✓
log d
s
max
2
⇤
⇤ 2
(R2) = (R3) = OP
k k2
, (R4) = OP min k k2
.
3
n
n
min
3
1
min .
Therefore, by combining the rate for terms (R1)-(R4), we obtain the final result.
A.2 Technical Lemmas
s (⇥, ⇥⇤ ) =
Define the symmetrized Bregman divergence for the loss function L(·) as DL
⌦
↵
⇤
⇤
d⇥d
d⇥d
rL(⇥) L(⇥ ), ⇥ ⇥ . For any matrix A 2 R , let A 2 R
be the o↵ diagonal
matrix of A with diagonal entries equal to 0, and A+ = A A be the diagonal mtrix.
Lemma A.3. For the symmetrized Bregman divergence defined above, we have
⌦
s
DL
(⇥, ⇥⇤ ) = rL(⇥)
rL(⇥⇤ ), ⇥
⇥⇤
↵
k⇥⇤ k2 + k⇥
2
⇥ ⇤ k2
k⇥
⇥⇤ k2F .
Proof of Lemma A.3. We use vec(A) to denote the vectorized form of any matrix A.
Then by the mean value theory, there exists a 2 [0, 1] such that,
⌦
s
DL
(⇥, ⇥⇤ ) = rL(⇥)
min (r
2
↵
⇥⇤ = vec(⇥
rL(⇥⇤ ), ⇥
L(⇥⇤ +
) k k2F ,
where
⇥⇤ )T r2 L(⇥⇤ +
=⇥
) vec(⇥
⇥⇤ )
⇥⇤ .
By standard properties of the Kronecker product and the Weyl’s inequality (Horn and
Johnson, 2012), we obtain that
⇣
⌘
⇣
⌘
1
2
⇤
) = min (⇥⇤ +
) ⌦ (⇥⇤ +
)
min r L(⇥ +
= k⇥⇤ +
Finally, observing that
1, we obtain
⌦
s
DL
(⇥, ⇥⇤ ) = rL(⇥)
Plugging the definition of
2
k⇥⇤ k2 + k k2
k2 2
↵
rL(⇥⇤ ),
.
k⇥⇤ k2 + k k2
2
⇥⇤ k2F .
k⇥
obtains us the final bound.
⇤ k by using localized
The following lemma characterizes an upper bound of k b
F
analysis.
p
Lemma A.4. Suppose 8k ⇤ k2 s < 1. Take E such that S ✓ E and |E| 2s. Further
assume k E c kmin
/2 krL( ⇤ )kmax . Let b be the solution to (B.4). Then b must
satisfy
kb
⇤
⇤ 2
k2
kF 4k
k
S kF +krL(
⇤
)E kF 8k
⇤ 2
k2
p
s.
Proof of Lemma A.4. We start by introducing an extra local parameter r which satp
p p
p
isfies 8k ⇤ k22 s < r k ⇤ k2 . This is possible since
|E| 2 s ! 0 and
p
8k ⇤ k2 s < 1 by assumption. Based on this local parameter r, we construct an inter⇤ ), where t is taken such that k( e
⇤ k = r,
mediate estimator: e = ⇤ + t · ( b
F
⇤
e
e
if k(
kF > r; t = 1 otherwise. Applying Lemma A.3 with ⇥1 =
and ⇥2 = ⇤
obtains us
⇤
2
+r
2
e
⇤ 2
F
⌦
rL( e )
4
rL(
⇤
), e
⇤
↵
.
(A.2)
To bound the right hand side of the above inequality, we use Lemma E.2 to obtain
s e
DL
( ,
⇤
s b
) tDL
( ,
⇤
⌦
) = t rL( b )
rL(
⇤
), b
⇤
↵
.
(A.3)
We note that the sub-di↵erential of the norm k · k1,o↵ evaluated at
consists the set
d⇥d
of all symmetric matrices 2 R
such that ij = 0 if i = j; ij = sign( ij ) if i 6= j
and ij 6= 0; ij 2 [ 1, +1] if i 6= j and ij = 0, where ij is the (i, j)-th entry of
. Then by the Karush-Kuhn-Tucker conditions, there exists a b 2 @k b k1,o↵ such that
b =C
b b 1+
b = 0. Plugging (A.3) into (A.2) and adding the term
rL( b )+
⇤ i on both sides of (A.3), we obtain
b, b
h
(k
⇤
k2 +r) 2 k e
), b
{z
⇤ 2
kF +t hrL(
⇤
|
b, b
{z
⇤
i+t h
} |
I
b, b
t hrL( b )+
|
{z
⇤
i
}
II
⇤
i.
}
III
(A.4)
Next, we bound terms I, II and III respectively. For a set E, let E c denote its complement
with respect to (w.r.t.) the full index set {(i, j) : 1 i, j d}. For term I, separating
⇤ to E [ D and E c \ D, in which D is the set consisting
the support of rL( ) and b
of all diagonal elements, and then using the matrix Hölder inequality, we obtain
⌦
⇤
rL(
), b
⇤
↵
=
⌦
⇤
rL(
)
⇤
rL(
E[D
)
⇤
rL(
, b
⇤
E[D F
)
E c \D F
b ), ( b
⇤
b )S[D , ( b
)i = h(
⇤
b
↵ ⌦
+ rL(
⇤
)S[D i +h(
For the last term in the above equality, we have
b )S c \D , ( b
h(
⇤
)S c \D i = h
S c \D , |
⇤
)
E c\D
E[D F
⇤
.
E c \D F
b ) and ( b
For term II, separating the support of (
obtain
h(
b
E[D
b S c \D |i = h
⇤)
, b
⇤
E c\D
↵
to S [ D and S c \ D, we
b )S c \D , ( b
S c \D , |(
⇤
b
⇤
)S c \D i. (A.5)
)S c \D |i.
(A.6)
Plugging (A.6) into (A.5) and applying matrix Hölder inequality yields
h(
b, b
⇤
b )S[D , ( b
i = h(
b )S , ( b
= h(
k
S kF k(
b
⇤
⇤
)S[D i + h
)S i + k
S c \D , |(
S c \D kF k(
) S kF + k
b
E c \D kF k(
b
b
⇤
⇤
)S c \D |i
)S c\D kF
⇤
)E c\D kF ,
where we use D = 0 in the second equality and E c \D ✓ S c \D in the last inequality.
⌦
↵
b, b
For term III, using the optimality condition, we have III = rL( b )+
= 0.
Plugging the bounds for term I, II and III back into (A.4), we find that
k
⇤
k2 + r
2
e
⇤ 2
+t
F
k
E c \D kF
t
rL(
k(rL(
⇤
5
)
E[D F
))E c \D kF · ( b
+ S
· b
⇤
F
⇤
⇤
)E c \D
F
.
F
Further observing the facts that k E c \D kF
⇤k = k e
rL( ⇤ ) E c\D F and tk b
F
we can simplify the above inequality to
(k
⇤
2
k2 +r)
ke
⇤
kF k
S kF +krL(
p
p
|E c \D| E c\D min
|E c \D| rL(
⇤ k , dividing both sides by k e
F
⇤
)E[D kF = k
⇤
S kF +krL(
) E kF 2
⇤)
max
⇤k ,
F
p
s,
b C⇤ )E[D kF = k(C
b C⇤ )E kF = krL( ⇤ )E kF in
where we use krL( ⇤ )E[D kF = k(C
the equality, and the last inequality follows from the Cauchy-Schwarz inequality, the fact
k kmax and the assumption that
2krL( ⇤ )kmax . Therefore, by the definition of
⇤ k 2(k ⇤ k + r)2 ps 8k ⇤ k2 ps < r, which implies e = b
r, we obtain k e
2
F
2
from the construction of e . Thus b satisfies the desired `2 error bound.
Recall the definition of E_ℓ, 1 ≤ ℓ ≤ T. We can bound ‖Θ̂^{(ℓ)} − Θ*‖_F in terms of ‖λ^{(ℓ−1)}_S‖_F.

Lemma A.5 (Sequential Bound). Under the same assumptions and conditions as in Lemma A.4, for ℓ ≥ 1, Θ̂^{(ℓ)} must satisfy

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² (‖λ^{(ℓ−1)}_S‖_F + ‖∇L(Θ*)_{E_ℓ}‖_F).

Proof of Lemma A.5. Now if we assume that for all ℓ ≥ 1, we have the following

|E_ℓ| ≤ 2s, where E_ℓ is defined in (A.1), and   (A.7)
λ^{(ℓ−1)}_{E_ℓ^c\D, min} ≥ λ/2 ≥ ‖∇L(Θ*)‖_max.   (A.8)

Using the matrix Hölder inequality, we obtain

‖λ^{(ℓ−1)}_S‖_F ≤ √|S| λ^{(ℓ−1)}_{S, max} ≤ λ√s and ‖∇L(Θ*)_{E_ℓ}‖_F ≤ √|E_ℓ| ‖∇L(Θ*)_{E_ℓ}‖_max.

Therefore, we have

‖λ^{(ℓ−1)}_S‖_F + ‖∇L(Θ*)_{E_ℓ}‖_F ≤ λ√s + √|E_ℓ| ‖∇L(Θ*)_{E_ℓ}‖_max ≤ λ√s + λ√|E_ℓ|/2 ≤ 2λ√s,   (A.9)

where the second inequality is due to the assumption that ‖∇L(Θ*)‖_max ≤ λ/2. The ℓ2 error bound is given by Lemma A.4 by taking λ = λ^{(ℓ−1)} and E = E_ℓ, i.e.

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² · (‖λ^{(ℓ−1)}_S‖_F + ‖∇L(Θ*)_{E_ℓ}‖_F) ≤ 8‖Θ*‖₂² λ · √s,   (A.10)

where the last inequality is due to (A.9). Therefore, we only need to prove that (A.7) and (A.8) hold by induction. For ℓ = 1, we have λ^{(0)}_{ij} ≥ w(u) for any u and thus E_1 = S, which implies that (A.7) and (A.8) hold for ℓ = 1. Now assume that (A.7) and (A.8) hold at ℓ − 1 for some ℓ ≥ 2. Since (i, j) ∈ E_ℓ\S implies that (i, j) ∉ S and λ^{(ℓ)}_{ij} = w(|Θ̂^{(ℓ−1)}_{ij}|) < w(u) = λ/2. By assumption, and since w(x) is non-increasing, we must have |Θ̂^{(ℓ−1)}_{ij}| ≥ u. Therefore by the induction hypothesis, we obtain that

√|E_ℓ\S| ≤ ‖Θ̂^{(ℓ−1)}_{E_ℓ\S}‖_F / u ≤ ‖Θ̂^{(ℓ−1)} − Θ*‖_F / u ≤ 8‖Θ*‖₂² λ √s / u ≤ √s,

where the second last inequality follows from Lemma A.4 and the fact that (A.7) and (A.8) hold at ℓ − 1. This implies that |E_ℓ| ≤ 2|S| = 2s. Now for such E_ℓ^c, we have λ_{E_ℓ^c, min} ≥ w(u) ≥ λ/2 ≥ ‖∇L(Θ*)‖_max, which completes the induction step.
Our next lemma establishes the relationship between the adaptive regularization parameter and the estimator from the previous step.

Lemma A.6. Assume w(·) ∈ T. Let λ_{ij} = w(|Θ_{ij}|) for some Θ = (Θ_{ij}) and w(Θ_S) = (w(Θ_{ij}))_{(i,j)∈S}. Then for the Frobenius norm ‖·‖_F, we have

‖λ_S‖_F ≤ ‖w(|Θ*_S| − u)‖_F + λ u⁻¹ ‖Θ*_S − Θ_S‖_F.

Proof of Lemma A.6. By assumption, if |Θ*_{ij} − Θ_{ij}| ≥ u, then w(|Θ_{ij}|) ≤ λ ≤ λ u⁻¹ |Θ_{ij} − Θ*_{ij}|; otherwise, w(|Θ_{ij}|) ≤ w(|Θ*_{ij}| − u). Therefore, the following inequality always holds:

w(|Θ_{ij}|) ≤ w(|Θ*_{ij}| − u) + λ u⁻¹ |Θ*_{ij} − Θ_{ij}|.

Then by applying the triangle inequality for the norm ‖·‖_F, we obtain that

‖λ_S‖_F ≤ ‖w(|Θ*_S| − u)‖_F + λ u⁻¹ ‖Θ*_S − Θ_S‖_F.
Our last technical result concerns a contraction property, namely, how the sequential approach improves the rate of convergence adaptively.

Proposition A.7 (Contraction Property). Assume that Assumptions 3.1, 3.2 and 3.3 hold. Assume that λ ≥ 2‖∇L(Θ*)‖_max and 8‖Θ*‖₂² λ√s < 1. Then Θ̂^{(ℓ)} satisfies the following contraction property

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² ‖∇L(Θ*)_S‖_F + (1/2) ‖Θ̂^{(ℓ−1)} − Θ*‖_F.

Proof of Proposition A.7. Under the conditions of the theorem, the proof of Lemma A.5 yields that |E_ℓ| ≤ 2s, where E_ℓ is defined in (A.1), and λ^{(ℓ−1)}_{E_ℓ^c\D, min} ≥ λ/2 ≥ ‖∇L(Θ*)‖_max. Thus, applying Lemma A.5 with λ = λ^{(ℓ−1)} and E = E_ℓ, we obtain

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² · (‖λ^{(ℓ−1)}_S‖_F + ‖∇L(Θ*)_{E_ℓ}‖_F).   (A.11)

On the other side, by Lemma A.6, we can bound ‖λ^{(ℓ−1)}_S‖_F in terms of ‖Θ̂^{(ℓ−1)} − Θ*‖_F:

‖λ^{(ℓ−1)}_S‖_F ≤ ‖w(|Θ*_S| − u)‖_F + λ u⁻¹ ‖Θ̂^{(ℓ−1)} − Θ*‖_F.   (A.12)

Plugging the bound (A.12) into (A.11) yields that

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² (‖∇L(Θ*)_{E_ℓ}‖_F + ‖w(|Θ*_S| − u)‖_F) + 4‖Θ*‖₂² λ u⁻¹ ‖Θ̂^{(ℓ−1)} − Θ*‖_F,   (A.13)

where the first term on the right-hand side is denoted by I. In the next, we bound term I. Separating the support of ∇L(Θ*)_{E_ℓ} into S and E_ℓ\S and then using the triangle inequality, we obtain

I ≤ 4‖Θ*‖₂² (‖∇L(Θ*)_S‖_F + ‖∇L(Θ*)_{E_ℓ\S}‖_F + ‖w(|Θ*_S| − u)‖_F).   (A.14)

Moreover, we have the following facts. First, we have ‖∇L(Θ*)_{E_ℓ\S}‖_F ≤ √|E_ℓ\S| ‖∇L(Θ*)‖_max by the Hölder inequality. From the assumption, we know ‖∇L(Θ*)‖_max ≤ λ/2. Plugging these bounds into (A.14) results in ‖∇L(Θ*)_{E_ℓ}‖_F ≤ ‖∇L(Θ*)_S‖_F + (λ/2)√|E_ℓ\S|. Now, by following a similar argument as in Lemma A.5, we can bound √|E_ℓ\S| by ‖Θ̂^{(ℓ−1)}_{E_ℓ\S}‖_F / u ≤ ‖Θ̂^{(ℓ−1)} − Θ*‖_F / u. Therefore, term I can be bounded in terms of ‖∇L(Θ*)_S‖_F, ‖w(|Θ*_S| − u)‖_F and λ u⁻¹ ‖Θ̂^{(ℓ−1)} − Θ*‖_F. Plugging the upper bound for I into (A.13), we obtain

‖Θ̂^{(ℓ)} − Θ*‖_F ≤ 4‖Θ*‖₂² (‖∇L(Θ*)_S‖_F + ‖w(|Θ*_S| − u)‖_F) + (4‖Θ*‖₂² + 1) λ u⁻¹ ‖Θ̂^{(ℓ−1)} − Θ*‖_F.

Now observing that ‖Θ*_S‖_min ≥ u + αλ, thus w(|Θ*_S| − u) ≤ w(αλ · 1_S) = 0_S, where 1_S is a matrix with each entry equal to 1 and 0_S is defined similarly. Further noticing that (4‖Θ*‖₂² + 1) λ u⁻¹ ≤ 1/2, we complete the proof.
B Improved Convergence Rate Using Sparsity Pattern

We develop an improved spectral norm convergence rate using the sparsity pattern in this section. We collect the proof of Theorem 3.8 first and then give the technical lemmas that are needed for the proof.

B.1 Proof of Theorem 3.8

Proof of Theorem 3.8. Let us define S^{(ℓ)} = {(i, j) : |Θ̂^{(ℓ)}_{ij} − Θ*_{ij}| ≥ u}, where u is introduced in (A.1). Let S^{(0)} = {(i, j) : |Θ*_{ij}| ≥ u} = S. Then Lemma B.5 implies

‖λ^{(ℓ−1)}_{E_ℓ}‖_F ≤ ‖w(|Θ*_S| − u)‖_F + λ(√|S^{(ℓ−1)} \ S| + √|E_ℓ/S|).

For any (i, j) ∈ E_ℓ/S, we must have |Θ̂^{(ℓ−1)}_{ij} − Θ*_{ij}| = |Θ̂^{(ℓ−1)}_{ij}| > u and thus (i, j) ∈ S^{(ℓ−1)}/S. Therefore, applying Lemma B.5 and using the fact that ‖Θ*_S‖_max ≥ u + αλ, we obtain

‖Θ̂^{(ℓ)} − Θ̂ᵒ‖_F ≤ 32‖Θ*‖₂² λ {√|S^{(ℓ−1)} \ S| + √|S^{(ℓ−1)}/S|} ≤ 32√2 ‖Θ*‖₂² λ √|S^{(ℓ−1)}|.

On the other side, (i, j) ∈ S^{(ℓ)} implies that |Θ̂^{(ℓ)}_{ij} − Θ̂ᵒ_{ij}| ≥ |Θ̂^{(ℓ)}_{ij} − Θ*_{ij}| − |Θ̂ᵒ_{ij} − Θ*_{ij}| ≥ u/2. Exploiting the above fact, we can bound √|S^{(ℓ)}| in terms of ‖Θ̂^{(ℓ)} − Θ̂ᵒ‖_F:

√|S^{(ℓ)}| ≤ (2/u) ‖Θ̂^{(ℓ)} − Θ̂ᵒ‖_F ≤ 64√2 ‖Θ*‖₂² (λ/u) √|S^{(ℓ−1)}| ≤ √|S^{(ℓ−1)}| / √2.

By induction on ℓ, we obtain

√|S^{(ℓ)}| ≤ (1/2)^{ℓ/2} √|S^{(0)}| = (1/2)^{ℓ/2} √s.

Since ℓ > log s / log 2, we must have that the right hand side of the above inequality is smaller than 1, which implies that S^{(ℓ)} = ∅ and Θ̂^{(ℓ)} = Θ̂ᵒ. Therefore, the estimator enjoys the strong oracle property. Using Lemma B.4, we obtain that

‖Θ̂^{(ℓ)} − Θ*‖₂ = ‖Θ̂ᵒ − Θ*‖₂ ≲ ‖M*‖₂ ‖(Ĉ − C*)_S‖_max.

Applying Lemma D.6 finishes the proof of the theorem.

B.2 Technical Lemmas

We start with the definitions of some constants. For notational simplicity, let κ₁ = ‖Σ*‖₁ and D = {(i, i) : 1 ≤ i ≤ d}. Define the oracle estimator as

Θ̂ᵒ = argmin_{supp(Θ)=S, Θ ∈ S₊^d} {⟨Θ, Ĉ⟩ − log det(Θ)}.

Recall that s_max = max_j Σ_i 1(Θ*_{ij} ≠ 0) is the maximum degree.
Lemma B.1. Suppose that the weight function satisfies w(u) ≥ λ/2 for u defined in (A.1). Assume that 2 s_max κ₁ c_n ≤ ‖Θ*‖₂ and 8‖Θ*‖₂² λ√s < 1. If λ ≥ 2‖∇L(Θ̂ᵒ)‖_max, we must have

|E_ℓ| ≤ 2s and ‖Θ̂^{(ℓ)} − Θ̂ᵒ‖_F ≤ 32‖Θ*‖₂² ‖λ^{(ℓ−1)}_{E_ℓ}‖_F.

Proof of Lemma B.1. We assume that for all ℓ ≥ 1, we have the following:

|E_ℓ| ≤ 2s, where E_ℓ is defined in (A.1), and   (B.1)
λ^{(ℓ−1)}_{E_ℓ^c, min} ≥ ‖∇L(Θ̂ᵒ)‖_max.   (B.2)

Using Lemma B.4, we obtain that ‖Θ̂ᵒ‖₂ ≤ ‖Θ*‖₂ + ‖Θ̂ᵒ − Θ*‖₂ ≤ ‖Θ*‖₂ + 2κ₁ c_n ‖M*‖₂. Therefore, the assumption of the lemma implies 4‖Θ̂ᵒ‖₂² λ√s < 1. Replacing S by E_ℓ in Lemma B.3 and using the Hölder inequality, we have

‖Θ̂^{(ℓ)} − Θ̂ᵒ‖_F ≤ 4‖Θ̂ᵒ‖₂² ‖λ^{(ℓ−1)}_{E_ℓ}‖_F ≤ 16‖Θ*‖₂² ‖λ^{(ℓ−1)}_{E_ℓ}‖_F ≤ 32‖Θ*‖₂² λ√s.   (B.3)

For ℓ = 1, we have λ^{(0)}_{ij} ≥ w(u) and thus E_1 = S, which implies that (B.1) and (B.2) hold for ℓ = 1. Now assume that (B.1) and (B.2) hold at ℓ − 1 for some ℓ ≥ 2. Since j ∈ E_ℓ \ S implies that j ∉ S and w(|Θ̂^{(ℓ−1)}_j|) = λ^{(ℓ)}_j < w(u) by assumption, and since w(x) is decreasing, we must have |Θ̂^{(ℓ−1)}_j| ≥ u. Therefore by the induction hypothesis, we obtain that

√|E_ℓ \ S| ≤ ‖Θ̂^{(ℓ−1)}_{E_ℓ\S}‖_F / u ≤ ‖Θ̂^{(ℓ−1)} − Θ̂ᵒ‖_F / u ≤ 32‖Θ*‖₂² λ√s / u ≤ √s,

where the last inequality follows from the definition of u and the fact that (B.1) and (B.2) hold at ℓ − 1. This inequality implies that |E_ℓ| ≤ 2|S| = 2s. Now for such E_ℓ^c, we have λ_{E_ℓ^c, min} ≥ w(u) ≥ λ/2 ≥ ‖∇L(Θ̂ᵒ)‖_max, which completes the induction step. This completes the proof.

With some abuse of notation, we let |Θ*_S| = (|Θ*_{ij}|)_{(i,j)∈S} and |Θ*_S| − u = (|Θ*_{ij}| − u)_{(i,j)∈S}. The following inequality bounds the regularization parameter λ_E = (w(Θ*_{ij}))_{(i,j)∈E} in terms of functionals of Θ* and λ.

Lemma B.2. Let λ = w(|Θ|). For any set E ⊇ S, λ_E must satisfy

‖λ_E‖_F ≤ ‖w(|Θ*_S| − u)‖_F + λ (|E/S| + |{j ∈ S : |Θ*_j − Θ_j| ≥ u}|)^{1/2}.

Proof. By the triangle inequality, we have ‖λ_E‖_F ≤ ‖λ_S‖_F + λ√|E/S|. We further bound ‖λ_S‖_F. If |Θ*_j − Θ_j| ≥ u, then we have w(|Θ_j|) ≤ λ · 1(|Θ*_j − Θ_j| ≥ u); otherwise, since w(·) is non-increasing, |Θ*_j − Θ_j| < u implies w(|Θ_j|) ≤ w(|Θ*_j| − u). Therefore, using the Cauchy-Schwarz inequality completes our proof.

Define the following optimization problem

Θ̂ = argmin_{Θ ∈ S₊^d} {⟨Θ, Ĉ⟩ − log det(Θ) + ‖λ ∘ Θ‖_{1,off}}.   (B.4)

Lemma B.3. Let λ_{S^c/D, min} ≥ ‖∇L(Θ̂ᵒ)‖_max and 4‖Θ̂ᵒ‖₂² λ√s < 1. Then Θ̂ must satisfy

‖Θ̂ − Θ̂ᵒ‖_F ≤ 4‖Θ̂ᵒ‖₂² ‖λ_S‖_F.

Proof. We construct an intermediate solution Θ̃ = Θ̂ᵒ + t(Θ̂ − Θ̂ᵒ), where t is chosen such that ‖Θ̃ − Θ̂ᵒ‖_F = r if ‖Θ̂ − Θ̂ᵒ‖_F > r, and t = 1 otherwise. Here r satisfies 4‖Θ̂ᵒ‖₂² λ√s < r ≤ ‖Θ̂ᵒ‖₂. Lemma A.3 implies that

(‖Θ̂ᵒ‖₂ + r)⁻² ‖Θ̃ − Θ̂ᵒ‖²_F ≤ ⟨∇L(Θ̃) − ∇L(Θ̂ᵒ), Θ̃ − Θ̂ᵒ⟩ ≡ D^s_L(Θ̃, Θ̂ᵒ).   (B.5)

Then, we use Lemma E.2 to upper bound the right hand side of the above inequality:

D^s_L(Θ̃, Θ̂ᵒ) ≤ t D^s_L(Θ̂, Θ̂ᵒ) = t⟨∇L(Θ̂) − ∇L(Θ̂ᵒ), Θ̂ − Θ̂ᵒ⟩.

Plugging the above inequality into (B.5), we obtain

(‖Θ̂ᵒ‖₂ + r)⁻² ‖Θ̃ − Θ̂ᵒ‖²_F ≤ t⟨∇L(Θ̂) − ∇L(Θ̂ᵒ), Θ̂ − Θ̂ᵒ⟩.   (B.6)

We further control the right hand side of the above inequality by exploiting the first-order optimality conditions, which are ∇L(Θ̂) + λ ∘ ξ̂ = 0 and ∇L(Θ̂ᵒ)_{S∪D} = 0. Therefore, adding and subtracting the term λ ∘ ξ̂ on the right hand side of (B.6) and using the optimality conditions gives

(‖Θ̂ᵒ‖₂ + r)⁻² ‖Θ̃ − Θ̂ᵒ‖²_F + t⟨λ ∘ ξ̂, Θ̂ − Θ̂ᵒ⟩ + t⟨∇L(Θ̂ᵒ), Θ̂ − Θ̂ᵒ⟩ ≤ 0,   (B.7)

where the last two inner products are denoted by I and II. Therefore, to bound ‖Θ̃ − Θ̂ᵒ‖²_F, it suffices to bound I and II separately. For term I, by decomposing the support into S and S^c/D, and then using the matrix Hölder inequality, we have

I ≥ −‖λ_S‖_F ‖(Θ̃ − Θ̂ᵒ)_S‖_F + λ_{S^c/D, min} ‖vec(Θ̃ − Θ̂ᵒ)_{S^c/D}‖₁.

Again, by using the optimality condition, we have

II = ⟨∇L(Θ̂ᵒ)_{S^c/D}, (Θ̃ − Θ̂ᵒ)_{S^c/D}⟩ ≥ −‖∇L(Θ̂ᵒ)_{S^c/D}‖_max ‖vec(Θ̃ − Θ̂ᵒ)_{S^c/D}‖₁.

By plugging the bounds for I and II back into (B.7), we have

(‖Θ̂ᵒ‖₂ + r)⁻² ‖Θ̃ − Θ̂ᵒ‖²_F ≤ ‖λ_S‖_F ‖(Θ̃ − Θ̂ᵒ)_S‖_F − (λ_{S^c/D, min} − ‖∇L(Θ̂ᵒ)_{S^c/D}‖_max) ‖vec(Θ̃ − Θ̂ᵒ)_{S^c/D}‖₁.

By assumption, we know that λ_{S^c/D, min} ≥ ‖∇L(Θ̂ᵒ)‖_max, which implies that the second term on the right hand side of the above inequality can be dropped. Thus, we have ‖Θ̃ − Θ̂ᵒ‖_F ≤ (‖Θ̂ᵒ‖₂ + r)² ‖λ_S‖_F ≤ 4‖Θ̂ᵒ‖₂² ‖λ_S‖_F ≤ 4‖Θ̂ᵒ‖₂² λ√s < r. By the construction of Θ̃, we must have t = 1, and thus Θ̃ = Θ̂.

Recall that M* is the sparsity pattern matrix corresponding to Θ*.

Lemma B.4. If 4κ₁ c_n + 1 < √(1 + 4κ₁/s_max) and ‖(Ĉ − C*)_S‖_max ≤ c_n/2 for a sequence c_n, then we have

‖Θ̂ᵒ − Θ*‖_max ≤ 2κ₁ c_n and ‖Θ̂ᵒ − Θ*‖₂ ≤ 2κ₁ c_n ‖M*‖₂.

Proof of Lemma B.4. Let Δ = Θ̂ᵒ − Θ*. It suffices to show that ‖Δ‖_max ≤ r, where r = 2κ₁ c_n. To show this, we construct an intermediate estimator Θ̃ = Θ* + t(Θ̂ᵒ − Θ*). We choose t such that ‖Θ̃ − Θ*‖_max = r if ‖Δ‖_max > r, and Θ̃ = Θ̂ᵒ otherwise.
For a matrix A, let A_S be a matrix agreeing with A on S and having 0 elsewhere. Using the two-term Taylor expansion, we know that there exists a ∈ [0, 1] such that Θ̃* = Θ* + a(Θ̃ − Θ*) and

vec(∇L(Θ̃)) = vec(∇L(Θ*)) + ∇²L(Θ̃*) vec(Θ̃ − Θ*),

which implies that

vec{C*_E − (Θ̃⁻¹)_E} + (Θ̃*⁻¹ ⊗ Θ̃*⁻¹)_{EE} vec(Θ̃_E − Θ*_E) = 0,   (B.8)

where E = S ∪ D. Let Δ̃ = Θ̃_E − Θ*_E = tΔ. Define f(vec(Δ̃)) to be

f(vec(Δ̃)) = Λ*_{EE} {vec(C*_E − (Θ̃⁻¹)_E)} + vec(Δ̃),

in which Λ*_{EE} = ((Θ*_E ⊗ Θ*_E)⁻¹)_{EE}. By the matrix expansion formula (A + Δ)⁻¹ = Σ_{m=0}^∞ (−A⁻¹Δ)^m A⁻¹, f{vec(Δ̃)} reduces to

vec{(Σ_{m=2}^∞ (−Σ* Δ̃)^m Σ*)_E}.

Using the triangle inequality, we then obtain that

‖f(vec(Δ̃))‖_max ≤ max_{(j,k)∈E} Σ_{m=2}^∞ |e_j^⊤ (Σ* Δ̃)^m Σ* e_k|.

Further applying the Hölder inequality to each single term on the right hand side of the above displayed inequality, we have

|e_j^⊤ (Σ* Δ̃)^m Σ* e_k| ≤ ‖Σ*‖₁^{m+1} ‖Δ̃‖₁^{m−1} ‖Δ̃‖_max ≤ s_max^{m−1} ‖Σ*‖₁^{m+1} ‖Δ̃‖^m_max,

where we use the fact ‖Δ̃‖₁ ≤ s_max ‖Δ̃‖_max. Therefore, we obtain

‖f(vec(Δ̃))‖_max ≤ Σ_{m=2}^∞ s_max^{m−1} κ₁^{m+1} ‖Δ̃‖^m_max = κ₁³ s_max ‖Δ̃‖²_max / (1 − κ₁ s_max ‖Δ̃‖_max),

which, by the triangle inequality, implies that

‖Δ̃‖_max ≤ ‖Λ*_{EE}‖₁ ‖vec{C*_E − (Θ̃⁻¹)_E}‖_max + κ₁³ s_max ‖Δ̃‖²_max / (1 − κ₁ s_max ‖Δ̃‖_max).

Utilizing the KKT condition (Θ̂ᵒ⁻¹)_E = Ĉ_E, the fact ‖Ĉ_E − C*_E‖_max ≤ c_n/2 and the assumption on κ₁ c_n, we obtain on the event ‖Δ‖_max > r that

‖Δ̃‖_max ≤ 2κ₁ c_n (1/2 + κ₁³ s_max r² / (2κ₁ c_n (1 − κ₁ s_max r))) < 2κ₁ c_n ≡ r,

which is a contradiction. Thus Θ̃ = Θ̂ᵒ and Θ̂ᵒ satisfies the desired maximum norm bound. For the spectral norm bound, we utilize Lemma E.6 and obtain that

‖Θ̂ᵒ − Θ*‖₂ ≤ ‖M*‖₂ ‖Θ̂ᵒ − Θ*‖_max ≤ 2κ₁ c_n ‖M*‖₂.

The proof is finished.

C Semiparametric Graphical Model
Proof of Theorem 4.3. We need the following lemma, which is taken from Liu et al. (2012). It provides a nonasymptotic probability bound for estimating Σ^{npn} using Ŝ^τ.

Lemma C.1. Let C be a constant. For any n ≳ log d, with probability at least 1 − 8/d, we have

sup_{jk} |Ŝ^τ_{jk} − Σ^{npn}_{jk}| ≤ C √(log d / n).

The rest of the proof is adapted from that of Theorem 4.3 and thus is omitted.

D Concentration Inequality

In this section, we establish the concentration inequalities which are the key technical tools for the large probability bounds in Section 3.

Lemma D.1 (Sub-Gaussian Tail Bound). Let X = (X₁, X₂, . . . , X_d)^⊤ be a zero-mean random vector with covariance Σ* such that each X_i/√σ*_{ii} is sub-Gaussian with variance proxy 1. Then there exist constants c₁ and t₀ such that for all t with 0 ≤ t ≤ t₀ the associated sample covariance Σ̂ satisfies the following tail probability bound

P(|σ̂_{ij} − σ*_{ij}| ≥ t) ≤ 8 exp(−c₁ n t²).

Proof of Lemma D.1. By the definition of the sample covariance matrix, we have σ̂_{ij} = n⁻¹ Σ_{k=1}^n (X_i^{(k)} − X̄_i)(X_j^{(k)} − X̄_j) = n⁻¹ Σ_{k=1}^n X_i^{(k)} X_j^{(k)} − X̄_i X̄_j. Therefore we can decompose σ̂_{ij} − σ*_{ij} as n⁻¹ Σ_{k=1}^n X_i^{(k)} X_j^{(k)} − σ*_{ij} − X̄_i X̄_j. By applying the union sum bound, we obtain that

P(|σ̂_{ij} − σ*_{ij}| ≥ t) ≤ P(|n⁻¹ Σ_{k=1}^n X_i^{(k)} X_j^{(k)} − σ*_{ij}| ≥ t/2) + P(|X̄_i X̄_j| ≥ t/2) =: (R1) + (R2).

In the sequel, we bound (R1) and (R2) separately. For term (R1), following the argument of Lemma A.3 in Bickel and Levina (2008), there exist constants c′₁ and t′₀ not depending on n, d such that

(R1) = P(|n⁻¹ Σ_{k=1}^n X_i^{(k)} X_j^{(k)} − σ*_{ij}| ≥ t/2) ≤ 4 exp{−c′₁ n t²}

for all t satisfying 0 ≤ t ≤ t′₀. Next, we bound the term (R2). By the linear structure of sub-Gaussian random variables, we obtain that √n X̄_i ∼ sub-Gaussian(0, σ*_{ii}) for all 1 ≤ i ≤ d. Therefore, by applying Lemma E.1, we obtain that |√n X̄_i · √n X̄_j| is a sub-exponential random variable with ψ₁ norm bounded by 2‖√n X̄_i‖_{ψ₂} ‖√n X̄_j‖_{ψ₂}. We give explicit bounds for the ψ₂-norm of √n X̄_i and √n X̄_j. By the Chernoff bound, the tail probability of √n X̄_i can be bounded in the following way:

P(|√n X̄_i| ≥ t) ≤ 2 exp{−t²/(2σ*_{ii})}.

For every non-negative random variable Z, integration by parts yields the identity EZ = ∫₀^∞ P(Z ≥ u) du. We apply this for Z = |√n X̄_i|^p and obtain after the change of variables u = t^p that

E|√n X̄_i|^p = ∫₀^∞ P(|√n X̄_i| ≥ t) · p t^{p−1} dt ≤ ∫₀^∞ 2p exp{−t²/(2σ*_{ii})} t^{p−1} dt = p(2σ*_{ii})^{p/2} Γ(p/2) ≤ p(2σ*_{ii})^{p/2} (p/2)^{p/2},

which indicates that ‖√n X̄_i‖_{ψ₂} ≤ 2√σ*_{ii}. The Gamma function is defined as Γ(t) = ∫₀^∞ e^{−x} x^{t−1} dx. Similarly, we can bound ‖√n X̄_j‖_{ψ₂} by 2√σ*_{jj}. Therefore we obtain ‖√n X̄_i · √n X̄_j‖_{ψ₁} ≤ 2√(σ*_{ii} σ*_{jj}) ≤ 2σ²_max, where σ²_max = max{σ*_{11}, . . . , σ*_{dd}}. Define Z_{ij} = |√n X̄_i · √n X̄_j|. Let β = (e − 1)(2σ²_max e²)⁻¹ and write the Taylor expansion series of the exponential function; we obtain

E exp{β Z_{ij}} = 1 + Σ_{k=1}^∞ β^k E(Z_{ij}^k)/k! ≤ 1 + Σ_{k=1}^∞ β^k (2σ²_max k)^k / k! ≤ 1 + Σ_{k=1}^∞ (2σ²_max β e)^k ≤ e,

where we use k! ≥ (k/e)^k in the second-to-last inequality. Exponentiating and using the Markov inequality yields that

P(Z_{ij} ≥ t) = P(e^{βZ_{ij}} ≥ e^{βt}) ≤ E e^{βZ_{ij}} / e^{βt} ≤ exp{1 − βt},

for all t ≥ 0. Using the above result, we can bound (R2) as

(R2) ≤ P(Z_{ij} ≥ nt/2) ≤ exp{1 − βnt/2} ≤ 4 exp{−βnt/2}.

Combining the bounds for (R1) and (R2), taking c₁ = min{c′₁, β} and t₀ = min{1, t′₀} obtains us that

P(|σ̂_{ij} − σ*_{ij}| ≥ t) ≤ 8 exp(−c₁ n t²) for all t ≤ t₀,

which completes the proof.
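For intuition only (this is an illustration we add, not part of the original argument), the exponential decay in Lemma D.1 can be observed numerically with a Gaussian design; the constants and dimensions below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, t, reps = 200, 5, 0.3, 2000
Sigma = 0.5 * np.eye(d) + 0.5            # true covariance with unit variances
L = np.linalg.cholesky(Sigma)
exceed = 0
for _ in range(reps):
    X = rng.standard_normal((n, d)) @ L.T
    S = np.cov(X, rowvar=False, bias=True)        # sample covariance
    exceed += abs(S[0, 1] - Sigma[0, 1]) >= t
print(exceed / reps)   # small, and it shrinks quickly as n or t grows
```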
We then develop a large deviation bound for marginal variances.

Lemma D.2 (Large Deviation Bound for Marginal Variance). Let X = (X₁, X₂, . . . , X_d)^⊤ be a zero-mean random vector with covariance Σ* such that each X_i/√Σ*_{ii} is sub-Gaussian with variance proxy 1, and let {X^{(k)}}_{k=1}^n be n i.i.d. samples from X. Let C(ε) = 2⁻¹(ε − log(1 + ε)) > 0. Then, for any ε ≥ 0, we must have

P(|Σ̂_{ii} − Σ*_{ii}| > ε · Σ*_{ii}) ≤ 2 exp{−n · C(ε)}.

Proof. We write Z_i^{(k)} = (Σ*_{ii})^{−1/2} X_i^{(k)} and Σ̃_{ii} = n⁻¹ Σ_{k=1}^n Z_i^{(k)} · Z_i^{(k)}, for 1 ≤ i ≤ d. Let ς_i^{(k)} = Z_i^{(k)} · Z_i^{(k)} ∼ χ²₁, for 1 ≤ k ≤ n. Therefore, the moment-generating function of ς_i^{(k)} is M(t) = (1 − 2t)^{−1/2}, for t ∈ (−∞, 1/2). Next, we control the tail probabilities of Σ̃_{ii} > 1 + ε and Σ̃_{ii} < 1 − ε, respectively. For the tail probability of Σ̃_{ii} > 1 + ε, by applying Lemma E.8, we obtain

P((ς_i^{(1)} + · · · + ς_i^{(n)})/n > 1 + ε) ≤ exp{−n · A(ε)},

where A(ε) = sup_t {(1 + ε)t + 2⁻¹ log(1 − 2t)} = 2⁻¹(ε − log(1 + ε)). Similarly, for any ε > 0, we obtain the tail probability of Σ̃_{ii} < 1 − ε as

P((ς_i^{(1)} + · · · + ς_i^{(n)})/n < 1 − ε) ≤ exp{−n · B(ε)},

where B(ε) = sup_t {(1 − ε)t + 2⁻¹ log(1 − 2t)}. After some algebra, we obtain B(ε) = −2⁻¹(ε + log(1 − ε)) if ε < 1, and B(ε) = +∞ otherwise. Let C(ε) = min{A(ε), B(ε)} = 2⁻¹(ε − log(1 + ε)). Therefore, combining the above two inequalities by the union bound, we obtain P(|n⁻¹(ς_i^{(1)} + · · · + ς_i^{(n)}) − 1| > ε) ≤ 2 exp{−n · C(ε)}. Note that we have Σ̂_{ii} = Σ*_{ii} · Σ̃_{ii} with Σ̃_{ii} = n⁻¹(ς_i^{(1)} + · · · + ς_i^{(n)}). Thus, we obtain

P(|Σ̂_{ii} − Σ*_{ii}| > ε · Σ*_{ii}) ≤ 2 exp{−n · C(ε)}.

Our next result characterizes a large deviation bound for the sample correlation matrix.

Lemma D.3 (Large Deviation Bound for Sample Correlation). Let X = (X₁, X₂, . . . , X_d)^⊤ be a zero-mean random vector with covariance matrix Σ* such that each X_i/√Σ*_{ii} is sub-Gaussian with variance proxy 1 and let {X^{(k)}}_{k=1}^n be n independent and identically distributed copies of X. Let Σ̂ = (1/n) Σ_{k=1}^n X^{(k)} X^{(k)⊤} denote the sample covariance and Ĉ = Ŵ⁻¹ Σ̂ Ŵ⁻¹ denote the sample correlation matrix, where Ŵ² is the diagonal matrix with the diagonal elements of Σ̂. Further let ρ̂_{ij} and ρ_{ij} be the (i, j)th elements of Ĉ and C* respectively. Define c₂ = min{4⁻¹ c₁ min_i(Σ*_{ii})², 1/6}. Then, for 0 ≤ ε ≤ min{1/2, t₀ max_i Σ*_{ii}}, we have

P(|ρ̂_{ij} − ρ_{ij}| > ε) ≤ 6 exp{−c₂ n ε²}, where 1 ≤ i ≠ j ≤ d.

Proof of Lemma D.3. We denote the sample correlation as ρ̂_{ij} = (Σ̂_{ii} · Σ̂_{jj})^{−1/2} Σ̂_{ij}. To prove the tail probability bound, it suffices to prove the tail probability bounds for ρ̂_{ij} − ρ_{ij} > ε and ρ̂_{ij} − ρ_{ij} < −ε, respectively. We start with the tail probability bound for ρ̂_{ij} − ρ_{ij} > ε. Let us assume that ρ_{ij} ≥ 0. Using the basic probability argument P(A) ≤ P(A ∩ B) + P(B^c), for any 0 ≤ t ≤ 1 we obtain

P(ρ̂_{ij} − ρ_{ij} > ε) = P(Σ̂_{ij} − (Σ̂_{ii} Σ̂_{jj})^{1/2} ρ_{ij} > (Σ̂_{ii} Σ̂_{jj})^{1/2} ε)
≤ P(Σ̂_{ij} − (Σ*_{ii} Σ*_{jj})^{1/2}(1 − t)⁻¹ ρ_{ij} > (Σ*_{ii} Σ*_{jj})^{1/2}(1 − t)⁻¹ ε)   [term (R1.1)]
+ P(Σ̂_{ii} − Σ*_{ii} > Σ*_{ii} · t) + P(Σ̂_{jj} − Σ*_{jj} > Σ*_{jj} · t).   (D.1)

Next, we bound the term (R1.1). After some simple algebra and using ρ_{ij}(Σ*_{ii} Σ*_{jj})^{1/2} = Σ*_{ij} ≥ 0, (R1.1) can be bounded by P(Σ̂_{ij} − Σ*_{ij} > ε(Σ*_{ii} Σ*_{jj})^{1/2}). Let c′₂ = c₁ min_i(Σ*_{ii})², where c₁ is defined in Lemma D.1. If we apply Lemma D.1 with this constant and Lemma D.2, then for any 0 ≤ ε ≤ t₀√(Σ*_{ii} Σ*_{jj}), in which t₀ is defined in Lemma D.1, we must have

P(ρ̂_{ij} − ρ_{ij} > ε) ≤ P(Σ̂_{ij} − Σ*_{ij} > ε(Σ*_{ii} Σ*_{jj})^{1/2}) + P(Σ̂_{ii} − Σ*_{ii} > t Σ*_{ii}) + P(Σ̂_{jj} − Σ*_{jj} > t Σ*_{jj})
≤ 4 exp{−c′₂ n ε²} + 2 exp{−n · 2⁻¹(t − log(1 + t))}.

Let c″₂ = min{c′₂, 1/6}. Further, for any 0 ≤ ε ≤ min{1/2, t₀ max_i Σ*_{ii}}, by taking t = ε and using the inequality t − log(1 + t) ≥ (1/3)t² for all 0 ≤ t ≤ 1/2, we obtain

P(ρ̂_{ij} − ρ_{ij} > ε) ≤ 4 exp{−c′₂ ε² n} + 2 exp{−(1/6) ε² n} ≤ 6 exp{−c″₂ n ε²}.

If ρ_{ij} < 0, in a similar fashion as before, we can obtain the following tail probability bound

P(ρ̂_{ij} − ρ_{ij} > ε) ≤ P(Σ̂_{ij} − Σ*_{ij} > ε(Σ*_{ii} Σ*_{jj})^{1/2} + Σ*_{ij}(t² − t) − ε√(Σ*_{ii} Σ*_{jj}) t)   [term (R1.2)]
+ P(Σ̂_{ii} − Σ*_{ii} > t Σ*_{ii}) + P(Σ̂_{jj} − Σ*_{jj} > t Σ*_{jj}).

To continue, we bound the term (R1.2). If we take t = ε ≤ min{1/2, t₀ max_i Σ*_{ii}}, we obtain that Σ*_{ij}(t² − t) − ε√(Σ*_{ii} Σ*_{jj}) t ≥ −(1/2)√(Σ*_{ii} Σ*_{jj}) ε. Thus, we have

P(ρ̂_{ij} − ρ_{ij} > ε) ≤ P(Σ̂_{ij} − Σ*_{ij} > (1/2) ε(Σ*_{ii} Σ*_{jj})^{1/2}) + P(Σ̂_{ii} − Σ*_{ii} > t Σ*_{ii}) + P(Σ̂_{jj} − Σ*_{jj} > t Σ*_{jj})
≤ 4 exp{−4⁻¹ c′₂ n ε²} + 2 exp{−n · 2⁻¹(ε − log(1 + ε))} ≤ 6 exp{−c₂ n ε²},

where c₂ = min{4⁻¹ c′₂, 1/6} = min{4⁻¹ c₁ min_i(Σ*_{ii})², 1/6} ≤ c″₂. By combining the above two cases, for 0 ≤ ε ≤ min{1/2, t₀ max_i Σ*_{ii}}, we have P(ρ̂_{ij} − ρ_{ij} > ε) ≤ 6 exp{−c₂ n ε²}. In a similar fashion, we obtain the same tail probability bound for ρ̂_{ij} − ρ_{ij} < −ε, for 0 ≤ ε ≤ min{1/2, t₀ max_i Σ*_{ii}}. Thus the proof is completed.
Lemma D.4. Under the same conditions as in Lemma D.3, we have the following results:

lim_{M→∞} lim sup_n P(‖∇L(Θ*)_S‖_max > M √(1/n)) = 0, and ‖∇L(Θ*)_S‖_F = O_P(√(s/n)).

Proof of Lemma D.4. It is easy to check that ‖∇L(Θ*)_S‖_F = ‖(Ĉ − C*)_S‖_F. By applying Lemma D.3 and the union sum bound, for any M such that 0 ≤ M ≤ min{1/2, t₀ max_i Σ*_{ii}} · √n, in which t₀ is defined in Lemma D.3, we obtain

P(‖∇L(Θ*)_S‖_max > M √(1/n)) ≤ s · exp{−c₂ M²} ≤ exp{−c₂ M² + log s}.

Taking M such that √(2 c₂⁻¹ log s) ≤ M ≤ min{1/2, t₀ max_i Σ*_{ii}} · √n and M → ∞ in the above inequality obtains us that

lim_{M→∞} lim sup_n P(‖∇L(Θ*)_S‖_max > M √(1/n)) = 0,

which implies that ‖∇L(Θ*)_S‖_F = O_P(√(s/n)).

Lemma D.5 (A Concentration Inequality for Sample Correlation Matrix). Let Ĉ, C*, ρ̂_{ij} and ρ_{ij} be defined as in Lemma D.3. Suppose n ≥ 3 c₂⁻¹ t₁⁻² · log d. Take λ = √(3 c₂⁻¹ (log d)/n) ≍ √(log(d)/n), in which c₂ is defined as in Lemma D.3. Then Ĉ must satisfy

P(‖Ĉ − C*‖_max ≤ λ) ≥ 1 − 8/d.

Proof. It is easy to check that ∇L(Θ*) = Ĉ − C*. Therefore, applying Lemma D.3 and the union sum bound, we obtain that, for any λ ≤ t₁ ≡ min{1/2, t₀ max_i Σ*_{ii}} with t₀ defined in Lemma D.1,

P(‖Ĉ − C*‖_max > λ) ≤ 6d² · exp{−c₂ n λ²},

where c₂ = min{4⁻¹ c₁ min_i(Σ*_{ii})², 1/6}, in which c₁ is defined in Lemma D.1. For n sufficiently large such that n ≥ 3 c₂⁻¹ t₁⁻² · log d, by taking λ = √(3 c₂⁻¹ (log d)/n) ≤ t₁, we obtain

P(‖Ĉ − C*‖_max ≤ λ) = 1 − P(‖Ĉ − C*‖_max > λ) ≥ 1 − 6d² · exp{−c₂ n λ²} ≥ 1 − 8/d.

The proof is completed.

Lemma D.6. Under the same conditions as in Lemma D.5, we have

lim_{M→∞} lim sup_n P(‖(Ĉ − C*)_S‖_max > M √(1/n)) = 0, and ‖(Ĉ − C*)_S‖_max = O_P(√(1/n)).

Proof of Lemma D.6. The proof is similar to that of Lemma D.5 and thus is omitted.
E Preliminary Lemmas

In this section we state and prove the technical lemmas used in previous sections. The following lemma establishes the tail bound type of the product of two sub-Gaussian random variables. Let ‖·‖_{ψ₁} and ‖·‖_{ψ₂} be the ψ₁- and ψ₂-norm defined in Vershynin (2010).

Lemma E.1. For X and Y being two sub-Gaussian random variables, the absolute value of their product |X · Y| is a sub-exponential random variable with

‖X · Y‖_{ψ₁} ≤ 2 · ‖X‖_{ψ₂} ‖Y‖_{ψ₂}.

Proof of Lemma E.1. To show that X · Y is sub-exponential, it suffices to prove that the ψ₁-norm of X · Y is bounded. By the definition of the ψ₁-norm, we have

‖X · Y‖_{ψ₁} = sup_{p≥1} p⁻¹ [E|X · Y|^p]^{1/p}.

We need to use the Hölder inequality as follows

E|⟨f, g⟩| ≤ [E|f|^r]^{1/r} [E|g|^s]^{1/s},   1/r + 1/s = 1,   (E.1)

where f and g are two random functions. If we choose f = X^p, g = Y^p and r = s = 2 in the Hölder inequality, then the right hand side of (E.1) can be bounded by

sup_{p≥1} p⁻¹ [E|X|^{2p}]^{1/(2p)} [E|Y|^{2p}]^{1/(2p)} ≤ 2 sup_{p≥1} {(2p)^{−1/2} [E|X|^{2p}]^{1/(2p)}} · sup_{p≥1} {(2p)^{−1/2} [E|Y|^{2p}]^{1/(2p)}}.

Therefore we obtain that ‖X · Y‖_{ψ₁} ≤ 2‖X‖_{ψ₂} ‖Y‖_{ψ₂} < ∞. The proof is completed.

Lemma E.2. Let D_L(Θ₁, Θ₂) = L(Θ₁) − L(Θ₂) − ⟨∇L(Θ₂), Θ₁ − Θ₂⟩ and D^s_L(Θ₁, Θ₂) = D_L(Θ₁, Θ₂) + D_L(Θ₂, Θ₁). For Θ(t) = Θ* + t(Θ − Θ*) with t ∈ (0, 1], we have that

D^s_L(Θ(t), Θ*) ≤ t D^s_L(Θ, Θ*).

Proof of Lemma E.2. Let Q(t) = D_L(Θ(t), Θ*) = L(Θ(t)) − L(Θ*) − ⟨∇L(Θ*), Θ(t) − Θ*⟩. Since the derivative of L(Θ(t)) with respect to t is ⟨∇L(Θ(t)), Θ − Θ*⟩, the derivative of Q(t) is

Q′(t) = ⟨∇L(Θ(t)) − ∇L(Θ*), Θ − Θ*⟩.

Therefore the Bregman divergence D^s_L(Θ(t), Θ*) can be written as

D^s_L(Θ(t), Θ*) = ⟨∇L(Θ(t)) − ∇L(Θ*), t(Θ − Θ*)⟩ = t Q′(t) for 0 < t ≤ 1.

By plugging t = 1 into the above functional equation, we have Q′(1) = D^s_L(Θ, Θ*) as a special case. If we assume that Q(t) is convex, then Q′(t) is non-decreasing and thus

D^s_L(Θ(t), Θ*) = t Q′(t) ≤ t Q′(1) = t D^s_L(Θ, Θ*).

Therefore the proof is completed. It remains to prove that Q(t) is a convex function, i.e.

Q(α₁t₁ + α₂t₂) ≤ α₁Q(t₁) + α₂Q(t₂), ∀ t₁, t₂ ∈ (0, 1], α₁, α₂ ≥ 0 s.t. α₁ + α₂ = 1.   (E.2)

For all α₁, α₂ ≥ 0 such that α₁ + α₂ = 1, and t₁, t₂ ∈ (0, 1), we have Θ(α₁t₁ + α₂t₂) = α₁Θ(t₁) + α₂Θ(t₂). By the bi-linearity property of the inner product ⟨·, ·⟩, and using the linearity property of Θ(·), the following equality holds

−⟨∇L(Θ*), Θ(α₁t₁ + α₂t₂) − Θ*⟩ = −α₁⟨∇L(Θ*), Θ(t₁) − Θ*⟩ − α₂⟨∇L(Θ*), Θ(t₂) − Θ*⟩.   (E.3)

On the other side, by the convexity of the loss function L(·), we obtain

L(Θ(α₁t₁ + α₂t₂)) = L(α₁Θ(t₁) + α₂Θ(t₂)) ≤ α₁L(Θ(t₁)) + α₂L(Θ(t₂)).   (E.4)

By adding (E.3) and (E.4) together and using the definition of the function Q(·), we obtain

Q(α₁t₁ + α₂t₂) ≤ α₁Q(t₁) + α₂Q(t₂),

which indicates that Q(t) is a convex function. Thus we complete our proof.
Lemma E.3. Let A_i, B_i ∈ R^{d×d} be square matrices for i = 1, 2. Then we have

A₁B₁A₁ − A₂B₂A₂ = (A₁ − A₂)(B₁ − B₂)(A₁ − A₂) + (A₁ − A₂)B₂A₁ + A₁(B₁ − B₂)A₂ + A₂B₁(A₁ − A₂).

The next lemma characterizes an upper bound of ‖A⁻¹ − B⁻¹‖_* in terms of ‖A − B‖_*, where ‖·‖_* is any matrix norm.

Lemma E.4. Let A, B ∈ R^{d×d} be invertible. For any matrix norm ‖·‖_*, we have

‖A⁻¹ − B⁻¹‖_* ≤ ‖A⁻¹‖²_* ‖A − B‖_* / (1 − ‖A⁻¹‖_* ‖A − B‖_*).

We need the following lemma for bounding the difference with respect to the Kronecker product.

Lemma E.5. Let A and B be matrices of the same dimension. Then we have

‖A ⊗ B‖₁ = ‖A‖₁ ‖B‖₁, and ‖A ⊗ A − B ⊗ B‖₁ ≤ ‖A − B‖₁² + 2 min{‖A‖₁, ‖B‖₁} ‖A − B‖₁.

The proof of the above lemma can be carried out by using the definitions and thus is omitted here for simplicity.
For a matrix A = (a_{ij}), we say A^{sp} = (a^{sp}_{ij}) is the corresponding sparsity pattern matrix if a^{sp}_{ij} = 1 when a_{ij} ≠ 0, and a^{sp}_{ij} = 0 otherwise.
Lemma E.6. Let A ∈ R^{d×d} be a matrix such that ‖A‖_max ≤ 1. Let A^{sp} be the corresponding sparsity pattern matrix. Then we have

‖A‖₂ ≤ ‖A^{sp}‖₂.

Proof of Lemma E.6. Let a_{ij} be the (i, j)-th entry of the matrix A and x_j the j-th entry of x. Following the definition of the spectral norm of a matrix, we obtain that

‖A‖₂ = sup_{‖x‖₂=1} ‖Ax‖₂ = sup_{‖x‖₂=1} {Σ_i (Σ_j a_{ij} x_j)²}^{1/2}
≤ sup_{‖x‖₂=1} {Σ_i (Σ_j 1(a_{ij} ≠ 0) |x_j|)²}^{1/2} = sup_{x ≥ 0, ‖x‖₂=1} {Σ_i (Σ_j 1(a_{ij} ≠ 0) x_j)²}^{1/2} ≤ ‖A^{sp}‖₂.

Thus the proof is completed.
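As a quick numerical sanity check of Lemma E.6 (an illustration we add, with arbitrary dimensions), the following sketch compares the two spectral norms directly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, size=(8, 8))          # entries bounded by 1 in absolute value
A[rng.random((8, 8)) < 0.5] = 0.0            # impose a sparsity pattern
A_sp = (A != 0).astype(float)                # corresponding sparsity pattern matrix
print(np.linalg.norm(A, 2) <= np.linalg.norm(A_sp, 2) + 1e-12)   # True
```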
Lemma E.7. Let Â ∈ R^{d×d} be a semi-positive definite random matrix and A ∈ R^{d×d} a positive definite deterministic matrix. Then we have

P(‖Â⁻¹ − A⁻¹‖₂ > 2 λ_min(A)⁻² ‖Â − A‖₂) ≤ P(‖Â − A‖₂ > 2⁻¹ λ_min(A)).

If we further assume that Â and A are commutative, that is ÂA = AÂ, then we have

P(‖Â^{−1/2} − A^{−1/2}‖₂ > 2(√2 + 1) ‖A‖₂^{1/2} λ_min(A)⁻² ‖Â − A‖₂) ≤ P(‖Â − A‖₂ > 2⁻¹ λ_min(A)).

Proof of Lemma E.7. We first write Â⁻¹ − A⁻¹ as Â⁻¹(Â − A)A⁻¹; then it follows from the sub-multiplicative property of the spectral norm that

‖Â⁻¹ − A⁻¹‖₂ ≤ ‖Â⁻¹‖₂ ‖Â − A‖₂ ‖A⁻¹‖₂ ≤ λ_min(Â)⁻¹ λ_min(A)⁻¹ ‖Â − A‖₂.   (E.5)

By Weyl's inequality, we obtain that λ_min(Â) ≥ λ_min(A) − ‖Â − A‖₂. Thus, on the event ‖Â − A‖₂ ≤ 2⁻¹ λ_min(A), we have λ_min(Â) ≥ 2⁻¹ λ_min(A). It then follows from (E.5) that

P(‖Â⁻¹ − A⁻¹‖₂ ≥ 2 λ_min(A)⁻² ‖Â − A‖₂) ≤ P(‖Â − A‖₂ ≥ 2⁻¹ λ_min(A)).

This proves the first desired probability bound. If we further assume that Â and A are commutative, then under the event ‖Â − A‖₂ ≤ 2⁻¹ λ_min(A) we have

‖Â^{−1/2} − A^{−1/2}‖₂ = ‖(Â^{−1/2} + A^{−1/2})⁻¹ (Â⁻¹ − A⁻¹)‖₂ ≤ ‖(Â^{−1/2} + A^{−1/2})⁻¹‖₂ ‖Â⁻¹ − A⁻¹‖₂
≤ (√2 + 1) ‖A‖₂^{1/2} ‖Â⁻¹ − A⁻¹‖₂ ≤ 2(√2 + 1) ‖A‖₂^{1/2} λ_min(A)⁻² ‖Â − A‖₂,

which proves the remaining result.

The following lemma is taken from Dembo and Zeitouni (2009), which leads to a concentration bound for the empirical mean X̄ = n⁻¹ Σ_{i=1}^n X_i, where the X_i's are i.i.d. random copies of X. Define the logarithmic moment generating function associated with X to be

Λ_X(λ) ≡ log M_X(λ) = log E[exp{λX}].   (E.6)

Lemma E.8 (Large Deviation Inequality). Let the logarithmic moment generating function of X, Λ_X(λ), be defined in (E.6). Define the Fenchel-Legendre dual of Λ_X to be Λ*_X(x) ≡ sup_{λ∈R} {λx − Λ_X(λ)}. Then, for any t ≥ 0, we have

P(n⁻¹ Σ_{i=1}^n X_i − EX ≥ t) ≤ exp(−n inf_{x∈F₁} Λ*_X(x)) and P(n⁻¹ Σ_{i=1}^n X_i − EX ≤ −t) ≤ exp(−n inf_{x∈F₂} Λ*_X(x)),

where F₁ = [EX + t, +∞) and F₂ = (−∞, EX − t].
References
Bickel, P. J. and Levina, E. (2008), "Regularized estimation of large covariance matrices," The Annals of Statistics, 36, 199–227.
Dembo, A. and Zeitouni, O. (2009), Large Deviations Techniques and Applications, vol. 38, Springer Science & Business Media.
Horn, R. A. and Johnson, C. R. (2012), Matrix Analysis, Cambridge University Press.
Liu, H., Han, F., Yuan, M., Lafferty, J., Wasserman, L., et al. (2012), "High-dimensional semiparametric Gaussian copula graphical models," The Annals of Statistics, 40, 2293–2326.
Vershynin, R. (2010), "Introduction to the non-asymptotic analysis of random matrices," arXiv preprint arXiv:1011.3027.
| 10 |
arXiv:1802.02368v1 [math.ST] 7 Feb 2018
Group kernels for Gaussian process
metamodels with categorical inputs
O. Roustant1 , E. Padonou1 , Y. Deville2 , A. Clément3 , G.
Perrin4 , J. Giorla4 , and H. Wynn5
1
Mines Saint-Étienne, UMR CNRS 6158, LIMOS, F–42023 Saint-Étienne, France
2
3
AlpeStat, Chambéry, France
CEA/DAM/VA, F–21120, Is-sur-Tille, France
4
CEA/DAM/DIF, F–91297, Arpajon, France
5
London School of Economics, England
Abstract
Gaussian processes (GP) are widely used as a metamodel for emulating time-consuming computer codes. We focus on problems involving categorical inputs, with a potentially large number L of levels
(typically several tens), partitioned in G ≪ L groups of various sizes.
Parsimonious covariance functions, or kernels, can then be defined by
block covariance matrices T with constant covariances between pairs
of blocks and within blocks. However, little is said about the positive
definiteness of such matrices, which may limit their practical usage.
In this paper, we exploit the hierarchy group/level and provide a parameterization of valid block matrices T, based on a nested Bayesian
linear model. The same model can be used when the assumption
within blocks is relaxed, giving a flexible parametric family of valid
covariance matrices with constant covariances between pairs of blocks.
As a by-product, we show that the positive definiteness of T is equivalent to the positive definiteness of a small matrix of size G, obtained
by averaging each block.
We illustrate with an application in nuclear engineering, where one of
the categorical inputs is the atomic number in Mendeleev’s periodic
table and has more than 90 levels.
1
1
Introduction
This research is motivated by the analysis of a time-consuming computer
code in nuclear engineering, depending on both continuous and categorical
inputs, one of them having more than 90 levels. The final motivation is an
inversion problem. However, due to the heavy computational cost, a direct
usage of the simulator is hardly possible. A realistic approach is to use a
statistical emulator or metamodel. Thus, as a first step, we investigate the
metamodelling of such computer code. More precisely, we consider Gaussian
process (GP) regression models, also called kriging models ([Sacks et al.,
1989], [Rasmussen and Williams, 2006]), which have been successfully used
in sequential metamodel-based strategies for uncertainty quantification (see
e.g. [Chevalier et al., 2014]).
Whereas there is a flourishing literature on GP regression, the part concerned with categorical inputs remains quite limited. We refer to [Zhang and
Notz, 2015] for a review. As for continuous inputs, covariance functions or
kernels are usually built by combination of 1-dimensional ones, most often
by multiplication or, more rarely, by addition [Deng et al., 2017]. The question then comes down to constructing a valid kernel on a finite set, which is a
positive semidefinite matrix. Some effort has been spent on parameterization
of general covariance matrices [Pinheiro and Bates, 1996] and parsimonious
parameterizations of smaller classes [Pinheiro and Bates, 2009]. Some block
forms have also been proposed [Qian et al., 2007], in order to deal with a potentially large number of levels. However, their validity was not investigated.
Furthermore, to the best of our knowledge, applications in GP regression are
limited to categorical inputs with very few levels, typically less than 5.
Guided by the application, we investigate more deeply the so-called group
kernels cited in Qian et al. [2007], defined by block covariance matrices T with
constant covariances between pairs of blocks and within blocks. We exploit
the hierarchy group/level by revisiting a nested Bayesian linear model where
the response term is a sum of a group effect and a level effect. This leads to a
parameterization of T which is automatically positive definite. Interestingly,
the assumption on within blocks can be relaxed, and we obtain a parameterization of a wider class of valid group kernels. The positive definiteness
condition of T is also explicited: it is equivalent to the positive definiteness of
the smaller covariance matrix obtained by replacing each block by its average.
2
As mentioned above, this work has some connections with Bayesian linear models as well as linear mixed effect models (see e.g. Lindley and Smith
[1972], Smith [1973]) in a hierarchical view. Other related works concern hierarchical GPs with a tree structure. For instance, particular forms of group
kernels are obtained in multiresolution GP models ([Fox and Dunson, 2012],
[Park and Choi, 2010]). Given two resolution levels and a spatial partition
A = ∪i Ai of Rd , a parent GP on A, corresponding to the lowest resolution,
serves as a trend for children GPs on Ai , corresponding to the highest resolution. Children GPs are independent conditionally on the parent GP, and
have the same covariance structure with a lengthscale parameter decreasing
with the diameter of Ai . As a result, for a given resolution, the covariance
matrix has a block form given by a sum of nested block diagonal covariance
matrices. In comparison, the two-resolution (group/level) GP corresponding
to a categorical input, does not assume a conditional independence between
children, and the block form of covariance matrices can be more general.
The paper is structured as follows. Section 2 gives some background
on GP regression with mixed categorical and continuous inputs. Section 3
presents new findings on group kernels. Section 4 illustrates on synthetic
examples. Section 5 is devoted to the application which motivated this work.
Section 6 gives some conclusions and perspectives for future research.
2
Background and notations
2.1
GPs with continuous and categorical variables
We consider a set of I continuous variables x1 , . . . , xI defined on a hypercubic domain ∆, and a set of J categorical variables u1 , . . . , uJ with L1 , . . . , LJ
levels. Without loss of generality, we assume that ∆ = [0, 1]I and that, for
each j = 1, . . . , J, the levels of uj are numbered 1, 2, . . . , Lj . We denote
x = (x1 , . . . , xI ), u = (u1 , . . . , uJ ), and w = (x, u).
We consider GP regression models defined on the product space
D = [0, 1]^I × ∏_{j=1}^{J} {1, . . . , L_j},
and written as:
y_i = µ(w^{(i)}) + Z(w^{(i)}) + ε_i,    i = 1, . . . , N,    (1)
where µ, Z and ε are respectively the trend, the GP part and a noise term.
There exist a wide variety of trend functions, as in linear models. Our main
focus here is on the centered GP Z(w), characterized by its kernel
k : (w, w′) ↦ cov(Z(w), Z(w′)).
Kernels on D can be obtained by combining kernels on [0, 1]^I and kernels on ∏_{j=1}^{J} {1, . . . , L_j}. Standard valid combinations are the product, sum or ANOVA. Thus if k_cont denotes a kernel for the continuous variables x, and k_cat a kernel for the categorical ones u, examples of valid kernels for w = (x, u) are written:

(Product)   k(w, w′) = k_cont(x, x′) k_cat(u, u′)
(Sum)       k(w, w′) = k_cont(x, x′) + k_cat(u, u′)
(ANOVA)     k(w, w′) = (1 + k_cont(x, x′))(1 + k_cat(u, u′))

For conciseness, we will denote by ∗ one of the operations: sum, product or ANOVA. The three formulas above can then be summarized by:

k(w, w′) = k_cont(x, x′) ∗ k_cat(u, u′)    (2)
Then, in turn, kcont and kcat can be defined by applying these operations
to 1-dimensional kernels. For continuous variables, famous 1-dimensional
kernels include the squared exponential or Matérn [Rasmussen and Williams, 2006]. We denote by k^i_cont(x_i, x′_i) such kernels (i = 1, . . . , I). For a categorical variable, notice that, as a positive semidefinite function on a finite space, a kernel is a positive semidefinite matrix. We denote by T_j the matrix of size L_j corresponding to kernels for u_j (j = 1, . . . , J). Thus, examples of expressions for k_cont and k_cat are written:

k_cont(x, x′) = k^1_cont(x_1, x′_1) ∗ · · · ∗ k^I_cont(x_I, x′_I)    (3)
k_cat(u, u′) = [T_1]_{u_1,u′_1} ∗ · · · ∗ [T_J]_{u_J,u′_J}    (4)
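As an illustration of the product combination in (2)–(4), here is a minimal Python sketch (not from the paper; the squared-exponential choice and all names are our own assumptions): a 1-dimensional continuous kernel is multiplied by a lookup in a covariance matrix T for the categorical input.

```python
import numpy as np

def k_cont(x1, x2, lengthscale=0.2):
    """Squared-exponential kernel for a continuous input in [0, 1]."""
    return np.exp(-0.5 * (x1 - x2) ** 2 / lengthscale ** 2)

def k_cat(u1, u2, T):
    """Categorical kernel: lookup in a positive semidefinite matrix T (levels start at 1)."""
    return T[u1 - 1, u2 - 1]

def k_prod(w1, w2, T):
    """Product combination k(w, w') = k_cont(x, x') * k_cat(u, u') with w = (x, u)."""
    (x1, u1), (x2, u2) = w1, w2
    return k_cont(x1, x2) * k_cat(u1, u2, T)

# Example with L = 3 levels and a compound symmetry matrix (variance 1, correlation 0.5)
T = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
print(k_prod((0.2, 1), (0.3, 2), T))
```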
The formulation given by Equations (2), (3), (4) is not the most general
one, since kernels are not always obtained by combining 1-dimensional ones.
4
Nevertheless, it encompasses the GP models used in the literature of computer experiments with categorical inputs. It generalizes the tensor-product
kernels, very often used, and the sum used recently by [Deng et al., 2017]
on the categorical part. It also contains the heteroscedastic case, since the
matrices Tj are not assumed to have a constant diagonal, contrarily to most
existing works [Zhang and Notz, 2015]. This will be useful in the application
of Section 5, where the variance of the material is level dependent.
Remark 1. Combining kernels needs some care to obtain identifiable models. For instance, the product of kernels k_1, k_2 with k_i(x_i, x′_i) = σ_i² e^{−|x_i − x′_i|} (i = 1, 2) is a kernel depending on only one variance parameter σ² := σ_1² σ_2². The GP model is identifiable for this new parameter, but not for the initial parameters σ_1², σ_2².
2.2
1-dimensional kernels for categorical variables
We consider here a single categorical variable u with levels 1, . . . , L. We
recall that a kernel for u is then a L by L positive semidefinite matrix T.
2.2.1
Kernels for ordinal variables
A categorical variable with ordered levels is called ordinal. In this case, the
levels can be viewed as a discretization of a continuous variable. Thus a GP
Y on {1, . . . , L} can be obtained from a GP Yc on the interval [0, 1] by using
a non-decreasing transformation F (also called warping):
Y (u) = Yc (F (u)).
Consequently, the covariance matrix T can be written:
[T]`,`0 = kc (F (`), F (`0 )),
`, `0 = 1, . . . , L.
(5)
When kc (x, x0 ) depends on the distance |x − x0 |, then k(`, `0 ) depends on the
distance between the levels `, `0 , distorted by F . In the general case, F is
piecewise-linear and defined by L − 1 parameters. However, a parsimonious
parameterization may be preferred, based on the cdf of a flexible probability distribution such as the Normal or the Beta. We refer to [McCullagh,
1980] for examples in regression and to [Qian et al., 2007] for illustrations in
computer experiments.
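As an illustration of (5), the following minimal sketch (ours, with placeholder warping values and an assumed exponential kernel) builds an ordinal covariance matrix from a piecewise-linear warping F:

```python
import numpy as np

def ordinal_kernel(L, warp, kc=lambda a, b: np.exp(-abs(a - b) / 0.3)):
    """Covariance matrix T[l, l'] = kc(F(l), F(l')) for an ordinal input with L levels.

    warp: non-decreasing values F(1), ..., F(L) in [0, 1].
    """
    F = np.asarray(warp, dtype=float)
    if len(F) != L or np.any(np.diff(F) < 0):
        raise ValueError("warp must give L non-decreasing values")
    return np.array([[kc(F[i], F[j]) for j in range(L)] for i in range(L)])

# Levels 1..4 mapped non-uniformly into [0, 1]
T = ordinal_kernel(4, warp=[0.0, 0.1, 0.5, 1.0])
print(np.round(T, 3))
```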
5
Remark 2. Notice that usual continuous kernels have non-negative values,
which is a necessary condition if they are valid radial kernels in all dimensions. As a consequence, the kernels for ordinal variables built by warping
do not allow negative correlations between levels.
2.2.2
Kernels for nominal variables
For simplicity we present here the homoscedastic case, i.e. when T has
a constant diagonal. It is immediately extended to situations where the
variance depends on the level, by considering the correlation matrix.
General parametric covariance matrices. There are several parameterizations of positive-definite matrices based on the spectral and Choleky
decompositions. The spectral decomposition of T is written
T = PDP>
(6)
where D is diagonal and P orthogonal. Standard parameterizations of P
involve the Cayley transform, Eulerian angles, Householder transformations
or Givens rotations, as detailed in [Khuri and Good, 1989] and [Shepard et al.,
2015]. Another general parameterization of T is provided by the Cholesky
decomposition:
T = LL> ,
(7)
where L is lower triangular. When the variance [T]`,` does not depend on
the level `, the columns of L have the same norm and represent points on
a sphere in RL . A spherical parameterization of L is then possible with one
variance term and L(L−1)/2 angles, representing correlations between levels
[see e.g. Pinheiro and Bates, 1996].
Parsimonious parameterizations. The general parametrizations of T
described above require O(L2 ) parameters. More parsimonious ones can be
used, up to additional model assumptions. Among the simplest forms, the
compound symmetry (CS) - often called exchangeable - covariance matrix
assumes a common correlation for all levels [see e.g. Pinheiro and Bates,
2009]. The CS matrix with variance v and covariance c is defined by:
[T]_{ℓ,ℓ′} = v if ℓ = ℓ′ and c if ℓ ≠ ℓ′,   with c/v ∈ (−1/(L − 1), 1).    (8)

This generalizes the kernel obtained by substituting the Gower distance d [Gower, 1982] into the exponential kernel, corresponding to c/v = e^{−d²} > 0.
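A small sketch (our own illustration, not the paper's code) building the CS matrix of (8) and checking the positive-definiteness constraint on c/v:

```python
import numpy as np

def cs_matrix(L, v, c):
    """Compound symmetry matrix: variance v on the diagonal, covariance c off the diagonal."""
    if not (-v / (L - 1) < c < v):
        raise ValueError("need -v/(L-1) < c < v for positive definiteness")
    return (v - c) * np.eye(L) + c * np.ones((L, L))

T = cs_matrix(5, v=1.0, c=-0.2)          # a mild negative correlation is allowed
print(np.linalg.eigvalsh(T).min() > 0)   # True: T is positive definite
```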
The CS covariance matrix treats equally all pairs of levels, which is an important limitation, especially when L ≫ 1. More flexibility is obtained by considering groups of levels. Assume that the L levels of u are partitioned in G groups G_1, . . . , G_G and denote by g(ℓ) the group number corresponding to a level ℓ. Then a desired parameterization of T is given by the block matrix (see e.g. Qian et al. [2007]):

[T]_{ℓ,ℓ′} = v if ℓ = ℓ′ and c_{g(ℓ),g(ℓ′)} if ℓ ≠ ℓ′,    (9)
and ci,j /v (i 6= j) are between-group correlations. Notice that additional
conditions on the ci,j ’s are necessary to ensure that T is a valid covariance
matrix, which is developed in the next section.
3
Block covariance matrices for levels grouping
We consider the framework of Section 2.2.2 where u denotes a categorical
variable whose levels are partitioned in G groups G1 , . . . , GG of various sizes
n1 , . . . , nG . Without loss of generality, we assume that G1 = {1, . . . , n1 }, G2 =
{n1 + 1, . . . , n1 + n2 }, . . . . We are interested in parsimonious parameterizations of the covariance matrix T, written in block form:
W1 B1,2
···
B1,G
..
...
.
B2,1 W2
T= .
(10)
...
...
..
BG−1,G
BG,1 · · · BG,G−1
WG
where the diagonal blocks Wg contain the within-group covariances, and the
off-diagonal blocks Bg,g0 are constant matrices containing the between-group
covariances. We denote:
g 6= g 0 ∈ {1, . . . , G}
Bg,g0 = cg,g0 Jng ,n0g ,
where Js,t denotes the s by t matrix of ones. This means that the betweengroup covariances only depends on groups (and not on levels).
7
We will also consider the particular case where diagonal blocks Wg are CS
covariance matrices with variance vg and covariance cg . In this subclass, the
between-group and within-group covariances only depends on groups (and
not on levels). When the variance term is the same for all groups, we obtain
block matrices of the form (9) as a special case.
Although the block matrices of the form (10) may be covariance matrices,
they are not positive semidefinite in general. In the next section, we provide a
proper characterization as well as a parameterization of such matrices which
automatically fulfills the positive semidefinite conditions.
We will use the following additional notations: for a given integer L ≥ 1, IL
is the identity matrix of size L, JL is the matrix of ones of size L, 1L is the
vector of ones of size L. Finally, for a vector or a matrix M, we denote by
M the real number equal to the average of its coefficients.
3.1
A Gaussian model for CS covariance matrices
We first focus on the case of a CS matrix. We denote by
ΓCS
L (v, c) = (v − c)IL + cJL
(11)
the CS matrix with a common variance term v and a common covariance
term c. It is well-known that ΓCS
L (v, c) is positive definite if and only if
− (L − 1)−1 v < c < v.
(12)
For instance, one can check that the eigenvalues of ΓCS
L (v, c) are v + (L −
1)c with multiplicity 1 (eigenvector 1L ) and v − c with multiplicity L − 1
(eigenspace 1⊥
L ). Notice that a CS matrix is positive definite for a range of
negative values of its correlation term.
Then we consider the following Gaussian model:
η ` = µ + λ` ,
` = 1, . . . , L
(13)
where µ ∼ N (0, vµ ) with vµ > 0, and λ1 , . . . , λL are i.i.d. random variables
from N (0, vλ ), with vλ > 0, assumed to be independent of µ.
A direct computation shows that the covariance matrix of η is the CS
covariance matrix ΓCS
L (vµ + vλ , vµ ). Clearly this characterizes the subclass of
8
positive definite CS covariance matrices ΓCS
L (v, c) such that c is non-negative.
The full parameterization, including negative values of c in the range (−(L −
1)−1 v, 0), can be obtained by restricting the average of level effects to be
zero, as detailed in the next proposition.
Proposition 1. When η and λ are related as in (13), the covariance of η conditional on zero average errors λ̄ = 0 is a CS matrix with variance v = v_µ + v_λ[1 − 1/L] and covariance c = v_µ − v_λ/L. Conversely, given a CS covariance matrix C with variance v and covariance c, there exists a representation (13) such that C is the covariance of η conditional on zero average errors λ̄ = 0, where v_µ = v/L + c[1 − 1/L] and v_λ = v − c.
3.2
Parameterization of centered covariance matrices
The usage of Model (13) to describe CS covariance matrices involves Gaussian
vectors that sum to zero. This is linked to centered covariance matrices, i.e.
covariance matrices F such that F = 0, as detailed in the next proposition.
We further give a parameterization of centered covariance matrices.
Proposition 2. Let F be a covariance matrix of size L ≥ 2. Then, F is centered iff there exists a Gaussian vector z on RL such that F = cov(z|z = 0).
In that case, let A be a L × (L − 1) matrix whose columns form an orthonormal basis of 1⊥
L . Then F is written in an unique way
F = AMA>
(14)
where M is a covariance matrix of size L − 1.
In particular if F = v[IL − L−1 JL ] is a centered CS covariance matrix, then
M = vIL−1 , and we can choose z ∼ N (0, vIL ).
The choice of A in Prop. 2 is free, and can be obtained by normalizing the columns of a L × (L − 1) Helmert contrast matrix (Venables and Ripley [2002], §6.2.):

    [ −1  −1  −1  · · ·  −1
       1  −1  −1  · · ·  −1
       0   2  −1  · · ·  −1
       0   0   3  · · ·  −1
       ⋮        ⋱        ⋮
       0   0  · · ·  0  L − 1 ]
3.3
A hierarchical Gaussian model for block covariance
matrices
Let us now return to the general case, where the levels of u are partitioned
in G groups. It will be convenient to use the hierarchical notation g/`,
indicating that ` belongs to the group Gg . Then, we consider the following
hierarchical Gaussian model:
ηg/` = µg + λg/` ,
g = 1, . . . , G,
` ∈ Gg
(15)
where for each g the random variable µg represent the effect of the group g,
and the random variables λg/1 , . . . , λg/ng represent the effects of the levels in
this group. We further assume:
• The vector µ is normal N (0, Γµ ).
• The vectors λg/. are normal N (0, Γλg ).
• The vectors λ1/. , . . . , λG/. are independent.
• The vectors µ and λ are independent.
As an extension of Prop. 1, the next proposition and Cor. 1 show that
(15) gives a one-to-one parameterization of positive semidefinite matrices of
the form (10) with CS diagonal blocks, under the additional assumption that
the average of level effects is zero in each group. More generally, we obtain
a large parametric family of positive semidefinite matrices of the form (10).
Proposition 3. The covariance matrix of η conditional on {λ̄_{g/·} = 0, g = 1, . . . , G} has the form (10) with, for all g, g′ ∈ {1, . . . , G}:

W_g = [Γ_µ]_{g,g} J_{n_g} + F_g,    B_{g,g′} = [Γ_µ]_{g,g′} J_{n_g,n_{g′}},    (16)

where F_g is a centered positive semidefinite matrix equal to cov(λ_{g/·} | λ̄_{g/·} = 0). Therefore W_g − W̄_g J_{n_g} is positive semidefinite for all g = 1, . . . , G.
Conversely, consider a positive semidefinite matrix T having the block form (10) such that W_g − W̄_g J_{n_g} is positive semidefinite for all diagonal blocks g. Let T̃ be the G × G matrix obtained by averaging each block of T. Then there exists a representation (15) such that T is the covariance of η conditional on zero average errors λ̄_{g/·} = 0 (g = 1, . . . , G), with:

Γ_µ = T̃,    cov(λ_{g/·} | λ̄_{g/·} = 0) = W_g − W̄_g J_{n_g}.
on the G constraints λg/. = 0 when cov(λg/. ) ∝ Ing .
As a by-product, we obtain a simple condition for the validity of block
covariance matrices of the form (10). Interestingly, it only involves a small
matrix whose size is the number of groups.
Proposition 4. Let T be a matrix having the block form (10) such that
Wg − Wg Jng is positive semidefinite for all g = 1, . . . , G. Then
e is positive semidefinite.
(i) T is positive semidefinite if and only if T
e is positive definite and the diagonal
(ii) T is positive definite if and only if T
blocks Wg are positive definite for all g = 1, . . . , G.
Furthermore, we have
e > + diag(W1 − W1 Jn1 , . . . , WG − WG Jn )
T = XTX
G
where X is the n × G matrix
X :=
1n1
0
..
.
0
(17)
...
0
..
.
1n2 . .
.
.
... ...
0
. . . 0 1nG
0
Remark 3. All the results depend on the conditional distribution λg/. |λg/. = 0.
Thus there is some flexibility in the choice of Γλg , since several matrices Γλg
can lead to the same conditional covariance matrix cov(λg/. |λg/. = 0).
Remark 4 (Groups of size 1). Prop. 3 is still valid for groups of size 1.
Indeed if ng = 1, then (λg/. |λg/. = 0) is degenerate and equal to 0. Thus
Fg = Wg − Wg J1 = 0 is positive semidefinite.
11
3.4
Related works
Model (15):
ηg/` = µg + λg/` ,
shares similarities with two-way Bayesian models and linear mixed effect
models (see e.g. Lindley and Smith [1972]), with Gaussian priors for the
effects µ and λg/. . The centering constraints λg/. = 0 are also standard
identifiability conditions in such models. Furthermore, the particular case
of CS covariance matrices corresponds to the exchangeable assumption of
the corresponding random variables. Typically, in the framework of linear
modelling, Model (15) could be written as
yg,` = m + µg + λg,` + εg,` ,
with an additional grand mean m and errors ε_{g,ℓ}.
However, if the framework is similar, the goal is different. In linear modelling, the aim is to quantify the effects by estimating their posterior distribution (µ, λ)|y. On the other hand, we aim at investigating the form of
the covariance matrix of the response part µg + λg/` , or, equivalently, the
covariance matrix of the likelihood y|µg + λg/` .
3.5
Summary and comments
The results of the previous sections show that a wide class of valid block covariance matrices can be parameterized by a family of covariance matrices of smaller sizes. This class is formed by positive definite matrices of the form (10) such that W_g − W̄_g J_{n_g} is positive semidefinite for all g = 1, . . . , G. It contains the case where diagonal blocks are CS covariance matrices. The algorithm is summarized below; a code sketch is given just after the list.

1. Generate a covariance matrix Γ_µ of size G.
2. For all g = 1, . . . , G: if n_g = 1, set F_g = 0; else:
   • Generate a covariance matrix M_g of size n_g − 1.
   • Compute a centered matrix F_g = A_g M_g A_g^⊤, where A_g is a n_g × (n_g − 1) matrix whose columns form an orthonormal basis of 1_{n_g}^⊥.
3. For all 1 ≤ g < g′ ≤ G, compute the within-group blocks W_g and between-group blocks B_{g,g′} by Eq. (16).
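Below is a hedged sketch of these three steps (our own illustration, with assumed helper names), reusing the Helmert basis from Section 3.2; it also checks Prop. 4 by comparing the smallest eigenvalues of T and of the block-averaged matrix T̃ (which here equals Γ_µ because the F_g are centered).

```python
import numpy as np

def helmert_basis(L):
    A = np.zeros((L, L - 1))
    for j in range(1, L):
        col = np.zeros(L)
        col[:j], col[j] = -1.0, j
        A[:, j - 1] = col / np.linalg.norm(col)
    return A

def random_spd(k, rng):
    """Random covariance matrix of size k (Wishart-style)."""
    B = rng.standard_normal((k, k + 2))
    return B @ B.T / (k + 2)

def group_kernel(sizes, seed=0):
    """Build a valid block covariance matrix T from group sizes n_1, ..., n_G (Eq. (16))."""
    rng = np.random.default_rng(seed)
    G, n = len(sizes), sum(sizes)
    Gamma_mu = random_spd(G, rng)                    # step 1: between-group covariance
    T = np.zeros((n, n))
    starts = np.cumsum([0] + list(sizes))
    for g, ng in enumerate(sizes):
        for h, nh in enumerate(sizes):
            T[starts[g]:starts[g+1], starts[h]:starts[h+1]] = Gamma_mu[g, h]
        if ng > 1:                                   # step 2: centered within-group part F_g
            A = helmert_basis(ng)
            F = A @ random_spd(ng - 1, rng) @ A.T
            T[starts[g]:starts[g+1], starts[g]:starts[g+1]] += F
    return T, Gamma_mu

T, Gamma_mu = group_kernel([3, 1, 4])
# Prop. 4: T is positive definite iff the block-averaged matrix (= Gamma_mu here) is.
print(np.linalg.eigvalsh(T).min() > 0, np.linalg.eigvalsh(Gamma_mu).min() > 0)
```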
In steps 1 and 2, the covariance matrices Γ_µ and M_g can be general, and obtained by one of the parameterizations of §2.2.2. However, some specific form, such as CS matrices, can also be chosen. Depending on the number of groups and their sizes, different levels of parsimony can be obtained. Table 1 summarizes some possibilities.
Notice that it may be hard to choose the parametric setting M_g, Γ_µ in order to account for a specified constraint on the block matrix T, such as homoscedasticity. An alternative to over-parameterization is to use the economic constraint on T of Prop. 4. Indeed, positive definiteness of T is equivalent to positive definiteness of the small matrix T̃ of size G.

Parametric setting                         | Resulting form of T                  | Number of parameters
M_g                 | Γ_µ                  | W_g               | B_{g,g′}
v_{λ_g} I_{n_g−1}   | Γ^CS(v_µ, c_µ)       | Γ^CS(v_g, c_g)    | c_{g,g′} ≡ c_µ   | 2G + 1
v_{λ_g} I_{n_g−1}   | General              | Γ^CS(v_g, c_g)    | c_{g,g′}         | G(G+3)/2
General             | Γ^CS(v_µ, c_µ)       | General           | c_{g,g′} ≡ c_µ   | 2 + Σ_{g=1}^G n_g(n_g+1)/2
General             | General              | General           | c_{g,g′}         | G(G+1)/2 + Σ_{g=1}^G n_g(n_g+1)/2

Table 1: Parameterization details for some valid block-covariance matrices T of the form (10).
4
Examples
Before considering the application in nuclear engineering, we consider two
toy functions with one continuous input and one categorical input, which
reproduce two specificities of that application. The first function mimics a
situation where the output variance depends on the level of a categorical
input. The second one investigates level grouping when the number of levels
is large.
4.1 Example 1 (A heteroscedastic case)
Consider the deterministic function

f(x, u) = cos(6.8πx/2) if u = 1,   −2 cos(7.0πx/2) if u = 2,   (1/2) cos(7.2πx/2) if u = 3,
where x ∈ [0, 1] and u ∈ {1, 2, 3}. The expression of f is adapted from
[Han et al., 2009] by scaling the three output curves f (., u) according to the
level of u. Thus the output variance clearly depends on the level. As visible
in Figure 1, these three curves are strongly dependent, with a positive link
between f (x, 1) and f (x, 3), and negative between f (x, 1) and f (x, 2).
Figure 1: Test function 1. f (x, 1) in black, f (x, 2) in red and f (x, 3) in green.
The design points correspond to one realization of a sliced LHD.
The aim is to compare the accuracy of four GP models by reconstructing
f with few evaluations. For all of them, a Matérn 5/2 kernel [Rasmussen
and Williams, 2006] is chosen for x. The first GP model (IND) consists in
three independent GPs corresponding to the levels of u. The other ones have
tensor product kernels, with three different covariance matrices T for u. The
first two ones assume a constant variance: T is defined by a CS covariance
structure (Eq. 8), or by a general spherical parameterization (SPH) (See Section 2.2.2). Finally, we consider a heteroscedastic spherical parameterization
(H-SPH) where the covariance matrix is defined by a general variance vector,
and a spherical parameterization of the correlation matrix.
In order to benefit from the strong link between levels, we use a design that
spreads out the points between levels. For instance, the information given
by f (0, 1) may be useful to estimate f (x, 2) and f (x, 3) at 0, without computing f (0, 2) and f (0, 3). More precisely, we have used a (random) sliced
Latin hypercube design (SLHD) [Qian, 2012] with 5 points by level, for a
total budget of 15 points. Parameter estimation is by maximum likelihood.
As the likelihood surface may be multimodal, we have launched several optimizations with different starting points chosen at random in the domain.
Model accuracy is measured over a test set formed by a regular grid of size
1000, in terms of Q2 criterion. The Q2 criterion has a similar expression than
R2 , but is computed on the test set:
P
(yi − ŷi )2
2
,
(18)
Q = 1 − Pi
2
i (yi − ȳ)
where the yi denote the observations (on the test set), ȳ their mean, ŷi
the predictions. It is negative if the model performs worst than the mean,
positive otherwise, and tends to 1 when predictions are close to true values.
Finally, the process is repeated 100 times, in order to assess the sensitivity
of the result to the design.
We observe that the heteroscedastic model H-SPH clearly outperforms the
other ones. As expected, the estimated variances (2.5, 5.3, 1.4) for this model
are level-dependent, whereas a constant variance is wrongly estimated around
2 by CS and SPH. Moreover, we have represented in Figure 3 an estimation of
the correlations between levels, deduced from the parameterized covariance
matrix T . For representativeness, we have chosen one of the 100 designs used
to generate Figure 2, such that the corresponding Q2 is the closest to the
median of the 100 Q2 values. We can see that the strong dependence link of
f (., u) between the levels of u is only recovered correctly by H-SPH, with a
poor estimation of correlation parameters for the other ones.
4.2
Example 2 (Levels grouping)
The second function is defined as:
x
u
g(x, u) = cos 7π + p(u)π −
2
20
15
1.00
0.75
0.50
0.25
0.00
IND
CS
SPH
H−SPH
Figure 2: Q2 criterion for four GP models, based on 100 repetitions of the
design.
u
with x ∈ [0, 1], u ∈ {1, . . . , 13} and p(u) = 0.4 + 15
with x ∈ [0, 1], u ∈ {1, . . . , 13} and p(u) = 0.4 + (u/15) · 1_{u>9}. As visible in Figure 4, there are two groups of curves corresponding to levels {1, . . . , 9} and {10, . . . , 13}, with strong within-group correlations and strong negative between-group correlations.
The second one considers the two groups {1, . . . , 9} and {10, . . . , 13}. The
third model, based on the five groups {1, . . . , 9}, {10}, {11}, {12}, {13}, has
two variants: (a) when the inter-groups correlation is constant and (b) in
the general case. The fourth model uses the spherical parameterization of T ,
leading to 13 groups, and the last one considers an ordinal parameterization
for T . The design of experiments is a 39-points SLHD. The remaining simulation settings are the same as in Example 1.
The estimated correlation parameters are shown in Figure 5. The right correlation structure is well recovered with two groups and five groups, with different between-group correlations. The model with thirteen groups involves the estimation of 90 parameters, which is hard to achieve, especially with 39 points. This is visible in the erratic values of the estimated correlations, which do not seem meaningful. On the contrary, considering only one group, or five groups with a common between-group correlation, oversimplifies and fails at recovering the right correlations. The ordinal kernel recovers the two blocks of curves, but cannot detect the negative correlation between them (Remark 2).
SPH
−1.0
−0.5
0.0
0.5
1.0
Figure 3: Estimated correlation parameters among levels for f , for a design
of experiments corresponding to a median Q2 .
0.0
0.2
0.4
0.6
0.8
1.0
Figure 4: Test function g.
nel recovers the two blocs of curves, but cannot detect negative correlation
between them (Remark 2).
In Figure 6, we can see that the best tradeoff between prediction accuracy and parsimony is obtained with two groups, even though grouping the levels reduces the number of observations per group. Notice the rather good performance of the ordinal model, at the cost of a larger number of parameters: the warping was parameterized by an affine function (see Section 2.2.1). This is remarkable since negative correlations cannot be represented (see above). This may be due to the larger number of within-group level combinations, which reduces the influence of the negative between-group correlations.
Figure 5: Estimated correlation parameters among levels for g, based on a representative design of experiments (design with median Q2). Panels: 1 group, 2 groups, 5 groups (a), 5 groups (b), 13 groups, ordinal.
5 Application in nuclear engineering

5.1 Position of the problem
As presented in the Introduction, this research is originally motivated by the solving of an inverse problem, confronting experimental measurements in nuclear engineering with time-consuming numerical simulations. More precisely, this analysis concerns the identification of the mass m of 239Pu that is present in a particular waste container, using a non-destructive nuclear detection technique such as gamma spectrometry [Knoll, 2010]. In that case, at each energy level E,

m × ε(E; E) = y^SG(E),   (19)

where y^SG(E) is the quantity of interest provided by the gamma transmitter, and ε(E; E) is the attenuation coefficient, which depends on the source environment, denoted by E.
Figure 6: Q2 of six GP models (1 group, 2 groups, 5 groups, 5 groups (b), 13 groups, ordinal), based on 100 repetitions of the design. Number of parameters used (boxplot order): 5, 7, 10, 19, 90, 16.
In practice, only discrete values of E are of interest,
corresponding to the natural energy levels of 239 Pu:
E ∈ {94.66, 129.3, 203.6, 345.0, 375.1, 413.7} (keV).   (20)
Then, based on previous studies [Guillot, 2015], the real source environment is parameterized by the following input variables:
• An equivalent geometric shape for the nuclear waste: sphere (‘sph’),
cylinder (‘cyl’) or parallelepiped (‘par’).
• An equivalent material for this waste, characterized by its chemical
element with atomic number in {1, . . . , 94},
• The bulk density of the waste, in [0, 1],
• The distance of measurement between the container and the measurement device, in [80, 140] (cm),
• The mean width and lateral surfaces (in logarithmic scale) crossed by
a gamma ray during the rotation of the object.
After normalization, the characteristics of the input space are summarized in Table 2.
Name of the input     Variation domain
Distance              [0, 1]
Density               [0, 1]
Width                 [0, 1]
Surface               [0, 1]
Energy                {1, 2, 3, 4, 5, 6}
Shape                 {sph, cyl, par}
Chemical element      {1, . . . , 94}

Table 2: Description of the input variables for the nuclear application.
To recover the notation of the previous sections, let x and u be the vectors gathering respectively the continuous and categorical inputs, and w = (x, u). For a given value of w, Monte Carlo simulation codes such as MCNP [Goorley et al., 2013] can be used to model the measured scene and approximate the value of ε(w) = ε(E, E). The mass m can eventually be searched as the solution of the following optimization problem:
(m*, w*) = arg min_{m, w} ‖y^obs − m × ε(w)‖,   (21)
where ‖ · ‖ is the classical Euclidean norm, and ε(w) and y^obs respectively gather the values of ε and y^SG at the six values of E that are used for the measurements. To solve (21), it is therefore necessary to compute ε at a high number of points. However, each evaluation of the MCNP code can be extremely demanding (between several minutes and several hours of CPU time for one evaluation). Thus, surrogate models have to be introduced to emulate the function w ↦ ε(w), which is now investigated in the framework of Gaussian process regression. We refer to Clement et al. [2018] for the second step, namely the treatment of the inversion problem.
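Since the objective of (21) is quadratic in m for any fixed w, the optimal mass has the closed form m(w) = ⟨y^obs, ε(w)⟩ / ‖ε(w)‖², so that the search can be reduced to w only. The sketch below illustrates this reduction; eps_surrogate stands for a GP emulator of w ↦ ε(w) and is a placeholder name, not part of the original implementation.

```python
# Sketch of the inner step of problem (21): for a fixed candidate w, the mass m
# minimising ||y_obs - m * eps(w)|| is available in closed form.
import numpy as np

def best_mass(y_obs, eps_w):
    """Least-squares optimal m for fixed w."""
    eps_w = np.asarray(eps_w, dtype=float)
    return float(np.dot(y_obs, eps_w) / np.dot(eps_w, eps_w))

def objective(w, y_obs, eps_surrogate):
    """Residual norm of (21) after profiling out m; eps_surrogate is a placeholder
    for the GP prediction of the six attenuation coefficients at w."""
    eps_w = eps_surrogate(w)
    m = best_mass(y_obs, eps_w)
    return np.linalg.norm(y_obs - m * eps_w), m
```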
5.2 Model settings
For pedagogical purposes, a dataset of large size N = 5076 has been computed with the MCNP code. The construction of the design of experiments was
guided by the categorical inputs, such that each of the 6 × 3 × 94 = N/3
combinations of levels appears 3 times. It was completed by a Latin hypercube of size N to define the values of the four continuous inputs.
From this full dataset, a training set of size n = 3 × 94 = 282 is extracted
by selecting at random 3 observations per chemical element. The remaining N − n points serve as the test set.
Figure 7: Y as a function of the energy (levels E1–E6) and of the geometric shape (Sph, Cyl, Par).
Model settings are now motivated by a graphical analysis. In Figure 7,
the output is displayed as a function of the energy and of the geometric shape. We observe that successive energy levels correspond to close output values. This
confirms that the energy is ordinal and we use the warped kernel defined by
Eq. (5). The influence of the geometric shape is less obvious, and we have
chosen an exchangeable (CS) covariance structure for it.
In Figure 8, Y is displayed as a function of the 94 chemical elements, ordered by atomic number. Two important facts are the high number of levels and heteroscedasticity. For this reason, the 94 chemical elements are divided into 5 groups, provided by expert knowledge and represented by colors. This partition suggests using a group kernel of the form (10), where the within-group
blocks Wg are CS covariance matrices. In order to handle heteroscedasticity,
the variance of Wg is assumed to depend on the group number g.
The influence of the continuous variables can be observed in panels (not represented), and does not reveal useful information for our purpose. A Matérn 5/2 kernel is set for all continuous inputs, as we expect the output to be a regular function of the continuous inputs. Indeed, for this kernel, the corresponding Gaussian process is twice mean-square differentiable.
Finally, three candidate kernels for w are obtained by combining the kernels
of input variables defined above, by sum, product or ANOVA (see Section 2).
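As a generic illustration of these three operations (not the kergp implementation), the sketch below combines a list of one-dimensional kernel functions by sum, product, or ANOVA; the ANOVA combination is taken here as the product of (1 + kj) factors, which we assume matches the convention of Section 2.

```python
# Generic combination of per-input kernel functions k_j(w_j, w_j') by sum,
# product or ANOVA; toy kernels are used for the usage example.
import numpy as np

def combine_kernels(kernels, how="prod"):
    def k(w, w2):
        vals = np.array([kj(wj, wj2) for kj, wj, wj2 in zip(kernels, w, w2)])
        if how == "add":
            return vals.sum()
        if how == "prod":
            return vals.prod()
        if how == "anova":
            return np.prod(1.0 + vals)
        raise ValueError(how)
    return k

k_cont = lambda a, b: np.exp(-3.0 * (a - b) ** 2)   # toy continuous kernel
k_cat = lambda a, b: 1.0 if a == b else 0.5         # toy categorical kernel
k_anova = combine_kernels([k_cont, k_cont, k_cat], "anova")
print(k_anova((0.2, 0.7, 1), (0.3, 0.7, 2)))
```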
Figure 8: Y as a function of the chemical elements, ordered by atomic number.
5.3 Results
Following the model settings detailed above, Figure 9, Panel 3, presents the
results obtained with 60 random designs of size n and three operations on kernels. Furthermore, we have implemented three other kernels for the chemical
element, in order to compare other model choices for this categorical input.
In the first panel, we grouped all the 94 levels in a single group. In the second
one, we kept the 5-group kernel but forced the between-group covariances to
have a common value. Finally, in the fourth panel, we considered that the
levels were ordered by their atomic number, and used the warped kernel of
Eq. (5) with a Normal transform.
Figure 9: Q2 of several GP models, based on 60 random designs, corresponding to different model choices for the chemical element. First panel: Single
group. Second panel: 5 groups, with a common between-group covariance.
Third panel: 5 groups. Fourth panel: Ordered levels. Total number of parameters used in the panel order: ‘prod’ = (12, 21, 30, 14), ‘add’ = ‘prod’ +
6, ‘anova’ = ‘prod’ + 7.
First, comparing the three operations on kernels, we remark that in all
the panels, additive kernels provide the worst results. This suggests the existence of interactions between different inputs of the simulator. Second, the
ANOVA combination produces slight improvements, compared to the standard tensor-product, both in terms of accuracy and stability with respect to
design choice.
Now, comparing the four panels, we see that gathering the levels in a single group is the least efficient strategy. The 5-group kernel gives very good
performances, especially when the between-group covariances vary freely:
constraining them to be equal degrades the result. Surprisingly here, the
ordinal kernel gives the best performance. Indeed, for this application it was
not intuitive to the experts that the chemical element can be viewed as an
ordinal variable, simply sorted by its atomic number. This is confirmed by
the correlation plots of Figure 10, corresponding to a model with a median
Q2 score. We can see that the estimated correlations between levels seem
to decrease as the difference between levels increases, an indication that the
levels may be ordered by their atomic number.
Finally, we report several post-processing results. First, the estimated transformation of energy levels (Figure 11a) is concave and flat near high values,
which corresponds to the behaviour observed in Figure 7 (left panel). In
addition, the last three levels lead to similar results (Figure 11). This corresponds to the fact that when the energy is high, the gamma ray almost always
crosses the nuclear waste, leading to a high value for the output. Second, the
estimated correlation among the sphere, the cylinder and the parallelepiped
is very high (c = 0.9, Figure 12). This justifies considering a covariance
structure for that categorical input, rather than using three independent GP
models for the three levels.
6 Conclusion
In the framework of GP regression with both continuous and categorical inputs, we focused on problems where categorical inputs may have a potentially
large number of levels L, partitioned in G ≪ L groups of various sizes. We
provided new results about parsimonious block covariance matrices, defined
by a few within- and between-group covariance parameters.
Figure 10: Estimated correlation parameters among the chemical element. (a) 5 groups (common between-group covariance). (b) 5 groups (general).

Figure 11: Estimated correlation parameters for the energy. (a) Transformation. (b) Correlations.

Figure 12: Estimated correlation parameters for the geometric shape.
We revisited a two-way nested Bayesian linear model, where the response
term is defined as a sum of a group effect and a level effect. We obtained
a flexible parameterization of block covariance matrices which automatically
satisfy the positive definiteness conditions. As a particular case, we recover
situations where the within-group covariance structures are compound symmetry, with possible negative correlations. Furthermore, we showed that the
positive definiteness of a given block covariance matrix can be checked by
verifying that the small matrix of size G obtained by averaging each block
is positive definite. This criterion can be useful if the proposed block matrix
has a desirable constraint, such as homoscedasticity, which is not directly
handled by the proposed parameterization.
We applied these findings on several toy functions as well as an application
in nuclear engineering, with 2 continuous inputs, 3 categorical inputs, one
of them having 94 levels corresponding to chemical numbers in Mendeleev’s
table. In this application, 5 groups were defined by experts. The results,
measured in terms of prediction accuracy, outperform those obtained with
oversimplifying assumptions, such as gathering all levels in a same group.
On the other hand, when the categorical input can be viewed as an ordinal
one, plugging the right order into warped kernels has led to slightly better
results in our experiments.
There are several perspectives for this work. Firstly, one future direction
is to find a data-driven technique to recover groups of levels. This may not be an easy task, due to the small number of observations available in the
context of GP regression. Similarly, if there is an order between levels, can
we infer it from the data? Secondly, the trend of the GP models has been
fixed to a constant. More complex forms, based on linear models, could be
explored.
Software information and acknowledgements
Implementations have been done with the R packages mixgp and kergp [Deville et al., 2015]. Illustrations use ggplot2 [Wickham, 2009] and corrplot
[Wei and Simko, 2016].
This research was conducted within the frame of the Chair in Applied Mathematics OQUAIDO, gathering partners in technological research (BRGM,
CEA, IFPEN, IRSN, Safran, Storengy) and academia (CNRS, Ecole Centrale de Lyon, Mines Saint-Etienne, University of Grenoble, University of
Nice, University of Toulouse) around advanced methods for Computer Experiments.
Appendix
Proof of Proposition 1. The vector (λ, λ1 + · · · + λL) is a centered Gaussian vector with covariance matrix

vλ [ IL  1L ; 1L⊤  L ].
Hence the conditional distribution of λ knowing λ̄ = 0 is a centered Gaussian vector with covariance matrix

cov(λ | λ̄ = 0) = vλ [IL − 1L L⁻¹ 1L⊤] = vλ [IL − L⁻¹ JL].
Then, by using the independence between µ and the λℓ's, we deduce

cov(η | λ̄ = 0) = vµ JL + vλ [IL − L⁻¹ JL] = vλ IL + [vµ − L⁻¹ vλ] JL.

We recognize the CS covariance matrix ΓCS_L(v, c) with v = vµ + (1 − L⁻¹)vλ and c = vµ − L⁻¹vλ. As a covariance matrix, it is positive semidefinite. Furthermore, we have c < v and c + (L − 1)⁻¹ v = vµ [1 + (L − 1)⁻¹] > 0, and the conditions of positive definiteness (12) are satisfied.
Conversely, let C be a positive definite CS matrix ΓCS_L(v, c). Then we have −(L − 1)⁻¹ v < c < v, and we can define vµ = L⁻¹[v + (L − 1)c] and vλ = v − c. From the direct sense, we then obtain that the covariance matrix of η | λ̄ = 0 is ΓCS_L(v, c) = C.
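As a quick numerical check of this computation (a sketch with arbitrary values of L, vµ and vλ), the conditional covariance above coincides with the CS matrix built from v = vµ + (1 − L⁻¹)vλ and c = vµ − L⁻¹vλ, and it is positive definite.

```python
# Numerical check of Proposition 1 with arbitrary parameter values.
import numpy as np

L, v_mu, v_lam = 5, 1.3, 0.8
I, J = np.eye(L), np.ones((L, L))

cond_cov = v_mu * J + v_lam * (I - J / L)      # cov(eta | lambda_bar = 0)
v = v_mu + (1 - 1 / L) * v_lam
c = v_mu - v_lam / L
cs = (v - c) * I + c * J                        # CS matrix Gamma_CS_L(v, c)

assert np.allclose(cond_cov, cs)
assert np.all(np.linalg.eigvalsh(cs) > 0)       # positive definiteness
```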
Proof of Proposition 2. The first part of the proposition is obtained by remarking that if F = cov(z), then F̄ = cov(z̄). Thus, assuming that z is centered, F̄ = 0 is equivalent to z̄ = 0 with probability 1.
For the second part, notice that z̄ = 0 means that z is orthogonal to 1L. Thus, one can write the expansion of z in the orthonormal basis of 1L⊥ defined by A. Denoting by t the (L − 1)-vector of coordinates, we have z = At. This gives F = cov(At) = A cov(t) A⊤, and (14) follows with M = cov(t).
To prove uniqueness, observe that, by definition, A⊤A = IL−1 and A⊤1L = 0. Starting from F = AMA⊤, and multiplying by A⊤ on the left and by A on the right, we get M = A⊤FA, showing that M is unique.
Now, let F = v[IL − L⁻¹JL]. Since JL = 1L1L⊤, we obtain

M = A⊤FA = v[A⊤A − L⁻¹(A⊤1L)(1L⊤A)] = vIL−1.

As a by-product, notice that resubstituting M into F = AMA⊤ gives AA⊤ = IL − L⁻¹JL. Finally, if z ∼ N(0, vIL), then the properties of conditional Gaussian vectors lead immediately to cov(z | z̄ = 0) = F.
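The identities used in this proof are easy to verify numerically; the sketch below builds one possible orthonormal basis A of the hyperplane orthogonal to 1L by a QR decomposition (the result does not depend on this particular choice).

```python
# Numerical check of A^T A = I_{L-1}, A^T 1_L = 0 and A A^T = I_L - J_L / L.
import numpy as np

L = 6
ones = np.ones((L, 1))
Q, _ = np.linalg.qr(np.hstack([ones, np.eye(L)[:, : L - 1]]))
A = Q[:, 1:]                                   # orthonormal basis of the hyperplane orthogonal to 1_L

I_L, J_L = np.eye(L), np.ones((L, L))
assert np.allclose(A.T @ A, np.eye(L - 1))
assert np.allclose(A.T @ ones, 0.0)
assert np.allclose(A @ A.T, I_L - J_L / L)

v = 2.0
F = v * (I_L - J_L / L)
assert np.allclose(A.T @ F @ A, v * np.eye(L - 1))   # M = A^T F A = v I_{L-1}
```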
Proof of Proposition 3. The expressions of Wg and Bg,g′ are obtained directly by using the independence assumptions about µ and the λ's. Notice that Fg, the covariance matrix of λg/. knowing λ̄g/. = 0, is centered by Proposition 2. This gives Wg − W̄g Jng = Fg, which is positive semidefinite.
Conversely, let T be a positive semidefinite matrix of the form (10), such that Wg − W̄g Jng is positive semidefinite for all g = 1, . . . , G. Let T̃ be the matrix obtained from T by averaging each block. Then T̃ is also a positive semidefinite matrix. Indeed, since T is positive semidefinite, it is the covariance matrix of some vector z. Then T̃ is the covariance matrix of z̃, the vector obtained from z by averaging by group: z̃g = ng⁻¹ Σ_{ℓ∈Gg} zℓ. Thus there exists a centered Gaussian vector (µg)1≤g≤G whose covariance matrix is Γµ = T̃. Now, for g = 1, . . . , G, define

Fg = Wg − [Γµ]g,g Jng = Wg − W̄g Jng.

Observe that F̄g = 0, and by assumption Fg is positive semidefinite. Hence, from Proposition 2, there exists a centered Gaussian vector (λg/ℓ)1≤ℓ≤ng such that

Fg = cov(λg/. | λ̄g/. = 0).

We can assume that λ1/., . . . , λG/. are independent, and that µ and λ are independent. Finally, we set ηg/ℓ = µg + λg/ℓ. By the direct sense and (16), we obtain that T is the covariance matrix of η conditional on {λ̄g/. = 0, g = 1, . . . , G}.
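The averaging argument of this proof translates into a simple numerical criterion: average each block of T to obtain the G × G matrix T̃ and check its positive semidefiniteness, together with that of each Wg − W̄g Jng. The values below are arbitrary and only illustrate the check.

```python
# Illustration of the block-averaging criterion (arbitrary example values).
import numpy as np

def block_average(T, sizes):
    """G x G matrix obtained by averaging each block of T (groups of given sizes)."""
    edges = np.cumsum([0] + list(sizes))
    G = len(sizes)
    return np.array([[T[edges[i]:edges[i+1], edges[j]:edges[j+1]].mean()
                      for j in range(G)] for i in range(G)])

def is_psd(M, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

sizes = (3, 2)
W1 = 1.0 * np.eye(3) + 0.3 * (np.ones((3, 3)) - np.eye(3))   # CS block, v = 1.0, c = 0.3
W2 = 1.5 * np.eye(2) - 0.2 * (np.ones((2, 2)) - np.eye(2))   # CS block, v = 1.5, c = -0.2
B = 0.1 * np.ones((3, 2))                                    # between-group block
T = np.block([[W1, B], [B.T, W2]])

T_tilde = block_average(T, sizes)
within_ok = all(is_psd(W - W.mean() * np.ones_like(W)) for W in (W1, W2))
print(is_psd(T_tilde) and within_ok, is_psd(T))              # both True here
```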
Proof of Corollary 1. Let T be a positive semidefinite matrix of the form (10) with CS diagonal blocks. Then the diagonal CS matrices are positive semidefinite, leading to vg − cg ≥ 0. Thus,

Wg − W̄g Jng = (vg − cg)(Ing − ng⁻¹ Jng)

is a positive semidefinite CS matrix. Hence, by Prop. 3, T is obtained from Model (15), with cov(λg/. | λ̄g/. = 0) = (vg − cg)(Ing − ng⁻¹ Jng). By Prop. 2 (last part), we can choose Γλg = vλ,g Ing, with vλ,g = vg − cg ≥ 0.
Conversely, if Γλg = vλ,g Ing, then by Prop. 2, Fg = cov(λg/. | λ̄g/. = 0) is a CS covariance matrix. The result follows by Prop. 3.
Proof of Proposition 4. The direct sense of (i) has already been derived in the proof of Prop. 3. Furthermore, inspecting that proof, we see that if T̃ is positive semidefinite, then T admits the representation (15). Thus T is a covariance matrix, and positive semidefinite.
For (ii), a proof is available in [Roustant and Deville, 2017]. Notice that we need to add the condition that Wg is positive definite for all g = 1, . . . , G. However, adding an equivalent condition for (i), namely that Wg is positive semidefinite, was not necessary. Indeed, it is a consequence of the fact that Wg − W̄g Jng is positive semidefinite and that T̃ is positive semidefinite, which implies T̃g,g = W̄g ≥ 0.
Finally, Eq. (17) is direct.
References
C. Chevalier, J. Bect, D. Ginsbourger, E. Vazquez, V. Picheny, and Y. Richet.
Fast parallel kriging-based stepwise uncertainty reduction with application
to the identification of an excursion set. Technometrics, 56(4):455–465,
2014.
A. Clement, N. Saurel, and G. Perrin. Stochastic approach for radionuclides
quantification. EPJ Web Conf., 170:06002, 2018. URL https://doi.org/
10.1051/epjconf/201817006002.
X. Deng, C. D. Lin, K.-W. Liu, and R. K. Rowe. Additive gaussian process for
computer models with qualitative and quantitative factors. Technometrics,
59(3):283–292, 2017.
Y. Deville, D. Ginsbourger, and O. Roustant. kergp: Gaussian Process Laboratory, 2015. URL https://CRAN.R-project.org/package=kergp. Contributors: N. Durrande. R package version 0.2.0.
E. Fox and D. B. Dunson. Multiresolution gaussian processes. In
F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors,
Advances in Neural Information Processing Systems 25, pages 737–745.
Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/
4682-multiresolution-gaussian-processes.pdf.
J. T. Goorley, M. Fensin, and G. Mckinney. MCNP6 Users Manual, Version
1.0, May 2013.
J. C. Gower. Euclidean distance geometry. Math. Sci, 7(1):1–14, 1982.
N. Guillot. Quantification gamma de radionucléides par modélisation
équivalente. PhD thesis, Université Blaise Pascal - Clermont-Ferrand II,
France, 2015.
G. Han, T. J. Santner, W. I. Notz, and D. L. Bartel. Prediction for computer
experiments having quantitative and qualitative input variables. Technometrics, 51(3):278–288, 2009. ISSN 00401706.
A. Khuri and I. Good. The parameterization of orthogonal matrices : a
review mainly for statisticians : review paper. South African Statistical
Journal, 23(2):231–250, 1989.
G. F. Knoll. Germanium Gamma-Ray Detectors, volume 3. John Wiley &
Sons, 2010.
D. Lindley and A. Smith. Bayes estimate for the linear model (with discussion) part 1. Journal of the Royal Statistical Society, Ser B, 34(1):1–41,
1972.
P. McCullagh. Regression models for ordinal data. Journal of the Royal
Statistical Society. Series B (Methodological), 42(2):109–142, 1980.
S. Park and S. Choi. Hierarchical gaussian process regression. In M. Sugiyama
and Q. Yang, editors, Proceedings of 2nd Asian Conference on Machine
Learning, volume 13 of Proceedings of Machine Learning Research, pages
95–110, 2010.
J. Pinheiro and D. Bates. Mixed-Effects Models in S and S-PLUS. Statistics
and Computing. Springer New York, 2009.
J. C. Pinheiro and D. M. Bates. Unconstrained parametrizations for variance-covariance matrices. Statistics and Computing, 6(3):289–296, 1996.
P. Z. G. Qian. Sliced latin hypercube designs. Journal of the American
Statistical Association, 107(497):393–399, 2012.
P. Z. G. Qian, H. C. F. Wu, and J. Wu. Gaussian process models for computer
experiments with qualitative and quantitative factors. Technical report,
Department of statistics, University of Wisconsin, 2007.
C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning.
The MIT Press, 2006.
O. Roustant and Y. Deville. On the validity of parametric block correlation
matrices with constant within and between group correlations. working
paper or preprint, May 2017. URL https://hal.archives-ouvertes.fr/hal-01527667.
J. Sacks, W. Welch, T. Mitchell, and H. Wynn. Design and analysis of
computer experiments. Statistical Science, (4):409–435, 1989.
R. Shepard, S. R. Brozell, and G. Gidofalvi. The representation and
parametrization of orthogonal matrices. The Journal of Physical Chemistry A, 119(28):7924–7939, 2015.
A. Smith. Bayes estimates in one-way and two-way models. Biometrika, 60
(2):319–329, 1973.
W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer,
4 edition, 2002.
T. Wei and V. Simko. corrplot: Visualization of a Correlation Matrix, 2016.
R package version 0.77.
H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag
New York, 2009. ISBN 978-0-387-98140-6. URL http://ggplot2.org.
Y. Zhang and W. I. Notz. Computer experiments with qualitative and quantitative variables: A review and reexamination. Quality Engineering, 27
(1):2–13, 2015.
A Survey on Trapping Sets and Stopping Sets
arXiv:1705.05996v1 [cs.IT] 17 May 2017
Aiden Price and Joanne Hall, Member, IEEE.
Abstract—LDPC codes are used in many applications, however, their error correcting capabilities are limited by the presence of stopping sets and trapping sets. Trapping sets and stopping sets occur when specific low-weight error patterns cause a decoder to fail. Trapping sets were first discovered with the investigation of the error floor of the Margulis code. Possible solutions are constructions which avoid creating trapping sets, such as progressive edge growth (PEG), or methods which remove trapping sets from existing constructions, such as graph covers. This survey examines trapping sets and stopping sets in LDPC codes over channels such as the BSC, BEC and AWGNC.
Index Terms—LDPC codes, trapping sets, stopping sets, QC-LDPC codes, Margulis codes, AWGNC, PEG algorithm, graph
covers.
I. INTRODUCTION
As technology advances, we wish to communicate over
longer distances and have the ability to stay connected
even over poor communication channels. While quasi-cyclic
low-density parity-check (QC-LDPC) codes are one of the best
ways to achieve this [1], their performance in many cases is
limited by the presence of trapping sets and stopping sets.
Trapping sets and stopping sets can cause iterative decoding
methods to fail with relatively few errors. Finding ways to
avoid or remove trapping sets and stopping sets will further
improve the already high performance of LDPC codes and
bring their performance curves even closer to the Shannon
limit [2], [3].
Performance optimization is becoming increasingly crucial as
the world moves further into the digital age. An increase in the
speed at which digital communication occurs through modern
applications such as WiFi [4] and DVB-S2 [5] has drastic
implications on the overall productivity of the world.
In 1962, Gallager [6] introduced low-density parity-check
codes (LDPCs). LDPC codes are a class of binary linear block
codes with a sparse parity-check matrix. An advantage of using
LDPC codes is that they are able to provide error control
which is very close to the capacity for many different channels
[2]. This categorizes LDPC codes as one of few capacity-approaching codes; error correction methods which can allow
the noise in a channel to be set very close to its theoretical
maximum while maintaining error-correcting ability [7].
The performance of an error correction method is based
upon two properties; the performance of such a code over
a channel with variable noise and the optimal bit error ratio
(BER) of a code with sufficient signal-to-noise ratio (SNR).
The optimal BER is known as the error floor of a code [8], and
A. Price is with the Science and Engineering Faculty, Queensland University of Technology, Queensland, QLD, Australia. e-mail:
[email protected].
J. Hall is with the School of Science, Royal Melbourne Institute of
Technology, Melbourne.
Manuscript received MO DATE, YEAR; revised MO DATE, YEAR.
is discussed in different papers in terms of bit error rate (BER),
frame error rate (FER), block error rate and symbol error rate,
depending on the application being addressed (see [9], [10]
for examples of such error floor analysis). Consideration of the
error floor is one of the most important aspects of constructing
a high-performing LDPC code [10].
Analysis of the performance of LDPC codes over the binary
erasure channel (BEC) led to the discovery of stopping sets
in 2002 [11]. The Margulis construction [12] improved upon
the performance of Gallager codes, though a weakness in
this construction led to a high error floor over the additive
white Gaussian noise channel (AWGNC) compared to the
performance of other constructions of the time [13]. This high
error floor was due to the presence of stopping sets. Stopping
sets over BEC as described in [11] became a well understood
problem and led to the definition of trapping sets, which are
defined over AWGNC and BSC. In some early works trapping
sets are called near-code words [8], [13].
Trapping sets and stopping sets are an important topic,
worthy of a stand-alone survey.
II. PRELIMINARIES AND NOTATION
In order to engage with the literature on stopping sets and
trapping sets an overview of the preliminaries is necessary.
We provide a short review of the literature surrounding LDPC
codes, the common transmission channels, and common decoding techniques.
Definition 1: [14] A binary [n, k, d] linear code, C, is a
k-dimensional subspace of an n-dimensional vector space, F_2^n,
which is used to provide structure to a message vector for
transmission over a channel.
In order to transmit messages over communication channels
using an error correcting code, we encode the message using
a generator matrix.
Definition 2: [7] The generator matrix, G, of a code, C,
is a matrix which has dimensions k × n. The k rows of G
correspond to linearly independent code words which form a
basis of C.
One of the most important aspects of error correction
is the process of decoding. A parity-check matrix allows
us to identify whether errors have been introduced during
transmission. This matrix can also be represented by a Tanner
graph.
Definition 3: [7] A parity-check matrix, H, of C is a matrix
which generates the nullspace of the code. This means that a
code word, c, is in the code, C, iff H · cT = 0, where 0 is an
r × 1 null vector. H has dimensions (n − k) × n.
Definition 4: [14] The parity-check matrix, H, may be represented by a bipartite graph with variable node set, V, and check
node set, C. This bipartite graph is denoted G(H) = (V ∪C, E),
where the columns of H indicate the variable nodes in V and
the rows of H indicate the check nodes in C. For i ∈ V and
j ∈ C, (i, j) ∈ E if and only if Hi j = 1. This bipartite graph is
known as a Tanner graph with r = n − k check nodes and n
variable nodes (see Fig. 1).
We can also refer to individual variable nodes; let the i-th
variable node be vi and the j-th check node be c j [15].
Definition 5: [16] If a parity-check matrix of a code is
sparse, then the corresponding code, C, is called a low-density
parity-check (LDPC) code.
We note that the classification of sparse used in the context
of LDPC codes is that there are fewer ones in the parity-check
matrix than there are zeros [7]. The sparse nature of LDPC
codes means that decoding processes have a fast run-time, as
there are fewer operations to compute when compared to a
non-sparse parity-check matrix. Two more important features
of Tanner graphs are neighbours and node degrees.
Definition 6: [16][17] For a variable node vi and check node
c j , if (i, j) ∈ E we say that nodes vi and c j are neighbours.
The degree of a node in the Tanner graph is defined as the
number of edges it is connected to.
From the node degree definition we can also define regular
LDPC codes.
Definition 7: [6] An LDPC code is called (dv, dc )−regular
if each variable node, v, has degree dv and each check node,
c, has degree dc . We denote an LDPC code of this form a
C(dv, dc ) code.
LDPC codes are designed to be used as error-correction
methods over a variety of communication channels. There
are three communication channels discussed in this paper;
the binary erasure channel (BEC), the binary symmetric
channel (BSC) and the additive white gaussian noise channel
(AWGNC) [18]. Though these channels handle data transmission in different ways, their encoding and decoding goals are
the same.
The upper bound on the error correcting ability of an LDPC
code is determined by the minimum distance of the code. In
order to define the minimum distance of a code, we will first
define Hamming weight and Hamming distance.
Definition 8: [19] The Hamming weight, w, of a vector is
the number of its non-zero elements. The Hamming weight of
a binary vector is therefore the number of ones in the vector.
The Hamming distance, d, between two vectors, x and y, is
the number of places in which they differ; written as d(x, y).
In the literature, Hamming weight and Hamming distance
are often referred to using the terms “weight” and “distance”
[14]. The weight of code words affects the number of operations performed in decoding and the distance between code
words affects how many errors can be corrected.
Definition 9: [18] The minimum distance of a code, C, is defined as the smallest Hamming distance between any two code words in the code,

d(C) = min{d(x, y) | x, y ∈ C, x ≠ y}.
A. Encoding and Verification
The process of transforming a message vector into its
associated code word is known as encoding. Every code word
c = [c0, c1, ..., cn−1 ] ∈ C can be expressed as
c = m · G.
where m = [m0, m1, ..., mk−1 ] is the message vector [7]. The
code word, c, has the original k information bits as well as an
additional r parity bits to give the code word a length of n bits.
As the parity-check matrix, H, is the nullspace of the code,
we can use it as a verification method to test if a recieved
vector is a code word [14]. The product H · cT is denoted the
syndrome, s, of c through H. For a given vector, v,
v ∈ C iff s = H · vT = 0.
There must be an even number of ones in the components of
the product H·cT which add to give s = 0. This is known as the
even-parity constraint [2]. After a code word, c, is transmitted
through a channel, the other party receives a vector, v. If s = H·
vT ≠ 0, then v ∉ C and so we use error correcting techniques in an attempt to correct v and recover c.
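As a small, self-contained illustration of encoding and syndrome verification, the generator and parity-check matrices below are toy placeholders and not a code discussed elsewhere in this survey.

```python
# Encoding (c = m G) and syndrome check (s = H c^T) over F_2 with toy matrices.
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])          # k x n generator matrix
H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])          # (n - k) x n parity-check matrix

m = np.array([1, 0, 1])                     # k information bits
c = m @ G % 2                               # code word of length n
assert np.all(H @ c % 2 == 0)               # zero syndrome: c is a code word

v = c.copy()
v[2] ^= 1                                   # introduce a single transmission error
s = H @ v % 2                               # non-zero syndrome reveals the error
print(c, v, s)
```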
The following theorem and corollary describe a code’s error
detection and correction abilities using minimum distance [18].
Theorem 1: [18] A code C can detect up to s errors in any
code word if d(C) ≥ s + 1. A code C can correct up to t errors
in any code word if d(C) ≥ 2t + 1.
Corollary 1: [18] If a code C has minimum distance d, then
C can be used to either detect up to d − 1 errors, or to correct
up to (d − 1)/2 errors in any code word.
If the minimum distance of a code is too small, then it
cannot provide sufficient error correction. This is demonstrated
in Example 1.
Example 1: Let two code words be given as:
c1 = [0100001111]
c2 = [0100111010].

The distance between these codewords is given as d(c1, c2) = 4. Take the following error vector:

e = [0000110101].
If e is added to c1 then the resulting code word is identical to
c2 , demonstrating the importance of minimum distance.
The minimum distance of an LDPC code is also related to
its code-rate; a large code rate lowers the upper bound on the
minimum distance of a code.
Definition 10: The code rate, R(C), of an LDPC code is
the portion of information bits sent in comparison to the entire
code vector sent, written as
R(C) = k/n,

where 0 ≤ R(C) ≤ 1.
The code rate and minimum distance often determine the
error correcting capability of an LDPC code, though the
decoding algorithm plays a direct part in the time it takes
to decode messages.
Fig. 1. A 5 × 10 irregular Tanner graph (a) used to demonstrate the ER decoding algorithm with the received vector v = [e, 0, e, 1, 1, 1, 1, 0, 1, e] over BEC.
Nodes of interest are highlighted gray. (b) shows steps 1 and 2 of the ER algorithm’s first iteration. (c) shows the changes made in step 3 and then step 2 of
the second iteration. (d) shows the changes made in step 3 again, this time revealing that no further erasures exist. The algorithm then terminates on step 4
of this iteration, thus successfully correcting the received vector in 2 iterations.
B. Communication Channels and Decoding Basics
The communication channel by which transmission occurs
impacts the error correction algorithms that are chosen. A
communication channel can be modelled as a triple which
contains an input alphabet, an output alphabet and the probability of transition between a symbol in the input alphabet and
a symbol in the output alphabet [3].
The binary erasure channel (BEC) is one of the simplest,
non-trivial channel models [20],[21].
Definition 11: [22],[16] The Binary Erasure Channel
(BEC) is a communication channel with two input symbols,
0 and 1, and three output symbols, 0, 1 and e (the erasure
symbol). The BEC has an erasure probability, p, where given
an input, ci , the output, vi , is defined by the probability
formulae P[vi = ci ] = 1 − p and P[vi = e] = p.
Analysis of the BEC significantly advanced modern understanding of error correction [20]. An example of a simple decoding process over the BEC is the Edge Removal algorithm.
Definition 12: [16] Let c ∈ C ⊆ {0, 1} n be a binary code
word transmitted over the BEC and v ∈ {0, 1, e} n be the
received vector. The Edge Removal (ER) algorithm proceeds
as follows:
1) Initial Step: The value of each received vector bit, vi
is assigned to each variable node i ∈ V of the Tanner
Graph.
2) The check nodes ci ∈ C count the number of erased bits
which are neighbours in the Tanner graph, G(H).
3) If check node ci neighbours only one e symbol in v, the
even parity constraint uniquely determines the original
value of e for that variable node.
4) Repeat steps (2) and (3) until either all erasures have
been recovered or until every check node that is a
neighbour of an erased bit is a neighbour of at least
two erased bits.
For step (4) above, if the latter occurs, then the decoder has
failed due to the presence of a stopping set (see Section IV).
We provide an example of the ER decoding process in Fig. 1, where ◦ represents a variable node and ■ a check node. This example is found in [15].
For this decoding example, we use an irregular 5 × 10 parity-check matrix, H, and demonstrate the decoding of the received vector v = [e, 0, e, 1, 1, 1, 1, 0, 1, e] using the ER algorithm over the BEC.
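A minimal sketch of the ER algorithm of Definition 12, operating directly on a parity-check matrix H, is given below; erasures are encoded as -1 for convenience, and the 5 × 10 matrix of Example 3 (Section IV) is one possible input.

```python
# Sketch of the Edge Removal (ER) decoder over the BEC (Definition 12).
import numpy as np

def edge_removal_decode(H, v):
    """Erasures in v are marked as -1; known bits are 0/1."""
    H = np.asarray(H)
    v = np.array(v, dtype=int)
    while True:
        erased = (v == -1)
        if not erased.any():
            return v, True                    # all erasures recovered
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            unknown = idx[erased[idx]]
            if len(unknown) == 1:             # even-parity constraint fixes this bit
                i = unknown[0]
                v[i] = sum(int(v[j]) for j in idx if j != i) % 2
                erased[i] = False
                progress = True
        if not progress:
            return v, False                   # remaining erasures form a stopping set
```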
Another communication channel is the binary symmetric
channel (BSC).
Definition 13: [16], [22] The Binary Symmetric Channel
(BSC) is a communication channel with two input symbols,
0 and 1, and two output symbols, also 0 and 1. The BSC has
an error probability, p, where given an input, ci , the output,
vi , is defined by the probability formulae P[vi = ci ] = 1 − p
and P[vi = c¯i ] = p.
An example of a decoding process over the BSC is the
Gallager A algorithm [6].
Definition 14: [6], [22], [23] Let c ∈ C ⊆ {0, 1} n be a
binary code word transmitted over the BSC and v ∈ {0, 1} n
be the received vector. The Gallager A algorithm proceeds as
follows:
1) Initial Step: The value of each received vector bit, vi
is assigned to each variable node i ∈ V of the Tanner
graph.
2) After this, a check node ci sends to all neighbouring
variable nodes vi, . . . , v j the sum (mod 2) of all of the
adjacent variable nodes except for the node itself (where
j is the degree of check node ci ).
3) Each variable node, vi then sends the following to their
adjacent check nodes: If all messages from check nodes
ci other than the target check node of the message are
equal, then vi sends that message back, otherwise it
resends its prior value.
4) Repeat steps (2) and (3) until either all variables nodes
send the same values over two consecutive iterations or
when a pre-set max iteration count is reached.
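The sketch below implements the message-passing rules of Definition 14; the stopping rule and the final bit decision are one common choice among several, and a variable node falls back to its received channel value whenever its incoming check messages disagree.

```python
# Sketch of the Gallager A decoder over the BSC (Definition 14), hard-decision messages.
import numpy as np

def gallager_a_decode(H, v, max_iter=50):
    H = np.asarray(H); v = np.asarray(v, dtype=int)
    m, n = H.shape
    rows = [np.flatnonzero(H[j]) for j in range(m)]        # variables adjacent to check j
    cols = [np.flatnonzero(H[:, i]) for i in range(n)]     # checks adjacent to variable i
    v2c = {(i, j): int(v[i]) for j in range(m) for i in rows[j]}
    for _ in range(max_iter):
        # check-to-variable: parity of the other incoming messages
        c2v = {(j, i): sum(v2c[(k, j)] for k in rows[j] if k != i) % 2
               for j in range(m) for i in rows[j]}
        # variable-to-check: forward the common value if the other checks agree
        new_v2c = {}
        for i in range(n):
            for j in cols[i]:
                others = [c2v[(k, i)] for k in cols[i] if k != j]
                new_v2c[(i, j)] = others[0] if others and len(set(others)) == 1 else int(v[i])
        if new_v2c == v2c:                                 # messages stable: stop
            break
        v2c = new_v2c
    # simple final decision: flip a bit only if every incoming check message disagrees with it
    out = v.copy()
    for i in range(n):
        incoming = [c2v[(j, i)] for j in cols[i]]
        if incoming and len(set(incoming)) == 1 and incoming[0] != v[i]:
            out[i] = incoming[0]
    return out
```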
Fig. 2. A 5 × 10 (6,3)-regular Tanner graph (a) used to demonstrate the Gallager A decoding algorithm with the received vector v = [0, 1, 1, 0, 1, 0, 1, 0, 1, 1].
We represent a 0 being sent along an edge by a dashed line and a 1 with a full black edge. (b) shows step 1 of the Gallager A algorithm, as well the check
node calculation, taken as the addition mod 2 of all incoming message from variable nodes adjacent to each check node (denoted vi → ci ). (c) then shows
step 2, ci → vi . Lastly, (d) shows step 3, vi → ci . Due to the complexity of this algorithm, only one full iteration has been shown, though step 4 in Definition
14 describes how decoding continues.
The Gallager B algorithm offers improved decoding with
an additional step on each loop within the algorithm [23]. For each degree, j, and each decoding iteration, i, there is a pre-chosen threshold value, bi,j. Throughout the steps involved in
check node ci for each variable node, v, and each adjacent
check node, c, if at least bi, j neighbours of v excluding c
sent the same information in the previous round, then v sends
that information to c; otherwise v sends its received value to c.
Algorithm A is a special case of algorithm B, where bi, j = j−1
independent of the round [23].
Throughout the decoding procedure, if the pre-set max
iteration count is reached without completion, the decoder has
failed due to the existence of a trapping set (see Section V).
An example of the first steps of the Gallager A algorithm
(see Fig. 2) demonstrates the differences between the decoding
considerations made between the BEC and the BSC.
The most complex channel considered here is the binary
input additive white gaussian noise channel, expressed commonly either as the BI-AWGNC or just as the AWGNC [24].
Definition 15: [24] Let X ∈ {0, 1}∗ be a message vector,
where ∗ denotes an arbitrary length. The additive white
Gaussian noise channel (AWGNC) maps the input vector,
X, to the vector X 0 ∈ {+1, −1}∗ and then adds the result with
Gaussian white noise to give an output vector Y = X 0 + W,
where W ∼ N (0, N0 /2Eb ).
Each code symbol, y ∈ Y , carries with it a signal to noise
ratio (SNR) of Eb /N0 and the conditional distribution of Y is
P(y|x′) = PW(y − x′) = (1/√(2π(N0/2Eb))) · exp(−(y − x′)²/(N0/Eb)),   (1)
which gives the output alphabet for the AWGNC as y ∈ R. As
in the BEC and BSC, we would like to have some indication
of the errors that a channel is introducing to the code word. A
metric used for the AWGNC is the log likelihood ratio (LLR).
L(y|x) = ln( P(y|x = 0) / P(y|x = 1) ).   (2)
This L-value describes the likelihood that x is 0 or 1. If L
is positive then P(y|x = 0) > P(y|x = 1) and thus the input
estimate should be x̂ = 0. There are methods to map from
Y ∈ R to Y 0 ∈ {0, 1}∗ and, as such, all decoding methods used
over the BSC can be implemented on the AWGNC. However,
high performing decoding algorithms, such as maximum-likelihood decoders, the sum-product algorithm and the max-product algorithm [8],[25],[26] utilize the LLR information to
improve decoding speed [27]. The BSC can be used for these
channels as LLR values are defined over the BSC, though
the AWGNC more closely models the influence of real-world
communication channels and is favoured for high performance
simulations [7],[24].
Example 2: The BSC has a conditional LLR function as the
bit flipping probabilities are well understood for the outputs
LBSC(y|x) = ln((1 − p)/p) if y = 0, and ln(p/(1 − p)) if y = 1.   (3)
As the noise determines the values of y in the AWGNC and
y ∈ R, the LLR on this channel is defined as
LAWGNC(y|x) = 4 (Eb/N0) y.   (4)
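Both LLR expressions translate directly into code; the helper functions below are a straightforward transcription of Eqs. (3) and (4).

```python
# LLR helpers for the BSC (Eq. 3) and the AWGNC (Eq. 4).
import math

def llr_bsc(y, p):
    """LLR of a received bit over the BSC with crossover probability p."""
    return math.log((1 - p) / p) if y == 0 else math.log(p / (1 - p))

def llr_awgnc(y, ebn0):
    """LLR of a received real value over the AWGNC with SNR Eb/N0."""
    return 4.0 * ebn0 * y

print(llr_bsc(0, 0.1), llr_awgnc(0.8, 2.0))
```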
The decoding algorithms used on the AWGNC are far more
complex than those on the BEC and BSC and, as such, we
provide an overview of various methods rather than detailed
definitions and examples. Decoding methods used over the
AWGNC tend to be message passing algorithms, where nodes
send information to their neighbours to correct errors based
on the structure of the parity-check matrix [28]. The original
message-passing algorithm [6] is an example of a flooding
Fig. 3. (a) Tanner graph for the irregular 5 × 10 parity-check matrix given
in Example 3. (b) Induced subgraph of the highlighted stopping set with
consistent labelling.
schedule [29] where in each iteration, all variable nodes and
subsequently all check nodes pass new messages to their
neighbours. Another example of a flooding schedule is the
sum-product algorithm (see [25]).
An improved schedule, where both variable nodes and
check nodes send messages to each other throughout a single
iteration is known by many names including serial scheduling
[30], [31], layered scheduling [32] and sequential scheduling
[33]. These algorithms offer an improved decoding performance as information is moving through the Tanner graph
more frequently. Examples of decoding algorithms using this
scheduling include the max-product algorithm (MPA) [26] and
the belief propagation algorithm (BPA) [33]. BPA is widely
used in LDPC code analysis and is based on the likelihood that
a node takes a value given its current value and the values of
nearby nodes from previous iterations [23].
Error correction must be implemented differently for each
channel. The edge removal algorithm, for example, deals
with erasures and thus is not suitable over the BSC. Errors
which occur during transmission that are not corrected by the
decoding algorithm form what is known as the bit error ratio
(BER); the ratio of bits that cannot be corrected versus the total
number of bits transmitted [8] [34].
In order to test the performance of LDPC codes, we can
simulate the transmission of messages over an increasing
signal-to-noise ratio (SNR) and calculate the BER of a code
under varying conditions. As SNR grows larger, the BER of
a code will suddenly decrease depending on the conditions of
the channel and the error correcting capability of the LDPC
code in use. This curve is known as the waterfall region [8].
The best scenario for correcting errors is when the probability
of error during transmission over a channel is negligible and
when the implemented error correcting code can correct many
errors.
Definition 16: [8] The waterfall region eventually ends in
all BER graph curves as anomalous errors cause decoders to
fail even with a high SNR. The lowest the BER becomes
before levelling is called the error floor of a code.
The BER is a standard way to analyze the error correcting ability of a code, as well as the frame error ratio (FER) [8], [10], [15]. The FER is the ratio of frames or whole messages transmitted which cannot be fully corrected versus the total number of frames transmitted. The largest contributors to the error floor are stopping sets and trapping sets.
III. CYCLES AND GIRTH
The decoding method we choose has direct implications
for the accuracy and efficiency of decoding. Cycles were
the first known negative characteristic of LDPC codes and
were extensively studied as they impacted on the accuracy of
high performance LDPC codes [35]. A cycle in a graph is a
sequence of connected nodes which form a closed loop where
the initial and final node are the same and no edge is used
more than once [7]. The cycle length is the number of edges
a cycle contains, and the length of the smallest cycle in a graph
is denoted as its girth [17].
If no cycles exist within the Tanner graph of a parity-check matrix, then the iterative belief propagation decoding
technique is always successful with sufficient iterations [36].
However, if the neighbours of a node are not conditionally independent then belief propagation methods become inaccurate
[35]. The inferred solution is to construct a parity-check matrix
with no cycles. However, as discussed in Section VII, this is
unnecessary as not all cycles negatively impact the decoding
efficiency of LDPC codes. In fact, the restriction of girth can
lead to constraints on the structure of the code which further
impedes the decoding efficiency [35].
The cycles which negatively impact the decoding efficiency
of LDPC codes combine to form what are known as stopping
sets and trapping sets [37]. These sets lead to a high error floor
in otherwise efficient LDPC code constructions throughout
various communication channels and affect all high performing decoding algorithms.
IV. STOPPING SETS OVER BEC
Stopping sets are collections of variable and check nodes in
the Tanner graph of an LDPC code which greatly reduce its
error correcting ability. These sets cause decoding to fail when
certain variable nodes are affected by errors after transmission.
Stopping sets were first described in 2002 by Di et al [11],
who were researching the average erasure probabilities of bits
and blocks over the BEC.
Definition 17: [11] Let G(H) be a Tanner graph and V
be the set of variable nodes in G(H). A stopping set, S, is a
subset of V, such that all neighbours of S are connected to
S at least twice.
The empty set is also a stopping set and the space of
stopping sets is closed under union [11]; if S1 and S2 are
both stopping sets then so is S1 ∪ S2 . The following lemma
describes a stopping set by the performance of the LDPC
code’s decoding algorithm.
Lemma 1: [11] Let G be the generator for an LDPC code
over the BEC and E denote the subset of the set of variable
nodes which is erased by the channel after the transmission of
a message. Then the set of erasures which remain when the decoder stops is equal to the unique maximal stopping set of E.
Fig. 4. The effect of a stopping set on the ER decoding process for the 5 × 10 irregular Tanner graph (a) on the right hand side of the line. (b) shows
steps 1 and 2 of the ER algorithm’s first iteration. (c) shows the changes made in step 3 and then step 2 of the second iteration. Finally, (d) shows that no
further erasures can be corrected, and thus we see that the received vector v = [0, 0, 1, e, 1, 1, e, 0, e, 0] produces a scenario in the Tanner graph where the
ER algorithm cannot retrieve the original code word. From Definition 12, we know that this is due to the presence of a stopping set.
Definition 17 is now widely accepted [38],[39],[40]. Given a
BEC with a given erasure probability, the performance of the code
over the BEC is completely determined by the presence of
stopping sets [8]. Since stopping sets have a combinatorial
characterization, their distributions through various Tanner
graphs can be analyzed rigorously [11], [38].
Definition 18: [38] Let S denote the collection of all
stopping sets in a Tanner graph, G(H). The stopping number,
s∗ , of G(H) is the size of the smallest, non-empty stopping set
in S.
The stopping number of a code aids in the analysis of the
code’s error floor. It is known that the performance of an LDPC
code over the BEC is dominated by the small stopping sets in
the graph [8]. The larger this value is, the lower the error floor
of the code. In some cases, this stopping number increases
linearly with the number of variable nodes, |V |, in the Tanner
graph [38]. This can be seen more easily using the stopping
ratio.
Definition 19: [38] Let G(H) be a Tanner graph with n
variable nodes and stopping number s∗ . The stopping ratio,
σ ∗ , of a Tanner graph is defined by s∗ /n; the ratio of its
stopping number to the number of variable nodes.
A stopping set in the parity-check matrix of an LDPC code
is shown in Example 3.
Example 3: [15] Let C be the code with the following check
matrix
H = [ 0 0 1 1 0 0 1 1 1 0
      1 0 0 1 0 1 0 0 0 1
      1 0 1 0 1 0 0 1 0 1
      0 1 0 0 1 1 1 0 1 0
      0 1 0 0 0 0 1 1 1 1 ].
Columns 7 and 9 in H have been highlighted as they belong
to a stopping set. The Tanner graph for C with the stopping
set highlighted is shown in Fig. 3.
A stopping set must either be empty or contain at least two variable nodes. The stopping number, s∗, of C is therefore 2 and its stopping ratio, σ∗, is 0.2.
An example showing the impact of a stopping set on the
decoder is shown in Fig. 4, where the edge removal decoding
algorithm is used over the BEC.
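Definition 17 can be checked mechanically on the parity-check matrix of Example 3 (as reconstructed above): a set S of variable nodes is a stopping set exactly when every check node is connected to S either zero times or at least twice.

```python
# Verifying the stopping set of Example 3 on the reconstructed parity-check matrix.
import numpy as np

H = np.array([[0, 0, 1, 1, 0, 0, 1, 1, 1, 0],
              [1, 0, 0, 1, 0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1, 0, 1, 0],
              [0, 1, 0, 0, 0, 0, 1, 1, 1, 1]])

def is_stopping_set(H, S):
    counts = H[:, sorted(S)].sum(axis=1)      # connections of each check node to S
    return bool(np.all((counts == 0) | (counts >= 2)))

print(is_stopping_set(H, {6, 8}))             # columns 7 and 9 (0-based): True
print(is_stopping_set(H, {6}))                # a single variable node: False
```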
Solutions to the problem of stopping sets (covered in Section VII) involve either avoiding or removing small stopping
sets in the Tanner graph, leaving only LDPC codes with large
stopping sets [39]. While stopping sets are well defined and
some solutions exist to minimize their effect on the error floor
of LDPC codes, the terminology does not support channels
without erasure.
V. TRAPPING SETS OVER BSC, AWGN
Trapping sets, much like stopping sets, are also collections
of variable nodes and check nodes which impede the error
correcting ability of LDPC code. Only small, elementary
trapping sets impact the error floor of LDPC codes over the
BSC and AWGNC because of clustering [8], [41].
The definition of trapping sets came shortly after stopping
sets were defined. Similarly to the BEC, when decoding over
the BSC and AWGNC, sometimes the maximum iteration
count is reached when only a small set of variable nodes are in
error. Experiments with the Margulis codes led to the definition of trapping sets [8], [13].
Definition 20: [8] Let G(H) be a Tanner graph. For a
received vector, y, of length n, we define the failure set, T(y),
to be the set of bits that are not eventually correct using some arbitrary iterative decoder. Decoding is successful on y if and
only if T(y) = ∅.
Definition 21: [41], [8] If T(y) ≠ ∅ then T(y) is a trapping
set. More specifically, T is an (a,b) trapping set in H if it
has a variable nodes for which the sub-graph induced by T
contains b ≥ 0 odd-degree check nodes.
Fig. 5. A (5,3) trapping set (left) with critical number k = 3 and a (4,4)
trapping set (right) with critical number k = 4 [10]. These k values are found
using the Gallager B decoding algorithm and may vary when other decoding
algorithms are applied.
Iterative techniques on the BSC and AWGNC distinguish
trapping sets from stopping sets over the BEC [8]. If there is only one iteration by which the decoding algorithm can become trapped, then the notion of trapping sets becomes irrelevant.
Lemma 2: [8] Let C be a code using a one-step maximum
likelihood decoder, then the trapping sets are precisely the
non-zero code words.
However, if the channel is the BEC, then an iterative decoding
failure is said to be due to stopping sets, making stopping sets
and trapping sets equivalent over the BEC.
Lemma 3: [8] Let C be a code using a belief propagation
algorithm over BEC, then the trapping sets are precisely the
stopping sets.
Lemma 3 is an important bridge between trapping sets and
stopping sets, allowing us to relate the BEC to the BSC and
AWGNC. Decoding failure in an LDPC code over the BSC
and AWGNC is largely due to the existence of trapping sets.
Trapping sets pose a real threat to the error correcting ability
of LDPC codes; even though there may be very few nodes in
error after transmission, if enough of those nodes belong to a
trapping set, the decoder will fail.
Definition 22: [10] Let T be a trapping set. The critical
number, k, is the minimal number of variable nodes that have
to initially be in error for the decoder to become “trapped" in
T.
It is important to note that the variable nodes that are
initially in error do not necessarily belong to the trapping set;
it is possible that, at some iteration, the trapping set is entered,
causing the decoder to fail. In order to become trapped, the
decoder must, after some finite number of iterations, be in
error on at least one variable node from T at every iteration
thereafter.
Only trapping sets with a small number of variable nodes
and check nodes impact the error-floor of LDPC codes [8],
[41].
Definition 23: [41] An (a, b) trapping set in an [n, k, d] code is a small trapping set if a ≤ √n and b ≤ 4a.
Only these small trapping sets contribute to a larger error floor [8]. Small trapping sets are also of elementary form [41].
Definition 24: [41] An elementary (a,b) trapping set in a
[n, k, d] code is a trapping set for which all check nodes in
the induced subgraph have either degree one or two, and there
are exactly b degree-one check nodes.
While check nodes of odd degree larger than one are
possible, they are very unlikely within small trapping sets
[8],[41]. Techniques to find and remove elementary trapping
sets have become crucial when constructing high performing
codes [10], [42].
Two examples of trapping sets [10] are shown in Fig. 5.
In Fig. 5 the trapping set on the right has a smaller number
of variable nodes than the one on the left; however, under the
Gallager B decoding algorithm the larger trapping set has a
smaller critical number. Thus the performance of the code is
limited by the larger trapping set. This idea is quite unintuitive
and shows the depth of consideration which must be made
when attempting to improve the error floor of LDPC codes.
The problems which trapping sets and stopping sets introduce to LDPC codes are important to research and solve. There
do exist methods for constructing LDPC codes by avoiding
or removing trapping sets and stopping sets, however, these
methods come at the cost of restraining other properties such
as code length, density or error correcting ability [9].
VI. THE INFLUENCE OF STOPPING SETS AND TRAPPING SETS ON LDPC CODE PERFORMANCE
The original LDPC codes proposed in 1962 by Gallager [6] were construction methods which allowed for varied code rates.
Definition 25: [13] A Gallager code is an LDPC code
constructed using a parity-check matrix with uniform row
weight i and uniform column weight j. The code has code words of length n and code rate R(C), which gives a parity-check matrix, H, with n columns and k rows, where k = n(1 − R(C)).
Naive analysis indicated that failed decoding is due to
received vectors containing too many errors for the decoding
algorithm [6]. Analysis of a range of error patterns by Di et al. [11] determined that this was not always the case, leading to the definition of stopping sets over the BEC.
A variety of analyses of Gallager codes have shown high performance [2], [43]. A construction in 1982 by Margulis [12]
promised an improved performance over the AWGNC.
For each prime, p, let SL2 (p) be the Special Linear Group
whose elements consist of 2×2 matrices of determinant 1 over
Zp. This group has k = (p² − 1)(p² − p)/(p − 1) = (p² − 1)p
elements. For p ≥ 5, the Margulis code is of length n = 2k,
with code rate R(C) = 1/2 [12]. The rows of the parity-check
matrix are indexed by the elements of SL2 (p) and the columns
are indexed by two copies of SL2 (p); detailed in the following
definition.
Definition 26: [13][12] Let SL2(p) be generated by the following matrices:

A = | 1 2 |,   B = | 1 0 |
    | 0 1 |        | 2 1 |.
If g ∈ SL2 (p) is the index of a row of the parity-check
matrix, a one is placed in the columns corresponding to g A2 ,
g ABA−1 and gB on the left hand side of the matrix and also in
the columns corresponding to g A−2 , g AB−1 A−1 and gB−1 on
the right hand side of the matrix. This results in a (3,6)-regular
parity-check matrix for a Margulis code.
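A direct, brute-force transcription of Definition 26 is sketched below for small primes: SL2(p) is enumerated explicitly, which is only practical for the modest values of p considered in this section.

```python
# Sketch of the Margulis construction (Definition 26) for a small prime p.
import itertools
import numpy as np

def margulis_parity_check(p):
    """(3,6)-regular parity-check matrix of the Margulis code for prime p >= 5."""
    def mul(M, N):
        return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % p
                           for j in range(2)) for i in range(2))
    def inv(M):
        (a, b), (c, d) = M                     # det = 1, so inverse is [[d, -b], [-c, a]]
        return ((d % p, (-b) % p), ((-c) % p, a % p))

    group = [((a, b), (c, d))
             for a, b, c, d in itertools.product(range(p), repeat=4)
             if (a * d - b * c) % p == 1]      # brute-force enumeration of SL2(p)
    index = {g: i for i, g in enumerate(group)}
    k = len(group)                             # k = p(p^2 - 1)

    A = ((1, 2), (0, 1))
    B = ((1, 0), (2, 1))
    Ainv, Binv = inv(A), inv(B)
    left = [mul(A, A), mul(mul(A, B), Ainv), B]                 # gA^2, gABA^-1, gB
    right = [mul(Ainv, Ainv), mul(mul(A, Binv), Ainv), Binv]    # gA^-2, gAB^-1A^-1, gB^-1

    H = np.zeros((k, 2 * k), dtype=int)
    for g in group:
        r = index[g]
        for mtx in left:
            H[r, index[mul(g, mtx)]] = 1
        for mtx in right:
            H[r, k + index[mul(g, mtx)]] = 1
    return H

H = margulis_parity_check(7)                   # 336 x 672, row weight 6, column weight 3
print(H.shape, int(H.sum(axis=1).max()), int(H.sum(axis=0).max()))
```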
An example of a parity-check matrix generated using the
Margulis construction is shown in Fig. 6 to demonstrate the
Fig. 6. Parity-check matrix generated using the Margulis construction, setting
p = 7 to give a (3,6)-regular 1/2-rate code with n = 672. The blue dots
represent ones in this matrix with the remaining white space representing
zeros.
sparse nature of LDPC codes. Another example of a Margulis
parity-check matrix can be found in [13] where p = 11 which
corresponds to a 1/2-rate code with n = 2640. While the code
has a higher performance than a random Gallager code, the
error floor is still quite high [13]. This error floor was claimed
to be due to near-code words [13]. A comparison between the
Margulis code and a random Gallager code, both with n = 672,
can be seen in Fig. 7.
Definition 27: [13] Let H be a parity-check matrix. If x is
a vector of weight w and
HxT = s
where s is of weight v, then x is a (w, v) near-code word.
Near-code words are different from stopping sets. Typical
(w, v) near-code words contain v check nodes which are only
connected to the variable nodes once. The near-code words in
the Margulis code are the (12, 4) and (14, 4) near-code words
[13].
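Checking whether a given error pattern is a (w, v) near-code word only requires the syndrome weight; a minimal sketch (the function name is illustrative) is:

    import numpy as np

    def near_codeword_type(H, x):
        # x is a 0/1 vector of length n; returns (w, v) = (weight of x, weight of H x^T over GF(2))
        s = (H @ x) % 2
        return int(x.sum()), int(s.sum())

A weight-12 pattern x for which near_codeword_type(H, x) returns (12, 4) would be one of the (12, 4) near-code words of the Margulis code discussed above.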
The high error floors of the Margulis code can be reproduced with a 5-bit approximation to a belief propagation
algorithm [8]. Near-code words account for 98% of the error
floor performance of the Margulis code. Near-code words are
trapping sets [8].
Trapping sets are often clustered [8]; if one trapping set
is found it will often contain nodes which belong to another
trapping set. This makes the search for trapping sets somewhat
simpler.
Finding stopping sets and trapping sets is NP-hard [16], [44], which makes these sets difficult to find and
analyse.
The ER decoding algorithm is simple and the effect of
stopping sets can be demonstrated easily. However, decoding
over the BSC or AWGNC is much more complex (see Fig. 2).
Iterative decoding methods tend to have maximum iteration
counts as termination conditions and, as such, demonstrating
the effect of trapping sets is difficult to show. In lieu of an
example, we remind the reader of the termination conditions
for the Gallager A algorithm. This decoder terminates either
when all variable nodes send the same values over two
consecutive iterations or when some maximum iteration count
is reached. In the latter case, the decoder has failed due to the
existence of a trapping set.
Fig. 7. BER comparison between the p = 7 Margulis code (using the same
example as presented in Fig. 6) and a random Gallager code, both with n =
672 and decoded using MPA over the AWGNC. These graphs are also known
in the literature as waterfall curves [8].
While we have only discussed the issues with the Margulis
code using specific decoding algorithms here, there are many
code constructions which contain trapping and stopping sets
and decoding algorithms which terminate for the same reasons.
For further reading, see [45].
Further reading on stopping sets in LDPC codes include
the message passing (MP) algorithm [46] and the maximum-likelihood (ML) decoder [47], both over the BEC. Further
reading on trapping sets includes finite alphabet iterative decoders (FAIDs) [48] and constructions based on Latin squares
[49]. The latter construction is highly structured, which allows stopping
sets and trapping sets to be analysed.
If constructions which avoid trapping and stopping sets
exist, then the error floors of the associated LDPC codes will
lower significantly. This would improve the speed at which
almost all digital communication occurs given the already high
performance of LDPC codes in modern applications including
WiFi [4] and DVB-S2 [5].
VII. C URRENT S OLUTIONS
The simple goal is to avoid or completely remove every
stopping set or trapping set from an LDPC code. This is
both not reasonable given the number of cycles in an LDPC
construction [7] and, more importantly, not necessary [38],
[41].
Only small, elementary trapping sets impact the error floor
due to clustering [8], [41]. If there are enough errors in
transmission for a decoder to get trapped in a large trapping
set then it is highly likely that it would also be trapped in a
small trapping set [8]. If there are not enough errors for the
decoder to become trapped in a large trapping set, the received
vector can either be successfully decoded or the decoder will
fail due to the presence of at least one small trapping set.
The current solutions to trapping sets are the development of
constructions which avoid small trapping sets and the removal
of trapping sets from existing constructions.
Fig. 8. [51] A subgraph (tree) contained in the depth l neighbourhood
spreading from the variable node vi . Note that ◦ here represents a variable
node and represents a check node.
A. Avoiding Trapping Sets
Stopping sets pose a threat to the error correction of messages sent over the BEC; however, in practice the AWGNC
is used and so, when discussing proposed solutions, we focus
on the influence of trapping sets over the AWGNC.
The progressive-edge-growth (PEG) construction [50], [51]
is a method of constructing Tanner graphs with high girth.
Many trapping sets include small cycles, so the likelihood of
a small trapping set being constructed is small with a graph
of high girth [52]. In order to give the definition for the PEG
construction, some definitions are needed. The PEG construction method uses variable and check node degree sequences
[51]. The variable node degree sequence is denoted
D_v = {d_{v_0}, d_{v_1}, ..., d_{v_{n-1}}},
where d_{v_i} is the degree of variable node v_i, 0 ≤ i ≤ n − 1, and d_{v_0} ≤ d_{v_1} ≤ ··· ≤ d_{v_{n-1}}. The check node degree sequence is denoted
D_c = {d_{c_0}, d_{c_1}, ..., d_{c_{m-1}}},
where d_{c_j} is the degree of check node c_j, 0 ≤ j ≤ m − 1, and
d_{c_0} ≤ d_{c_1} ≤ ··· ≤ d_{c_{m-1}}.
The construction partitions the set of edges, E, into
E = E_{v_0} ∪ E_{v_1} ∪ ··· ∪ E_{v_{n-1}},
where E_{v_i} contains all edges incident on symbol node v_i. The
k-th edge incident on v_i is denoted E^k_{v_i}, where 0 ≤ k ≤ d_{v_i} − 1.
The neighbourhood of depth l for variable node v_i is N^l_{v_i}
and is defined as the set of all check nodes included in a
subgraph (tree) spreading from variable node v_i within depth
l. This is demonstrated in Fig. 8. The complement of N^l_{v_i} is
N̄^l_{v_i} = C \ N^l_{v_i}, where C is the set of check nodes. The subgraph
(tree) generated this way is constructed breadth-first with v_i
as the root. Given the parameters n, m and D_v, we define the
PEG construction as follows.
Progressive edge-growth algorithm (PEG) [51]:
for i = 0 to n − 1 do
begin
  for k = 0 to d_{v_i} − 1 do
  begin
    if k = 0
      E^0_{v_i} ← edge (c_j, v_i), where E^0_{v_i} is the first edge incident to v_i and c_j is a check node that has the lowest check-node degree in E_{v_0} ∪ E_{v_1} ∪ ··· ∪ E_{v_{i-1}}.
    else
      Expand a subgraph from symbol node v_i up to depth l in E_{v_0} ∪ E_{v_1} ∪ ··· ∪ E_{v_{i-1}} until the cardinality of N^l_{v_i} stops increasing but is less than m, or N̄^l_{v_i} ≠ ∅ but N̄^{l+1}_{v_i} = ∅; then E^k_{v_i} ← edge (c_j, v_i), where E^k_{v_i} is the k-th edge incident to v_i and c_j is a check node chosen from the set N̄^l_{v_i} having the lowest check-node degree.
  end
end
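A minimal Python sketch of this procedure is given below. It is a simplified reading of the pseudocode above: ties between lowest-degree check nodes are broken at random (as in the randomized variant discussed next), and repeated selection of an already-connected check node is simply skipped. The function and variable names are illustrative.

    import numpy as np

    def peg(n, m, dv, seed=0):
        # dv: list of variable-node degrees (the sequence D_v above)
        rng = np.random.default_rng(seed)
        chk_adj = [set() for _ in range(m)]   # check node -> connected variable nodes
        var_adj = [set() for _ in range(n)]   # variable node -> connected check nodes
        for i in range(n):
            for k in range(dv[i]):
                if k == 0:
                    # first edge: any check node of currently lowest degree
                    degs = np.array([len(c) for c in chk_adj])
                    cand = np.flatnonzero(degs == degs.min())
                else:
                    # breadth-first expansion from v_i over the current graph
                    reached = set(var_adj[i])
                    frontier = set(var_adj[i])
                    while True:
                        nxt = set()
                        for c in frontier:
                            for v in chk_adj[c]:
                                nxt |= var_adj[v]
                        nxt -= reached
                        if not nxt or len(reached | nxt) == m:
                            break
                        reached |= nxt
                        frontier = nxt
                    comp = [c for c in range(m) if c not in reached]   # the complement set
                    pool = comp if comp else list(range(m))
                    degs = np.array([len(chk_adj[c]) for c in pool])
                    cand = np.array(pool)[degs == degs.min()]
                c = int(rng.choice(cand))
                chk_adj[c].add(i)
                var_adj[i].add(c)
        H = np.zeros((m, n), dtype=int)
        for c in range(m):
            for v in chk_adj[c]:
                H[c, v] = 1
        return H

    H = peg(n=10, m=5, dv=[2] * 10)   # the same parameters as the example in Fig. 9
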
When presented with check nodes of the same degree,
a decision must be made; the selection of a check node
at random or the selection of a check node according to
some order. An improved construction, the randomized-PEG
construction [53], chooses at random, though the deterministic
nature of the ordered check node process might be of use [51].
An example of PEG construction is given in Fig. 9, setting
n = 10, m = 5 and Dv = {2, 2, 2, 2, 2, 2, 2, 2, 2, 2}.
The PEG construction maximises the local girth of a variable node when a new edge is added to the node [51]. After the
discovery of stopping and trapping sets the PEG construction
was modified [37], [53].
The PEG construction is notable for its ability to create
high-girth LDPC codes; however, the number of cycles is
not controlled. Trapping sets are formed by a combination of
several cycles [37]. The PEG algorithm, while achieving a higher
girth than alternate constructions, contains more trapping sets,
thus leaving the error floor open to improvement.
The RandPEG construction improves upon the PEG algorithm by minimizing cycles at the same time as reducing
the computational complexity of the PEG algorithm [53].
This can be further improved by adding an objective function
to avoid small trapping sets [37]. Quasi-cyclic LDPC (QC-LDPC) codes, which are used in many applications [1], [54]
can be constructed using the Improved RandPEG algorithm
[37]. The objective function used in the improved RandPEG
algorithm detects all (5,3) and (6,4) trapping sets, removing
all (5,3) trapping sets and as many (6,4) trapping sets as
possible without adversely affecting the performance of the
LDPC code. The characterization of trapping sets is achieved
in [37] through the locations of check nodes in different levels
of depth-l trees (see Fig. 8). The resulting construction is as
follows:
Fig. 9. A PEG construction with n = 10, m = 5 and D_v = {2, 2, 2, 2, 2, 2, 2, 2, 2, 2}. Check nodes are chosen based on index order. The first edge chosen for each variable node is chosen from the check nodes with lowest degree at random. The generation of the subgraphs and subsequent edge placement are the factors which highlight this construction method. In (a), the edge choices are simplistic as the breadth-first subgraphs are of low depth. This decision making continues in (b) until s5 is considered, where the edge choice is restricted to c2 or c3 due to connections in the subgraph to check nodes c1 and c4. These choices can be observed in both remaining figures (c) and (d). One notable choice is the 2nd edge decision for v9 in (d), where the only remaining option once c2 is chosen becomes c4, which gives a uniform check node degree sequence.
Improved RandPEG algorithm (RPEG) [37]:
for i = 0 to n − 1 do
begin
  for k = 0 to d_{v_i} − 1 do
  begin
    if k = 0
      E^0_{v_i} ← edge (c_j, v_i), where E^0_{v_i} is the first edge incident to v_i and c_j is a check node such that it has the lowest check-node degree in E_{v_0} ∪ E_{v_1} ∪ ··· ∪ E_{v_{i-1}}.
    else
      Expand a subgraph from symbol node v_i up to depth l in E_{v_0} ∪ E_{v_1} ∪ ··· ∪ E_{v_{i-1}} until the cardinality of N^l_{v_i} stops increasing but is less than m, or N̄^l_{v_i} ≠ ∅ but N̄^{l+1}_{v_i} = ∅. Remove from N̄^l_{v_i} all check nodes that appear at least once in the depth-3 tree spreading from v_i. This removes all check nodes that would create cycles of size < 8.
      for c_m in N̄^l_{v_i} do
        Compute the number of (5,3) and (6,4) trapping sets that would be created if c_m is selected. Remove all check nodes that would create (5,3) trapping sets and remove check nodes which create more than the smallest number of (6,4) trapping sets.
        If N̄^l_{v_i} ≠ ∅,
          E^k_{v_i} ← edge (c_m, v_i), where E^k_{v_i} is the k-th edge incident to v_i and c_m is a check node chosen from the remaining nodes in N̄^l_{v_i}.
        else
          Declare a design failure.
      end
  end
end
The Improved RandPEG construction algorithm, while having a high computational complexity, performs the task of
avoiding trapping sets optimally for given dimensions of
an LDPC code. Possible improvements to this construction
method include lowering the computational complexity and
potentially lowering the girth. The removal of all cycles is
unnecessary as not all cycles contribute to trapping sets [38],
[41]. Allowing a lower girth in a construction which also
contains no small trapping sets could therefore lead to an LDPC
code with higher decoding performance.
B. Removing Stopping and Trapping Sets
The performance of LDPC codes is constrained by the
presence of cycles and trapping sets within the code’s parity-check matrix. We discuss two methods of removing trapping
sets; the addition of a redundant parity-check equation [55]
and the use of Tanner graph covers [10].
Redundant Parity-Check Equations
Adding a redundant parity-check equation is equivalent to
adding a redundant row to the parity-check matrix. This has
been used in an attempt to remove the trapping sets present in
the [2640, 1320] Margulis code [55]. The (12, 4) and (14, 4)
trapping sets in the [2640, 1320] Margulis code are elementary
point trapping sets [13]. Point trapping sets are subsets of
variable nodes that contain all errors ever to occur throughout
the decoding process [55].
A redundant parity check row is identified which, when
added to the parity-check matrix, potentially disrupts the (12,4)
and (14,4) elementary trapping sets. This parity-check row is
identified through a genie-aided random search which relies
on information about trapped variables not available during
decoding [55]. As random searches cannot be used in applied
error correction a structured search was considered to be more
useful. The structured search identifies variable and check
nodes which connect to both the (12,4) and (14,4) trapping
sets and combines the projection of the involved nodes such
that a redundant parity-check row can be added to eliminate
the effect of those trapping sets.
The only way to disrupt the (12,4) and (14,4) trapping sets in
the Margulis code is if the projection of both the 12 variables
and the 14 variables on the redundant parity-check equation
has row weight one [53]. This can be most reliably achieved by
extending (12,4) trapping sets to (14,4) trapping sets (see Fig.
10). Given that the Margulis code has a (3,6)-regular parity-check matrix, an elementary (a,b) trapping set contains a fixed
number of check nodes. Let e denote the number of check
nodes connected to two variable nodes within the trapping
set, then e = (3a − b)/2 and therefore two variable nodes and
three check nodes must be added to extend a (12,4) trapping
set to a (14,4) trapping set [53]. At most one check node can
be connected to both of the added variable nodes such that
4-cycles are not created. Such an extension which avoids the
creation of 4-cycles in the Margulis code is only possible in
two configurations.
(a) The two degree-one check nodes of the basic (12,4) trapping set are connected through two additional variable
nodes to one additional check node.
(b) In the second configuration, the additional variable
nodes do not share a check node.
These configurations are demonstrated in Fig. 10. The existing
check and variable nodes neighbouring the additional check
and variable nodes are linearly combined to generate a redundant parity-check equation. A structured search is then used
to ensure that this projection has row weight one in both the
(12,4) and (14,4) trapping set.
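As a quick consistency check of the counts used above: for a (3,6)-regular code an elementary (a, b) trapping set has b degree-one check nodes and e = (3a − b)/2 degree-two check nodes, so
\[ e(12,4) = \frac{3\cdot 12 - 4}{2} = 16, \qquad e(14,4) = \frac{3\cdot 14 - 4}{2} = 19. \]
The total number of check nodes therefore grows from 16 + 4 = 20 to 19 + 4 = 23; that is, exactly two variable nodes and three check nodes are added in the extension, as stated.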
The addition of a redundant parity-check equation focuses
on point elementary trapping sets from the Margulis code,
where the structure of the trapping sets are well known. If
this method were applied to other LDPC codes, the location
and structure of the trapping sets within such a code are unknown. The redundant rows are computationally inexpensive
to compute, however, the code rate of the resulting LDPC
code will be reduced and the extra row increases the number
of operations per decoding iteration, though by a negligible
amount. Another potential problem is the success rate of this
solution. The addition of a redundant parity-check equation
does not guarantee that the trapping set will be disrupted [55].
Tanner Graph Covers
Another method capable of eliminating trapping sets is the
utilization of graph covers [10]. This method constructs an
LDPC code C (2) of length 2n given a code C of length n.
The parity check matrix of this code is denoted H (2) and is
initialized to
H^{(2)} = \begin{pmatrix} H & 0 \\ 0 & H \end{pmatrix}.
Fig. 10. [55] The trapping set structure of a (12, 4) trapping set and its (14, 4)
expansion configurations; (a) left and (b) right. For (a), the two expansion
variables are denoted v_{E,1} and v_{E,2} and the check node connected to both of
the variable nodes is denoted c_E. The unsatisfied check nodes in the original
(12, 4) trapping set are denoted c_{BO,1} through c_{BO,4} and the variable nodes
connected to these check nodes are denoted v_{BO,1} through v_{BO,4}. The check
nodes of degree one in the expansion of the trapping set are denoted c_{EO,1}
and c_{EO,2}. The node labels for (b) follow similarly.
The operation of changing the value of H^{(2)}_{t,k} and H^{(2)}_{m+t,n+k}
to “0”, and H^{(2)}_{m+t,k} and H^{(2)}_{t,n+k} to “1”, is termed swapping of the edge e. The graph covers method requires that the locations
of dominant trapping sets are known. The method of edge
swapping is then described as follows.
Graph covers algorithm [10]:
1) Take two copies, C1 and C2, of the code C. Since the codes are identical they share the same trapping sets. Initialize SwappedEdges = ∅, FrozenEdges = ∅;
2) Order the trapping sets by their critical numbers.
3) Choose a trapping set T1 in the Tanner graph of C1, with minimal critical number. Let E_{T1} denote the set of all edges in T1. If E_{T1} ∩ SwappedEdges ≠ ∅, go to step 5, else go to step 4.
4) Swap an arbitrarily chosen edge e ∈ E_{T1} \ FrozenEdges. Set SwappedEdges = SwappedEdges ∪ {e}.
5) “Freeze” the edges E_{T1} from T1 so that they cannot be swapped in the following steps. Set FrozenEdges = FrozenEdges ∪ E_{T1}.
6) Repeat steps 2 to 4 until all trapping sets of the desired size are removed.
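A minimal Python sketch of the cover initialization and of a single edge swap follows; the trapping-set search itself is not shown, and the cross-block indices used here are one reading of the swap operation given above.

    import numpy as np

    def init_cover(H):
        m, n = H.shape
        H2 = np.zeros((2 * m, 2 * n), dtype=int)
        H2[:m, :n] = H        # copy C1
        H2[m:, n:] = H        # copy C2
        return H2

    def swap_edge(H2, m, n, t, k):
        # move edge (t, k) and its mirror (m+t, n+k) onto the cross positions,
        # connecting the two copies while preserving all row and column weights
        assert H2[t, k] == 1 and H2[m + t, n + k] == 1
        H2[t, k] = H2[m + t, n + k] = 0
        H2[m + t, k] = H2[t, n + k] = 1
        return H2
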
Possible improvements to the graph cover method are to
prioritize specific edges for swapping and freezing to avoid
creating trapping sets of the same critical number [10]. However, experimentally, all trapping sets with minimal critical
number were removed using the above algorithm [10]. The
graph covers method gave improved FER results for a Tanner
code [56], a Margulis code [57] and a MacKay code [58]
using the Gallager B decoding algorithm. While the decoding
method was constant throughout these results, the application
of graph covers will optimize FER performance using an
arbitrary decoding algorithm [10].
The LDPC code C^{(2)} created from code C has code rate
r^{(2)} ≤ r and minimum distance 2d ≥ d^{(2)} ≥ d. An increase
to the minimum distance of the code C^{(2)} gives higher error
correcting capabilities than C. However, a lower code rate
could decrease the overall efficiency. The lower row and
column weight of H^{(2)} gives C^{(2)} higher FER performance
than C, though with a trade-off of low decoding complexity.
The trade-offs associated with removing trapping sets are
more severe than those of the surveyed construction methods which
avoid them (code length, decoding speed from check nodes,
etc). The current research goal remains the creation of a
construction or modifiable construction which can either avoid
or remove small elementary trapping sets without penalty to
the code’s error correcting ability or decoding efficiency.
VIII. C ONCLUSION
Throughout this survey we have covered the literature surrounding LDPC codes, communication channels and decoding
techniques. The negative impact cycles have on LDPC code
efficiency is noted, and the problems of stopping sets and
trapping sets have been defined and discussed, including the
dominance of small elementary trapping sets over the AWGNC.
A small variety of partial solutions such as the randomized
progressive edge-growth algorithm and Tanner graph covers
are discussed. The research goal remains to find constructions
of LDPC codes without small trapping sets.
ACKNOWLEDGMENT
The authors would like to thank Professor Ian Turner, who
worked closely with us throughout our research, Emeritus
Professor Ed Dawson and Dr Harry Bartlett for their help in
the final stages before submission, Dr Dhammika Jayalath for
his help with the decoding simulations over AWGNC, and
Xuan He for his suggestions on how to present our BER
data. Computational resources and services used in this work
were provided by the HPC and Research Support Group,
Queensland University of Technology, Brisbane, Australia. A.
Price is supported by an APA Scholarship.
R EFERENCES
[1] M. Diouf, D. Declercq, S. Ouya, and B. Vasic, “Improved PEG construction of large girth QC-LDPC codes,” IEEE 9th Intern. Symp. Turbo
Codes and Iterative Inf. Proc. (ISTC), pp. 146–150, 2016.
[2] D. J. MacKay and R. M. Neal, “Near Shannon limit performance of
low density parity check codes,” Electron. Lett., vol. 32, no. 18, pp.
1645–1646, 1996.
[3] C. E. Shannon, “A mathematical theory of communication,” Bell system
Technical Journal, vol. 27, no. 1, pp. 379–423, 1948.
[4] “IEEE standard for information technology– local and metropolitan
area networks– specific requirements– part 11: Wireless lan medium
access control (mac) and physical layer (phy) specifications,” IEEE Std
802.11n-2009, pp. 1–565, Oct 2009.
[5] E. ETSI, “302 307 v1. 3.1,” Digital Video Broadcasting (DVB); Second
generation framing structure, channel coding and modulation systems
for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), 2013.
[6] R. G. Gallager, “Low-density parity-check codes,” IRE Trans. Inf.
Theory, vol. 8, no. 1, pp. 21–28, 1962.
[7] N. Bonello, S. Chen, and L. Hanzo, “Low-density parity-check codes
and their rateless relatives,” IEEE Commun. Surv. Tuts., vol. 13, no. 1,
pp. 3–26, 2011.
[8] T. Richardson, “Error floors of LDPC codes,” Proc. annual Allerton
conference on commun. control and computing, vol. 41, no. 3, pp. 1426–
1435, 2003.
[9] S. J. Johnson and S. R. Weller, “Codes for iterative decoding from partial
geometries,” IEEE Trans. Commun., vol. 52, no. 2, pp. 236–243, 2004.
[10] M. Ivkovic, S. K. Chilappagari, and B. Vasic, “Eliminating trapping sets
in low-density parity-check codes by using Tanner graph covers,” IEEE
Trans. Inf. Theory, vol. 54, no. 8, pp. 3763–3768, 2008.
[11] C. Di, D. Proietti, I. E. Telatar, T. J. Richardson, and R. L. Urbanke,
“Finite-length analysis of low-density parity-check codes on the binary
erasure channel,” IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1570–1579,
2002.
[12] G. A. Margulis, “Explicit constructions of graphs without short cycles
and low density codes,” Combinatorica, vol. 2, no. 1, pp. 71–78, 1982.
[13] D. J. MacKay and M. S. Postol, “Weaknesses of Margulis and
Ramanujan-Margulis low-density parity-check codes,” Electronic Notes
in Theoretical Computer Science, vol. 74, pp. 97–104, 2003.
[14] M. Baldi, QC-LDPC code-based cryptography. Springer Science &
Business, 2014.
[15] G. Richter, “Finding small stopping sets in the tanner graphs of LDPC
codes,” Turbo Codes & Related Topics; 6th International ITG-Conf. on
Source and Channel Coding (TURBOCODING) pp. 1–5, 2006.
[16] A. McGregor and O. Milenkovic, “On the hardness of approximating
stopping and trapping sets,” IEEE Trans. Inf. Theory, vol. 56, no. 4, pp.
1640–1650, 2010.
[17] R. Shedsale and N. Sarwade, “A review of construction methods for
regular LDPC codes,” Indian Journal of Comput. Sci. Eng., vol. 3, no. 2,
pp. 380–385, 2012.
[18] R. Hill, “A first course in coding theory,” Oxford: Clarendon Press,
1986.
[19] W. W. Peterson and E. J. Weldon, “Error-correcting codes,” MIT press,
1972.
[20] T. Richardson and R. Urbanke, “Modern coding theory,” Cambridge
University Press, 2008.
[21] P. Elias, “Coding for two noisy channels,” Third London Symposium,
vol. 67, 1955.
[22] R. Poddar, “Low density parity check codes,” Complexity, vol. 7, p. 8,
2007.
[23] A. Shokrollahi, “LDPC codes: An introduction,” Digital Fountain, Inc.,
Tech. Rep, p. 2, 2003.
[24] N. Traore, S. Kant, and T. L. Jensen, “Message passing algorithm
and linear programming decoding for LDPC and linear block codes,”
Aalborg University, pp. 32–53, 2007.
[25] G. Colavolpe and G. Germi, “On the application of factor graphs and the
sum-product algorithm to ISI channels,” IEEE Trans. Commun., vol. 53,
no. 5, pp. 818–825, 2005.
[26] E. Sharon, S. Litsyn, and J. Goldberger, “An efficient message-passing
schedule for LDPC decoding,” Proc. 23rd Conv. Electric. Electron.
Engineers, pp. 223–226, 2004.
[27] P. Hailes, L. Xu, R. G. Maunder, B. M. Al-Hashimi, and L. Hanzo, “A
survey of FPGA-based LDPC decoders,” IEEE Commun. Surv. Tuts.,
vol. 18, no. 2, pp. 1098–1122, 2015.
[28] E. Sharon, S. Litsyn, and J. Goldberger, “Convergence analysis of serial
message-passing schedules for LDPC decoding,” Turbo Codes & Related
Topics; 6th International ITG-Conf. on Source and Channel Coding
(TURBOCODING) pp. 1–6, 2006.
[29] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and
the sum-product algorithm,” IEEE Tran. Inf. theory, vol. 47, no. 2, pp.
498–519, 2001.
[30] J. Zhang and M. Fossorier, “Shuffled belief propagation decoding,”
Asilomar Conf. on Signals, Systems and Computers, vol. 1, pp. 8–15,
2002.
[31] H. Kfir and I. Kanter, “Parallel versus sequential updating for belief
propagation decoding,” Physica A: Statistical Mechanics and its Applications, vol. 330, no. 1, pp. 259–270, 2003.
[32] D. E. Hocevar, “A reduced complexity decoder architecture via layered
decoding of LDPC codes,”, Signal Processing Systems, 2004, pp. 107–
112, 2004.
[33] A. I. V. Casado, M. Griot, and R. D. Wesel, “Informed dynamic
scheduling for belief-propagation decoding of LDPC codes,”, IEEE
International Conf. on Commun., pp. 932–937, 2007.
[34] S. Haykin, Communication systems. John Wiley & Sons, 2008.
[35] T. Tian, C. Jones, D. Villasenor, and R. Wesel, “Construction of irregular
LDPC codes with low error floors,” IEEE Intern. Conf. on Commun.
(ICC), vol. 5, no. 1, pp. 3125–3129, 2003.
[36] T. Richardson, M. Shokrollahi, and R. Urbanke, “Design of capacityapproaching irregular low-density parity-check codes,” IEEE Trans. on
Inf. Theory, vol. 47, no. 2, pp. 619–637, 2001.
[37] M. Diouf, D. Declercq, S. Ouya, and B. Vasic, “A PEG-like LDPC code
design avoiding short trapping sets,” IEEE Intern. Symp. Inf. Theory
(ISIT), pp. 1079–1083, 2015.
[38] A. Orlitsky, K. Viswanathan, and J. Zhang, “Stopping set distribution
of LDPC code ensembles,” IEEE Trans. Inf. Theory, vol. 51, no. 3, pp.
929–953, 2005.
[39] J. C. S. Ripoll and N. R. Barraza, “A new algorithm to construct LDPC
codes with large stopping sets,” SIMULATION, vol. 505, p. 1, 2013.
[40] S. V. Ranganathan, D. Divsalar, K. Vakilinia, and R. D. Wesel, “Design of high-rate irregular non-binary LDPC codes using algorithmic
stopping-set cancellation,” IEEE Intern. Symp. Inf. Theory, pp. 711–715,
2014.
[41] S. Laendner and O. Milenkovic, “Algorithmic and combinatorial analysis
of trapping sets in structured LDPC codes,” 2005 Intern. Conf. on
Wireless Networks, Commun. and Mobile Computing, vol. 1, pp. 630–
635, 2005.
[42] G. Richter and A. Hof, “On a construction method of irregular LDPC
codes without small stopping sets,” 2006 IEEE Intern. Conf. on Commun., vol. 3, pp. 1119–1124, 2006.
[43] D. J. MacKay, “Good error-correcting codes based on very sparse
matrices,” IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, 1999.
[44] M. K. Krishnan and P. Shankar, “Computing the stopping distance of a
Tanner graph is NP-hard,” IEEE Trans. Inf. Theory, vol. 53, no. 6, pp.
2278–2280, 2007.
[45] S. Sankaranarayanan, S. K. Chilappagari, R. Radhakrishnan, and B. Vasic, “Failures of the Gallager B decoder: Analysis and applications,”
Proc. Inf. Theory and Applic. Works. UCSD, vol. 17, 2006.
[46] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman,
“Efficient erasure correcting codes,” IEEE Trans. Inf. Theory, vol. 47,
no. 2, pp. 569–584, 2001.
[47] H. Pishro-Nik and F. Fekri, “On decoding of low-density parity-check
codes over the binary erasure channel,” IEEE Trans. Inf. Theory, vol. 50,
no. 3, pp. 439–454, 2004.
[48] L. Danjean, D. Declercq, S. K. Planjery, and B. Vasic, “On the selection
of finite alphabet iterative decoders for LDPC codes on the BSC,” IEEE
Inf. Theory Works. (ITW), pp. 345–349, 2011.
[49] S. Laendner and O. Milenkovic, “LDPC codes based on latin squares:
Cycle structure, stopping set, and trapping set analysis,” IEEE Trans.
Commun., vol. 55, no. 2, pp. 303–312, 2007.
[50] X.-Y. Hu, E. Eleftheriou, and D.-M. Arnold, “Progressive edge-growth
Tanner graphs,” IEEE Global Telecommun. Conf., 2001, vol. 2, pp. 995–
1001, 2001.
[51] X.-Y. Hu, E. Eleftheriou, and D.-M. Arnold, “Regular and irregular
progressive edge-growth Tanner graphs,” IEEE Trans. on Inf. Theory,
vol. 51, no. 1, pp. 386–398, 2005.
[52] O. Milenkovic, E. Soljanin, and P. Whiting, “Asymptotic spectra of
trapping sets in regular and irregular LDPC code ensembles," IEEE
Trans. on Inf. Theory, vol. 51, no. 1, pp. 39–55, 2007.
[53] A. Venkiah, D. Declercq, and C. Poulliat. “Design of cages with a
randomized progressive edge-growth algorithm," IEEE Commun. Lett.,
vol. 12, no. 4, pp. 301–303, 2008.
[54] Y. Wang, J. Yedidia, and S. Draper, “Construction of high-girth QCLDPC codes,” IEEE 5th Intern. Symp. Turbo Codes and Related Topics,
pp. 180–185, 2008.
[55] S. Laendner, T. Hehn, O. Milenkovic, and J. B. Huber, “When does
one redundant parity-check equation matter?” IEEE Globecom 2006,
pp. 1–6, 2006.
[56] R. Tanner, D. Sridhara, and T. Fuja, “A class of group-structured LDPC
codes,” Proc. ISCTA, pp. 365–370, 2001.
[57] J. Rosenthal, and P. Vontobel, “Constructions of LDPC codes using
Ramanujan graphs and ideas from Margulis,” Proc. of the 38-th Allerton
Conference on Commun., Control, and Computing, 2000.
[58] D. MacKay, “Encyclopedia of sparse graph codes," 2003.
Aiden Price was awarded the QUT Dean’s scholarship and received a BSc and 1st class honours in
mathematics from Queensland University of Technology. Aiden is currently studying a PhD at QUT
in the school of mathematics under the supervision
of Doctor Harry Bartlett and Emeritus Professor Ed
Dawson and is funded by the APA scholarship. His
research interests are coding theory and its application to digital communications and cryptography.
Joanne Hall received a BSc and MPhil in mathematics from the Australian National University. She
graduated with a PhD from RMIT University in
2011 under the supervision of Asha Rao in the Information Security and Informatics research group.
Dr Hall spent one year as a postdoctoral research
scientist at Charles University in Prague and four
years as a Lecturer at the Queensland University of
Technology in Brisbane. In 2017 she returned
to RMIT University as a Lecturer in the School
of Science. Her research interests are algebraic and
combinatorial structures and their applications in digital communication.
| 7 |
arXiv:1705.09058v1 [cs.AI] 25 May 2017
An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling
Salesman Problem
Yihui He
Xi’an Jiaotong University
Xi’an, China
Ming Xiang
Xi’an Jiaotong University
Xi’an China
[email protected]
[email protected]
Abstract
With applications to many disciplines, the traveling salesman problem (TSP) is a classical computer science optimization problem with applications to industrial engineering, theoretical computer science, bioinformatics, and several other disciplines [2]. In recent years, there have been a plethora of novel approaches for approximate solutions, ranging from simplistic greedy to cooperative distributed algorithms derived from artificial intelligence. In this paper, we perform an evaluation and analysis of cornerstone algorithms for the Euclidean TSP. We evaluate greedy, 2-opt, and genetic algorithms. We use several datasets as input for the algorithms, including a small dataset, a medium-sized dataset representing cities in the United States, and a synthetic dataset consisting of 200 cities to test algorithm scalability. We discover that the greedy and 2-opt algorithms efficiently calculate solutions for smaller datasets. The genetic algorithm has the best optimality for medium to large datasets, but generally has a longer runtime. Our implementation is publicly available.¹
1. Introduction
Known to be NP-hard, the traveling salesman problem (TSP) was first formulated in 1930 and is one of the most studied optimization problems to date [8]. The problem is as follows: given a list of cities and a distance between each pair of cities, find the shortest possible path that visits every city exactly once and returns to the starting city. The TSP has broad applications including shortest paths for lasers to sculpt microprocessors and delivery logistics for mail services, to name a few.
The TSP is an area of active research. In fact, several variants have been derived from the original TSP. In this paper, we focus on the Euclidean TSP. In the Euclidean TSP, the vertices correspond to points in a d-dimensional space, and the cost function is the Euclidean distance. That is, the Euclidean distance between two cities x = (x_1, x_2, ..., x_d), y = (y_1, y_2, ..., y_d) is:
\left( \sum_{i=1}^{d} (x_i - y_i)^2 \right)^{1/2}   (1)
This simplification allows us to survey several cornerstone algorithms without introducing complex scenarios. The remainder of this paper is organized as follows. In Section 2, we briefly review the first solutions and survey variants to the TSP. We describe the algorithms used in our experiment in Section 3. A description of the benchmark datasets and the results of the experiment are detailed in Section 4, which also explains the findings and compares the performance of the algorithms. We then conclude and describe future work in Section 5.
2. Background
An example TSP is illustrated in Figure 1. The input is a collection of cities in the two dimensional space. This input can be represented as a distance matrix for each pair of cities or as a list of points denoting the coordinate of each city. In the latter method, distances are calculated using Euclidean geometry. A non-optimal tour is shown in sub-figure (b). Although not shown in the figure, each edge will have some non-negative edge weight denoting the distance between two nodes or cities. Due to the computational complexity of the TSP, it may be necessary to approximate the optimal solution. The optimal tour is shown in sub-figure (c). For small graphs, it may be possible to perform an exhaustive search to obtain the optimal solution. However, as the number of cities increases, so does the solution space, problem complexity, and running time.
If n is the number of cities, the number of possible edges is \sum_{i=0}^{n-1} i. The number of possible tours is (n − 1)!/2, since the same tour with start points X and Y appears twice: once with X as the start node and once with Y as the start node.
¹ https://github.com/yihui-he/TSP
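To illustrate this growth, the following minimal Python sketch performs the exhaustive search mentioned above; it enumerates exactly (n − 1)!/2 distinct tours (for n ≥ 3) by fixing the start city and skipping each tour's reversal, and is only feasible for very small n.

    import itertools, math

    def tour_length(order, dist):
        return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

    def brute_force_tsp(dist):
        n = len(dist)
        best_len, best_tour = math.inf, None
        for perm in itertools.permutations(range(1, n)):   # fix city 0 as the start
            if perm[0] < perm[-1]:                         # skip the reversed duplicate of each tour
                length = tour_length((0,) + perm, dist)
                if length < best_len:
                    best_len, best_tour = length, (0,) + perm
        return best_tour, best_len
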
3.3. 2opt Algorithm
In optimization, 2-opt is a simple local search algorithm
first proposed by Croes in 1958 for solving the TSP [5]. The
main idea behind it is to take a route that crosses over itself
and reorder it so that it does not.
A complete 2-opt local search will compare every possible valid combination of the swapping mechanism. This
technique can be applied to the travelling salesman problem
as well as many related problems. These include the vehicle routing problem (VRP) as well as the capacitated VRP,
which require minor modification of the algorithm.
This is the mechanism by which the 2-opt swap manipulates a given route:
Figure 1.
The TSP was first formulated in the 1930s by Karl
Menger in Vienna and Harvard. By the mid-1950s, solutions for the TSP began to appear. The first solution was
published by Dantzig, Fulkerson, and Johnson using a
dataset of 49 cities. In 1972, Richard M. Karp proved that
the Hamiltonian cycle problem was NP-Complete, which
proves that the TSP is NP-Hard.
In modern day, the TSP has a variety of applications
in numerous fields. Examples among these applications
include genome sequencing, air traffic control, supplying
manufacturing lines, and optimization.
1. take route[1] to route[i-1] and add them in order to the new route
2. take route[i] to route[k] and add them in reverse order to the new route
3. take route[k+1] to the end and add them in order to the new route
4. return the new route
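A minimal Python sketch of this swap and of a full 2-opt local search is given below; it is illustrative only and is not the implementation from the repository cited in the abstract.

    def tour_length(route, dist):
        return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

    def two_opt_swap(route, i, k):
        # steps 1-3 above: keep route[:i], reverse route[i..k], keep the rest
        return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

    def two_opt(route, dist):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 1):
                for k in range(i + 1, len(route)):
                    candidate = two_opt_swap(route, i, k)
                    if tour_length(candidate, dist) < tour_length(route, dist):
                        route, improved = candidate, True
        return route
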
3. Algorithms
We now move to a discussion of the algorithms used in
our evaluation. First, we describe an upper bound for TSP
in Section 3.1. The traditional greedy and 2-opt approaches
are discussed in Section 3.2 and Section 3.3. We finally
discuss the genetic algorithm in Section 3.4.
3.4. Genetic Algorithm
Genetic algorithms (GA) are search heuristics that attempt to mimic natural selection for many problems in optimization and artificial intelligence [6]. In a genetic algorithm, a population of candidate solutions is evolved over
time towards better solutions. These evolutions generally
occur through mutations, randomization, and recombination. We define a fitness function to differentiate between
better and worse solutions. Solutions, or individuals, with
higher fitness scores are more likely to survive over time.
The final solution is found if the population converges to a
solution within some threshold. However, great care must
be taken to avoid being trapped at local optima.
We will now apply a genetic algorithm to the TSP [3].
We define a fitness function F as the length of the tour. Supposed we have an ordering of the cities A = x1 , x2 , ..., xn
where n is the number of cities. The fitness score for the
TSP becomes the cost of the tour d(x, y) denote the distance from x to y.
3.1. Random Path
Finding the worst case of the TSP is as hard as finding the best one, so we uniformly generate a random path and use its length as an upper bound benchmark for all other algorithms.
3.2. Greedy Algorithm
The greedy heuristic is based on Kruskal's algorithm and gives an approximate solution to the TSP [11]. The algorithm forms a tour from the shortest edges, subject to two conditions: the edges of the tour must not form a cycle unless the selected number of edges equals the number of vertices in the graph, and a selected edge (before being appended to the tour) must not increase the degree of any node beyond 2. The algorithm begins by sorting all edges from least weight to most heavily weighted. After the edges are sorted, the least heavily-weighted edge is selected and added to the tour if it does not violate the above conditions. The algorithm continues by selecting the next least-cost edge and adding it to the tour. This process is repeated until all vertices can be reached by the tour. The result is a tour through all vertices and is a solution for the TSP. The runtime for the greedy algorithm is O(n^2 log(n)) and it generally returns a solution within 15-20% of the Held-Karp lower bound [15].
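The edge-based greedy heuristic described above can be sketched in Python as follows; this is illustrative only, and it returns the chosen edge set rather than an ordered tour.

    def greedy_tsp(dist):
        n = len(dist)
        edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
        parent = list(range(n))            # union-find to detect premature cycles
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        degree = [0] * n
        chosen = []
        for w, i, j in edges:
            if degree[i] < 2 and degree[j] < 2:
                ri, rj = find(i), find(j)
                # closing a cycle is only allowed with the final, n-th edge
                if ri != rj or len(chosen) == n - 1:
                    parent[ri] = rj
                    degree[i] += 1; degree[j] += 1
                    chosen.append((i, j))
            if len(chosen) == n:
                break
        return chosen
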
F(A) = \sum_{i=1}^{n-1} d(x_i, x_{i+1}) + d(x_n, x_1)   (2)
The genetic algorithm begins with an initial random population, P0, of candidate solutions. That is, we have a set of
paths that may or may not be good solutions. We then move
forward one time step. During this time step, we perform a
set of probabilistic and statistical methods to select, mutate,
and produce an offspring population, P1 , with traits similar to those of the best individuals (with the highest fitness)
from P0. We then repeat this process until our population becomes homogeneous.
The running time of genetic algorithms is variable and dependent on the problem and heuristics used. However, for each individual in the population, we require O(n) space for storage of the path. For genetic crossover, the space requirement remains O(n). The best genetic algorithms can find solutions within 2% of the optimal tour for certain graphs [9].
Figure 2. The ATT48 dataset in the 2D plane (the United States).
Figure 3. Runtime comparison; the y axis is in log scale.
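For concreteness, the following is a minimal genetic-algorithm sketch for the TSP in Python. It is illustrative and is not the authors' implementation: it minimises the tour length of Eq. (2) as the fitness, keeps an elite fraction, and uses order crossover and swap mutation; all parameter values are arbitrary assumptions.

    import random

    def tour_length(route, dist):
        return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

    def order_crossover(p1, p2):
        n = len(p1)
        a, b = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[a:b + 1] = p1[a:b + 1]                 # copy a slice from the first parent
        fill = [c for c in p2 if c not in child]     # fill the rest in the second parent's order
        j = 0
        for i in range(n):
            if child[i] is None:
                child[i] = fill[j]; j += 1
        return child

    def genetic_tsp(dist, pop_size=100, generations=500, mut_rate=0.02):
        n = len(dist)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda r: tour_length(r, dist))
            elite = pop[:pop_size // 5]              # keep the fittest 20%
            children = []
            while len(children) < pop_size - len(elite):
                p1, p2 = random.sample(elite, 2)
                child = order_crossover(p1, p2)
                if random.random() < mut_rate:       # swap mutation
                    i, k = random.sample(range(n), 2)
                    child[i], child[k] = child[k], child[i]
                children.append(child)
            pop = elite + children
        return min(pop, key=lambda r: tour_length(r, dist))
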
Figure 4. Tour length comparison; the y axis is in log scale, divided by the random tour length.
4. Experiment
We benchmark our algorithms using publicly available datasets. Additionally, to test the scalability of the algorithms, we generated a synthetic dataset consisting of 200 cities. In all dataset names, the numeric digits represent the number of cities in the dataset. The datasets are as follows: P15, ATT48, and R200. All datasets except R200 can be found online [4, 14]. The ATT48 and SGB128 datasets represent real data consisting of locations of cities in the United States. A visual representation of the ATT48 dataset in the 2D plane is shown in Figure 2.
Not all datasets have a known optimal tour. When this is the case, we use the random path algorithm to infer an upper bound of the optimal tour.
4.1. Random Dataset
The R200 dataset was generated by plotting 200 random, uniformly distributed points (x, y) in R^2 with (x, y) ∈ [0, 4000]. As a result, all distances satisfy the triangle inequality and this dataset can be classified as a Euclidean TSP dataset. The running time for creating the dataset is O(n). The output is a list of all cities represented as (x, y) points.
4.2. Comparison
As we can see in Figure 3, the greedy algorithm is the most efficient. In Figure 4, we can see that most algorithms return a solution similar to the optimal for small datasets and become worse for larger datasets.
In terms of running time (Figure 3), the best algorithm is the greedy algorithm. However, in terms of the tour length of the solution, the best algorithm is the GA. This is in line with our expectations and alludes to the fact that different heuristics are better suited for different situations.
As shown in Figure 4, the genetic algorithm performs fairly consistently, in comparison to the 2-opt and greedy algorithms, across all datasets. Highlighted in Figure 3, the running time of the genetic algorithm is almost linear. This suggests that for larger datasets, if running time is a concern, then the genetic algorithm should be used. Figure 4 further demonstrates that the genetic algorithm maintains a smaller percentage above optimal than the other algorithms. From this, we can see that the genetic algorithm has high accuracy and better complexity than other heuristics, especially for larger datasets. Surprisingly, the genetic algorithm found the optimal solution for the att48 dataset, shown in Figure 5.
[3] K. Bryant and A. Benjamin. Genetic algorithms and the traveling salesman problem. Department of Mathematics, Harvey Mudd College, pages 10–12, 2000.
[4] J. Burkardt. Data for the traveling salesperson problem, 2011. http://people.sc.fsu.edu/ jburkardt/datasets/tsp/tsp.html.
[5] G. A. Croes. A method for solving traveling-salesman problems. Operations research, 6(6):791–812, 1958.
[6] J. Grefenstette, R. Gopal, B. Rosmaita, and D. Van Gucht.
Genetic algorithms for the traveling salesman problem. In
Proceedings of the first International Conference on Genetic Algorithms and their Applications, pages 160–168.
Lawrence Erlbaum, New Jersey (160-168), 1985.
[7] A. Haque, J. Shah, F. Ejaz, and J. X. Xu. An empirical evaluation of approximation algorithms for the metric traveling
salesman problem.
[8] A. Hoffman, J. Wolfe, R. Garfinkel, D. Johnson, C. Papadimitriou, P. Gilmore, E. Lawler, D. Shmoys, R. Karp, J. Steele,
et al. The traveling salesman problem: a guided tour of combinatorial optimization. J. Wiley & Sons, 1986.
[9] A. Homaifar, S. Guan, and G. E. Liepins. Schema analysis
of the traveling salesman problem using genetic algorithms.
Complex Systems, 6(6):533–552, 1992.
[10] I. Hong, A. B. Kahng, and B.-R. Moon. Improved largestep markov chain variants for the symmetric tsp. Journal of
Heuristics, 3(1):63–81, 1997.
[11] B.-I. Kim, J.-I. Shim, and M. Zhang. Comparison of tsp
algorithms. Project for Models in Facilities Planning and
Materials Handling, 1998.
[12] M. Mucha. 13/9-approximation for graphic TSP. Theory of Computing Systems, 55(4):640–657, 2014.
[13] F. Qiu, J. Zhang, and H. Yan. An adaptive markov chain
monte carlo algorithm for tsp. In Computer Science and Software Engineering, 2008 International Conference on, volume 1, pages 439–442. IEEE, 2008.
[14] G. Reinelt.
Tsplib, 1995.
http://comopt.ifi.uniheidelberg.de/software/TSPLIB95/.
[15] D. J. Rosenkrantz, R. E. Stearns, and P. M. Lewis, II. An
analysis of several heuristics for the traveling salesman problem. SIAM journal on computing, 6(3):563–581, 1977.
Figure 5. Solution generated by genetic algorithm for att48 dataset
5. Conclusion
Most of our algorithms attempt to solve the TSP in a
linear fashion. Originating from artificial intelligence, the
genetic algorithm is very different compared to greedy, and
2-opt. Literature suggests that the best algorithms focus on
iteration and convergence to find optimal tours – something
genetic algorithms attempt to achieve. For example, the
Large Step Markov Chain [10] relies on Markov chains to
find convergence of many paths to form a global optimum
and several papers cite Markov Chains as the best known solution to TSP. Recent studies include using adaptive Markov
Chain Monte Carlo algorithms [13]. Many of these extend
the Metropolis algorithm [9], a simulated annealing algorithm which attempts to mimic randomness with particles
as the temperature varies. This further supports our conclusion that algorithms inspired from artificial intelligence
perform well for finding solutions for the TSP. However,
these may not be suitable when a guarantee is required.
In this paper, we surveyed several key cornerstone approaches to the TSP. We selected four well-known algorithms and tested their performance on a variety of public
datasets. Our results suggest that genetic algorithms (and
other approaches from artificial intelligence) are able to find
a near-optimal solution.
References
[1] S. Arora. Polynomial time approximation schemes for euclidean tsp and other geometric problems. In Foundations of
Computer Science, 1996. Proceedings., 37th Annual Symposium on, pages 2–11. IEEE, 1996.
[2] Bao Junpeng. Introduction to Artificial Intelligence. Beijing: Machinery Industry Press, 2010.
| 2 |
arXiv:1705.08738v1 [cs.CE] 24 May 2017
Doppler Synthetic Aperture Radar Interferometry:
A Novel SAR Interferometry for Height Mapping
using Ultra-Narrowband Waveforms
Birsen Yazıcı1,∗ , Il-Young Son1 and H. Cagri Yanik
1
Department of Electrical and Computer Systems Engineering, Rensselaer
Polytechnic Institute, Troy, NY, USA
∗
Corresponding author
E-mail: [email protected]
Abstract. This paper introduces a new and novel radar interferometry based
on Doppler synthetic aperture radar (Doppler-SAR) paradigm. Conventional SAR
interferometry relies on wideband transmitted waveforms to obtain high range
resolution. Topography of a surface is directly related to the range difference between
two antennas configured at different positions. Doppler-SAR is a novel imaging
modality that uses ultra-narrowband continuous waves (UNCW). It takes advantage
of high resolution Doppler information provided by UNCWs to form high resolution
SAR images.
We introduce the theory of Doppler-SAR interferometry. We derive the
interferometric phase model and develop the equations of height mapping. Unlike
conventional SAR interferometry, we show that the topography of a scene is related
to the difference in Doppler between two antennas configured at different velocities.
While the conventional SAR interferometry uses range, Doppler and Doppler due to
interferometric phase in height mapping, Doppler-SAR interferometry uses Doppler,
Doppler-rate and Doppler-rate due to interferometric phase in height mapping. We
demonstrate our theory in numerical simulations.
Doppler-SAR interferometry offers the advantages of long-range, robust,
environmentally friendly operations; low-power, low-cost, lightweight systems suitable
for low-payload platforms, such as micro-satellites; and passive applications using
sources of opportunity transmitting UNCW.
Submitted to: Inverse Problems
1. Introduction
Synthetic Aperture Radar (SAR) interferometry is a powerful tool in mapping surface
topography and monitoring dynamic processes. This tool is now an integral part of
wide range of applications in many disciplines including environmental remote sensing,
geosciences and climate research, earthquake and volcanic research, mapping of Earth’s
topography, ocean surface current monitoring, hazard and disaster monitoring, as well
as defense and security related research [1].
Basic principles of SAR interferometry were originally developed in radio
astronomy [2, 3]. Interferometric processing techniques and systems were later developed
and applied to Earth observation [4, 5, 6, 7, 8].
SAR interferometry exploits phase differences of two or more SAR images to
extract more information about a medium than present in a single SAR image [9] [10].
Conventional SAR interferometry relies on wideband transmitted waveforms to obtain
high range resolution [10, 1, 11, 12, 13]. The phase difference of two wideband SAR
images are related to range difference. There are many different interferometric methods
depending on the configuration of imaging parameters in space, time, frequency etc [1].
When two images are acquired from different look-directions, the phase difference is
related to the topography of a surface.
In this paper, we develop the basic principles of a new and novel interferometric
method based on Doppler-SAR paradigm to determine topography of a surface. Unlike
conventional SAR, Doppler-SAR uses ultra-narrowband continuous waves (CW) to form
high resolution images [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Conventional SAR
takes advantage of high range resolution and range-rate due to the movement of SAR
antenna for high resolution imaging. Doppler-SAR, on the other hand, takes advantage
of high temporal Doppler resolution provided by UNCWs and Doppler-rate for high
resolution imaging.
We develop the phase relationship between two Doppler-SAR images and show
that the phase difference is related to Doppler difference. We approximate this phase
difference as Doppler-rate and derive the equations of height mapping for Doppler-SAR
interferometry.
Conventional wideband SAR interferometry for height mapping requires two
different look-directions. Doppler-SAR interferometry provides a new degree of
freedom in system design by allowing antennas to have the same look-direction, but
different velocities to obtain height mapping. Additional advantages of Doppler-SAR
interferometry include the following: (i ) Small, lightweight, inexpensive, easy-to-design
and calibrate hardware, high Signal-to-Noise-Ratio(SNR) and long effective range of
operation. All of these make Doppler-SAR interferometry a suitable modality for
applications requiring high SNR, long range of operation and low payload platforms
such as micro-satellites or small uninhabited aerial vehicles. (ii ) Effective use of
electromagnetic spectrum and environmentally friendly illumination. (iii ) Passive
applications. Doppler-SAR may not require dedicated transmitters, since existing Radio
Frequency (RF) signals of opportunity often have the ultra-narrowband properties.
To the best of our knowledge, this is the first interferometric method developed in the Doppler-SAR paradigm. We present the theory for two monostatic Doppler-SAR systems. However, the method can easily be extended to bistatic and multistatic configurations and to synthetic aperture imaging applications in acoustics.
The rest of the paper is organized as follows: In Section 2, SAR geometry and
notation are defined. In Section 3, wideband SAR image formation, layover effect and
basic principles of wideband SAR interferometry are described in a perspective relevant
to our subsequent development. In Section 4, Doppler-SAR data model, image formation
and layover are summarized. Section 5 introduces the basic principles of Doppler-SAR
interferometry and compares the results to wideband SAR case. Section 6 presents
numerical simulations and Section 7 concludes the paper.
2. Configurations and Notation
We consider two mono-static SAR systems as shown in Fig. 1.
Figure 1: Imaging geometry for an interferometric SAR system with two antennas
following trajectories γ1 (s) and γ2 (s). The scatterer is located at x ∈ R3 where its
height is h(x) and x = [x1 , x2 ] ∈ R2 .
Let γ1 (s) and γ2 (s), s ∈ [S1 , S2 ] ⊆ R, denote the trajectories of the first and second
antennas, respectively.
Unless otherwise stated, bold Roman, bold italic, and Roman lower-case letters will
denote elements in R3 , R2 and R, respectively, i.e., x = [x1 , x2 ] ∈ R2 , x3 ∈ R, and
x = [x, x3 ] ∈ R3 . The Earth’s surface is located at x = [x, h(x)], where h : R2 → R,
is the unknown height representing ground topography. Let V : R3 → R denote target
reflectivity where we assume that the scattering takes place only on the surface of the
Earth. Major notation used throughout the paper is tabulated in Table 1.
Table 1: Notation
Symbol : Description
x = [x, h(x)], x ∈ R^2 : Location on earth's surface
h(x) : Unknown height of a scatterer at x
V(x) : Surface reflectivity
γ_i(s) : i-th antenna trajectory
s : Slow-time
t : Fast-time
R_i(x, s) : Range of i-th antenna
ω_0 : Center frequency of the transmitted waveforms
s_0^i : Zero-Doppler time for the i-th antenna
L_i(x, s) : Look-direction of the i-th antenna
d_i^{WB}(t, s) : Wideband SAR demodulated received signal at i-th antenna
|z − γ_i(s)| = C : Iso-range surface
\widehat{(z − γ_i(s))} · γ̇_i(s) = C : Iso-Doppler surface
K_i^{WB} : Filtered backprojection (FBP) operator for wideband SAR
I_i^{WB} : Wideband SAR image
Φ_{s_0}^{WB}(x) : Wideband interferometric phase
b : Baseline vector in wideband SAR interferometry
L_1(z, s_0^1) · b = C : Interferometric phase cone
l : Vector from a known scatterer position to the unknown location of a scatterer
l_1^⊥ : Component of l perpendicular to \widehat{(z_0 − γ_1(s))}
Φ_{flat}^{WB}(x) : Flattened wideband SAR interferometric phase
φ(t) : Smooth windowing function
T_φ : Duration of φ(t)
d_i^{UNB}(µ, s) : Doppler-SAR data
L_i(z, s) · γ̇_i(s) = C : Iso-Doppler surface
L_i(z, s) · γ̈_i(s) − γ̇_i(s) · γ̇_i^⊥(s)/R_i(z, s) = C : Iso-Doppler-rate surface
K_i^{UNB} : FBP operator for Doppler-SAR
I_i^{UNB} : Doppler-SAR image
Φ_{sd}^{UNB}(x) : Doppler-SAR interferometric phase
v : Baseline velocity
Φ_{flat}^{UNB}(x) : Flattened Doppler-SAR interferometric phase
Doppler Synthetic Aperture Radar Interferometry
5
3. Wideband SAR Interferometry
The basic principles of SAR interferometry are described by many sources [10], [25], [9],
[26] [27], [1] and [28]. In this section, we summarize the principles and theory of
SAR interferometry in a notation and context relevant to our subsequent presentation
of Doppler-SAR interferometry. We begin with the wideband SAR received signal
model, derive the interferometric phase model, provide a geometric interpretation of
the interferometric phase from which we develop the equations of height mapping.
3.1. Wideband SAR received signal model
We assume that the SAR antennas are transmitting wideband waveforms. Let ri (t, s)
denote the received signals, i = 1, 2 where s and t are the slow-time and fast-time
variables, respectively. Under the start-stop and Born approximations, the received
signals can be modeled as [29, 30, 31]:
r_i(t, s) = \int e^{-i\omega(t - 2R_i(\mathbf{x},s)/c)} \tilde{A}_i(\mathbf{x}, s, \omega) V(\mathbf{x}) \, d\mathbf{x} \, d\omega   (1)
where
Ri (x, s) = |x − γi (s)|
(2)
is the range of the ith antenna, c is the speed of light in free-space, ω is the temporal
frequency variable, V (x) is the scene reflectivity function. Ãi is a slowly-varying
function of ω that depends on antenna beam patterns, geometrical spreading factors
and transmitted waveforms.
Let ω = ω_0 + ω', ω' ∈ Ω, where Ω is the bandwidth and ω_0 is the center frequency of
the transmitted waveforms. We demodulate the received signals and write
d_i^{WB}(t, s) = e^{i\omega_0 t} r_i(t, s),   (3)
= \int_{-\Omega}^{\Omega} e^{-i\omega'(t - 2R_i(\mathbf{x},s)/c)} \tilde{A}_i(\mathbf{x}, s, \omega') e^{i\frac{2\omega_0}{c} R_i(\mathbf{x},s)} V(\mathbf{x}) \, d\mathbf{x} \, d\omega'.   (4)
Next, we approximate R_i(x, s) in e^{i\frac{\omega_0}{c} R_i(\mathbf{x},s)} around s = s_0^i as follows:
R_i(\mathbf{x}, s) ≈ R_i(\mathbf{x}, s_0^i) + (s − s_0^i) \, ∂_s R_i(\mathbf{x}, s)|_{s=s_0^i} + \frac{(s − s_0^i)^2}{2} \, ∂_s^2 R_i(\mathbf{x}, s)|_{s=s_0^i}, \quad i = 1, 2   (5)
where ∂s denotes derivative with respect to s and si0 is the zero-Doppler time for the
ith antenna, i.e.,
∂_s R_i(\mathbf{x}, s)|_{s=s_0^i} = \widehat{(\mathbf{x} − γ_i(s_0^i))} · \dot{γ}_i(s_0^i) = 0.   (6)
In (6), \hat{\mathbf{x}} denotes the unit vector in the direction of x, and γ̇_i(s) denotes the velocity of
the i-th antenna.
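For a straight, constant-velocity trajectory γ_i(s) = γ_i(0) + s γ̇_i, condition (6) is linear in s and has a closed-form solution. The following Python sketch, with illustrative numbers that are not from the paper, verifies that the look-direction at s_0^i is orthogonal to the antenna velocity:

    import numpy as np

    def zero_doppler_time(x, gamma0, v):
        # gamma(s) = gamma0 + s*v; solve (x - gamma(s)) . v = 0 for s
        return float(np.dot(x - gamma0, v) / np.dot(v, v))

    # illustrative geometry (assumed, not from the paper)
    x = np.array([500.0, 200.0, 10.0])          # scatterer
    gamma0 = np.array([0.0, 0.0, 3000.0])       # antenna position at s = 0
    v = np.array([100.0, 0.0, 0.0])             # antenna velocity
    s0 = zero_doppler_time(x, gamma0, v)
    L = x - (gamma0 + s0 * v)
    L = L / np.linalg.norm(L)
    print(s0, np.dot(L, v))                     # second value is 0: look-direction is orthogonal to velocity
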
Doppler Synthetic Aperture Radar Interferometry
6
We define
L_i(\mathbf{x}, s) = \widehat{(\mathbf{x} − γ_i(s))}   (7)
and refer to L_i(x, s) as the look-direction of the i-th antenna. Note that at the zero-Doppler time, the antenna look-direction is orthogonal to the antenna velocity.
Let
A_i(\mathbf{x}, s, ω') = \tilde{A}_i(\mathbf{x}, s, ω') \, e^{i \frac{2\omega_0}{c} \frac{(s − s_0^i)^2}{2} ∂_s^2 R_i(\mathbf{x},s)|_{s=s_0^i}}.   (8)
Finally, we write the demodulated received signal as follows:
d_i^{WB}(t, s) ≈ \int_{-\Omega}^{\Omega} e^{-i\omega'(t - 2R_i(\mathbf{x},s)/c)} A_i(\mathbf{x}, s, ω') \, e^{i\frac{2\omega_0}{c} R_i(\mathbf{x}, s_0^i)} V(\mathbf{x}) \, d\mathbf{x} \, d\omega'.   (9)
3.2. Wideband SAR image formation and layover
Many different algorithms were developed to form wideband SAR images such as
range-Doppler [25], seismic migration [32], backprojection [31] and chirp scaling [33]
algorithms. All of these algorithms take advantage of high range resolution provided by
wideband transmitted waveforms and pulse-to-pulse Doppler information provided by
the movement of antennas. The location of a scatterer is identified by intersecting the
iso-range and iso-Doppler surfaces and the ground topography as shown in Fig. 2.
Range sphere
Doppler cone
Velocity vector
Sensor position
x
Scatterer position
Figure 2: The SAR image of a scatterer is reconstructed at the intersection of the
iso-range (sphere) and iso-Doppler (cone) surfaces and the height of the scatterer.
More precisely, the image of a scatterer is formed at z satisfying the following
equations:
|z − γi (s)| = Ri (x, s)
\
Iso-Doppler surface: (z −
γi (s)) · γ̇i (s) = ∂s Ri (x, s)
(10)
Height:
(12)
Iso-range surface:
z3 = h(x), z = [z, z3 ].
(11)
Doppler Synthetic Aperture Radar Interferometry
7
Note that Ri (x, s) and ∂s Ri (x, s) are the measured range and Doppler and h(x) is
the height of the scatterer. As functions of z, (10) and (11) define the iso-range and
iso-Doppler surfaces, respectively.
Iso-range contours are defined as the intersection of the iso-range surface, i.e.,
sphere, and the ground topography. Without loss of generality, we consider a filtered
backprojection (FBP) type method where the received and demodulated signals are
backprojected onto iso-range contours defined on a reference surface [31], [29]. In the
absence of heigh information, demodulated signal is backprojected onto the intersection
of the iso-range surface and a known reference surface. Without loss of generality, we
assume a flat reference surface at zero height and backproject the demodulated signals
onto the following iso-range contours:
HiRange (z0 ) = z0 ∈ R3 z0 = [z, 0] and |z0 − γi (s)| = Ri (x, s) . (13)
Let KiW B be an FBP operator. Then, the reconstructed image of the scatterer at x
becomes
W B ˜W B
i
IiW B (zi0 ) := K
Z i [di ](z0 ),
0
i
B
B
=
eiω (t−2Ri (z0 ,s)/c) QW
(z0i , ω 0 , s)dW
(t, s)dω 0 dtds,
(14)
i
i
B
where QW
is a filter that can be chosen with respect to a variety of criteria [31], [34].
i
From (9), the image of the scatterer at x becomes
IiW B (zi0 ) = |IiW B (zi0 )|ei2
ω0
Ri (x,si0 )
c
.
(15)
The magnitude of reconstructed images is a measure of target reflectivity, whereas the
phase of the reconstructed image depends on the true location, x = [x, h(x)] of the
scatterer. However, since the true height h(x) of the scatter is unknown and hence
different than that of the reference surface, the location, z0i , at which the scatterer
is reconstructed is different than its true location, x. This positioning error due to
incorrect height information is known as layover. Fig. 3 depicts the layover effect.
We see that without the knowledge of ground topography, additional information
or measurements are needed to reconstruct the scatterers at correct locations. This
additional information is provided by a second antenna that has a different vantage
point than the first one.
3.3. Wideband SAR interferometric height reconstruction
An interferogram is formed by multiplying one of the SAR images with the complex
conjugate of the other SAR image [9, 10]. Prior to multiplying the SAR images, the
two intensity images, |IiW B (zi0 )|, i = 1, 2 are co-registered so that pixel locations z01 and
z02 , each corresponding to the scatterer at position x in the scene, are roughly aligned‡.
Multiplying I1W B (z10 ) with the complex conjugate of I2W B (z20 ), we get
I1W B (z10 )I2W B (z20 ) = |I1W B (z10 )||I2W B (z20 )|ei2
ω0
(R1 (x,s10 )−R2 (x,s20 ))
c
.
(16)
‡ The positioning errors due to layover are different in the two SAR images due to different imaging
geometries.
Doppler Synthetic Aperture Radar Interferometry
8
Figure 3: Layover in wideband SAR - The range sphere depicts the iso-range surface of
the monostatic SAR configuration. Since the correct height of the scatterer at location
x is unknown, the image of the scatterer at x is formed at z0 on a flat surface.
We refer to the phase of the interferogram as the wideband interferometric phase
ω0
B
(R1 (x, s10 ) − R2 (x, s20 ))
ΦW
(17)
s0 (x) = 2
c
B
provides us the
where s0 is a multi-index for {s10 , s20 }. The interferometric phase ΦW
s0
third measurement needed to determine the location of a scatterer in R3 . In general
the range difference can be many multiples of 2π. Unique phase proportional to range
difference can be determined by a phase unwrapping process [1].
Now consider the following surface
c WB
|z − γ1 (s10 )| − |z − γ2 (s20 )| =
Φ (x)
(18)
2ω0 s0
B
where ΦW
(18) defines a two-sheet
s0 (x) is the measured interferometric phase.
2
1
hyperboloid with foci at γ1 (s0 ) and γ2 (s0 ). We assume that the distance between the
antennas is much smaller than the ranges of the antennas to the scene and approximate
this hyperboloid as follows:
c WB
L1 (z, s10 ) · b ≈
(19)
Φ (x)
2ω0 s0
where
b = γ2 (s20 ) − γ1 (s10 )
(20)
is the baseline vector. (19) defines a cone whose vertex is the first antenna and the axis
of rotation is the baseline vector. We call this surface the interferometric phase cone.
The interferometric phase cone provides the third equation needed to locate the position
of a scatterer in R3 . More precisely, the location of the scatterer is given by the solution
of the following equations:
Range sphere:
|z − γ1 (s10 )| = R1 (x, s10 )
\
(z −
γ1 (s10 )) · γ̇1 (s10 )) = ∂s R1 (x, s)|s=s1
0
c WB
1
Interferometric phase cone: L1 (z, s0 ) · b =
Φ (x).
2ω0 s0
Doppler cone:
(21)
(22)
(23)
Doppler Synthetic Aperture Radar Interferometry
9
The right-hand-side of (21)-(23) are measured quantities defined in terms of the true
location, x, of the scatterer in the scene and the left hand-side-defines the three surfaces
in terms of the location of the scatterer z in the image. Fig. 4 geometrically illustrates
the solution of these three equations in wideband SAR interferometry. Typically the
Figure 4: Wideband SAR interferometry provides a third algebraic equation by which
the unknown location of a scatters in R3 is determined. The scatterer is located at the
intersection of the Doppler-cone, iso-range sphere, and the interferometric phase cone.
The axis of rotation of the Doppler-cone is the velocity of the first antenna and the axis
of rotation of the interferometric cone is the baseline vector extending from the first to
the second antenna.
variation in the color coding of interferogram is “flattened” by subtracting the expected
phase from a surface of constant elevation. Let x = l + z0 . Then, under the assumption
that |l| |z0 − γ1 (s)|
\
(x −
γ1 (s)) ≈ (z0 \
− γ1 (s)) +
l⊥
1
|z0 − γ1 (s)|
(24)
where
"
l⊥
1
#
\
l
·
(z
−
γ
(s))
0
1
= l − (z0 \
− γ1 (s))
.
|z0 − γ1 (s)|
(25)
\
In other words, the vector l⊥
1 is the component of l perpendicular to (z0 − γ1 (s)). The
flattened phase then becomes
ω0
B
ΦW
L1 (x, s10 ) − L1 (z0 , s10 ) · b
(26)
f lat (x) = 2
c
ω0 l⊥
1 ·b
≈2
.
(27)
c R(z0 , s10 )
⊥
⊥
1
Since b⊥
1 · l = l1 · b where b1 is the component of b perpendicular to L1 (z, s0 ), (27) can
be alternatively expressed as
B
ΦW
f lat (x)
ω0 b⊥
1 ·l
=2
.
c R(z0 , s10 )
(28)
Doppler Synthetic Aperture Radar Interferometry
10
Fig. 5 illustrates the key concepts and vectors involved in the wideband
interferometry.
Figure 5: A two-dimensional illustration of vectors involved in wideband interferometric
phase. l = x − z0 where z0 = [z0 , 0], γi (si0 ) denotes the location of the ith antenna
at the zero-Doppler time si0 , L1 (z0 , s10 ) denotes the look-direction of the first antenna
with respect to the reference scatterer located at z0 , and l⊥
1 is the component of l
1
perpendicular to L1 (z0 , s0 ). The wideband interferometric phase is related to the
projection of the baseline vector, b, onto the the look-direction, L1 (x, s10 ), of the antenna
with respect to scatterer location x. Known vectors are shown in red and unknown
vectors are shown in black.
4. Data Model and Image Formation for Doppler-SAR
4.1. Data Model for Doppler-SAR
We consider two mono-static antennas following the trajectories γi (t), i = 1, 2,
transmitting ultra-narrowband CWs as shown in Fig. 1. Let p(t) ≈ p̃(t)eiω0 be the
transmitted waveform where ω0 is the center frequency. The scattered field model at
the ith antenna is then given by
Z −iω0 (t−2|x−γi (t)|/c)
ω02
e
ri (t) =
p̃(t − 2|x − γi (t)|/c)V (x)dx.
(29)
(4π)2
|x − γi (t)|2
Let µ ∈ R+ and φ(t) be a smooth windowing function with a finite support, t ∈ [0, Tφ ].
Following [15, 14, 23], we correlate ri (t) with a scaled and translated version of the
transmitted signal over φ(t) as follows:
Z
UNB
di
(µ, s) = ri (t)eiω0 µ(t−sTφ )/c p̃∗ (µ(t − sTφ ))φ(t − sTφ )dt.
(30)
Doppler Synthetic Aperture Radar Interferometry
11
Inserting (29) into (30), we obtain
Z −iω0 (t−2|x−γi (t)|/c)
ω02
e
UNB
(µ, s) =
di
p̃(t − 2|x − γi (t)|/c)V (x)
2
(4π)
|x − γi (t)|2
× eiω0 µ(t−sTφ )/c p̃∗ (µ(t − sTφ ))φ(t − sTφ )dtdx.
(31)
Approximating γi (t) around t = sTφ , γi (t) ≈ γi (sTφ ) + γ̇i (sTφ )(t − sTφ ), and making
the far-field approximation, we write
|x − γi (t)| ≈ |x − γi (sTφ )| − Li (x, sTφ ) · γ̇i (sTφ )(t − sTφ ),
(32)
where Li (x, sTφ ) = (x −\
γi (sTφ )) and γ̇i (sTφ ) = ∂s γi (sTφ ) is the velocity of the ith
antenna.
To simplify our notation, for the rest of the paper, we set Li (x, sTφ ) = Li (x, s),
γi (sTφ ) = γi (s), γ̇i (sTφ ) = γ̇i (s), ∂s2 γi (sTφ ) = γ̈i (sTφ ) = γ̈i (s) and Ri (x, sTφ ) =
Ri (x, s). We next define Doppler for the ith antenna
ω0
(33)
fid (x, s) = − Li (x, s) · γ̇i (s).
c
Inserting (32) and (31) into (33), the data model becomes
Z
d
d
UNB
di
(µ, s) = e−it[ω0 (1−µ)−2fi (x,s)] Ãi (t, x, s, µ)ei2fi (x,s)sTφ V (x)dtdx, (34)
where Ãi (t, x, s, µ) is a slow varying function of t composed of the rest of the terms in
(31).
We now approximate fid (x, s) around s = sid as follows:
fid (x, s) ≈ fid (x, sid )+(s−sid ) ∂s fid (x, s)
s=sid
+
(s − sid )2 2 d
∂s fi (x, s)
2
s=sid
.(35)
We choose sid such that
∂fid (x, s)
∂s
=0
⇒
Li (x, sid ) · γ̈i (sid ) −
s=sid
γ̇i (sid ) · γ̇i⊥ (sid )
=0
Ri (xsid )
(36)
where γ̈i (sid ) is the acceleration of the ith antenna and γ̇i⊥ (sid ) is the component of γ̇i (sid )
perpendicular to the look-direction Li (x, sid ) as described in (25). We refer to sid as the
zero-Doppler-rate time for the ith antenna.
d
Using (35) in ei2fi (x,s)sTφ and redefining the slow-varying function in t,
i2sid Tφ
Ai (t, x, s, µ) = Ãi (t, x, s, µ)e
(s−sid )2 2 d
∂s fi (x,s)|s=si
2
d
,
(37)
we obtain the following data model for Doppler-SAR image reconstruction:
Z
d
d
i i
UNB
di
(µ, s) ≈ e−it[ω0 (1−µ)−2fi (x,s)] Ai (t, x, s, µ)ei2fi (x,sd )sd Tφ V (x)dtdx.(38)
Doppler Synthetic Aperture Radar Interferometry
12
4.2. Doppler-SAR Image Formation and Layover
Similar to the wideband case, we reconstruct images by backprojection as described in
[35, 15] [14]. The forward model in (38) shows that the data, dUi N B (s, µ), is the weighted
integral of the scene reflectivity over iso-Doppler contours. It was shown in [14] that
a scatterer located at x in the scene is reconstructed at the intersection of iso-Doppler
surface and iso-Doppler-rate surface and ground topography. More precisely, the image
of a scatterer located at x in the scene is reconstructed at z satisfying the following
equations:
c
(39)
Iso-Doppler surface:
Ldi (z, s) · γ̇i (s) = fid (x, s)
ω0
γ̇i (s) · γ̇i⊥ (s)
c
Iso-Doppler-rate surface: Li (z, s) · γ̈i (s) −
(40)
= ∂s fid (x, s)
Ri (z, s)
ω0
Height:
z3 = h(x), z = [z, z3 ] (41)
where the right-hand-side of (39)-(40) corresponds to measurements and the left-handside defines surfaces in image parameter z.
The iso-Doppler-rate surface, given by the following set,
γ̇i (s) · γ̇i⊥ (s)
c
Dop−rate
3
d
Hi
(z) = z ∈ R
Li (z, s) · γ̈i (s) −
= ∂s fi (x, s) . (42)
Ri (z, s)
ω0
can be viewed as a continuum of intersections of cones and expanding spheres centered
at the sensor location. The axis of rotation for the surface is the acceleration vector of
the antenna trajectory. Fig. 6 illustrates iso-Doppler and iso-Doppler-rate surfaces and
the reconstruction of a point scatterer by the intersection of these surfaces and ground
topography. The reconstruction is analogous to the wideband SAR image reconstruction
shown in Fig. 2.
In the absence of ground topography information, we backproject data onto isoDoppler contours on a reference surface. Without loss of generality, we consider the
following iso-Doppler contours:
c d
Dop
3
Hi (z0 ) = z0 ∈ R
z0 = [z, 0] and Li (z, s) · γ̇i (s) = fi (x, s)
(43)
ω0
where the right-hand-side of the equality in (43) is the high resolution measurement
provided by ultra-narrowband CW.
Let KiU N B be an FBP operator as described in [14]. Then, the reconstructed image
is given by:
UNB UNB
[di
](zi0 )
IiU N B (zi0 ) := K
Zi
≈
d
i
eit(ω0 (1−µ)−2fi (z0 ,s)) QUi N B (s, z0i , t)dUi N B (s, µ)dtdµds
(44)
where QUi N B is a filter that can be chosen as in [35, 15, 14]. The reconstructed image is
given by
d
i
i
IiU N B (zi0 ) = |IiU N B (zi0 )|ei2fi (x,sd )sd Tφ .
(45)
Doppler Synthetic Aperture Radar Interferometry
13
Figure 6: In Doppler-SAR image reconstruction, a scatterer located at x in the scene
is correctly reconstructed at the intersection of the iso-Doppler and iso-Doppler-rate
surfaces and the ground topography. Iso-Doppler surface is a cone in which its vertex
is the antenna location and its axis of rotation is the antenna velocity. The geometry
of the iso-Doppler-rate surface depends on the antenna trajectory. Figure is drawn for
a linear trajectory at a constant height.
In the absence of topography information, we see that a scatterer located at x in the
scene is reconstructed at zi0 6= x in the image. This position error in the reconstructed
image is the counterpart of the layover effect observed in conventional wideband SAR
images. Fig. 7 illustrates the layover effect in Doppler-SAR. However, the phase of the
reconstructed image is a function of the scatterer’s true location, x, and hence, includes
its height information, h(x).
Figure 7: If the height of a scatter is not known, it is reconstructed at an incorrect
position. Both the correct scatterer location x and its image z0 lie on the same isoDoppler surface, i.e., the Doppler cone. z0 lies at the intersection of the Doppler cone
defined f1d (x, s) and the flat topography.
Note that the phases of the reconstructed images depend on the Doppler-rate,
Doppler Synthetic Aperture Radar Interferometry
14
fid (x, sid ), the duration of the windowing function, Tφ , and the corresponding zeroDoppler-rate times, sid . The height information is included in the Doppler-rate.
However, since each imaging geometry may yield different zero-Doppler-times, Dopplerrate in the phase of each image is multiplied by a different zero-Doppler-rate time. To
equalize the effect of this multiplication factor, we multiply one of the reconstructed
images with itself so that the Doppler-rate in the phase of both images are multiplied
by the same factor, say s1d . As a result, each image becomes
d
i
1
IiU N B (zi0 ) = |IiU N B (zi0 )|ei2fi (x,sd )sd Tφ ,
i = 1, 2.
(46)
5. Doppler-SAR Interferometric Height Reconstruction
Similar to the wideband case, we form two Doppler-SAR images, IiU N B (zi0 ), i = 1, 2,
co-register the intensity images |IiW B (zi0 )| and multiply one of them by the complex
conjugate of the other to form an interferogram. Then the interferometric phase, i.e.,
the phase function of I1U N B (x)I2U N B (x) is given by
ΦUsdN B (x) = 2s1d Tφ f1d (x, s1d ) − f2d (x, s2d )
(47)
where sd denotes multi-index for {s1d , s2d }. Thus the scatterer lies on the following surface:
c
L1 (z, s1d ) · γ̇1 (s1d ) − L2 (z, s2d ) · γ̇2 (s2d ) = −
f1d (x, s1d ) − f2d (x, s2d ) (48)
2ω0
where the right-hand-side is the measured interferometric phase. The left-hand-side of
(48) defines a surface that can be described as the intersections of two cones one of
which has a continuously changing solid angle.
Assuming that the distance between the antennas is much smaller than the ranges of
the antennas to the scene, we can approximate the look-direction of the second antenna
in terms of the look-direction of the first one as follows:
b⊥
1
L2 (x, s2d ) = L1 (x, s1d ) +
(49)
R1 (x, s1d )
where b = γ2 (s2d ) − γ1 (s1d ) is the baseline vector and b⊥
1 is the component of b
perpendicular to the look-direction of the first antenna. Using (49), we approximate
the interferometric phase as follows:
2
c
b⊥
1 · γ̇2 (sd )
UNB
1
− 1
Φ
(x) ≈ L1 (x, sd ) · v +
2sd Tφ ω0 sd
R1 (x, s1d )
(50)
v = γ̇2 (s2d ) − γ̇1 (s1d ).
(51)
where
We refer to v as the baseline velocity. We see that (50) approximates the interferometric
phase as a Doppler-rate. Additionally, (50) shows that Doppler-SAR interferometry
involves not only configuring antennas in position space, but also in velocity space.
The larger the difference in antenna velocities in the look-direction of the first antenna,
the larger the interferometric phase becomes. If on the other hand, the velocities of the
antennas are the same, the second term in (50) defines the interferometric phase surface.
Doppler Synthetic Aperture Radar Interferometry
15
Clearly, in Doppler-SAR interferometry (50) provides the third equation needed to
determine the location of a scatterer in R3 . More precisely, the location of a scatterer
is given by the solution of the following three equations:
c
(52)
(z −\
γ1 (s1d )) · γ̇1 (s1d ) = f1d (x, s1d )
ω0
γ̇1 (s1d ) · γ̇1⊥ (s1d Tφ )
1
1
\
Iso-Doppler-rate: (z − γ1 (sd )) · γ̈1 (sd ) −
= ∂s f1d (x, s1d )
(53)
1
R1 (sd , z)
c
b⊥ · γ̇2 (s2d )
=− 1
ΦU N B (x). (54)
Interferometric Doppler-rate: L1 (z, s1d ) · v + 1
1
R1 (z, sd )
2sd Tφ ω0 sd
Iso-Doppler:
Fig. (8) depicts the intersection of the three surfaces at the scatterer location in R3 .
Doppler-rate surface
Interferometric
phase
Doppler-rate surface
Doppler cone
Velocity vector
Sensor position
Scatterer position
Figure 8: Determination of the scatterer location in Doppler-SAR interferometry. The
scatterer is located at the intersection of the Doppler cone and the two iso-Dopplerrate surfaces. Interferometric phase measurement provides the third surface, i.e., the
interferometric phase iso-Doppler-rate surface.
Similar to the wideband SAR interferometry, the interferometric phase can be
“flattened” by subtracting the phase due to a scatterer with known height. Without loss
of generality, let z0 = [z, 0] with R1 (z0 , s) = R1 (x, s) and x = z0 + l. Thus, identifying
the location of a scatterer is equivalent to determining l.
Using (24), we see that
l⊥
1
1 ·v
UNB
UNB
UNB
Φf lat (x) = Φsd (x) − Φsd (z0 ) ≈
+O
(55)
R1 (z0 , s1d )
R12 (z0 , s1d )
1
where l⊥
1 is the component of l perpendicular to L1 (z0 , sd ). (55) shows that the flattened
interferometric phase for Doppler-SAR interferometry is related to the projection of the
unknown l⊥
1 onto the baseline velocity vector scaled by the range of the first antenna to
⊥
z0 . Since l1 · v = v1⊥ · l where v1⊥ is the component of v perpendicular to L1 (z0 , s1d ), we
alternative express (55) as follows:
v1⊥ · l
NB
ΦUf lat
(x) ≈
.
(56)
R1 (z0 , s1d )
Fig. 9 shows the key concepts and vectors involved in Doppler-SAR interferometry.
Doppler Synthetic Aperture Radar Interferometry
16
Figure 9: An illustration of key concepts and vectors in Doppler-SAR interferometry.
l = x − z0 where z0 = [z0 , 0], γi (s) denotes the ith antenna position, L1 (z0 , s1d )
denotes the look-direction with respect to a reference surface, l⊥
1 is the component of
1
l perpendicular to L1 (z0 , sd ), γ̇1 (s) denotes the antenna velocity, L1 (x, s1d ) denotes the
look-direction of the antenna with respect to the correct target location. Doppler-SAR
interferometric phase is proportional to the projection of the baseline velocity vector
onto l⊥
1 . Known vectors are shown in red and unknown vectors are shown in black.
5.1. Comparison of Doppler-SAR Interferometry wide Wideband Case
Table II tabulates the interferometric phase for the wideband SAR and Doppler-SAR
cases. We compare and contrast the two interferometric phases below:
• For WB and UNB, the “baseline” is the difference in range and difference in velocity,
respectively.
• The larger the ω0 , the center frequency, the larger the interferometric phase in both
WB and UNB cases.
• The larger the range, R1 , the smaller the interferometric phase in both WB and
UNB cases.
• For UNB, larger the Tφ , the larger the interferometric phase.
• For WB, the larger the b, the difference between the positions of the two antennas,
the larger the interferometric phase. For UNB, the larger the v, the difference
between the velocities of the two antennas, the larger the interferometric phase.
Doppler Synthetic Aperture Radar Interferometry
17
Table 2: Raw and flattened interferometric phase functions for wideband SAR and
Doppler-SAR.
Wideband SAR
Doppler-SAR
Interferometric Phase
2 ωc0 L1 (x, s10 ) · b
h
−2 ωc0 sd Tφ L1 (x, s1d ) · v +
Flattened Interferometric Phase
2 ωc0 R1 (z10 ,s1 ) b⊥
1 ·l
0
i
1
⊥
2
b
·
γ̇
(s
)
−2 ωc0 sd Tφ R1 (z10 ,s1 ) v1⊥ · l
1
2
1
d
R1 (x,s )
d
d
6. Numerical Experiments
6.1. Experimental Setup
We conducted numerical experiments for both wideband and Doppler-SAR. Our
experimental setup was as follows:
• A scene of size 128 × 128m at 1m resolution was imaged.
• A single point target was placed at (−20, −31, 50)m with the origin (0, 0, 0) at the
scene center.
• Two antennas flying on a linear trajectory parallel to the y-axis was used with
both antennas placed at 7.1km from the scene center in the x-axis direction. The
midpoint of the linear trajectories for both antennas was aligned at y = 0.
• Wideband: First antenna was placed at height of 3km and the second at 4km.
The length of the trajectories were 1km in length for both antennas. Both antennas
were moving at velocity of 100m/s. A waveform with flat spectrum of 100M Hz
bandwidth at center frequency of 8GHz was transmitted from both antennas. 512
frequency samples and 1024 slow-time, s, samples were used for imaging.
• Doppler: First antenna was placed at height of 2km and the second at 4km. The
length of the trajectories were 1km for both antennas. The first antenna was moving
at velocity of 100m/s and the second at 400m/s. A continuous waveform at center
frequency of 8GHz was transmitted from both antennas. A window of 0.01s was
used for processing at each slow time. 512 fast time, t, samples and 1024 slow-time,
s, samples were used for imaging.
6.2. Wideband SAR Interferometry
Fig. 10a and Fig. 10b show the reconstructed images of the point target located at
(−20, −31, 50)m from the first and the second antenna, respectively assuming a flat
ground topography at height of 0m. In both Fig. 10a and Fig. 10b, we see that there is
a displacement due to layover effect in the range direction (x-axis). The first antenna
reconstructs the target at (−41, −31, 0)m. The second antenna reconstructs the target
at (−48, −31, 0)m.
Doppler Synthetic Aperture Radar Interferometry
(a)
18
(b)
Figure 10: (a) Wideband reconstruction of the target located at (−20, −31, 50)m using
the first antenna assuming flat ground topography. The target is reconstructed at
(−41, −31, 0)m. (b) Wideband reconstruction of the target located at (−20, −31, 50)m
using the second antenna assuming flat ground topography. The target is reconstructed
at (−48, −31, 0)m.
We next align the peaks in the two images and multiply the first image with the
complex conjugate of the second as in (16) to generate the interferogram. The resulting
interferogram is shown in Fig. 11.
Figure 11: The interferogram from wideband SAR reconstructed images.
In order to reconstruct the height we use the set of equations (21), (22), and (23).
The Doppler cone equation (22) at zero-Doppler point s10 gives us that the iso-Doppler
contours are in the look-direction, which in our scenario is parallel to the x-axis. Thus,
iso-Doppler contours have constant y value at the target’s y position. Using this fact,
Doppler Synthetic Aperture Radar Interferometry
19
we need only to compute the intersection of iso-range contour (21) and interferometric
phase contour (23) fixing the y position. From Figs. 10a and 10b we see that both
targets are reconstructed at y position of −31m. Thus we reconstruct the true target
position using y = −31m. For reconstruction, we sampled the height in the interval
[1, 100]m at 0.5m resolution.
Fig. 12a shows the magnitude image of |z − γ1 (s10 )| − R1 (x, s10 ) at y = −31m. Note
that R1 (x, s10 ) is the measured value derived from the phase of the reconstructed image.
The dark blue area indicates the iso-range contour where the magnitude of the difference
is minimized.
(a)
(b)
Figure 12: (a) Image of the magnitude of |z − γ1 (s10 )| − R1 (x, s10 ) at y = −31m. The
iso-range contour is indicated by dark blue area where the magnitude of |z − γ1 (s10 )| −
B
R1 (x, s10 ) is minimized. (b) Image of the magnitude of L1 (z, s10 ) · b − 2ωc 0 ΦW
s0 (x) at
y = −31m. The interferometric phase contour is indicated by dark blue area where the
B
magnitude of L1 (z, s10 ) · b − 2ωc 0 ΦW
s0 (x) is minimized.
Similarly, Fig. 12b shows the magnitude image of the difference L1 (z, s10 ) · b −
c
ΦW B (x). As before, the dark blue area indicates the interferometric phase contour.
2ω0 s0
Combining the two images, Fig. 13 shows the intersection of the two contours
indicated by the dark blue area. The white ‘x’ in Fig. 13 indicates the exact intersection
computed and where the target is reconstructed. The white ‘o’ indicates the true target
position. It is clear that the target is reconstructed at the correct position and height.
6.3. Doppler-SAR
We proceed similar as in the wideband case for the Doppler-SAR case. Figs. 14a and 14b
show the reconstructed image for Doppler-SAR for the first and second antennas,
respectively. The first antenna reconstructs the target at (−34, −31, 0)m and the second
antenna at (−48, −31, 0)m.
Doppler Synthetic Aperture Radar Interferometry
20
Figure 13: Image of the intersection of the iso-range contour with the interfermetric
phase contour at y = −31m. The exact intersection is indicated by white ‘x’. The
true target position is indicated by white ‘o’. The target is reconstructed at the correct
position and height.
(a)
(b)
Figure 14: (a) Doppler-SAR reconstruction of the target located at (−20, −31, 50)m
using the first antenna assuming flat ground topography. The target is reconstructed
at (−34, −31, 0)m.
(b) Doppler-SAR reconstruction of the target located at
(−20, −31, 50)m using the second antenna assuming flat ground topography. The target
is reconstructed at (−48, −31, 0)m.
Doppler Synthetic Aperture Radar Interferometry
21
As in the wideband case, we align the peaks of the two images and multiply the first
image with the conjugate of the second image to form the interferogram of the Doppler
images. The resulting interferogram is shown in Fig. 15.
Figure 15: The interferogram from Doppler-SAR reconstructed images.
To reconstruct the height we use the set of equations given in (52), (53),
and (54). The zero-Doppler-rate points, s1d , is approximated by the end of the
antenna’s trajectories farthest from the target position. By (36), for a linear trajectory
with constant velocity, true zero-Doppler-rate point would be where γ̇i (s1d ) ⊥ γ̇i⊥ (s1d ).
Namely, where the look-direction is parallel to the velocity vector. The best estimate
would be at a point in the trajectory farthest away from the target location.
Fig. 16a illustrates the iso-Doppler surface at y = −31m, which is the y-position
where the target position is reconstructed and the true target’s y position. Notice that
both images reconstruct the scatterer at the correct y position. The iso-Doppler contour
is given by the dark blue area as before.
Similarly, Figs. 16b and 16c illustrate the iso-Doppler-rate and interferometric
Doppler-rate surfaces, respectively at y = −31m.
Fig. 17 combines Figs. 16a, 16b, and 16c. The intersection of the three contours is
indicated by white ‘x’. The white ‘o’ shows the true target location. Clearly, the target
is reconstructed at the correct position and height.
7. Conclusions
We present a novel radar interferometry based on Doppler-SAR imaging paradigm.
Doppler-SAR uses single frequency transmitted waveforms. It has several advantages
over conventional SAR including simpler, inexpensive hardware, high SNR and long
effective range of operation, and is suitable for use in passive radar applications.
We derived the interferometric phase relationship for Doppler-SAR. Doppler-SAR
interferometric phase depends on the difference in the velocity of the antennas as opposed
Doppler Synthetic Aperture Radar Interferometry
22
(a)
(b)
(c)
Figure 16: (a) Image of the magnitude of (z −\
γ1 (s1d )) · γ̇1 (s1d ) − ωc0 f1d (x, s1d ) at
y = −31m. The iso-Doppler contour is indicated by dark blue area where the
magnitude of (z −\
γ1 (s1d )) · γ̇1 (s1d ) = ωc0 f1d (x, s1d ) is minimized. (b) Image of the
γ̇ (s1 )·γ̇ ⊥ (s1 T )
magnitude of (z −\
γ1 (s1 )) · γ̈1 (s1 ) − 1 d 11 d φ − ∂s f d (x, s1 ) at y = −31m. The
d
d
R1 (sd ,z)
1
d
iso-Doppler-rate contour is indicated by dark blue area where the magnitude of
γ̇ (s1 )·γ̇ ⊥ (s1 T )
(z −\
γ1 (s1d ))·γ̈1 (s1d )− 1 Rd 1 (s11 ,z)d φ −∂s f1d (x, s1d ) is minimized. (c) Image of the magnitude
b⊥ ·γ̇2 (s2d )
L1 (z, s1d ) · v + R11 (z,s
1)
d
d
of
+ 2s1 Tcφ ω0 ΦUsdN B (x) at y = −31m. The interferometric Dopplerd
rate contour is indicated by dark blue area where the magnitude of L1 (z, s1d ) · v +
2
b⊥
1 ·γ̇2 (sd )
+ 2s1 Tcφ ω0 ΦUsdN B (x) is minimized.
R1 (z,s1 )
d
d
Doppler Synthetic Aperture Radar Interferometry
23
Figure 17: Image of the intersection of the iso-Doppler, iso-Doppler-rate and
interferometric Doppler-rate contours at y = −31m. The intersection is indicated by
white ‘x’. The true target position is indicated by white ‘o’. The target is reconstructed
at the correct position and height.
to the range difference observed in wideband SAR. Thus, in Doppler-SAR interferometry,
one can reconstruct the ground topography even with the same look-direction from both
antennas so long as their velocities are different). Furthermore, we showed that the true
target position is determined by the intersection of iso-Doppler, iso-Doppler-rate, and
interferometric Doppler-rate surfaces. This is different from conventional wideband SAR
in that the surfaces that determine the true target position are iso-range, iso-Doppler,
and interferometric Doppler-rate surfaces.
We presented numerical simulations for a single point scatterer using two antennas
moving in linear trajectories to verify our interferometric method. We also conduct
conventional wideband SAR interferometric reconstruction as a comparison. We
show that both wideband SAR and Doppler-SAR interferometry is able to accurately
reconstruct the target location. Thus, our numerical simulations show that DopplerSAR interferometry retains the accuracy of conventional SAR interferometry while
having the advantage that Doppler-SAR affords.
In the future, we will analyze the sensitivity of height estimation with respect to
other observables and parameters.
Acknowledgement
This material is based upon work supported by the Air Force Office of Scientific
Research (AFOSR) under award number FA9550-16-1-0234, and by the National Science
Foundation (NSF) under Grant No. CCF-1421496.
Doppler Synthetic Aperture Radar Interferometry
24
Appendix A. Approximations
Appendix A.1. Far-field approximation
Let x and y be two vectors such that |x| |y|. Then, by using Taylor series expansion
we can make the following approximation:
p
|x − y| = (x1 − y1 )2 + (x2 − y2 )2 + (x3 − y3 )2 ,
(A.1)
p
= |x|2 − 2(x1 y1 + x2 y2 + x3 y3 ) + |y|2 ,
(A.2)
s
2(x · y) |y|2
= |x| 1 −
+ 2,
(A.3)
|x|2
|x|
1 2x · y
,
(A.4)
≈ |x| 1 −
2 |x|
≈ |x| − x̂ · y
(A.5)
where x̂ is the unit vector x̂ =
x
.
|x|
Appendix A.2. Approximation of look-direction under far-field assumption
Let (x \
− γ(s)) denote a look direction where x = y + z and |y − γ(s)| |z|. Then by
using far field expansion we can write
x − γ(s)
|x − γ(s)|
y + z − γ(s)
=
,
|y + z − γ(s)|
z
y − γ(s)
+
,
≈
|y − γ(s)| + (y \
− γ(s)) · z |y − γ(s)| + (y \
− γ(s)) · z
y − γ(s)
≈ (|y − γ(s)| − (y \
− γ(s)) · z)
|y − γ(s)|2
(|y − γ(s)| − (y \
− γ(s)) · z)z
+
,
|y − γ(s)|2
i
h
\
\
z − (y − γ(s)) (y − γ(s)) · z
≈ (y \
− γ(s)) +
,
|y − γ(s)|
z⊥
≈ (y \
− γ(s)) +
|y − γ(s)|
(x \
− γ(s)) =
(A.6)
(A.7)
(A.8)
(A.9)
(A.10)
(A.11)
where z⊥ is the transverse z, i.e. projection of z onto the plane whose normal vector is
\
along the look direction (γ(s)
− y). Therefore, difference of look directions is given by:
(x \
− γ(s)) − (y \
− γ(s)) ≈
where x = y + z.
z⊥
|γ(s) − y|
(A.12)
Doppler Synthetic Aperture Radar Interferometry
25
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
Bamler R and Hartl P 1998 Inverse problems 14 R1
Rogers A and Ingalls R 1969 Science 165 797–799
Rogers A, Ingalls R and Rainville L 1972 The Astronomical Journal 77 100
Graham L C 1974 Proceedings of the IEEE 62 763–768
Zebker H A and Goldstein R M 1986 Journal of Geophysical Research: Solid Earth 91 4993–4999
Goldstein R M and Zebker H A 1987 Nature 328 707–709
Gabriel A K and Goldstein R M 1988 International Journal of Remote Sensing 9 857–872
Gabriel A K, Goldstein R M and Zebker H A 1989 Journal of Geophysical Research: Solid Earth
94 9183–9191
Hanssen R F 2001 Radar interferometry: data interpretation and error analysis vol 2 (Springer)
Rosen P A, Hensley S, Joughin I R, Li F K, Madsen S N, Rodriguez E and Goldstein R M 2000
Proceedings of the IEEE 88 333–382
Cherniakov M and Moccia A 2008 Bistatic radar: emerging technology (John Wiley & Sons) ISBN
9780470026311 URL http://books.google.com/books?id=a6nMEY2bKp4C
Fritz T, Rossi C, Yague-Martinez N, Rodriguez-Gonzalez F, Lachaise M and Breit H 2011
Interferometric processing of tandem-x data IEEE International Geoscience and Remote Sensing
Symposium (IGARSS) (IEEE) pp 2428–2431
Duque S, Lopez-Dekker P and Mallorqui J 2010 Geoscience and Remote Sensing, IEEE
Transactions on 48 2740–2749
Wang L and Yazici B 2012 IEEE Trans. Image Process. 21 3673–3686
Wang L and Yazici B 2013 Geoscience and Remote Sensing, IEEE Transactions on 51 4893–4910
ISSN 0196-2892
Wang L and Yazici B 2014 SIAM Journal on Imaging Sciences 7 824–866
Wang L and Yazici B 2012 Synthetic aperture radar imaging of moving targets using ultranarrowband continuous waveforms 9th European Conf. Synthetic Aperture Radar (Nuremberg,
Germany) pp 324–327
Wang L and Yazici B 2012 Detection and imaging of multiple ground moving targets using ultranarrowband continuous-wave SAR SPIE Defense, Security, and Sensing (Baltimore, MD) pp
83940H–83940H
Wang L and Yazici B 2011 Bistatic synthetic aperture radar imaging using ultranarrow-band
continuous waveforms IEEE Radar Conf. (Kansas City, MO) pp 062–067 ISSN 1097-5659
Wang L and Yazici B 2011 Ultranarrow-band synthetic aperture radar imaging for arbitrary flight
trajectories 17th Int. Conf. Digital Signal Process. (Corfu, Greece) pp 1–6
Yarman C E, Wang L and Yazici B 2010 Inverse Problems 26 065006
Wang L, Yarman C E and Yazici B 2011 IEEE Transactions on Geoscience and Remote Sensing
49 3521–3537
Borden B and Cheney M 2004 Inverse Problems 21 1
Wang L, Yarman C E and Yazici B 2013 Theory of passive synthetic aperture imaging Excursions
in Harmonic Analysis, Volume 1 (Springer) pp 211–236
Zebker H and Rosen P 1994 On the derivation of coseismic displacement fields using differential
radar interferometry: The landers earthquake Geoscience and Remote Sensing Symposium,
1994. IGARSS’94. Surface and Atmospheric Remote Sensing: Technologies, Data Analysis and
Interpretation., International vol 1 (IEEE) pp 286–288
Madsen S N, Zebker H A and Martin J 1993 IEEE Transactions on Geoscience and Remote sensing
31 246–256
Prati C, Rocca F, Guarnieri A M and Damonti E 1990 IEEE Transactions on Geoscience and
Remote Sensing 28 627–640
Rodriguez E and Martin J 1992 Theory and design of interferometric synthetic aperture radars
IEE Proceedings F-Radar and Signal Processing vol 139 (IET) pp 147–159
Doppler Synthetic Aperture Radar Interferometry
[29]
[30]
[31]
[32]
[33]
26
Nolan C and Cheney M 2003 IEEE Transactions on Image Processing 12 1035–1043
Yarman C and Yazıcı B 2008 IEEE Transactions on Image Processing 17 2156–2173
Yarman C, Yazıcı B and Cheney M 2008 IEEE Transactions on Image Processing 17 84–93
Prati C and Rocca F 1990 International Journal of Remote Sensing 11 2215–2235
Raney R, Runge H, Bamler R, Cumming I and Wong F 1994 Geoscience and Remote Sensing,
IEEE Transactions on 32 786–799
[34] Yazici B, Cheney M and Evren Y C 2006 Synthetic aperture inversion in the presence of noise and
clutter Inverse Problems vol 22 (IOP Publishing) pp 1705–1729
[35] Wang L and Yazici B 2011 Doppler synthetic aperture radar imaging Society of Photo-Optical
Instrumentation Engineers (SPIE) Conference Series vol 8051 p 12
| 5 |
Séminaire BOURBAKI
69ème année, 2016-2017, no 1125
Janvier 2017
ISOMORPHISMES DE GRAPHES EN TEMPS QUASI-POLYNOMIAL
[d’après Babai et Luks, Weisfeiler-Leman, . . .]
arXiv:1701.04372v2 [math.GR] 12 Oct 2017
par Harald Andrés HELFGOTT
Résumé : Soient donnés deux graphes Γ1 , Γ2 à n sommets. Sont-ils isomorphes ? S’ils le
sont, l’ensemble des isomorphismes de Γ1 à Γ2 peut être identifié avec une classe H · π
du groupe symétrique sur n éléments. Comment trouver π et des générateurs de H ?
Le défi de donner un algorithme toujours efficace en réponse à ces questions est
resté longtemps ouvert. Babai a récemment montré comment résoudre ces questions
– et d’autres qui
y sont liées – en temps quasi-polynomial, c’est-à-dire en temps
O(1)
exp O(log n)
. Sa stratégie est basée en partie sur l’algorithme de Luks (1980/82),
qui a résolu le cas de graphes de degré borné.
1. INTRODUCTION
Soient x, y deux chaı̂nes de caractères, à savoir, deux applications Ω → Σ, où Σ
(l’alphabet) et Ω (le domaine) sont des ensembles finis. Tout groupe de permutations (1)
G < Sym(Ω) agit sur l’ensemble ΣΩ des chaı̂nes de domaine Ω sur un alphabet Σ. Pour
nous, décrire un groupe G, ou être donné un groupe G, voudra toujours dire « donner,
voire être donné, un ensemble de générateurs de G » ; décrire une classe Hπ voudra
dire « donner un élément π de la classe et un ensemble de générateurs de H ».
Le problème de l’isomorphisme de chaı̂nes consiste à déterminer, étant donnés x, y et
G, s’il y a au moins un élément π de G qui envoie x sur y, et, si de tels éléments (isomorphismes) existent, à les décrire. Il est clair que l’ensemble des isomorphismes IsoG (x, y)
forme une classe AutG (x)π du groupe AutG (x) d’automorphismes de x dans G, c’està-dire du groupe consistant dans les éléments de G qui envoient x sur lui-même.
Le défi consiste à donner un algorithme qui résolve le problème en temps polynomial
en la taille n = |Ω| de Ω, voire en temps raisonnable. Par exemple, le temps
employé
pourrait être quasi-polynomial en n, ce qui veut dire exp O(log n)O(1) . Ici, comme
toujours, O(f (n)) désigne une quantité bornée par C · f (n), pour n assez grand et
C > 0 une constante, et Oǫ indique que la constante C dépend de ǫ.
Une grande partie de la motivation pour le problème de l’isomorphisme de chaı̂nes
vient du fait que le problème de l’isomorphisme de graphes se réduit à lui. Ce problème
1. Pour nous, G < S (ou S > G) veut dire « G est un sous-groupe de S, pas forcement propre. »
1125–02
consiste à déterminer si deux graphes finis Γ1 et Γ2 sont isomorphes, et, s’ils le sont,
à décrire la classe de leurs isomorphismes. (Un isomorphisme π : Γ1 → Γ2 est une
bijection π de l’ensemble de sommets de Γ1 vers celui de Γ2 telle que π(Γ1 ) = Γ2 .) Une
solution permettrait, par exemple, de trouver une molécule dans une base de données.
Le problème de l’isomorphisme de graphes se réduit en temps polynomial au problème
de l’isomorphisme de chaı̂nes, de la façon suivante. Supposons sans perte de généralité
que Γ1 et Γ2 ont le même ensemble de sommets V . Alors, nous pouvons définir Ω
comme l’ensemble des paires d’éléments de V (ordonnés ou non ordonnés, suivant que
nos graphes sont orientés ou pas). La chaı̂ne xi , i = 1, 2, est définie comme suit : pour
la paire a = {v1 , v2 } (ou a = (v1 , v2 ), si nos graphes sont orientés), la valeur de xi (a)
est 1 s’il y a une arête entre v1 et v2 en Γ1 , et 0 dans le cas contraire. Soit G l’image de
l’homomorphisme ι : Sym(V ) → Sym(Ω) définie par σ ι ({v1 , v2 }) = {σ(v1 ), σ(v2 )}, où
σ ι = ι(σ). Alors ι induit une bijection entre la classe des isomorphismes de Γ1 à Γ2 et
la classe IsoG (x1 , x2 ).
Théorème 1.1 (Babai). — Le problème de l’isomorphisme de chaı̂nes Ω → Σ peut être
résolu en temps quasi-polynomial en le nombre d’éléments du domaine Ω.
En novembre 2015, Babai a annoncé une solution en temps quasipolynomial, avec
un algorithme explicite. La préparation de cet exposé m’a conduit à trouver une erreur
non triviale dans l’analyse du temps, mais Babai a réussi à le réparer en simplifiant
l’algorithme. La preuve est maintenant correcte.
Corollaire 1.2 (Babai). — Le problème de l’isomorphisme de graphes peut être résolu
en temps quasi-polynomial en le nombre de sommets.
Notre référence principale sera [Ba] ; nous nous servirons aussi de la version courte
[Ba2]. Nous essayerons d’examiner la preuve de la façon la plus détaillée possible dans
un exposé de ce format, en partie pour aider à éliminer tout doute qui pourrait rester
sur la forme actuelle du résultat.
La meilleure borne générale connue antérieurement pour le temps requis par le
√
problème de l’isomorphisme de graphes, due à Luks [BKL], était exp(O( n log n)),
***
L’usage de la canonicité joue un rôle crucial dans la stratégie de Babai. Comme
dans la théorie de catégories, voire dans l’usage courant, un choix est canonique s’il est
fonctoriel. La situation typique pour nous sera la suivante : un groupe G < Sym(Ω)
agit sur Ω, et donc sur ΣΩ ; il agit aussi sur un autre ensemble S, et donc aussi sur
les applications S → C , où C est un ensemble fini. Une application S → C s’appelle
un coloriage ; l’ensemble C s’appelle l’ensemble de couleurs. Un choix canonique (en
relation à G) d’un coloriage de Ω pour chaque chaı̂ne x ∈ ΣΩ est une application qui
va de ΣΩ aux coloriages et qui commute avec l’action de G.
En particulier, un choix canonique peut être un outil pour détecter des nonisomorphismes : si les coloriages C(x) et C(y) induits canoniquement par x et y
1125–03
ne sont pas isomorphes l’un à l’autre – par exemple, s’ils ont un nombre différent
d’éléments vermeils – alors x et y ne sont pas isomorphes l’un à l’autre. Même quand il
y a des isomorphismes dans G qui envoient C(x) sur C(y), la classe IsoG (C(x), C(y))
de tels isomorphismes sert à délimiter la classe d’isomorphismes IsoG (x, y) de x à y,
puisque cette dernière est forcément un sous-ensemble de IsoG (C(x), C(y)).
La preuve assimile aussi plusieurs idées développées lors d’approches antérieures au
problème. La première étape de la procédure consiste à essayer de suivre ce qui est
en essence l’algorithme de Luks [Lu]. Si cet algorithme s’arrête, c’est parce qu’il s’est
heurté contre un quotient H1 /H2 isomorphe à Alt(Γ), où H2 ⊳ H1 < G et Γ est plutôt
grand.
Notre tâche majeure consiste à étudier ce qui se passe à ce moment-là. La stratégie
principale sera de chercher à colorier Γ d’une façon qui dépend canoniquement de x.
Cela limitera les automorphismes et isomorphismes possibles à considérer. Par exemple,
si la moitié de Γ est coloriée en rouge et l’autre en noir, le groupe d’automorphismes
possibles se réduit à Sym(|Γ|/2) ×Sym(|Γ|/2). Un coloriage similaire induit par y limite
les isomorphismes aux applications qui alignent les deux coloriages. Nous trouverons
toujours des coloriages qui nous aident, sauf quand certaines structures ont une très
grande symétrie, laquelle, en revanche, permettra une descente à Ω considérablement
plus petit. Cette double récursion – réduction du groupe H1 /H2 ou descente à des
chaı̂nes considérablement plus courtes – résoudra le problème.
2. FONDEMENTS ET TRAVAUX PRÉCÉDENTS
En suivant l’usage courant pour les groupes de permutations, nous écrirons r g pour
l’élément g(r) auquel g ∈ Sym(Ω) envoie r ∈ Ω. Étant donnés une chaı̂ne
x : Ω → Σ et
−1
un élément g ∈ Sym(Ω), nous définissons xg : Ω → Σ par xg (r) = x r g .
Par contre, nous écrivons Ωk pour l’ensemble des ~x = (x1 , . . . , xk ) avec l’action à
gauche donnée par (φ(~x))r = ~xφ(r) . L’idée est que ceci est défini non pas seulement pour
φ une permutation, mais pour toute application φ : {1, . . . , k} → {1, . . . , k}, même non
injective. Nous appelons les éléments de Ωk tuples plutôt que chaı̂nes.
2.1. Algorithmes de base
2.1.1. Schreier-Sims. — Plusieurs algorithmes essentiels se basent sur une idée de
Schreier [Sch]. Il a remarqué que, pour tout sous-groupe H d’un groupe G et tout
sous-ensemble A ⊂ G qui engendre G et contient des représentants de toutes les classes
de H dans G,
A′ = AAA−1 ∩ H = σ1 σ2 σ3−1 : σi ∈ A ∩ H
est un ensemble de générateurs de H.
1125–04
L’étape suivante est celle de Sims [Si1], [Si2], qui a montré l’utilité de travailler avec
un groupe de permutations G < Sym(Ω), Ω = {x1 , . . . , xn }, en termes d’une chaı̂ne de
stabilisateurs
G = G0 > G1 > G2 > . . . > Gn−1 = {e},
où Gk = G(x1 ,x2 ,...,xk ) = {g ∈ G : ∀1 ≤ i ≤ k xgi = xi } (stabilisateur de points).
L’algorithme de Schreier-Sims (Algorithme 1 ; description basée sur [Lu, §1.2])
construit des ensembles Ci de représentants de Gi /Gi+1 tels que ∪i≤j<n−1 Cj engendre
Gi pour tout 0 ≤ i < n − 1. Le temps pris par l’algorithme est O(n5 + n3 |A|), où A est
l’ensemble de générateurs de G qui nous est donné : la fonction Filtre prend O(n) de
temps, et tout g pour lequel elle est appelée satisfait g ∈ AC ∪ CA ∪ C 2 , où C est la
valeur de ∪i Ci à la fin de la procédure. Bien sûr, |C| ≤ n(n + 1)/2.
Grâce à l’algorithme lui-même, nous pourrons toujours supposer que nos ensembles
de générateurs sont de taille O(n2). Le temps pris par l’algorithme est donc O(n5 ). (2)
Algorithme 1 Schreier-Sims : construction d’ensembles Ci
1: fonction SchreierSims(A, ~
x)
⊲ A engendre G < Sym({x1 , . . . , xn })
assure ∪i≤j<n−1Cj engendre Gi et Ci 7→ Gi /Gi+1 est injectif ∀i ∈ {0, 1, . . . , n − 2}
2:
Ci ← {e} pour tout i ∈ {0, 1, . . . , n − 2}
3:
B←A
4:
tantque B 6= ∅
5:
Choisir g ∈ B arbitraire, et l’enlever de B
6:
(i, γ) ← Filtrer(g, (Ci ), ~x)
7:
si γ 6= e alors
8:
ajouter γ à Ci
S
S
9:
B ← B ∪ j≤i Cj γ ∪ j≥i γCj
10:
retourner (Ci )
fonction Filtrer(g, (Ci ), ~x) ⊲ retourne (i, γ) tel que γ ∈ Gi , g ∈ C0 C1 · · · Ci−1 γ
requiert Ci ⊂ Gi et Ci → Gi /Gi+1 injectif ∀i ∈ {0, 1, . . . , n − 2}
assure g ∈
/ C0 C1 · · · Ci Gi+1 sauf si (i, γ) = (n − 1, e)
12:
γ←g
13:
pour i = 0 jusqu’à n − 2
14:
si ∃h ∈ Ci tel quel xhi = xgi alors
15:
γ ← h−1 γ
16:
sinon
17:
retourner (i, γ)
11:
18:
retourner (n − 1, e)
2. Nous supposons que l’ensemble de générateurs initial, spécifiant le groupe G du problème, est
de taille O(nC ), C une constante. Le temps pris par la première utilisation de l’algorithme est donc
O nmax(5,3+C) .
1125–05
Une fois les ensembles Ci construits, il devient possible d’accomplir plusieurs tâches
essentielles rapidement.
Exercice 2.1. — Montrer comment accomplir les tâches suivantes en temps polynomial, étant donné un groupe G < Sym(Ω), |Ω| = n :
(a) Déterminer si un élément g ∈ Sym(Ω) est dans G.
(b) Étant donnés un homomorphisme φ : G → Sym(Ω′ ), |Ω′ | ≪ |Ω|O(1) , et un sousgroupe H < Sym(Ω′ ), décrire φ−1 (H).
(c) [FHL] Soit H < G avec [G : H] ≪ nO(1) . Étant donné un test qui détermine
en temps polynomial si un élément g ∈ G appartient à H, décrire H. Astuce :
travailler avec G > H > H1 > H2 > . . . à la place de G = G0 > G1 > G2 > . . . .
Ici, comme toujours, « décrire » veut dire « trouver un ensemble de générateurs », et
un groupe nous est « donné » si un tel ensemble nous est donné.
L’algorithme de Schreier-Sims décrit le stabilisateur de points G(x1 ,...,xk ) pour
x1 , . . . , xk ∈ Ω arbitraires. Par contre, nous ne pouvons pas demander allègrement
un ensemble de générateurs d’un stabilisateur d’ensemble G{x1 ,...,xk } = {g ∈ G :
{xg1 , . . . , xgk } = {x1 , . . . , xk }} pour G, xi arbitraires : faire ceci serait équivalent à
résoudre le problème de l’isomorphisme lui-même.
2.1.2. Orbites et blocs. — Soit donné, comme toujours, un groupe de permutations
G agissant sur un ensemble fini Ω. Le domaine Ω est l’union disjointe des orbites
{xg : g ∈ G} de G. Ces orbites peuvent être déterminées en temps polynomial (3) en
|Ω|. Ceci est un exercice simple. La tâche se réduit à celle – simple elle aussi – de
trouver les composantes connexes d’un graphe.
Supposons que l’action de G soit transitive. (Il y a donc une seule orbite.) Un bloc
de G est un sous-ensemble B ⊂ Ω, B ∈
/ {∅, Ω}, tel que, pour g, h ∈ G quelconques,
g
h
g
h
soit B = B , soit B ∩ B = ∅. La collection {B g : g ∈ G} (système de blocs) pour B
donné partitionne Ω. L’action de G est primitive s’il n’y a pas de blocs de taille > 1 ;
autrement, elle s’appelle imprimitive. Un système de blocs est minimal (4) si l’action de
G sur lui est primitive.
Voyons comment déterminer si l’action de G est primitive, et, s’il ne l’est pas, comment trouver un système de blocs de taille > 1. En itérant la procédure, nous obtiendrons un système de blocs minimal en temps polynomial. (Nous suivons [Lu], qui cite
[Si1].)
Pour a, b ∈ Ω distincts, soit Γ le graphe avec Ω comme son ensemble de sommets et
l’orbite {{a, b}g : g ∈ G} comme son ensemble d’arêtes. La composante connexe qui
3. Pour être précis : O |Ω|O(1) + |A||Ω| , où A est la taille de l’ensemble de générateurs de G qui
nous est donné. Nous omettrons toute mention de cette taille par la suite, puisque, comme nous l’avons
déjà dit, nous pouvons la garder toujours sous contrôle.
4. Pour paraphraser [Lu, §1.1] : il faut avouer qu’un tel système pourrait s’appeler plutôt maximal.
La taille des blocs est maximale, leur nombre est minimal.
1125–06
contient a et b est le bloc le plus petit qui contient a et b. (Si Γ est connexe, alors le
« bloc » est Ω.) L’action de G est imprimitive ssi Γ est non connexe pour un a arbitraire
et au moins un b ; dans ce cas-là, nous obtenons un bloc qui contient a et b, et donc
tout un système de blocs de taille > 1.
Un dernier mot : si G < Sym(Ω), nous disons que G est transitif, voire primitif, si
son action sur Ω l’est.
2.2. Luks : le cas de groupes avec facteurs d’ordre borné
Luks a montré comment résoudre le problème de l’isomorphisme de graphes en temps
polynomial dans le cas spécial de graphes de degré borné. (Le degré, ou valence, d’un
sommet dans un graphe non orienté est le nombre d’arêtes qui le contiennent.) Il réduit
ceci au problème de décrire le groupe d’automorphismes de chaı̂nes dans le cas d’un
groupe G tel que tout facteur de composition de G – c’est-à-dire, tout quotient dans une
suite principale (Jordan-Hölder) de G – est borné. Le processus de réduction, élégant
et loin d’être trivial, ne nous concerne pas ici. Voyons plutôt comment Luks résout ce
cas du problème de l’isomorphisme de chaı̂nes.
Nous suivrons la notation de [Ba], même si les idées viennent de [Lu].
Définition 2.2. — Soient K ⊂ Sym(Ω) et ∆ ⊂ Ω (la « fenêtre »). L’ensemble
d’isomorphismes partiels Iso∆
K est
τ
Iso∆
K (x, y) = {τ ∈ K : x(x) = y(x ) ∀x ∈ ∆}.
∆
L’ensemble d’automorphismes partiels Aut∆
K (x) est égal à IsoK (x, x).
Iso∆
K est donc l’ensemble de toutes les permutations g ∈ K qui envoient x sur y – au
moins à en juger par ce qui peut se voir par la fenêtre ∆. Nous travaillerons en général
avec K de la forme Hπ, où H laisse ∆ invariante (en tant qu’ensemble).
Il est clair que, pour K, K1 , K2 ⊂ Sym(Ω) et σ ∈ Sym(Ω),
∆
∆
σ−1
σ,
(1)
IsoKσ (x, y) = IsoK x, y
(2)
∆
∆
Iso∆
K1 ∪K2 (x, y) = IsoK1 (x, y) ∪ IsoK2 (x, y).
Il est aussi clair que, si G est un sous-groupe de Sym(Ω) et ∆ est invariant sous G,
alors AutG (x) est un sous-groupe de G, et, pour tout σ ∈ Sym(Ω), IsoGσ (x, y) est soit
vide, soit une classe à droite de la forme AutG (x)τ , τ ∈ Sym(Ω). Soient ∆1 , ∆2 ⊂ Ω,
′
1
∆1 invariant sous G. Pour G′ = AutG (x) et σ, τ tels que Iso∆
Gσ (x, y) = G τ ,
∆1 ∪∆2
∆2
τ −1
2
τ,
x,
y
(3)
IsoGσ
(x, y) = Iso∆
(x,
y)
=
Iso
′
′
Gτ
G
où la deuxième équation est une application de (1). Babai appelle (3) la règle de la
chaı̂ne.
L’énoncé suivant n’utilise pas la classification de groupes finis simples.
1125–07
Théorème 2.3 ([BCP] (6) ). — Soit G < Sym(Ω) un groupe primitif. Soit n = |Ω|. Si
tout facteur de composition de G est d’ordre ≤ k, alors |G| ≤ nOk (1) .
Ici, comme d’habitude, Ok (1) désigne une quantité qui dépend seulement de k.
Théorème 2.4 (Luks [Lu]). — Soient Ω un ensemble fini et x, y : Ω → Σ deux chaı̂nes.
Soit donné un groupe G < Sym(Ω) tel que tout facteur de composition de G est d’ordre
≤ k. Il est possible de déterminer IsoG (x, y) en temps polynomial en n = |Ω|.
Preuve — Cas 1 : G non transitif. Soit ∆1 ( Ω, ∆1 6= ∅, ∆1 stable sous l’action
1
de G. Définissons ∆2 = Ω \ ∆1 . Alors, par (3), il suffit de calculer Iso∆
G (x, y) (égal à
−1
′
′
τ
2
une classe que nous notons G′ τ ) et Iso∆
. Or, pour déterminer
G′ (x, y ) pour y = y
∆1
IsoG (x, y), nous déterminons, de façon récursive, IsoG (x|∆1 , y|∆1 ), puis, par Schreier′
2
Sims, le stabilisateur de points G(∆1 ) . De la même manière, déterminer Iso∆
G′ (x, y ) pour
−1
y′ = yτ se réduit à déterminer le groupe d’isomorphismes (dans un groupe G′ ) entre
deux chaı̂nes de longueur |∆2 |. Comme |∆1 |+|∆2| = n et Schreier-Sims prend du temps
O(n5 ), tout va bien. (La comptabilité est laissée au lecteur.)
Cas 2 : G transitif. Soit N le stabilisateur d’un système de blocs minimal pour G ;
donc, G/N est primitif. Par le Théorème 2.3, |G/N| ≤ mOk (1) , où m est le nombre de
blocs. Or, pour σ1 , . . . , σℓ (ℓ = |G/N|) tels que G = ∪1≤i≤ℓ Nσi ,
[
[
−1
(4)
IsoG (x, y) = Iso∪i N σi (x, y) =
IsoN σi (x, y) =
IsoN (x, yσi )σi
1≤i≤ℓ
1≤i≤ℓ
par (1) et (2). Comme les orbites de N sont contenues dans les blocs, qui sont de taille
−1
n/m, déterminer IsoN (x, yi ) (yi = yσi ) se réduit – par la règle (3) – à déterminer les
groupes d’isomorphismes de m paires de chaı̂nes de longueur n/m. Nous avons donc
réduit le problème à la solution de ℓ·m = mOk (1) problèmes pour des chaı̂nes de longueur
n/m.
Le pas final consiste à faire l’union de classes en (4). Nous avons une description de
chaque IsoN (x, yi ), soit comme l’ensemble vide, soit comme une classe à droite Hτi du
groupe H = AutN (x), dont nous avons trouvé une description, c’est-à-dire un ensemble
de générateurs A. Alors
[
[
Hτi σi
IsoN (x, yi )σi =
IsoG (x, y) =
1≤i≤ℓ
1≤i≤ℓ
= A ∪ τi σi (τ1 σ1 )−1 : 1 ≤ i ≤ ℓ
τ1 σ1 .
Nous aurions pu éviter quelques appels à Schreier-Sims en travaillant toujours avec
des isomorphismes partiels, mais cela a peu d’importance qualitative.
6. À vrai dire, [BCP, Thm 1.1] est plus général que ceci ; par exemple, des facteurs abéliens arbitraires (non bornés) sont admis. Cela donne une généralisation du Théorème 2.4.
1125–08
2.3. Relations, partitions, configurations
Soit C (« couleurs ») un ensemble fini que nous pouvons supposer ordonné (disons, de
rouge à violet). Une relation k-aire sur un ensemble fini Γ est un sous-ensemble R ⊂ Γk .
Une structure (relationnelle) k-aire est une paire X = (Γ, (Ri )i∈C ), où, pour chaque
i ∈ C , Ri est une relation k-aire sur Γ. Si les Ri sont tous non vides et partitionnent Γk ,
nous disons que X est une structure de partition k-aire. Dans ce cas-là, nous pouvons
décrire X par une fonction c : Γk → C qui assigne à chaque ~x ∈ Γk l’indice i de la
relation Ri à laquelle il appartient. Nous disons que c(~x) est la couleur de ~x.
Un isomorphisme entre deux structures k-aires X = (Γ, (Ri )i∈C ) et X′ = (Γ′ , (Ri′ )i∈C )
est une bijection Γ → Γ′ qui envoie Ri à Ri′ pour chaque i. Il est possible de construire
un foncteur F1 qui envoie chaque structure k-aire X sur Γ à une structure de partition
k-aire F1 (X) sur Γ ; qui plus est, Iso(X, Y) = Iso(F1 (X), F1 (Y)). La procédure est plutôt
triviale ; nous la détaillons (Algorithme 2) pour montrer
ce qu’indexer veut dire. Cela
nous permet de ne pas utiliser plus de min |Γ|k , 2|C | couleurs, où n = |Ω|, tout en gardant leur signification en termes des couleurs originales C . Le temps pris pour calculer
F1 (X) est O(|C ||Γ|O(k)). Nous ne nous occupons pas des détails d’implémentation de la
collection de tuples I , mais il peut s’agir tout simplement d’une liste ordonnée lexicographiquement ; dans ce cas, |Γ|O(k) est |Γ|2k . (Dans la réalité, I serait implémentée
avec du hachage, ce qui n’est que l’art de bien organiser une bibliothèque.)
Algorithme 2 Raffinement d'une structure de relation. Indexeur.
1: fonction F1(Γ, k, C, (Ri)i∈C)        ⊲ retourne c : Γ^k → C′
                                        ⊲ C′ est l'ensemble d'indices de I ; I explique C′ en termes de C
2:   I ← ∅
3:   pour ~x ∈ Γ^k
4:     a ← {i ∈ C : ~x ∈ Ri}
5:     c(~x) ← Indexeur(I, a)
6:   retourner (I, c)
7: fonction Indexeur(I, a)              ⊲ I est une collection modifiable
8:   si a n'est pas dans I alors
9:     ajouter a à I
10:  retourner indice de a dans I
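Esquisse hypothétique en Python de F1 et de l'indexeur (nos conventions : Γ est un itérable fini, les relations sont données comme un dictionnaire couleur → ensemble de k-tuples) ; l'indexation par liste est volontairement naïve, un dictionnaire (hachage) donnerait le comportement décrit plus haut.

from itertools import product

def indexeur(I, a):
    # I : liste modifiable jouant le rôle d'index ; a : étiquette hachable.
    if a not in I:
        I.append(a)
    return I.index(a)

def F1(Gamma, k, relations):
    # relations : dict couleur -> ensemble de k-tuples sur Gamma.
    I, c = [], {}
    for x in product(Gamma, repeat=k):
        a = frozenset(i for i, R in relations.items() if x in R)
        c[x] = indexeur(I, a)
    return I, c

# Exemple : un graphe orienté vu comme structure 2-aire à une seule couleur d'arêtes.
I, c = F1(range(3), 2, {'arete': {(0, 1), (1, 2)}})
assert c[(0, 1)] == c[(1, 2)] and c[(0, 1)] != c[(0, 2)]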
Un élément ~x ∈ Γk définit une relation d’équivalence ρ(~x) sur {1, . . . , k} : i ∼ j ssi
xi = xj . Le monoı̈de M(S) (S un ensemble) consiste en les applications S → S, avec la
composition comme opération.
Définition 2.5. — Une structure de partition k-aire X = (Γ, c) est dite configuration
k-aire si
(a) Pour tous ~x, ~y ∈ Γk , si c(~x) = c(~y ), alors ρ(~x) = ρ(~y ).
(b) Il y a un homomorphisme de monoïdes η : M({1, . . . , k}) → M(C) tel que, pour tout τ ∈ M({1, . . . , k}) et tout ~x ∈ Γ^k, c(τ(~x)) = τ^η(c(~x)).
Alors, par exemple, pour k = 2, (a) veut dire que la couleur de ~x = (x1 , x2 ) « sait » si
x1 = x2 ou pas, dans le sens où, si nous connaissons c(~x), alors nous savons si x1 = x2
ou pas. De la même façon, (b) nous indique que la couleur de ~x connaı̂t les couleurs de
(x2 , x1 ), (x1 , x1 ) et (x2 , x2 ).
Nous pouvons définir un foncteur F2 qui envoie chaque structure de partition k-aire X
sur Γ à une configuration k-aire ; comme pour F1 , le fait que F2 (X) est un raffinement
de X implique que Iso(X, Y) = Iso(F2 (X), F2 (Y)). La procédure pour calculer F2 est
très similaire à celle pour calculer F1 (Algorithme 2). Au lieu d’assigner à ~x la couleur
{i ∈ C : ~x ∈ Ri }, nous lui assignons la couleur ρ(~x), (c(φ(~x)))φ∈M({1,...,k}) .
Il est aisé de voir que F2 (X) est le raffinement le plus grossier d’une structure de
partition X qui est une configuration, de la même manière que F1 (X) est le raffinement
le plus grossier d’une structure X qui est une structure de partition.
Définition 2.6. — Soit X = (Γ, c), c : Γ^k → C, une structure de partition k-aire. Pour 1 ≤ l ≤ k, nous définissons c^(l) : Γ^l → C comme suit :
c^(l)(~x) = c(x1, x2, . . . , xl, xl, . . . , xl).
La structure de partition l-aire X^(l) = (Γ, c^(l)) est dite le (l-)squelette de X.
La chaı̂ne vide sera viride.
Exercice 2.7. — Tout squelette d’une configuration est une configuration.
Ici le fait que l’axiome (b) dans la définition de configuration soit valable même pour η
non injectif est crucial.
Pour X = (Γ, c) une structure de partition et Γ′ ⊂ Γ, la sous-structure induite X[Γ′ ]
est la structure (Γ′ , c|Γ′ ) définie par la restriction de c à Γ′ . Il est clair que, si X est une
configuration, alors X[Γ′ ] l’est aussi.
***
Il ne faut pas confondre une structure de partition (partition structure) avec ce que
nous appellerons un découpage (colored partition). Un découpage d’un ensemble Γ est
un coloriage de Γ supplémenté d’une partition de chaque classe de couleur. (Une classe
de couleur est l’ensemble de sommets d’une couleur donnée.) Un découpage est dit
admissible si chaque ensemble B dans chaque partition est de taille ≥ 2. Pour α < 1,
un α-découpage est un découpage admissible tel que |B| ≤ α|Γ| pour chaque B.
Un découpage est une structure plus fine que le coloriage qu’il raffine, mais moins fine
que la structure que nous obtiendrions si nous donnions à chaque élément de chaque
partition une couleur différente. Un automorphisme ou isomorphisme d’un découpage
doit préserver les couleurs de celui-ci, mais pourrait permuter les ensembles de la même
taille qui appartiennent à la partition d’une couleur. Comme les ensembles de taille
différente ne peuvent, évidemment, être permutés, il est clair que nous pouvons supposer
sans perte de généralité que toute couleur est partitionnée en ensembles de la même
taille. Nous ajoutons ceci à la définition de α-découpage à partir de maintenant.
2.4. Configurations cohérentes k-aires
Pour ~x ∈ Γk , z ∈ Γ et 1 ≤ i ≤ k, nous définissons ~xi (z) ∈ Γk comme suit :
~xi (z) = (x1 , x2 , . . . , xi−1 , z, xi+1 , . . . , xk ).
Définition 2.8. — Une configuration cohérente k-aire X = (Γ, c) est une configuration
k-aire ayant la propriété suivante : il y a une fonction γ : C k × C → Z≥0 telle que,
pour ~k ∈ C k et j ∈ C arbitraires et tout ~x ∈ Γk tel que c(~x) = j,
|{z ∈ Γ : c(~xi (z)) = ki ∀1 ≤ i ≤ k}| = γ(~k, j).
Les valeurs γ(~k, j) sont appelées nombres d’intersection de X.
Une configuration cohérente est dite classique si k = 2.
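Pour fixer les idées, une vérification naïve (esquisse hypothétique, cas classique k = 2) de l'axiome de cohérence : pour ~x = (x, y), ~x1(z) = (z, y) et ~x2(z) = (x, z), donc il s'agit de compter les z par couple de couleurs (c(z, y), c(x, z)) et de vérifier que ces comptes ne dépendent que de c(x, y).

from itertools import product
from collections import defaultdict

def est_coherente(Gamma, c):
    # c : dict (x, y) -> couleur. Renvoie (True, nombres d'intersection) ou (False, None).
    gamma_par_couleur = {}
    for x, y in product(Gamma, repeat=2):
        comptes = defaultdict(int)
        for z in Gamma:
            comptes[(c[(z, y)], c[(x, z)])] += 1
        comptes = dict(comptes)
        j = c[(x, y)]
        if j in gamma_par_couleur and gamma_par_couleur[j] != comptes:
            return False, None
        gamma_par_couleur[j] = comptes
    return True, gamma_par_couleur

# Exemple : la configuration triviale (clique) est cohérente.
Gamma = list(range(4))
c = {(x, y): 'diag' if x == y else 'hors' for x in Gamma for y in Gamma}
assert est_coherente(Gamma, c)[0]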
Remarque 2.9. — Les configurations cohérentes classiques ont été introduites par Higman [Hi]. Les premiers exemples étaient du type schurien : une configuration est schurienne si elle est la partition de Γ2 dans ses orbites (« orbitales ») sous l’action d’un
groupe G < Sym(Γ).
Définition 2.10. — Si une configuration cohérente classique n’a que deux couleurs,
une pour {(x, x) : x ∈ Γ} et l’autre pour son complément, la configuration est dite une
clique, ou triviale.
Exercice 2.11. — Tout squelette d’une configuration cohérente est cohérent.
Encore une fois, l’axiome (b) des configurations joue un rôle clé.
Exercice 2.12. — Soient X = (Γ, c) une configuration cohérente et Γ′ ⊂ Γ une classe
de couleurs en relation au coloriage induit par c sur Γ. Alors la sous-structure induite
X[Γ′ ] est une configuration cohérente.
Ici, c’est un cas spécial de (b) qu’il faut utiliser : la couleur c(x1 , . . . , xn ) « connaı̂t »
les couleurs c(x1 ), . . . , c(xn ), puisque c(xi ) = c(xi , . . . , xi ).
Soient 0 ≤ l < k et ~x ∈ Γl . Nous colorions Γk−l comme suit : pour ~y ∈ Γk−l ,
c~x (~y ) = c(~x~y ).
En résulte une structure de partition (k − l)-aire X~x = (Γ, c~x ).
Exercice 2.13. — Soit X = (Γ, c) une structure de partition ; soit ~x ∈ Γl , 0 ≤ l < k.
Alors
(a) c~x est un raffinement du coloriage du squelette X(k−l) .
(b) Si X est cohérente, X~x l’est aussi.
Il est clair que, de plus, X~x est canonique en relation à ~x, ce qui veut dire que X → X~x
commute avec l’action sur Γ du stabilisateur dans Sym(Γ) des points x1 , . . . , xl .
Définition 2.14. — Une configuration cohérente (Γ, c) est dite homogène si la couleur
c(x, x, . . . , x) de tout sommet x ∈ Γ est la même. Une configuration cohérente classique
est dite primitive si elle est homogène et les graphes Gr = {(x, y) : x, y ∈ Γ, c(x, y) = r}
(pour toute couleur r telle que c(x, y) = r pour au moins une paire (x, y) avec x 6= y)
sont tous connexes. Elle est dite uniprimitive si elle est primitive et non triviale.
Nous n’avons pas besoin de préciser si ces graphes son connexes dans le sens propre
(à savoir, il y a un chemin de tout sommet à tout autre, respectant l’orientation)
ou dans le sens faible (sans compter l’orientation) : le fait que (Γ, c) soit cohérente,
classique et homogène implique que d+
r (x) = |{y ∈ Γ : (x, y) ∈ Gr }| est indépendant
de x (pourquoi ?), ce qui implique que toute composante faiblement connexe de Gr est
connexe (exercice).
Exercice 2.15. — Soit X = (Γ, c) une configuration cohérente classique uniprimitive.
Il n’y a aucun ensemble B ⊂ Γ, |B| > |Γ|/2, tel que la restriction de X à B soit une
clique.
Solution — Si les arêtes de la grande clique sont censées être blanches, soit noir une autre couleur d'arêtes de X, et soit G = Gnoir. Or, pour un graphe orienté birégulier (7) G non vide avec Γ comme ensemble de sommets, il est impossible qu'il y ait un ensemble B ⊂ Γ, |B| > |Γ|/2, tel que la réduction du graphe à B soit vide (pourquoi ?).
Exercice 2.16. — Soit (Γ, c) une configuration cohérente classique homogène.
(a) Soit r0 , . . . , rk une séquence de couleurs. Alors, si x0 , xk ∈ Γ sont tels que
c (x0 , xk ) = r0 , le nombre de x1 , . . . , xk−1 ∈ Γ tels que c (xi−1 , xi ) = ri pour tout
1 ≤ i ≤ k dépend seulement de r0 , . . . , rk .
(b) Pour toute couleur r, toute composante connexe de Gr est de la même taille.
Solution (esquisse) — En (a), le cas k = 2 vaut par la définition de « cohérent » ;
prouvez les cas k > 2 par induction. Pour prouver (b), utilisez (a).
2.5. Le raffinement canonique k-aire à la façon de Weisfeiler-Leman
Définissons un foncteur F3 qui envoie une configuration X = (Γ, c) à une configuration cohérente F3 (X) = (Γ, c′ ). Comme F3 (X) sera un raffinement de X, nous aurons
Iso(X, Y) = Iso(F3 (X), F3(Y)).
L’algorithme 3, qui calcule F3 , est basé sur une idée de Weisfeiler et Leman (8) [WL].
Il s’agit d’itérer une procédure de raffinement. Si, dans une itération, aucun raffinement
ne se produit – c’est-à-dire, si les classes d’équivalence du nouveau coloriage Ci sont les
mêmes que celles de l’ancien coloriage Ci−1 – alors, (a) aucun raffinement ne se produira
dans le futur, (b) le coloriage Ci−1 est déjà cohérent.
7. Voir la définition du §2.6.
8. Aussi appelé Lehman, mais [Ba] indique que le deuxième auteur préférait Leman. Deux transformations naturelles L → Л, Л → L peuvent ne pas être l’inverse l’une de l’autre.
Si le coloriage C = C0 a r couleurs différentes du début, il est clair qu’il ne peut
être raffiné que |Γ|k − r fois. Alors, |Γ|k − r itérations sont suffisantes pour produire une
configuration cohérente. En particulier, si l’indexation est faite en temps logarithmique,
et le vecteur dans le pas 6 de l’algorithme 3 est représenté comme un vecteur creux
(puisque son nombre d’entrées non-nulles est au plus |Γ|), le temps pris par l’algorithme
est O(k 2 |Γ|2k+1 log |Γ|). (En outre, [Ba, §2.8.3] affirme une borne plus forte.)
Les algorithmes de type Weisfeiler-Leman étaient autrefois regardés comme une approche plausible au problème de l'isomorphisme de graphes. Depuis [CFI], [EvP], il est clair qu'ils ne se suffisent pas à eux-mêmes. Ils sont quand même un outil précieux. La version k-aire ici est due à Babai-Mathon [Ba3] et Immerman-Lander [ImL].
Algorithme 3 Weisfeiler-Leman pour les configurations k-aires.
1: fonction WeisfeilerLeman(Γ, k, c : Γ^k → C)
2:   C0 ← C ; c0 ← c ; i0 ← |Γ^k| − |c(Γ^k)|
3:   pour i = 1 jusqu'à i0
4:     Ii ← ∅
5:     pour ~x ∈ Γ^k
6:       ν ← ( ci−1(~x), (|{z ∈ Γ : ci−1(~x_j(z)) = rj ∀ 1 ≤ j ≤ k}|)_{~r ∈ C_{i−1}^k} )
7:       ci(~x) ← Indexeur(Ii, ν)       ⊲ Indexeur est comme dans l'algorithme 2
8:     Ci ← indices de Ii
9:   retourner ci0 : Γ^k → Ci0, (Ii)1≤i≤i0        ⊲ (Ii) donne du sens à Ci0
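Esquisse hypothétique, écrite pour le cas classique k = 2 afin de rester courte (la version k-aire remplace les paires par des k-tuples) ; elle réutilise indexeur et suit le schéma de l'algorithme 3 : on raffine jusqu'à ce que le nombre de couleurs cesse de croître, auquel cas le coloriage obtenu est cohérent.

from itertools import product

def weisfeiler_leman_2(Gamma, c):
    # c : dict (x, y) -> couleur (entiers, pour pouvoir trier les profils).
    c = dict(c)
    while True:
        I, c_nouveau = [], {}
        for x, y in product(Gamma, repeat=2):
            profil = (c[(x, y)],
                      tuple(sorted((c[(x, z)], c[(z, y)]) for z in Gamma)))
            c_nouveau[(x, y)] = indexeur(I, profil)       # indexeur : algorithme 2
        if len(set(c_nouveau.values())) == len(set(c.values())):
            return c_nouveau                              # point fixe : coloriage cohérent
        c = c_nouveau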
2.6. Graphes, hypergraphes et designs en blocs
Nous savons déjà qu’un graphe est une paire (V, A), où V est un ensemble (« sommets ») et A est une collection de paires d’éléments de V (voire de sous-ensembles de V
avec deux éléments, si le graphe est non orienté). Un graphe non orienté est dit régulier si
le degré de tout sommet est le même ; un graphe orienté est dit birégulier si le degré sortant d+ (v) = |{w ∈ V : (v, w) ∈ A}| et le degré entrant d− (v) = |{w ∈ V : (v, w) ∈ A}|
sont indépendants de v. (Pour V fini, ils sont forcément la même constante.)
Un graphe biparti est un triplet (V1 , V2 ; A) avec A ⊂ V1 × V2 . Un graphe biparti est
semirégulier si le degré (9) d+ (v1 ) est indépendant de v1 ∈ V1 , et le degré d− (v2 ) est
indépendant de v2 ∈ V2 .
Exercice 2.17. — Soit X = (Γ, c) une configuration cohérente classique homogène.
(a) Soient C1 , C2 deux classes de couleur, et soit vert une couleur d’arêtes en C1 ×C2 .
Alors, le graphe biparti (C1 , C2 ; Gvert ) est semirégulier.
9. Nous omettons les mots « entrant » et « sortant », puisqu’il est évident qu’il s’agit du degré
entrant dans le cas de v1 et du degré sortant dans le cas de v2 .
(b) Soit y ∈ Γ, et Li (y) = {x ∈ Γ : c(x, y) = i}. Soient lin, bis et terre trois couleurs
d’arêtes. Alors, pour L1 = Llin (y) et L2 = Lbis (y), le graphe biparti (L1 , L2 ; Rterre ∩
(L1 × L2 )) est semirégulier.
Exercice 2.18. — Soit X une configuration cohérente classique homogène. Soient C1 ,
C2 deux classes de couleur. Soient vert une couleur d’arêtes en C1 × C2 et rouge une
couleur d’arêtes en C2 × C2 . Soient B1 , . . . , Bm les composantes connexes de Grouge en
C2. Définissons le graphe biparti Y = (C1, {1, . . . , m}; D) comme suit : (x, i) ∈ D ssi (x, y) ∈ Gvert pour au moins un y ∈ Bi. Alors Y est semirégulier.
Solution — Notez que, pour y ∈ Bi et x ∈ C1, (x, y′) est vert pour au moins un y′ ∈ Bi ssi il existe y0 = y, y1, . . . , yj ∈ Bi tels que (yt, yt+1) est rouge pour 0 ≤ t < j et (x, yj) est vert. Concluez par l'exercice 2.16a que tous les sommets en {1, . . . , m} ont le même degré en Y.
De façon analogue, montrez que, pour x ∈ C1 et y ∈ Bi tels que (x, y) est vert, le nombre de z ∈ Bi tels que (x, z) est vert ne dépend pas de x, y ou i. Notons ce nombre q. Alors, le degré de tout v ∈ C1 en Y est son degré dans (C1, C2 ; Gvert), divisé par q. Par l'exercice 2.17a, il ne dépend donc pas de v.
Un graphe biparti est complet (en tant que graphe biparti) si A = V1 × V2 . Un graphe
biparti qui n’est ni vide ni complet est appelé non trivial.
Un hypergraphe H = (V, A ) consiste en un ensemble V (« sommets ») et une collection A de sous-ensembles de V (« arêtes »), peut-être avec des sous-ensembles répétés.
Un hypergraphe est dit u-uniforme si |A| = u pour tout A ∈ A . Il est dit régulier de
degré r si tout v ∈ V appartient à exactement r ensembles A dans A .
L’hypergraphe u-uniforme complet sur V est (V, {A ⊂ V : |A| = u}), où chaque
ensemble A est compté une fois. Un coloriage des arêtes de l’hypergraphe complet est
une application de {A ⊂ V : |A| = u} à un ensemble fini C .
Un block design équilibré (BDE) de paramètres (v, u, λ) est un hypergraphe avec
|V | = v sommets, u-uniforme et régulier de degré r ≥ 1, tel que toute paire {v1 , v2 } de
sommets distincts est contenue dans exactement λ ≥ 1 arêtes (« blocks »). Un block
design dégénéré a la même définition, mais avec λ = 0, et la condition additionnelle
d’être un hypergraphe régulier. (La régularité peut être déduite de la définition si λ ≥ 1.)
Un block design est incomplet si u < v. Notons b le nombre |A | d’arêtes d’un BDE.
Proposition 2.19 (Inégalité de Fisher (11) [F]). — Pour tout block design équilibré
incomplet, b ≥ v.
Il est aisé de voir que cette inégalité est vraie même pour les designs dégénérés.
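Exemple standard (il ne figure pas dans le texte) : le plan de Fano est un BDE incomplet de paramètres (7, 3, 1) ; la petite vérification Python ci-dessous confirme λ = 1, la régularité, et b ≥ v (ici avec égalité, b = v = 7).

from itertools import combinations

blocs = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
v, b = 7, len(blocs)
for paire in combinations(range(v), 2):
    assert sum(set(paire) <= B for B in blocs) == 1       # toute paire est dans exactement 1 bloc
assert all(sum(x in B for B in blocs) == 3 for x in range(v))   # régulier de degré r = 3
assert b >= v                                             # inégalité de Fisher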
Les blocks designs admettent une généralisation. Un design t-(v, u, λ) est un hypergraphe (V, A ) u-uniforme avec v = |V | sommets tel que tout T ⊂ V de taille t est
contenu dans exactement λ arêtes. Ici t ≥ 2 et λ ≥ 1. Nous écrivons toujours b = |A |.
11. Si, R. A. Fisher, le statisticien. Ici design vient d’experimental design.
Proposition 2.20 ([RChW]). — Pour tout design t-(v, u, λ) et tout s ≤ min(t/2, v − u), nous avons b ≥ \binom{v}{s}.
2.7. Schémas de Johnson
Un schéma d’association est une configuration cohérente classique (Γ, c : Γ2 → C )
telle que c(x, y) = c(y, x) ∀x, y ∈ Γ. (Il s’agit donc d’un sens du mot schéma qui n’a
rien à voir avec les schémas de la géométrie algébrique.)
Soient s ≥ 2 et r ≥ 2s + 1. Un schéma de Johnson J (r, s) = (Γ, c) est donné par
Γ = Ss (Λ) = {S ⊂ Λ : |S| = s},
c(S1 , S2 ) = |S1 \ (S1 ∩ S2 )|,
où Λ est un ensemble à r éléments. La relation Ri est bien sûr l’ensemble
Ri = {(S1 , S2 ) : c(S1 , S2 ) = i}.
Notons que nous avons défini implicitement un foncteur de la catégorie d’ensembles Λ
avec |Λ| = r à la catégorie de schémas de Johnson. Ceci est un foncteur plein ; autrement
dit, les seuls automorphismes de J (r, s) sont ceux qui sont induits par Sym(Λ).
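Esquisse hypothétique de la construction de J(r, s), avec, pour de petits paramètres, une vérification par la fonction est_coherente esquissée au §2.4 (il s'agit bien d'un schéma d'association, donc en particulier d'une configuration cohérente classique symétrique).

from itertools import combinations

def schema_de_johnson(r, s):
    # Gamma = s-parties de {0,...,r-1} ; c(S1, S2) = |S1 \ S2|.
    Gamma = [frozenset(S) for S in combinations(range(r), s)]
    c = {(S1, S2): len(S1 - S2) for S1 in Gamma for S2 in Gamma}
    return Gamma, c

Gamma, c = schema_de_johnson(7, 2)
assert est_coherente(Gamma, c)[0]
assert all(c[(S1, S2)] == c[(S2, S1)] for S1 in Gamma for S2 in Gamma)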
2.8. Identification de groupes et de schémas
Il est une chose de démontrer que deux groupes G, H sont isomorphes, et une autre
de construire un isomorphisme φ de façon explicite entre eux. Cette dernière tâche
implique, au moins, de donner les images φ(g1), . . . , φ(gr ) de générateurs g1 , . . . , gr de
G.
Voyons un cas particulier qui nous sera crucial. Nous aurons un groupe de permutation
G < Sym(Γ), et nous saurons qu’il est isomorphe au groupe abstrait Altm . Comment
construire un isomorphisme ?
Si m n’est pas trop petit en relation à n = |Γ|, il est connu que G doit être isomorphe
que le groupe Altm
à un groupe de permutation de la forme Alt(k)
m , qui n’est autre
m
agissant sur l’ensemble Sk (Λ0 ) = {S ⊂ Λ0 : |S| = k} à k éléments, où Λ0 est un
ensemble à m éléments. (12) En d’autres termes, il existe une bijection ι0 : Γ → Sk (Λ0 )
et un isomorphisme φ0 : G → Alt(Λ0 ) tels que
ι0 (ω g ) = ι0 (ω)φ0 (g) .
Le problème consiste à construire ι : Γ → Sk (Λ) et φ : G → Alt(Λ), calculables en
temps polynomial, avec ces mêmes propriétés.
Nous suivons [BLS]. Soient Υ ⊂ Γ × Γ l'orbitale la plus petite de G (hors la diagonale {(ω, ω) : ω ∈ Γ}) ; soit ∆ ⊂ Γ × Γ l'orbitale la plus grande. Nous supposerons que
12. Babai nomme les groupes Alt_m^(k) groupes de Johnson, par analogie avec les schémas de Johnson. Puisque Alt_m^(k) n'est qu'un déguisement de Alt_m, ne faudrait-il pas appeler ce dernier groupe de Ramerrez ?
m > (k + 1)^2 − 2, ce qui revient à dire que n n'est pas trop grand en relation à m. (13) Alors,
(5)    φ(Υ) = R1 = {(S1, S2) : S1, S2 ∈ Sk(Λ0), |S1 ∩ S2| = k − 1},
       φ(∆) = Rk = {(S1, S2) : S1, S2 ∈ Sk(Λ0), S1 ∩ S2 = ∅}.
Définissons, pour (x, y) ∈ Υ,
B(x, y) = {z ∈ Γ : (x, z) ∉ ∆, (y, z) ∈ ∆}.
Ceci est l'ensemble de tous les z tels que ι0(z) intersecte ι0(x) mais pas ι0(y). Soit
C(x, y) = Γ \ ⋃_{z∈B(x,y)} {r ∈ Γ : (z, r) ∈ ∆}.
Alors
ι0(C(x, y)) = {S ∈ Sk(Λ0) : S ∩ S′ ≠ ∅ ∀S′ ∈ Sk(Λ0) t.q. S′ ∩ ι0(x) ≠ ∅, S′ ∩ ι0(y) = ∅}
            = {S ∈ Sk(Λ0) : i ∈ S},
où i est l’élément de ι0 (x) qui n’est pas dans ι0 (y).
Soit Λ la collection {C(x, y) : (x, y) ∈ Υ}, sans multiplicités. Nous pouvons calculer et
comparer C(x, y) pour (x, y) donné, et calculer et indexer Λ, tout en temps polynomial.
Nous calculons, aussi en temps polynomial, l’action de G sur Λ induite par l’action de
G sur Υ. Ceci définit φ : G → Alt(Λ).
Il y a une bijection naturelle j : Λ → Λ0 qui commute avec l’action de G : elle envoie
C(x, y) à i, où i est l’élément de Λ0 tel que ι0 (C(x, y)) = {S ∈ Sk (Λ0 ) : i ∈ S}. Il
est clair que, pour ω ∈ Γ, ω ∈ C(x, y) ssi j(C(x, y)) ∈ ι0 (ω). Ainsi, nous obtenons la
bijection ι : Γ → Sk (Λ), donnée par
ι(ω) = {γ ∈ Λ : ω ∈ γ}.
Celle-ci satisfait ι(ω^g) = ι(ω)^{φ(g)}.
Les applications φ, ι sont donc celles que nous désirions ; nous avons construit un
isomorphisme explicite entre G et Alt(Λ). Notons que cette même procédure nous permet de construire un isomorphisme explicite entre, d’un côté, un schéma d’association
(§2.7) qu’on sait être isomorphe à un schéma de Johnson J (m, k), et, de l’autre côté,
ce même schéma.
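Pour se convaincre de la construction, une esquisse (hypothétique) qui la rejoue directement sur le modèle ensembliste Γ = Sk(Λ0), avec Υ et ∆ donnés par (5) ; dans l'algorithme réel, bien sûr, les orbitales Υ et ∆ sont calculées à partir de G et la bijection ι0 n'est pas connue.

from itertools import combinations

def reconstruire_lambda(m, k):
    # Rejoue la construction des C(x, y) sur Gamma = S_k({0,...,m-1}).
    Gamma = [frozenset(S) for S in combinations(range(m), k)]
    Upsilon = [(x, y) for x in Gamma for y in Gamma if len(x & y) == k - 1]
    Delta = {(x, y) for x in Gamma for y in Gamma if not (x & y)}

    Lambda = set()
    for x, y in Upsilon:
        B = [z for z in Gamma if (x, z) not in Delta and (y, z) in Delta]
        exclus = {r for z in B for r in Gamma if (z, r) in Delta}
        C = frozenset(Gamma) - exclus                     # C(x, y)
        Lambda.add(C)
        (i,) = x - y                                      # l'élément de ι0(x) hors de ι0(y)
        assert C == frozenset(S for S in Gamma if i in S)
    return Lambda

assert len(reconstruire_lambda(7, 2)) == 7                # |Lambda| = m, comme attendu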
13. Si m ≤ (k + 1)2 − 2, alors n est si grand que m! = nO(log n) . En ce cas, nous pouvons enlever
le groupe G (c’est-à-dire, dans l’application qui nous intéressera, un quotient G/N ) de façon brutale,
comme dans le cas 2 de la preuve du théorème 2.4 (Luks). Nous pourrions aussi nous passer de la
supposition m > (k + 1)2 − 2 au coût de quelques complications en ce qui suit. En particulier, φ(∆)
ne serait pas Rk comme dans (5), sinon un autre Rj .
3. LA PROCÉDURE PRINCIPALE
[Organigramme : la fonction Isomorphisme-de-Chaînes. Entrée : G < Sym(Ω), x, y : Ω → Σ ; sortie : IsoG(x, y). Les tests successifs (G transitif ? G/N ∼ Altm ? m petit ? G primitif ? k = 1 ? symétrie > 1/2 ? plénitude > 1/2 ? une couleur domine ? coupe ou Johnson ? coupe ou relations ?) aiguillent vers : le cas trivial, aligner, pullback, les certificats locaux, le passage de x, y aux relations k-aires sur Γ suivi de Weisfeiler-Leman k-aire et du Lemme des designs, des récursions (n′ < n ou n′ ≤ n/2), ou des réductions de G/N à Altm′ (m′ ≤ m/2, voire m′ ≪ √m via un schéma de Johnson).]
3.1. Premiers pas : récursion à la façon de Luks
[Extrait de l'organigramme : G transitif ? — non → récursion n′ < n ; oui → G/N ∼ Altm ? — non → récursion n′ ≤ n/2 ; oui → m petit ?]
Les premiers pas de la procédure sont ceux de la preuve du Théorème 2.4 (Luks). En
particulier, si G < Sym(Ω) n’est pas transitif, nous procédons exactement comme dans
le cas non transitif de la preuve du Théorème 2.4. Bien qu’il soit possible que n = |Ω|
ne décroisse que très légèrement, la récursion marche, puisque son coût est aussi très
léger dans ce cas : nous n’avons qu’à subdiviser le problème selon les orbites de G.
Supposons que G soit transitif. Nous savons que nous pouvons trouver rapidement
un système de blocs minimal R = {Bi : 1 ≤ i ≤ r}, Bi ⊂ Ω (§2.1.2). Par Schreier-Sims,
nous trouvons aussi, en temps polynomial, le sous-groupe N ⊳ G des éléments g ∈ G
tels que Big = Bi pour tout i. Le groupe H = G/N agit sur R.
Au lieu du Théorème 2.3 [BCP], nous utiliserons une conséquence de la Classification
des Groupes Finis Simples (CGFS). Elle a été dérivée pour la première fois par Cameron,
puis raffinée par Maróti.
Théorème 3.1 ([Cam], [Ma]). — Soit H < Sym(R) un groupe primitif, où |R| = r est
plus grand qu’une constante absolue. Alors, soit (14)
(a) |H| < r^{1+log2 r}, soit
(b) il y a un M ⊳ H tel que R se subdivise (15) en un système de \binom{m}{k} blocs sur lequel M agit comme un groupe Alt_m^(k), m ≥ 5. En plus, [H : M] ≤ r.
La borne [H : M] ≤ r se déduit de m > 2, |H| ≥ r^{1+log2 r}, |H| ≤ (m!)^s s!, m^s ≤ r et [H : M] ≤ 2^s s!, où s ≥ 1 est un paramètre dans Cameron–Maróti.
Il est possible [BLS] de trouver en temps polynomial le sous-groupe normal M et les blocs de l'action de M. Nous avons déjà vu au §2.8 comment identifier explicitement l'action de M avec celle de Alt_m^(k).
Par ailleurs, l’algorithme de Schreier-Sims nous permet de calculer |H| en temps
polynomial, et donc nous dit aussi si nous sommes dans le cas (a). Si c’est le cas, nous
14. Pour nous, log2 désigne le logarithme en base 2, et non pas le logarithme itéré log log.
15. L’énoncé dans [Cam], [Ma] est plus fort : il décrit toute l’action de H sur R. À vrai dire, le groupe
m s
s
M est isomorphe, en tant que groupe de permutation, à (Alt(k)
m ) , s ≥ 1. Nous avons r = k .
procédons comme dans le cas transitif de la preuve du Théorème 2.4. Nous réduisons ainsi le problème à r^{1+log2 r} instances du problème pour des chaînes de longueur ≤ n/r.
Si nous sommes dans le cas (b) nous commençons toujours par réduire le problème à
[H : M] instances du problème avec M à la place de H : par l’équation (2) et comme
dans l’équation (4),
IsoH(x, y) = ⋃_{σ∈S} IsoM(x, y^{σ^{-1}}) σ,
où S est un système de représentants des classes de M dans H.
Si m ≤ C log n, où C est une constante,

|M| = m!/2 < m^m ≤ m^{C log n} ≤ (m′)^{C log n},

où m′ = \binom{m}{k}. Donc, ici comme dans le cas (a), nous nous permettons de procéder comme dans le cas transitif de la preuve du Théorème 2.4. Nous obtenons une réduction à ≤ r · (m′)^{C log n} = (m′)^{O(log n)} instances du problème pour des chaînes de longueur n/m′. Ceci est tout à fait consistant avec l'objectif d'avoir une solution en temps quasi-polynomial en n (ou même en temps n^{O(log n)}).
Il reste à savoir que faire si nous sommes dans le cas suivant : il y a un isomorphisme
φ : G/N → Alt(Γ), |Γ| > C log n, C une constante. (Ici nous avons déjà (i) remplacé G
par la préimage de M dans la réduction G → G/N, et, après cela, (ii) remplacé N par
le stabilisateur des blocs dans la partie (b) du Théorème 3.1.) Ce cas nous occupera
pour le reste de l’article.
***
Babai indique comment enlever la dépendance de CGFS à cette étape. Soient G et N
comme avant, avec G transitif. Alors G/N est un groupe primitif agissant sur l’ensemble
de blocs R.
Si un groupe de permutations sur un ensemble R est tel que son action sur l’ensemble
des paires d’éléments distincts de R est transitive, le groupe est dit doublement transitif.
Or, un résultat de Pyber qui ne dépend pas de CGFS [Py2] nous dit qu'un tel groupe est soit Alt(R), soit Sym(R), soit d'ordre ≤ |R|^{O(log^2 |R|)}.
Si G/N est Alt(R) ou Sym(R), nous sommes dans le cas que nous discuterons d’ici
jusqu’à la fin. Si G/N est doublement transitif, mais n’est égal ni à Alt(R) ni à Sym(R),
nous pouvons procéder comme dans le cas transitif de la preuve du Théorème 2.4,
2
puisque |G/N| ≤ r O(log r) , r = |R| ≤ n. (Babai propose aussi un traitement alternatif,
même plus efficace et élémentaire.)
Supposons donc que G/N n’est pas doublement transitif. Alors la configuration
cohérente schurienne (§2.4) qu’elle induit n’est pas une clique. En conséquence, nous
pouvons donner cette configuration à la procédure Coupe-ou-Johnson (§5.2), et reprendre le fil de l’argument à ce point-là.
4. LA STRUCTURE DE L’ACTION DE Alt
4.1. Stabilisateurs, orbites et quotients alternants
Nous aurons besoin de plusieurs résultats sur les épimorphismes G → Altk . Ils joueront un rôle crucial dans la méthode des certificats locaux (§6.1). Dans la version originale [Ba], ils ont aussi été utilisés dans le rôle joué par [BLS] dans cet exposé.
Lemme 4.1. — Soit G < Sym(Ω) primitif. Soit φ : G → Altk un épimorphisme avec
k > max(8, 2 + log2 |Ω|). Alors φ est un isomorphisme.
Prouver ce lemme est à peu près un exercice en théorie des groupes finis ; il faut
utiliser [BaPS, Prop. 1.22] pour le cas de socle abélien et la conjecture de Schreier pour
le cas de socle non abélien. La conjecture de Schreier est un théorème, mais un théorème
dont la preuve dépend, à son tour, de CGFS.
Par contre, Pyber [Py] a donné une preuve du Lemme 4.1 qui n’utilise pas CGFS,
avec une condition plus stricte : k > max(C, (log |Ω|)5 ), C constante. La dépendance
de CGFS a donc été complètement enlevée de la preuve du théorème principal.
Définition 4.2. — Soit G < Sym(Ω). Soit φ : G → Symk un homomorphisme dont
l’image contient Altk . Alors x ∈ Ω est dit atteint si φ(Gx ) ne contient pas Altk .
Lemme 4.3. — Soit G < Sym(Ω). Soit φ : G → Altk un épimorphisme avec k >
max(8, 2 + log2 n0 ), où n0 est la taille de la plus grande orbite de G.
(a) Si G est transitif, tout x ∈ Ω est atteint.
(b) Au moins un x ∈ Ω est atteint.
Preuve (esquisse) — (a) Ceci découle immédiatement du Lemme 4.1 si G est primitif,
ou si K < ker(φ) pour K le stabilisateur d’un système de blocs minimal. Il reste le cas
de φ : K → Altk surjectif. En général :
Lemme. — Pour Ki arbitraires, K < K1 × · · · × Ks et un épimorphisme φ : K → S, S simple, il doit y avoir un i tel que φ se factorise comme suit : K → Ki → S, la deuxième flèche ψ étant un épimorphisme.
En utilisant ce lemme pour les restrictions Ki de K aux orbites de K, nous passons à
une orbite Ki , et procédons par induction.
(b) Soient Ω1, . . . , Ωm les orbites de G, et soit Gi = G|Ωi la restriction de G à Ωi. Par le Lemme en (a), il doit y avoir un i tel que φ se factorise en G → Gi → Altk, la deuxième flèche ψ étant un épimorphisme. Alors, par (a), (Gx)^φ ≤ ((Gi)x)^ψ ≠ Altk pour tout x ∈ Ωi.
La proposition suivante jouera un rôle crucial au §6.
Proposition 4.4. — Soient G < Sym(Ω) transitif et φ : G → Altk un épimorphisme.
Soit U ⊂ Ω l’ensemble des éléments non atteints.
(a) Supposons que k ≥ max(8, 2 + log2 n0 ), où n0 est la taille de la plus grande orbite
de G. Alors (G(U ) )φ = Altk .
(b) Supposons que k ≥ 5. Si ∆ est une orbite de G qui contient des éléments atteints,
alors chaque orbite de ker(φ) contenue dans ∆ est de longueur ≤ |∆|/k.
Rappelons que G(U ) = {g ∈ G : xg = x ∀x ∈ U} (stabilisateur de points).
Preuve — (a) Il est facile de voir que G fixe U en tant qu'ensemble. Alors, G(U) ⊳ G, et donc (G(U))^φ ⊳ G^φ. Or, G^φ = Altk. Supposons que (G(U))^φ = {e}. Alors φ se factorise comme suit : G → G|U → Altk, la deuxième flèche ψ étant un épimorphisme, puisque G(U) est le noyau de G → G|U. Donc, par le Lemme 4.3 (b), il existe un x ∈ U tel que ((G|U)x)^ψ ≠ Altk. Or ((G|U)x)^ψ = (Gx)^φ = Altk, parce que x est dans U, c'est-à-dire non atteint. Contradiction. Comme (G(U))^φ est donc un sous-groupe normal non trivial du groupe simple Altk, il est égal à Altk.
(b) Comme ∆ contient des éléments atteints et est une orbite de G, tout élément de ∆ est atteint. Soit N = ker(φ), x ∈ ∆. La longueur de l'orbite x^N est

|x^N| = [N : Nx] = [N : N ∩ Gx] = [NGx : Gx] = [G : Gx]/[G : NGx] = |∆|/[G^φ : (Gx)^φ] = |∆|/[Altk : (Gx)^φ].

Or, tout sous-groupe propre de Altk est d'indice ≥ k. Donc |x^N| ≤ |∆|/k.
4.2. Le cas de grande symétrie
[Extrait de l'organigramme : G primitif ? — oui → k = 1 ? — oui → cas trivial ; sinon → symétrie > 1/2 ? — oui → pullback.]
Considérons le cas de G primitif. Nous pouvons supposer que G est isomorphe en tant que groupe de permutation à Alt_m^(k), puisque nous avons déjà éliminé les autres cas au §3 (peut-être en passant à un groupe non primitif M ; le cas non primitif sera traité au §6). Comme nous l'avons vu au §2.8, nous pouvons construire une bijection ι entre Ω et l'ensemble Sk(Γ) des sous-ensembles avec k éléments d'un ensemble Γ. Cette bijection induit un isomorphisme φ : G → Alt(Γ).
Si k = 1, alors Ω est en bijection avec Γ, et G ∼ Altn = Altm . Nous sommes donc
dans le cas trivial : le groupe AutG (x) consiste en les éléments de Altn qui permutent
les lettres de x de la même couleur, et IsoG (x, y) est non vide ssi x et y ont exactement
le même nombre de lettres de chaque couleur – où, si aucune lettre n’est répétée ni en x
ni en y, nous ajoutons la condition que la permutation de {1, . . . , n} qui induit x 7→ y
soit dans Altn .
Alors, soit G primitif, k > 1.
Deux éléments γ1 , γ2 ∈ Γ sont des jumeaux par rapport à un objet si la transposition
(γ1 γ2 ) le laisse invariant. Il est clair que les jumeaux forment des classes d’équivalence,
et que, pour toute telle classe d’équivalence C, tout Sym(C) laisse l’objet invariant.
Notre objet sera la chaîne x (ou y) : γ1, γ2 sont des jumeaux par rapport à x si, pour tout i ∈ Ω, x(i) = x(i^{φ^{-1}(τ)}), où τ = (γ1 γ2).
Nous pouvons donc déterminer facilement (et en temps polynomial) les classes
d’équivalence en Γ (dites classes de jumeaux), et vérifier s’il y a une classe d’équivalence
C de taille > |Γ|/2. Examinons cette possibilité puisque nous devrons l’exclure après.
La classe C de taille > |Γ|/2 est évidemment unique et donc canonique. Si x a une
telle classe et y ne l’a pas, ou si les deux ont de telles classes, mais de tailles différentes,
alors x et y ne sont pas isomorphes.
Si x, y ont des classes de jumeaux Cx, Cy de la même taille > |Γ|/2, nous choisissons σ ∈ Alt(Γ) tel que Cx = (Cy)^σ. (Nous supposons m > 1.) En remplaçant y par y^{σ′}, où σ′ = φ^{-1}(σ^{-1}), nous réduisons notre problème au cas Cx = Cy. (Voilà l'exemple le plus simple de ce que Babai appelle aligner ; nous avons aligné Cx et Cy.)
Alors, soit C = Cx = Cy. La partition {C, Γ \ C} de Γ induit une partition {Ωj}_{0≤j≤k} de Ω : ω ∈ Ωj ssi ι(ω) contient k − j éléments de C et j éléments de Γ \ C. Il est aisé de montrer que α^{k−j}(1 − α)^j \binom{k}{j} < 1/2 pour α ∈ (1/2, 1], 1 ≤ j ≤ k ; donc, |Ωj| < n/2 pour 1 ≤ j ≤ k.
Nous avons réduit notre problème à celui de déterminer IsoH(x, y), où H = φ^{-1}(Alt(Γ)_C). Ici le besoin de prendre un stabilisateur d'ensemble (à savoir, Alt(Γ)_C) ne pose aucun souci : nous engendrons H en prenant des préimages φ^{-1}(h1), . . . , φ^{-1}(h5), où h1, h2 sont deux générateurs de Alt(C) < Alt(Γ), h3, h4 deux générateurs de Alt(Γ \ C) < Alt(Γ), et h5 ∈ Alt(Γ) un élément de la forme (γ1 γ2)(γ3 γ4), avec γ1, γ2 ∈ C, γ3, γ4 ∈ Γ \ C. (Si |Γ| < 8, le nombre de générateurs est moindre, et la discussion se simplifie.) Notre problème se réduit à celui de déterminer IsoH′(x, y′) pour y′ = y et y′ = y^{φ^{-1}(h5)}, où H′ = φ^{-1}(Alt(C) × Alt(Γ \ C)) = φ^{-1}(⟨h1, . . . , h4⟩).
Comme C est une classe de jumeaux pour x, tout élément de φ^{-1}(Alt(C)) laisse x invariant. Si x|Ω0 ≠ y|Ω0, alors IsoH′(x, y) = ∅.
Soit alors x|Ω0 = y|Ω0 . Nous avons réduit notre problème à celui de déterminer
IsoH ′ |Ω′ (x|Ω′ , y|Ω′ ), où Ω′ = Ω \ Ω0 . Rappelons que H ′ |Ω′ agit sur Ω′ avec des orbites
de longueur |Ωi | < n/2. Nous procédons donc comme dans le cas non transitif de la
méthode de Luks (preuve du Thm. 2.4).
5. DES CHAÎNES AUX SCHÉMAS DE JOHNSON
[Extrait de l'organigramme : x, y → relations k-aires sur Γ → Weisfeiler-Leman k-aire → Lemme des designs → une couleur domine ? — oui → coupe ou Johnson ? ; non → récursion n′ ≤ n/2.]
Discutons maintenant le cas de G primitif et, plus précisément, G isomorphe à Alt_m^(k), k ≥ 2. Maintenant nous pouvons supposer que nos chaînes x, y n'ont pas de classes de jumeaux de taille > m/2. Les outils principaux que nous développerons (Lemme des
jumeaux de taille > m/2. Les outils principaux que nous développerons (Lemme des
designs, coupe-ou-Johnson) nous seront utiles, voire essentiels, aussi dans le cas de G
imprimitif.
Nous avons une bijection entre les éléments de Ω et {S ⊂ Γ : |S| = k}. Pour
x : Ω → Σ donné, nous avons donc une structure relationnelle X = (Γ, (Ri )i∈Σ ) k-aire
sur Γ : (x1 , . . . , xk ) ∈ Ri si x1 , . . . , xk sont tous différents et x(ω) = i, où ω est l’élément
de Ω qui correspond à {x1 , . . . , xk }.
Nous appliquons à X le foncteur F1 (§2.3), qui fait d'elle une structure de partition, puis le foncteur F2 (encore §2.3), qui nous donne une configuration k-aire, et, finalement, le foncteur F3 défini par Weisfeiler-Leman k-aire (§2.5). Nous obtenons ainsi un raffinement F3(F2(F1(X))) = (Γ, cx : Γ^k → C) qui est une configuration cohérente k-aire.
Comme F1, F2, F3 sont des foncteurs, l'assignation de cx à x est canonique. Elle nous sera donc utile : si cx et cy ne sont pas isomorphes sous l'action de Altm, alors x et y ne sont pas isomorphes sous l'action de Alt_m^(k) non plus.
Nous obtiendrons une configuration cohérente classique de façon canonique à partir
de cx (Lemme des designs). Soit cette nouvelle configuration sera non triviale, soit nous
obtiendrons un coloriage canonique sans couleur dominante, ce qui nous permettra
immédiatement de réduire le problème à un certain nombre de problèmes pour des
chaı̂nes plus courtes, comme dans l’algorithme de Luks.
Supposons, alors, que nous disposons d'une configuration cohérente classique non triviale assignée de façon canonique à x. La procédure Coupe-ou-Johnson nous donnera l'un ou l'autre de ces deux résultats : soit un découpage canonique de Γ, soit un schéma de Johnson plongé de façon canonique dans Γ. Dans un cas comme dans l'autre, avoir une telle structure canonique limite fortement l'ensemble d'isomorphismes et automorphismes possibles. Nous pourrons réduire G à un sous-groupe ∼ Altm′, avec m′ ≤ m/2 dans le cas du découpage, ou m′ ≪ √m dans le cas de Johnson. Déjà m′ ≤ m/2 est suffisant pour une récursion réussie.
5.1. Lemme des designs
Étant donnés une configuration X = (Γ, c : Γk → C ) et un paramètre 1/2 ≤ α < 1,
une couleur i est dite α-dominante si c(γ, . . . , γ) = i pour ≥ α|Γ| valeurs de γ ∈ Γ. La
classe de couleurs {γ ∈ Γ : c(γ, . . . , γ) = i} est, elle aussi, dite dominante. Par contre,
si, pour toute couleur i, la classe {γ ∈ Γ : c(γ, . . . , γ) = i} est de taille < α|Γ|, le
coloriage est dit un α-coloriage.
Comme avant, deux éléments γ1 , γ2 ∈ Γ sont des jumeaux par rapport à une structure X (ici, une configuration cohérente sur Γ) si (γ1 γ2 ) ∈ Aut(X).
Proposition 5.1 (Lemme des designs). — Soit X = (Γ, c : Γ^k → C) une configuration cohérente k-aire, où 2 ≤ k ≤ |Γ|/2. Soit 1/2 ≤ α < 1. Supposons qu'il n'y a aucune classe de jumeaux dans Γ avec > α|Γ| éléments.
Alors, au moins une des options suivantes est vraie :
(a) il existe x1, . . . , xℓ ∈ Γ, 0 ≤ ℓ < k, tels que X_~x^(1) n'a pas de couleur α-dominante ;
(b) il existe x1, . . . , xℓ ∈ Γ, 0 ≤ ℓ < k − 1, tels que X_~x^(1) a une couleur α-dominante C et (X_~x)^(2)[C] n'est pas une clique.
La notation a été définie dans les sections 2.3 – 2.4. En particulier, le 1-squelette X_~x^(1) est tout simplement un coloriage de Γ.
Lemme 5.2 (Lemme de la grande clique). — Soit X = (Γ, c) une configuration cohérente
classique. Soit C ⊂ Γ une classe de couleurs avec |C| ≥ |Γ|/2. Si X[C] est une clique,
alors C est une classe de jumeaux.
Preuve — Supposons que C n’est pas une classe de jumeaux. Il y a donc un x ∈ Γ
et une couleur (disons, azur) telle que c(x, y) est de cette couleur pour au moins un
y ∈ C mais pas pour tous. Comme X[C] est une clique, x ∈
/ C. Appelons la couleur de
C carmin, et celle de x bronze. Soit B ⊂ Γ l’ensemble des éléments de couleur bronze.
Il s’agit de construire un block design équilibré (§2.6) qui contredise l’inégalité de
Fisher (Prop. 2.19). Définissons Ab = {y ∈ Γ : c(by) = azur} pour b ∈ B. Comme
x ∈ B et c(xy) = azur pour au moins un y ∈ C, et c(xy) connaı̂t la couleur de y, tous
les éléments de Ab sont carmin.
Par la cohérence de X et la définition des nombres d’intersection (Def. 2.8),
|Ab| = γ(azur, azur^{-1}, bronze),
et donc |Ab| ne dépend pas de b. Comme nous l'avons dit au début, 1 ≤ |Ax| < |C| ; donc, 1 ≤ |Ab| < |C| pour tout b ∈ B.
Montrez de façon similaire que, pour v ∈ C, la taille de {b ∈ B : v ∈ Ab} = {b ∈ B : c(b, v) = azur} ne dépend pas de v. Comme X[C] est une clique, c(v, v′) est de la même couleur pour tous v, v′ ∈ C, v ≠ v′ ; appelons cette couleur doré. Montrez que
|{b ∈ B : v, v′ ∈ Ab}| = γ(azur, azur^{-1}, doré).
Alors (C, {Ab}b∈B) est un block design équilibré (peut-être dégénéré) incomplet.
En conséquence, par l’inégalité de Fisher, |B| ≥ |C|. Or, nous savons que |C| > |Γ|/2,
B, C ⊂ Γ et B ∩ C = ∅. Contradiction.
Preuve du Lemme des designs (Prop. 5.1) — Supposons que, pour chaque ~x ∈ Γ^ℓ, 0 ≤ ℓ < k, X_~x^(1) a une couleur α-dominante C(~x), et, en plus, si ℓ < k − 1, (X_~x)^(2)[C(~x)] est une clique. Nous arriverons à une contradiction.
Soit C = C(vide). Comme |C| > α|Γ|, C est trop grande pour être un ensemble de jumeaux. Donc il existe u, v ∈ C, u ≠ v, tels que τ = (uv) ∉ Aut(X). Soit ~y de longueur minimale r entre les chaînes satisfaisant c(~y) ≠ c(~y^τ). Par cette minimalité et les règles dans la définition 2.5, y1, . . . , yr sont tous distincts. En les permutant, nous pouvons assurer que u, v ∉ {y1, y2, . . . , yr−2}, et, sans perte de généralité, que soit (i) yr−1 ≠ u, v, yr = u, soit (ii) yr−1 = u, yr = v. Dans le cas (i), nous choisissons ~x = (y1, . . . , yr−1), ℓ = r − 1, et voyons que c~x(u) ≠ c~x(v) ; dans le cas (ii), nous choisissons ~x = (y1, . . . , yr−2), ℓ = r − 2, et obtenons c~x(u, v) ≠ c~x(v, u). Nous aurons donc une contradiction avec notre supposition une fois que nous aurons prouvé que u, v ∈ C(~x).
Le fait que u, v ∈ C(~x) s'ensuivra immédiatement de l'égalité C(~x) = C \ {x1, . . . , xℓ} ; cette égalité, à son tour, se déduit par itération du fait que, pour ~y de longueur ≤ k − 2 et ~x = ~y z, z ∈ Γ,
(6)    C(~x) = C(~y) \ {z}.
Pourquoi (6) est-il vrai ? Nous sommes en train de supposer que X_~y^(2)[C(~y)] est une clique, et que |C(~y)| > α|Γ| ≥ |Γ|/2. Donc, par le lemme de la grande clique, tous les éléments de C(~y) sont des jumeaux en X_~y^(2). En particulier, pour u ∈ C(~y) \ {z}, c~x(u) = c~y(z, u) ne dépend pas de u. Puisque le coloriage de sommets c~x est un raffinement de c~y (par la deuxième règle de la définition 2.5), il s'ensuit que, soit C(~x) = C(~y) \ {z}, soit C(~x) ⊂ Γ \ C(~y), soit C(~x) = {z}. Comme |C(~x)|, |C(~y)| > α|Γ| ≥ |Γ|/2, les deux dernières possibilités sont exclues.
Nous appliquons le Lemme des designs (avec α = 1/2) à la configuration cohérente
k-aire X′ = F3 (F2 (F1 (X))), où X est donnée par x de la façon décrite au début de la
section. Nous parcourons tous les tuples possibles ~x = (x1 , . . . , xℓ ) ∈ Γℓ , 0 ≤ ℓ < k,
jusqu’à trouver un tuple pour lequel la première ou la deuxième conclusion du Lemme
des designs est vraie.
Si la première conclusion est vraie, nous définissons cX = X_~x^(1) et sautons à la section 5.3.1. Si la deuxième conclusion est vraie, nous passons au §5.2, ayant défini X′′ = X_~x^(2)[C], où C est la classe de couleur α-dominante de X_~x^(1).
5.2. Coupe ou Johnson
Nous avons une configuration classique cohérente homogène non triviale X′′ = (Γ, c).
(Nous rappelons que ceci est un coloriage c du graphe complet sur Γ tel que (a) les sommets ont leur couleur propre (« couleur diagonale »), (b) les arêtes (x, y), x 6= y, ne sont
pas toutes de la même couleur, (c) la couleur c(x, y) de l’arête (x, y) détermine c(y, x),
et (d) l’axiome de cohérence (2.8) se vérifie.) Nous voudrions trouver des structures qui
dépendent canoniquement de X′′ et qui contraignent son groupe d’automorphismes.
Il est raisonnable de s’attendre à ce que de telles structures existent : par le Théorème
3.1, si le groupe d’automorphismes est transitif, soit il est imprimitif (et donc il laisse
une partition invariante), soit il est près d’être Alt(k)
m , k ≥ 2, (qui laisse invariant un
schéma de Johnson), soit il est petit (et donc le stabilisateur de quelques points aura
des orbites petites, et ainsi nous donnera un coloriage sans couleur dominante). Le défi
est de trouver de telles structures, et de le faire canoniquement.
Si X′′ n’est pas primitif (Déf. 2.14), la tâche est plutôt facile : soit r la couleur non
diagonale la plus rouge telle que le graphe Gr = {(x, y) : x, y ∈ Γ, c(x, y) = r} est
connexe ; par l’exercice 2.16, ceci donne une partition de Γ dans des ensembles de la
même taille ≤ |Γ|/2.
Théorème 5.3 (Coupe ou Johnson). — Soit X = (Γ, c) une configuration classique
cohérente uniprimitive. Soit 2/3 ≤ α < 1. En temps |Γ|O(1) , nous pouvons trouver
— soit un α-découpage de Γ,
— soit un schéma de Johnson plongé sur Γ0 ⊂ Γ, |Γ0 | ≥ α|Γ|,
et un sous-groupe H < Sym(Γ) avec
[Sym(Γ) : H] = |Γ|O(log |Γ|)
tel que le découpage, voire le schéma, est canonique en relation à H.
Le groupe H sera défini comme un stabilisateur de points en Γ. La valeur 2/3 dans
l’énoncé est assez arbitraire ; toute valeur > 1/2 serait valable. Une valeur proche à 1/2
affecterait les constantes implicites.
Preuve — Choisissons un x ∈ Γ arbitraire. Donnons à chaque y ∈ Γ la couleur de
c(x, y). Ce coloriage est canonique en relation à Gx . S’il n’y a aucune classe de couleur
C de taille > α|Γ|, la partition triviale (non-partition) de chaque classe nous donne un
α-découpage de Γ, et nous avons fini.
Supposons, par contre, qu’il y ait une classe de couleur – disons, Clin – de taille > α|Γ|.
Comme αn > n/2, la relation Rlin de cette couleur est non orientée (c(y, z) = lin ssi
c(z, y) = lin). Le complément de Rlin (ou de toute autre relation) est de diamètre 2
(exercice). Soient x, z ∈ Γ tels que c(x, z) = lin, et soit y ∈ Γ tel que c(x, y), c(z, y) 6= lin.
Appelons c(x, y) bis et c(z, y) terre.
Considérons le graphe biparti (V1 , V2 ; A) avec sommets V1 = Clin, V2 = Cbis et arêtes
Rterre ∩ (V1 × V2 ). Le graphe est non vide par définition et semirégulier par l’exercice
2.17b. Par homogénéité et cohérence, le nombre de y tels que c(y, w) est d’une couleur
donnée c0 est indépendant de w. Donc, il est toujours ≤ (1 − α)n < n/2 pour c0 6= lin.
Appliquant ceci à c0 = terre et V2 , nous voyons que le degré |{v1 ∈ V1 : (v1 , v2 ) ∈ A}|
est < n/2, et donc, comme |V1 | > n/2, le graphe n’est pas complet.
Nous appliquons donc la Proposition 5.7 à (V1 , V2 ; A) avec β = α|Γ|/|V1|. Notons que
|V2 | ≤ β|V1 |.
Nous travaillerons donc avec un graphe biparti (V1 , V2 ; A). La stratégie sera d’essayer,
soit de rendre V2 plus petit (par au moins un facteur constant), soit de trouver des
structures en lui. Soit ces structures nous permettront de réduire V2 quand même, soit
elles nous aideront à découper V1 , ou à trouver un schéma de Johnson assez grand
sur V1 .
Tout d’abord, nous devrons borner la symétrie en V1 , c’est-à-dire réduire, voire
éliminer les jumeaux. Il y a deux raisons à ceci.
— Même si nous découvrions une structure assez riche en V2 , cela impliquerait peu
ou rien sur V1 si beaucoup d’éléments de V1 se connectent à V2 de la même façon.
— Si V2 est petit, nous colorierons chaque sommet de V1 par son ensemble de voisins
en V2 . Ceci nous donnera un coloriage canonique en relation à G(V2 ) . Or, dans ce
coloriage, deux sommets en V1 auront la même couleur ssi ils sont des jumeaux ;
donc, si aucune classe de jumeaux en V1 n’a > α|V1 | éléments, nous aurons un
α-coloriage.
Exercice 5.4. — Soit (V1 , V2 ; A) un graphe biparti semirégulier et non trivial. Alors,
aucune classe de jumeaux en V1 n’a plus de |V1 |/2 éléments.
Solution — Nous assurons que |A| ≤ |V1 ||V2 |/2 en prenant le complément s’il est
nécessaire. Soient d2 le degré des sommets en V2 et S une classe de jumeaux en V1 .
Montrez que d2 ≥ |S|, et donc |A| ≥ |S||V2|.
Exercice 5.5. — Soit (V1, V2; A) un graphe biparti sans jumeaux en V1. Soient V2 = C1 ∪ C2, C1 ∩ C2 = ∅. Montrez que, pour au moins un i = 1, 2, il n'y a aucune classe de ≥ |V1|/2 + 1 jumeaux en V1 dans le graphe (V1, Ci; A ∩ (V1 × Ci)).
Exercice 5.6. — Soit X = (Γ, c) une configuration cohérente. Soient C1, C2 deux classes de couleurs en Γ. Soit brun une couleur d'arêtes en C1 × C2. Alors, pour x, y ∈ C1, la couleur c(x, y) détermine si x et y sont des jumeaux dans le graphe biparti (C1, C2; Gbrun).
Proposition 5.7 (Coupe ou Johnson biparti, ou « Una partita a poker »)
Soit X = (V1 , V2 ; A) un graphe biparti avec |V2 | < β|V1 |, où 2/3 ≤ β < 1, et tel
qu’aucune classe de jumeaux en V1 n’ait plus de 2|V1|/3 éléments. Alors, nous pouvons
trouver, en temps |V1 |O(1) ,
— soit un β-découpage de V1 ,
— soit un schéma de Johnson plongé sur V0 ⊂ V1 , |V0 | ≥ β|V1|,
et un sous-groupe H < G, G = Sym(V1 ) × Sym(V2 ), avec
[G : H] = |V1 |O(log |V1 |)
tel que le découpage, voire le schéma, est canonique en relation à H.
La condition sur les classes de jumeaux ici était remplie (même avec 1/2 à la place
de 2/3) à la fin de la preuve du Thm. 5.3, grâce à l’exercice 5.4.
En ce qui concerne le temps de la procédure, nous expliciterons quelques détails qui
pourraient ne pas être évidents. Ce qui sera le détail le plus délicat est l’indice [G : H].
Le groupe H sera défini comme un stabilisateur de points ; nous devons bien contrôler
le nombre de points que nous stabilisons.
Esquissons la stratégie générale de la preuve. Ce que nous voulons est une réduction
à la Proposition 5.8, “Coupe-ou-Johnson cohérent”. Nous pouvons produire une configuration cohérente classique sur V1 ∪V2 à partir du graphe X, tout simplement en utilisant
Weisfeiler-Leman. Ce qui demande de la ruse est de garantir que la restriction X[C2 ] à
la classe de couleurs dominante (s’il y a une) soit non triviale.
Pour obtenir une configuration non-triviale sur C2 , nous noterons que le graphe X
induit lui-même une relation d-aire sur C2 , où d est au plus le degré de la majorité
d’éléments de V1 (si telle chose existe ; sinon, les degrés nous donnent une partition de
V1 ). Si la relation est triviale, dans le sens de contenir toutes les d-tuples d’éléments
distincts dans C2 , nous obtenons un schéma de Johnson. Si elle est non triviale mais
contient beaucoup de jumeaux, elle nous donne une manière de descendre à un C2
plus petit. S’il n’y a pas beaucoup de jumeaux, nous utilisons le Lemme des Designs
(supplémenté par un lemme standard sur les designs) pour obtenir une configuration
cohérente classique non-triviale sur C2 , ce qui était à trouver.
Preuve — Si |V1 | ≤ c, où c est une constante, nous colorions chaque v ∈ V1 par
lui-même. Ce coloriage est canonique en relation à H = {e} ; autrement dit, il n’est
pas canonique du tout. Peu importe : trivialement, |G| ≤ (c!)2 ≤ |V1 |O(log |V1 |) . Nous
pouvons donc supposer que |V1 | > c.
Si |V2| ≤ (6 log |V1|)^{3/2} (disons), alors, par la discussion ci-dessus, nous obtenons un (2/3)-coloriage de V1 (et donc : un (2/3)-découpage de V1). Ce coloriage est canonique en relation à un H d'indice

|V2|! ≤ |V2|^{|V2|} ≤ ((6 log |V1|)^{3/2})^{(6 log |V1|)^{3/2}} ≪ |V1|^{log |V1|}.

Nous pouvons donc supposer que |V2| > (6 log |V1|)^{3/2}.
Notre première tâche est d’éliminer les jumeaux. Nous divisons V1 dans ses classes
de jumeaux et colorions chaque v ∈ V1 par son nombre de jumeaux et par son degré
dans le graphe (V1 , V2 ; A). Nous obtenons un β-découpage de V1 , sauf s’il y a un entier
d1 tel que l’ensemble V1′ des sommets v sans jumeaux et de degré d1 est de taille
|V1′ | > β|V1|. Supposons dorénavant que cela est le cas. Comme |V1′ | > |V2 | et qu’il n’y
a pas de jumeaux en V1′ , nous voyons que 1 < d1 < |V2 | − 1 ; nous pouvons supposer
que d1 ≤ |V2 |/2 en remplaçant A par son complément, si nécessaire.
Soit H = (V2 , A ) l’hypergraphe dont les arêtes sont les voisinages en (V1 , V2 ; A) des
sommets dans V1′ . (Elles sont toutes contenues en V2 .) L’hypergraphe est d1 -uniforme.
Comme il n’y a pas de jumeaux dans V1′ , il n’y a pas d’arêtes identiques. Si H est l’hypergraphe complet d1 -uniforme, alors V1′ peut être identifié avec le schéma de Johnson
Sd1 (V2 ). (Scoppia in un pianto angoscioso e abbraccia la testa di Johnson.)
Supposons alors que H n’est pas complet. Nous voudrions avoir un coloriage canonique sur V2d pour un d ≪ l, l = (log |V1′ |)/ log |V2 |, tel que les éléments de V2 ne soient
pas tous jumeaux. Si d1 ≤ 6⌈l⌉, nous définissons d = d1 et colorions {(v1 , . . . , vd ) ∈ V2d :
{v1 , . . . , vd } ∈ H } en écarlate, et tout le reste en gris.
Supposons, par contre, que d1 > 6⌈l⌉. Soit d = 6⌈l⌉. Nous colorions ~v = (v1, . . . , vd) en gris si les vi ne sont pas tous distincts ; dans le cas contraire, nous donnons à ~v la couleur
(7)    |{H ∈ H : {v1, . . . , vd} ⊂ H}|.
Cette opération de coloriage peut être faite en temps de l'ordre de

|V1| · \binom{d1}{d} ≤ |V1| · |V2|^d = |V1| · |V2|^{6 log |V1′| / log |V2|} = |V1| · |V1′|^{O(1)} = |V1|^{O(1)}.
Si les tuples avec v1, . . . , vd distincts avaient tous la même couleur λ, nous aurions un design d-(|V2|, d1, λ) avec |V1′| arêtes. Donc, par la Proposition 2.20, |V1′| ≥ \binom{|V2|}{s} pour s = 3⌈l⌉. Comme |V2| ≥ (6 log |V1′|)^{3/2} et |V1′| peut être supposé plus grand qu'une constante,

\binom{|V2|}{s} ≥ (|V2|/s)^s > (|V2|/(6l))^{3l} > (|V2|^{1/3})^{3l} = |V2|^l = |V2|^{log |V1′|/log |V2|} = |V1′|,

ce qui donne une contradiction.
Donc, pour d1 arbitraire, les tuples avec v1 , . . . , vd distincts n’ont pas tous la même
couleur ; en d’autres termes, les éléments de V2 ne sont pas tous jumeaux en relation
à notre nouvelle structure d-aire. S’il y a une classe S de jumeaux de taille > |V2 |/2,
alors, par l’exercice 5.5, au moins un des deux graphes (V1′ , S; A ∩ (V1′ × S)), (V1′ , V2 \
S; A ∩ (V1′ × (V2 \ S))) n’a aucune classe de > |V1′ |/2 + 1 jumeaux dans V1 . Comme
2|V1′ |/3 ≥ |V1′ |/2 + 1 pour |V1 | ≥ 8, nous appliquons la Proposition 5.7 elle-même à un
de ces deux graphes (disons, celui sur V1′ × S si les deux sont valables), et terminons.
(Peut-être que la taille de V2 est descendue seulement à |V2 | − 1, mais tous nos choix
ont été canoniques – des non-choix, si l’on veut – donc gratuits. Nous n’avons perdu
que du temps ; pour être précis, |V1|O(1) de temps, ce qui est acceptable.)
Alors, nous avons un coloriage de V2d en relation auquel il n’y a aucune classe de
jumeaux en V2 de taille > |V2 |/2. Nous appliquons les foncteurs F1 , F2 , F3 (WeisfeilerLeman) à ce coloriage. Puis nous utilisons le Lemme des designs (Prop. 5.1) avec α =
2/3. Nous trouvons les éléments x1 , . . . , xℓ ∈ V2 (ℓ = d − 1 ou ℓ = d − 2) dans l’énoncé
de la Proposition 5.1 par force brute, en temps proportionnel à |V2 |d = |V1 |O(1) . Nous
les fixons, et nous imposons que H fixe x1 , . . . , xℓ , ce qui a un coût de |V1 |O(1) , dans le
sens où
[G : Gx1 ,...,xℓ ] ≤ |V2 |d = |V1 |O(1) .
Si nous sommes dans le premier cas du Lemme de designs (pas de couleur dominante),
nous cueillons les classes de couleur, en commençant par la plus rouge (interprétez
la quantité en (7) comme une longueur d’onde), jusqu’à avoir une union des classes
S ⊂ V2 avec |V2 |/3 < |S| ≤ 2|V2 |/3. (Ceci marche s’il n’y a aucune classe de taille
> |V2 |/3 ; si telles classes existent, nous définissons S comme la classe la plus grande de
ce type.) Nous appliquons l’exercice 5.5, et obtenons un graphe (V1′ , V2′ , A ∩ (V1′ ∩ V2′ ))
remplissant les conditions de notre Proposition 5.7 avec V2′ = S ou V2′ = V2 \ S, et
donc |V2′ | ≤ α|V2|. Donc, nous appliquons la Proposition 5.7 à ce graphe ; la récursion
marche. (Il est important ici que |V2′ | ≤ α|V2|, puisque nous avons déjà encouru un coût
considérable (|V1 |O(1) ) dans l’indice.)
Restons donc dans le deuxième cas du Lemme des designs : nous avons un coloriage
de V2 avec une classe de couleurs C ⊂ V2 telle que |C| ≥ 2|V2|/3, et une configuration
cohérente homogène classique Y non triviale sur C.
Nous définissons un graphe avec des sommets V1′ ∪ V2 , où V1′ est de couleur nacrée et
V2 vient d’être colorié par le Lemme des Designs ; les arêtes seront non pas seulement
celles en A ∩ (V1′ × V2 ) (coloriées en noir) mais aussi les arêtes entre les éléments de
V2 , dans les couleurs données par Y. Nous appliquons les raffinements F1 , F2 et F3
(Weisfeiler-Leman) à ce graphe, et obtenons une configuration cohérente X.
La configuration X[V2] est un raffinement de Y. Si elle n'a pas de couleur α-dominante, nous réduisons notre problème à celui pour (V1′, V2′; A ∩ (V1′ × V2′)), |V2′| ≤ α|V2|, comme avant ; nous pouvons appliquer la proposition 5.7 à un tel graphe sans changer β parce que

|V2′| ≤ (2/3)|V2| ≤ β|V2| < β|V1′|.
La récursion marche ici aussi parce que |V2′ | ≤ 2|V2 |/3 : il est important que V2′ soit
plus petit que V2 par un facteur constant, puisque le coût entraı̂né jusqu’à maintenant
dans l’index [G : H] est déjà considérable (|V1|O(1) ).
Supposons donc qu’il y a une classe de couleurs (2/3)-dominantes C2 dans X[V2 ]. Elle
doit être un sous-ensemble de C car 2/3 + 2/3 > 1. La restriction de X[C2 ] n’est pas une
clique : si elle l’était, la restriction de Y à C2 l’aurait été aussi, et cela est impossible
par l’exercice 2.15.
Nous pouvons supposer qu’il existe une classe de couleurs C1 ⊂ V1′ en X1 qui satisfait
|C1 | > β|V1 | ; sinon, nous avons un β-coloriage de V1 , et pouvons finir. Le fait que
|C1 | > β|V1| implique que |C1 | > |V2 | ≥ |C2 |.
Nous pouvons supposer aussi que les arêtes de X en C1 × C2 ne sont pas toutes de
la même couleur. Si elles l’étaient, il y aurait une classe de ≥ |C1 | > β|V1| > |V1 |/2 + 1
jumeaux en V1 dans le graphe (V1 , C2; A∩(V1 ×C2 )), dont X[V1 ×C2 ] est un raffinement.
Dans ce cas, par l’exercice 5.5, nous aurions une réduction à (V1 , V2 \ C2 ; A ∩ (V1 × (V2 \
C2 ))), et nous pourrions finir en utilisant la Proposition 5.7 de façon récursive.
Ainsi, nous avons tout réduit à la Proposition 5.8 : nous l’appliquons à X[C1 ∪ C2 ].
Nous obtenons, soit un (1/2)-découpage de C1 , soit un graphe biparti (W1 , W2 ; A′ ),
W1 ⊂ C1 , W2 ⊂ C2 , avec |W1 | ≥ |C1 |/2, |W2 | ≤ |C2 |/2 ≤ |V2 |/2, tel qu’aucune
classe de jumeaux en W1 n’a plus que |W1 |/2 éléments. Nous pouvons supposer que
|W1 | > β|V1 |, parce que, dans le cas contraire, nous avons obtenu un β-découpage de
W1 . Alors, |W2 | < |W1 |/2. Nous pouvons, alors, faire de la récursion : nous appliquons
la Proposition 5.7 avec (W1 , W2 ; A′ ) à la place de (V1 , V2 ; A).
La récursion se finit après pas plus que O(log |V2 |) pas puisque |W2 | ≤ |V2 |/2. Si la
taille de W1 (ou de V1 ) décroı̂t en dessous de β|V1| (pour la valeur originale de |V1 |),
alors nous avons obtenu un β-découpage de V1 .
Comme nous l’avons vu, Coupe ou Johnson biparti utilise Coupe ou Johnson cohérent.
À son tour, Coupe ou Johnson cohérent se réduira à Coupe ou Johnson biparti pour un
graphe biparti (V1 , V2 ; A) avec V2 de taille au plus une moitié de la taille du V2 original.
Proposition 5.8 (Coupe ou Johnson cohérent). — Soit X = (C1 ∪ C2 ; c) une configuration cohérente avec des classes de couleurs de sommets C1 , C2 , où |C1 | > |C2 |.
Supposons que ni c|C1×C2 ni c|C2×C2 n'est une fonction constante.
Alors, nous pouvons trouver, en temps |C1 |O(1) , soit
— un (1/2)-découpage de C1 , ou
— un graphe biparti (V1 , V2 ; A), Vi ⊂ Ci , |V1 | ≥ |C1 |/2, |V2 | ≤ |C2 |/2, tel que toute
classe de jumeaux en V1 contient au plus |V1 |/2 éléments,
et un élément y ∈ C2 , tel que le découpage, voire le graphe biparti, est canonique en
relation à Gy , où G = Sym(C1 ) × Sym(C2 ).
Il va de soi que dire que c|C2 ×C2 est constant équivaut à dire que X[C2 ] est une clique.
Preuve — Si la restriction X[C1] était une clique, alors, par cohérence, pour toute couleur en C1 × C2 – pourpre, disons – les voisinages dans (C1, C2; Gpourpre) des sommets en C2 nous donneraient un block design équilibré (et peut-être dégénéré) sur C1. Le design est incomplet parce que c n'est pas monochrome sur C1 × C2. L'inégalité de Fisher nous donne que |C2| ≥ |C1|, en contradiction avec nos suppositions. Donc, X[C1] n'est pas une clique.
Si X[C1 ] n’est pas primitive, la plus rouge de ses relations non connexes nous donne
un (1/2)-découpage canonique de V1 , par l’exercice 2.16. Nous pouvons donc supposer
que X[C1 ] est primitif.
Nous avons deux cas à considérer : X[C2 ] primitive et X[C2] imprimitive.
Supposons d’abord que X[C2 ] est imprimitive. La relation non connexe la plus rouge
dans X[C2 ] nous donne une partition de C2 dans des ensembles B1 , . . . , Bm , m ≥ 2, tous
de la même taille ≥ 2. Nous avons donc trouvé une structure en C2 , et nous l’utiliserons,
soit pour découper C1 , soit pour réduire |C2| par un facteur constant. Le premier pas
consiste à montrer qu’il n’y a pas de jumeaux dans C1 .
Comme notre configuration est cohérente, la couleur d’une arête en C1 sait si ses
sommets sont des jumeaux en relation à C2 (Ex. 5.6) ; donc, s’il y avait des jumeaux
dans C1 en relation à C2 , nous aurions, soit qu’une des couleurs d’arêtes en C1 donne
une relation non connexe – ce qui contredit le fait que X[C1 ] est uniprimitive – soit que
tous les éléments de C1 sont des jumeaux en relation en C2 . Dans ce dernier cas, par
l’exercice 5.4, c|C1 ×C2 serait monochrome, ce qui n’est pas le cas. En conclusion, il n’y
a pas de jumeaux dans C1 en relation à C2 .
Notre intention est d’appliquer l’exercice 2.18 pour obtenir un graphe biparti
contracté C1 × {1, 2, . . . , m} avec m ≤ |C2 |/2. Nous devons seulement faire attention à
ce que ce graphe ne soit pas trivial.
Soit dk le degré de tout w ∈ C2 dans le graphe biparti (C1 , C2; Gk ) pour une couleur k
donnée, où Gk consiste en les arêtes de couleur k. (Par l’ex. 2.17a, le degré dk ne
dépend pas de w.) Si dk ≤ |C1 |/2 pour tout k, nous fixons un w ∈ C2 (non canonique)
et obtenons un (1/2)-coloriage de C1 en assignant la couleur c(x, w) au sommet x ∈
C1 . Supposons donc qu’il y a une couleur – que nous appellerons violet – telle que
dviolet > |C1 |/2. S’il y a un 1 ≤ i ≤ m tel qu’il n’y a aucune classe de plus que |C1 |/2
jumeaux dans C1 en relation à Bi , nous fixons un élément y ∈ Bi d’un tel i (non
canoniquement), fixant ainsi cet i. De cette façon, nous obtenons une réduction au
graphe biparti (C1 , Bi ; Gviolet ∩ (C1 × Bi )).
Supposons que cela n’est pas le cas. Donc, pour chaque i, il existe une classe Ti ⊂ C1
de jumeaux en relation à Bi telle que |Ti | > |C1 |/2. Pour chaque w ∈ Bi , les arêtes
de w à tout v ∈ Ti sont de la même couleur ; alors, elles doivent être violettes. Soit
vert une couleur d’arêtes en C1 × C2 qui ne soit pas violet. Alors, le graphe X =
(C1 , {1, . . . , m}; D) dans l’exercice 2.18 n’est pas vide ; comme (vi , i) est violet pour
tout v ∈ Ti , X n’est pas complet non plus. Comme X est birégulier, il n’y a aucune
classe de jumeaux en C1 en relation à {1, . . . , m} avec > |C1 |/2 éléments (ex. 5.4). Nous
avons donc tout réduit à un graphe biparti X du type que nous désirions.
Considérons maintenant le cas de X[C2 ] primitive (16) . Fixons un y ∈ C2 arbitraire
(non canoniquement). Nous pouvons supposer qu’il y a une couleur – disons, violet
– telle que dviolet > |C1 |/2, puisque, sinon, les couleurs des arêtes qui connectent les
éléments de C1 avec y nous donneraient un (1/2)-coloriage de C1 . Écrivons V1 =
Lviolet (y) = {x ∈ C1 : c(x, y) = violet}. Donc |V1 | > |C1 |/2. Soit bleu une couleur
d’arêtes en X[C2 ] telle que le degré de Gbleu est (positif et) < |C2 |/2 ; une telle couleur existe parce que X[C2 ] n’est pas une clique. (S’il y a plusieurs couleurs comme
cela, nous choisissons la plus bleue d’entre elles.) Alors, V2 = Lbleu (y) ⊂ C2 satisfait
1 ≤ |V2 | < |C2 |/2.
Le graphe biparti (V1 , V2 ; Gviolet ∩ (V1 × V2 )) est semirégulier par l'exercice 2.17b. Il est
non vide parce que, pour tout u ∈ V2 , |Lviolet (u)| > |C1 |/2, et donc Lviolet (u) ∩ V1 6= ∅.
S’il était complet, nous aurions V1 ⊂ Lviolet (u) pour tout u ∈ V2 ; comme |V1 | =
|Lviolet (y)| = |Lviolet (u)|, ceci impliquerait que V1 = Lviolet (u). Or, cela voudrait dire
que y et u sont des jumeaux dans le graphe (C1 , C2 ; Gviolet ). Par le même argument
qu’avant (basé sur l’exercice 5.6), la primitivité de X[C2 ] et le fait que c|C1 ×C2 ne soit
pas monochrome impliquent qu’il n’y a pas de jumeaux dans C2 en relation au graphe
(C1 , C2 ; Gviolet ). Donc, (V1 , V2 ; Gviolet ∩ (V1 × V2 )) n’est pas complet. Par l’exercice 5.4,
nous obtenons qu’aucune classe de jumeaux dans (V1 , V2 ; Gviolet ∩ (V1 × V2 )) n’a plus de
|V2 |/2 éléments. Nous avons donc terminé.
16. Le problème dans la preuve originale de Babai était à ce point précis. Ce qui suit est un argument
alternatif proposé par lui (col rumore sordo di un galoppo) lorsque cet article était en train d’être édité.
Il est plus concis et élégant que l’argument d’origine, en plus d’être correct. Avant, la preuve faisait
deux fois (ou plus) recours à la proposition elle-même, ce qui faisait croı̂tre l’indice [G : H] de façon
catastrophique.
5.3. Récursion et réduction
[Diagramme récapitulatif : « une couleur domine ? » ; si non : récursion (n′ ≤ n/2) ou réduction de G/N à Alt_{m′} (m′ ≤ m/2) ; si oui : aligner, « G transitif ? », etc., puis réduction de G/N à Alt_{m′} (m′ ≪ √m).]
5.3.1. Le cas sans couleurs dominantes. — Nous sommes dans le cas dans lequel un
coloriage cX : Γ → C n’a pas de couleur dominante. Ici cX est l’image d’une structure
X sous un foncteur F qui commute avec l’action de H(x1 ,...,xℓ ) , où H = Alt(Γ), xi ∈ Γ.
Le fait que cX n’a pas de couleur dominante nous servira pour trouver ou écarter ses
isomorphismes possibles en H~x = H(x1 ,...,xℓ ) . Pour trouver ou écarter des isomorphismes
en tout H = Alt(Γ), nous n’avons qu’à travailler avec un ensemble de représentants
{σ1 , . . . , σs }, s ≤ |Γ|^ℓ = m^ℓ, des classes de H~x dans H, et à faire l'union de Iso_{H~x}(c_X, c_{Y_i}) pour Y_i = Y^{σ_i^{−1}} :
(8)    Iso_H(X, Y) = ⋃_{1≤i≤s} Iso_{H~x}(X, Y_i) σ_i ,    Iso_{H~x}(X, Y_i) ⊂ Iso_{H~x}(c_X, c_{Y_i}).
Ceci est similaire à l’équation (4), en §2.2. Le coût de la procédure est multiplié par
s ≤ mℓ .
Si le coloriage cX n’est pas une permutation (en H~x ) du coloriage cYi , alors
IsoH~x (cX , cYi ) = ∅. Supposons, par contre, qu’il y a au moins un τi ∈ H~x tel que
cX = cτYii . (Nous disons que τi aligne cX et cYi .) Il est trivial de trouver τi . Or
IsoH~x (X, Yi ) = IsoH~x (X, Yτi i ) τi−1 ⊂ AutH~x (cX )τi−1 .
Comme cX n’a pas de couleur dominante, ceci est assez contraignant, ce que nous
voulions.
Appliquons cette procédure générale au cas de G primitif que nous sommes en train
de discuter. Il y a une bijection ι : Ω → {S ⊂ Γ : |S| = k} ; donc, cX induit un coloriage
P
c′ : Ω → {(ki )i∈C : ki ≥ 0, i ki = k}. Nous sommes dans une situation similaire à celle
de la fin du §4.2, mais en mieux : il est facile de montrer que, comme aucune classe de
couleur de c possède plus de α|Γ| éléments, aucune classe de couleur de c′ possède plus
de α|Ω| éléments.
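À titre d'illustration (esquisse de notre cru, absente de [Ba] ; les noms et les données sont hypothétiques), le coloriage induit c′ peut se calculer ainsi :

from itertools import combinations
from collections import Counter

def coloriage_induit(c, k):
    """c : sommet de Gamma -> couleur ; renvoie c' : k-sous-ensemble -> effectifs des couleurs."""
    return {S: tuple(sorted(Counter(c[v] for v in S).items()))
            for S in combinations(sorted(c), k)}

c = {0: "rouge", 1: "rouge", 2: "bleu", 3: "vert", 4: "bleu"}
for S, effectifs in coloriage_induit(c, 2).items():
    print(S, effectifs)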
Nous procédons alors comme dans le cas intransitif de la preuve de Luks (Thm. 2.4),
ce qui réduit le problème à ≤ n problèmes d’isomorphisme de chaı̂nes pour des chaı̂nes
de longueur ≤ αn et de longueur totale ≤ n. Le dernier pas (lifting, « relèvement »)
consiste à trouver des éléments de G qui induisent τi . Étant donnée une bijection ι, ceci
est trivial.
5.3.2. Le cas du découpage. — Considérons maintenant un α-découpage (fin de §2.3)
d’un ensemble de sommets Γ. Ce découpage sera donné canoniquement, à savoir, en tant
que l’image d’une structure X sous un foncteur, tout comme le coloriage au §5.3.1. Nous
pouvons supposer que le découpage a une classe de couleurs C dominante (|C| > α|Γ|,
α > 1/2), puisque, dans le cas contraire, nous pouvons passer au §5.3.1.
Nous voulons savoir quels éléments de Alt(Γ) respectent le α-découpage ; ceci nous
aidera à contraindre les isomorphismes de X, tout comme en (8). Par la définition de
α-découpage, C est partitionné en ℓ ≥ 2 ensembles de la même taille ≥ 2. Les seules
permutations en Altm0 , m0 = |C|, qui sont permises sont celles qui respectent la partition.
Le groupe qui respecte la partition est isomorphe à Altm0 /ℓ .
Nous avons donc réduit notre problème à un problème avec m′ = m0 /ℓ ≤ m/2. Après
avoir résolu ce problème, nous travaillons – comme dans le §5.3.1 – sur les autres classes
de couleurs.
Étant donnés deux α-découpages, nous vérifions si les partitions des deux découpages
ont le même nombre d’ensembles de la même taille pour chaque couleur, puis nous
alignons les deux découpages, et procédons exactement comme pour le problème de
l’automorphisme.
5.3.3. Le cas du schéma de Johnson. — Soit donné un schéma de Johnson sur un
ensemble de sommets Γ, ou plutôt deux schémas de Johnson J (mi , ki ), 2 ≤ ki ≤ mi /2,
sur des ensembles de sommets Γ1 , Γ2 de la même taille. Nous avons vu au §2.8 comment
identifier Γi (là, Ω) explicitement avec les ensembles de taille ki d’un ensemble Λi (là, Γ)
de taille mi . Si k1 6= k2 et m1 6= m2 , nos structures ne sont pas isomorphes. Si k1 = k2 et
m1 = m2 , nous établissons une bijection entre Λ1 et Λ2 et alignons les deux structures.
Nous avons réduit notre problème à un problème avec m′ ≪ √m à la place de m.
La situation nous est donc même plus favorable que dans le cas du découpage. À
nouveau, nous laissons la comptabilité au lecteur.
***
Une petite confession : le cas de G primitif, que nous venons de finir de traiter,
pourrait être traité exactement comme le cas de G imprimitif, que nous examinerons
maintenant. La motivation du traitement séparé pour G primitif est pédagogique. Aucune peine n’est perdue, puisque toutes les techniques que nous avons étudiées nous
seront essentielles dans le cas imprimitif.
6. LE CAS IMPRIMITIF
Nous avons une application surjective explicite
φ : G → Alt(Γ),
où G < Sym(Ω) est un groupe de permutation, |Γ| = m, |Ω| = n. Nous pouvons
supposer que |Γ| ≥ C log n, C arbitraire. L’application φ se factorise comme suit
G → G/N → Alt(Γ),
où N est le stabilisateur d’un système de blocs, et G/N → Alt(Γ) est un isomorphisme.
Nous devons déterminer IsoG (x, y), où x, y sont des chaı̂nes. Nous avons déjà résolu
le cas N = {e}.
Nous attaquerons le problème de façon locale : pour T ⊂ Γ, nous arriverons à obtenir
un certificat, soit du fait que φ(AutGT (x))|T contient Alt(T ) (« certificat de plénitude »),
soit du contraire. (Ici GT désigne le groupe {g ∈ G : T φ(g) = T }.) Nous calculerons tous
ces certificats pour T d’une taille k modérée. Si le nombre de certificats de plénitude est
très grand, nous aurons prouvé que φ(AutG (x)) contient un grand groupe alternant ; ce
qui restera à faire sera une version de la procédure du §4.2 (« pull-back »).
Dans le cas contraire, les certificats formeront une structure k-aire dont la symétrie
est bornée. Nous pourrons donc appliquer le Lemme des designs, suivi de Coupe-ou-Johnson, comme avant. Il y a aussi quelques autres cas particuliers, mais ils nous
amènent à des α-découpages, α < 1, ce qui est aussi bien.
6.1. Les certificats locaux
6.1.1. Certificats d’automorphismes. — Un certificat local (17) pour T ⊂ Γ est
— soit une paire (« pas plein », W, M(T)), où W ⊂ Ω, M(T) < Sym(T), M(T) ≠ Alt(T) (donc « pas plein ») et φ(Aut^W_{G_T}(x))|_T < M(T),
— soit une paire (« plein », K(T )), où K(T ) < AutGT (x), et φ(K(T ))|T = Alt(T ).
Le certificat local dépend de x de façon canonique. Il est clair qu’un certificat plein,
voire pas plein, garantit que φ(AutGT (x))|T est Alt(T ), voire ne l’est pas.
Si T est donné en tant que tuple ordonné, son certificat dépend de l’ordre de T seulement dans le sens de ne pas en dépendre : le même groupe {(23), e} < Sym({1, 2, 3})
(disons) a une apparence différente si nous le regardons du point de vue de l’ordre
(1, 2, 3) ou de l’ordre (2, 1, 3).
Nous construisons le certificat par une procédure itérative. Au début de chaque pas,
W ⊂ Ω et A(W) est le groupe Aut^W_{G_T}(x) ; la fenêtre W sera invariante sous A(W). Au
tout début de la procédure, W = ∅ et A(W ) = GT . (Nous pouvons calculer GT comme
dans l’exercice 2.1c en temps |Ω|O(k) , où k = |T |.) À chaque pas, nous ajoutons à W
tous les éléments atteints par A(W ) (voir §4.1), puis nous mettons A(W ) à jour, selon
le nouveau W . Nous nous arrêtons si φ(A(W ))|T 6= Alt(T ) (non-plénitude) ou si W ne
croı̂t plus, ce qui veut dire qu’aucun élément de Ω \ W n’est atteint par A(W ).
Il est clair qu’il y aura ≤ |Ω| itérations. À la fin, dans le cas de non-plénitude,
nous retournons (« pas plein », W, φ(A(W ))) ; dans le cas de plénitude, nous retournons
(« plein », A(W)_{(Ω\W)}). Il est clair que le stabilisateur des points A(W)_{(Ω\W)} est contenu non pas seulement dans Aut^W_{G_T}(x), mais aussi dans Aut_{G_T}(x), puisqu'il fixe tous les points de Ω \ W. Nous savons que φ(A(W)_{(Ω\W)}) = Alt(T) par la Proposition 4.4a, sous la condition que |T| ≥ max(8, 2 + log_2 |Ω|).
17. Ou « local-global », dans la nomenclature de Babai. « Global » fait référence à Aut_{G_T}(x) < Sym(Ω).
Vérifier si φ(A(W ))|T = Alt(T ) est facile : nous n’avons qu’à vérifier, en utilisant
Schreier-Sims, si deux générateurs arbitraires de Alt(T ) sont en φ(A(W ))|T . De la même
façon, il est simple de déterminer quels éléments sont atteints par A(W ) : nous calculons
A(W )x pour chaque x ∈ Ω (par Schreier-Sims) et, toujours par Schreier-Sims, vérifions
si φ(A(W )x )|T = Alt(T ). Ceci prend du temps polynomial en |Ω|.
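Par exemple (esquisse de notre cru, indépendante de [Ba] ; les générateurs sont choisis pour l'exemple), ce test d'appartenance s'écrit ainsi avec SymPy, dont les groupes de permutations reposent sur Schreier-Sims :

from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

def contient_alt(H, t):
    """Vrai si le groupe de permutations H (agissant sur {0, ..., t-1}) contient Alt(t)."""
    return all(H.contains(g) for g in AlternatingGroup(t).generators)

# H1 = Sym(5) contient Alt(5) ; H2, cyclique d'ordre 5, ne le contient pas.
H1 = PermutationGroup([Permutation([1, 0, 2, 3, 4]), Permutation([1, 2, 3, 4, 0])])
H2 = PermutationGroup([Permutation([1, 2, 3, 4, 0])])
print(contient_alt(H1, 5), contient_alt(H2, 5))   # True False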
Il reste à voir comment mettre à jour A(W ), étant donné A (W − ), où nous écrivons
W − pour l’ancienne valeur de W . Tout élément de A(W ) est dans A(W − ), et donc
A(W) = Aut^W_{A(W^-)}(x). Comme dans l'équation (4),
(9)    Aut^W_{A(W^-)}(x) = ⋃_σ Aut^W_{Nσ}(x) = ⋃_σ Iso^W_N(x, x^{σ^{-1}}) σ,
où N est le noyau de φ|A(W− ) et σ parcourt des représentants des k!/2 classes de N
en A(W ). Nous pouvons trouver rapidement un σ ∈ A(W − ) ∩ φ−1 ({τ }) pour tout
τ ∈ Sym(Γ), par Schreier-Sims.
La Proposition 4.4b nous donne que toute orbite de N contenue en W (l’ensemble
d’éléments atteints par A(W − )) est de longueur ≤ |W |/k ≤ |Ω|/k. En conséquence, par
la règle (3), mettre A(W ) à jour se réduit à |Ω| · (k!/2) problèmes de détermination de
Iso pour des chaı̂nes de longueur ≤ |Ω|/k.
Comme le nombre d'itérations est ≤ |Ω|, la procédure fait appel à Isomorphisme-de-Chaînes ≤ |Ω|² · (k!/2) fois pour des chaînes de longueur ≤ |Ω|/k. Ceci – comme la routine qui prenait |Ω|^{O(k)} de temps – est acceptable pour k ≪ (log |Ω|)^κ. Nous
choisirons κ = 1.
6.1.2. Comparaisons de certificats. — Une légère modification de la procédure ci-dessus nous permet d'élucider la relation entre deux certificats locaux pour deux
chaı̂nes. Soient x, x′ : Ω → Σ, T, T ′ ⊂ Σ, |T | = |T ′| = k. Pour S ⊃ T , soit xS la chaı̂ne
x_S(i) = x(i) si i ∈ S, et x_S(i) = glauque si i ∉ S,
où glauque ∉ Σ. Nous voulons calculer
(10)    Iso_{G_T · τ_{T,T′}}(x^W, (x′)^{W′}),
où GT · τT,T ′ est la classe des éléments de G qui envoient l’ensemble T à T ′ , et W ′ est
la valeur de W retournée quand la donnée est T ′ à la place de T .
Pour déterminer (10), nous suivons la procédure (§6.1.1), modifiée de la façon suivante : nous mettrons à jour, dans chaque itération, non pas seulement A(W ), mais
′
aussi la classe A(W )τ d’isomorphismes en GT · τT,T ′ de xW à (x′ )W . Voilà comment le
faire, de façon analogue à (9) :
(11)    ⋃_σ Iso_{Nσ}(x^W, (x′)^{W′}) = ⋃_σ Iso_N(x^W, ((x′)^{W′})^{σ^{-1}}) σ,
où N est le noyau de φ|A(W− ) et σ parcourt des représentants des k!/2 classes de N
contenues en A(W − )τ − . Comme W est stabilisé par A(W− ) (et donc par N), le fait que
σ envoie W sur W ′ ou non dépend seulement de la classe de N à laquelle σ appartient.
(La classe Iso dans la dernière expression de (11) est vide si W σ 6= W ′.)
Comme avant, toute orbite de N contenue en W est de longueur ≤ |W |/k, et le
problème se réduit à |Ω| · (k!/2) appels par itération à Isomorphisme-de-Chaı̂nes pour
des chaı̂nes de longueur ≤ |W |/k ≤ |Ω|/k.
Par ailleurs, si T et T ′ nous sont données comme tuples ordonnés (T ), (T ′ ), il est
facile de déterminer
(12)    I(x, x′, T, T′) = Iso_{G_{(T)} · τ_{(T),(T′)}}(x^W, (x′)^{W′}),
où G(T ) · τ(T ),(T ′ ) est la classe des éléments de G qui envoient le tuple ordonné (T ) à (T ′ ).
En effet, nous n’avons qu’à déterminer (10), puis utiliser Schreier-Sims pour déceler les
éléments de (10) qui envoient (T ) à (T ′ ) dans le bon ordre.
6.2. L’agrégation des certificats
[Diagramme récapitulatif : certificats locaux ; « plénitude > 1/2 ? » ; si oui : pullback, récursion (n′ ≤ 3n/4) ; si non : « coupe ou relations ? » ; coupe / pas de couleur dominante : réduction de G/N à Alt_{m′} (m′ ≤ m/2) ; relations : Weisfeiler–Leman, Lemme des designs, Coupe-ou-Johnson, etc., réduction de G/N à Alt_{m′} (m′ ≪ √m).]
En suivant la procédure du §6.1.1 pour une chaı̂ne x, nous trouvons des certificats
locaux pour chaque T ⊂ Γ de taille k, où k est une constante ∼ C log |Ω| (C > 1/ log 2)
et k < |Γ|/10. Soit F < AutG (x) le groupe engendré par les certificats pleins K(T ).
Soit S ⊂ Γ le support de φ(F ), c’est-à-dire l’ensemble des éléments de Γ qui ne sont
pas fixés par tout élément de φ(F ).
Notre objectif est de déterminer les isomorphismes IsoG (x, x′ ) de x à une autre
chaı̂ne x′ . Puisque les certificats sont canoniques, l’assignation de F et S à une chaı̂ne
l’est aussi. Donc, si nous arrivons à deux cas différents ci-dessous en suivant la procédure
pour x et pour x′ , les deux chaı̂nes ne sont pas isomorphes.
Cas 1 : |S| ≥ |Γ|/2, mais aucune orbite de φ(F ) n’est de longueur > |Γ|/2.
Alors, nous colorions chaque élément de Γ par la longueur de l’orbite qui le contient.
Ceci est un coloriage canonique. Soit aucune classe de couleurs n’est de taille > |Γ|/2,
soit une classe de couleurs de taille > |Γ|/2 est découpée en ≥ 2 ensembles de la même
taille ≥ 2. Dans un cas comme dans l’autre, nous passons à une réduction/récursion.
Cas 2 : |S| ≥ |Γ|/2 et une orbite Φ de φ(F ) est de longueur > |Γ|/2.
Cas 2a : Alt(Φ) < φ(F )|Φ . Nous sommes dans le cas de grande symétrie. Nous
procédons comme au §4.2, jusqu’au point où nous devons déterminer IsoH (x, y) (où y
est (x′)^{σ′}, σ′ ∈ G, et H = φ^{-1}(Alt(Γ)_Φ)). Définissons K = φ^{-1}(Alt(Γ)_{(Φ)}), et soient σ_1, σ_2 ∈ G des préimages (arbitraires) sous φ de deux générateurs de Alt(Φ) < Alt(Γ),
trouvées par Schreier-Sims. Nous savons que les classes AutKσi (x), i = 1, 2, sont non
vides, puisque Alt(Φ) < φ(F )|Φ . Comme K n’a pas d’orbites de longueur > |Ω|/2, nous
pouvons déterminer ces deux classes par des appels à Isomorphisme-de-Chaı̂nes pour
des chaı̂nes de longueur ≤ |Ω|/2 et longueur totale ≤ 2|Ω|. Elles engendrent AutH (x).
Encore par le fait que Alt(Φ) < φ(F )|Φ , la classe IsoH (x, y) sera non vide ssi
IsoK (x, y) est non vide. Nous pouvons déterminer cette dernière classe par des appels
à Isomorphisme-de-Chaı̂nes comme ci-dessus, puisque K n’a pas d’orbites de longueur
> |Ω|/2. Si elle est non vide, nous obtenons la réponse
IsoH (x, y) = AutH (x) IsoK (x, y).
Cas 2b : Alt(Φ) ≮ φ(F )|Φ . Soit d ≥ 1 l’entier maximal avec la propriété que
φ(F )|Φ est d-transitif, c'est-à-dire, φ(F )|Φ agit transitivement sur l'ensemble des d-tuples d'éléments distincts de Φ. Par CGFS, d ≤ 5 ; si nous ne voulons pas utiliser
CGFS, nous avons la borne classique d ≪ log |Γ|.
Choisissons x1 , . . . , xd−1 ∈ Φ arbitrairement. Le reste de notre traitement de ce cas
sera donc seulement canonique en relation à
G_{(x_1,...,x_{d−1})} = {g ∈ G : x_i^{φ(g)} = x_i ∀ 1 ≤ i ≤ d − 1},
ce qui, comme nous le savons, n’est pas un problème ; voir le début du §5.3.1.
La restriction du groupe φ(F )(x1 ,...,xd−1 ) à Φ′ = Φ\{x1 , . . . , xd−1 } est transitive sur Φ′ ,
mais elle n’est pas doublement transitive. Donc, la configuration cohérente schurienne
qui lui correspond n'est pas une clique. Nous livrons cette configuration à Coupe-ou-Johnson (§5.2), tout comme à la fin du §5.2.
Pour comparer les configurations qui correspondent à deux chaı̂nes x, x′ , nous alignons leurs classes Φ d’abord. (Si elles ne sont pas de la même taille, ou si une chaı̂ne
nous donne le cas 2a et l’autre pas, les chaı̂nes ne sont pas isomorphes.) Les isomorphismes seront donc contenus dans le stabilisateur H < G de l’ensemble Φ (facile à
déterminer, comme vers la fin du §4.2, puisque φ est surjective). Nous pouvons remplacer
φ par l’application g 7→ φ(g)|Φ de H à Alt(Φ). Puis nous construisons les configurations
comme ci-dessus, et comparons ce que Coupe-ou-Johnson nous donne.
Tout à la fin, nous nous occupons du complément de C. Il s’agit, comme d’habitude,
d’appels à Isomorphisme-de-Chaı̂nes pour des chaı̂nes de longueur ≤ |Ω|/2 et longueur
totale < |Ω|.
Cas 3 : |S| < |Γ|/2. Nous commençons en alignant les supports S pour les chaı̂nes x,
x′ , et en remplaçant φ par g 7→ φ(g)|Γ\S , tout comme dans le cas 2(b).
Nous allons définir une relation k-aire avec très peu de jumeaux, pour la donner après
au Lemme des designs.
Regardons la catégorie de toutes les chaı̂nes Ω → Σ, où Ω et Σ sont fixes, une action
de G sur Ω est donnée, et φ : G → Γ est aussi donnée. Nous la regardons depuis
longtemps, puisque nous devons comparer les couleurs sur des configurations induites
par des chaı̂nes différentes pour décider si ces dernières sont isomorphes.
Cette fois-ci, nous définirons des couleurs par des classes d’équivalence : deux paires
(x, (T )), (x′ , (T ′ )) (T, T ′ ⊂ Γ \ S, |T | = |T ′ | = k) sont équivalentes si l’ensemble des
isomorphismes en (12) est non vide. Nous colorions (T ) – dans le coloriage de (Γ \ S)k
correspondant à x – par la classe d’équivalence de (x, (T )). Ici, (T ) est un k-tuple
ordonné sans répétitions ; si (T ) a des répétitions, elle est coloriée en gris.
Pour x donné, aucune classe de jumeaux en Γ ne peut avoir ≥ k éléments : s’il existait
un tel ensemble avec ≥ k éléments, il contiendrait un ensemble T avec k éléments, et
tous les ordres (T ) de T auraient la même couleur. Ceci voudrait dire que l’ensemble
des isomorphismes en (12) serait non vide pour n’importe quels ordres (T ), (T ′ ) de T .
En conséquence, AutGT (xW ) contiendrait des éléments donnant toutes les permutations
possibles de T . Ceci nous donnerait une contradiction, puisque T , étant contenu en Γ\S,
n’est pas plein.
Alors, pourvu que k ≤ |Γ|/4, nous avons un coloriage de (Γ\S)k sans aucune classe de
jumeaux avec ≥ |Γ \ S| éléments. Nous pourrons donc appliquer le Lemme des designs,
après une application de raffinements habituels F2 , F3 (Weisfeiler-Leman).
Mais – pouvons-nous calculer ces coloriages ? Les classes d’équivalence sont énormes.
Par contre, il n’y a aucun besoin de les calculer. Tout ce dont nous aurons besoin,
pour comparer des structures qui viennent de chaı̂nes x, y, sera d’être capables de
comparer deux tuples (T ) (sur la configuration donnée par x ou y) et (T ′ ) (sur la
configuration donnée par x′ = x ou x′ = y) et dire si elles sont de la même couleur.
En d’autres termes, nous devrons calculer – au tout début de la procédure, pour toute
paire ((T ), (T ′)), |T | = |T ′ | = k, et pour les paires de chaı̂nes (x, x), (x, y), (y, y) –
l’ensemble d’isomorphismes en (12), ce que nous savons déjà faire. Les couleurs sont
donc, dans la pratique, des entrées dans un index que nous enrichissons et auquel nous
faisons référence durant nos procédures.
Nous invoquons donc le Lemme des Designs, suivi par Coupe-ou-Johnson, et le reste
de la procédure.
— Fine dell’opera —
Le lecteur peut vérifier que les informations précisées jusqu’à ici (temps pris
par des procédures, type de récursion) sont assez pour donner une borne du type
exp(O(log |Ω|)c ) pour le temps de l’algorithme qui résout le problème de l’isomorphisme
de chaı̂nes. Ceci donne une borne exp(O(log n)c ) pour le problème de l’isomorphisme de
graphes avec n sommets. Avec un peu plus de travail, il devient clair que, dans un cas
comme dans l’autre, c = 3. Nous donnons les détails dans l’appendice. L’exposant c = 3
est plus petit que celui d’origine ; il est devenu possible grâce à quelques améliorations
et simplifications que j’ai été capable d’apporter.
Remerciements .— Je remercie vivement L. Babai, J. Bajpai, L. Bartholdi, D. Dona,
E. Kowalski, W. Kantor, G. Puccini, L. Pyber, A. Rimbaud et C. Roney-Dougal pour
des corrections et suggestions. En particulier, L. Babai a répondu à beaucoup de mes
questions, et m’a aussi fourni des versions corrigées ou améliorées de plusieurs sections
de [Ba]. En particulier, les §2.3–2.4 et §5.1 sont basés sur ces nouvelles versions. Je
voudrais aussi remercier V. Ladret et V. Le Dret pour un grand nombre de corrections
d’ordre typographique et linguistique.
APPENDICE A. ANALYSE DU TEMPS D’EXÉCUTION
A.1. Quelques précisions sur la procédure principale
À tout moment donné, nous travaillons avec un groupe transitif G < Sym(Ω) qui
agit sur un système de blocs B = {Bi}, Ω = ⋃_i Bi, Bi disjoints ; nous notons N le
noyau de l’action sur B. À vrai dire, nous aurons toute une tour de systèmes de blocs
B1 , B2 , . . . , Bk , où Bi est un raffinement de Bi+1 ; B signifiera Bk , le système le moins
fin. Au début, il n’y a qu’un système, B1 , dont les blocs Bi sont tous de taille 1, et dont
le noyau N est trivial.
Nous voudrions que l’action de G sur B soit primitive. Donc, si elle ne l’est pas, nous
ajoutons à la tour un système minimal Bk+1 tel que Bk soit un raffinement de Bk+1 .
Nous redéfinissons B = Bk+1 ; N sera le noyau du nouveau B.
Si G/N est petit (≤ bO(log b) , où b = |B| ; cas (a) du Théorème 3.1 (Cameron)), nous
réduisons notre problème à plusieurs instances du problème avec N à la place de G.
Chacune de ces instances se décompose en plusieurs instances – une pour chaque orbite
de N. Chaque orbite Ω′ de N est contenue dans un bloc de B. Les intersections de Ω′
avec les blocs de B1 , B2 , . . . nous donnent une tour de systèmes de blocs pour N|Ω′ .
Si nous sommes dans le cas (b) du Théorème 3.1, nous passons à ≤ b instances du
problème avec M ⊳ G (où [G : M] ≤ b) à la place de G. Nous passons à un nouveau système (18) B′ de m′ = \binom{m}{k} ≤ b blocs, et l'ajoutons à la tour comme son nouveau dernier niveau. Nous notons N′ le noyau de l'action de M sur B′. Alors, M/N′ = Alt_m^{(k)}. Nous remplaçons G par M et redéfinissons B = B′, N = N′.
Donc, nous avons un isomorphisme de G/N à Altm . Nous sommes dans le cas principal
que Babai attaque. Ses méthodes amènent à une réduction de Altm , soit à un groupe
intransitif sans grandes orbites, soit à un produit Alts1 ≀ Alts2 , s1 , s2 > 1, s1 s2 ≤ m, soit à un groupe Altm′ , m′ ≪ √m. (Nous simplifions quelque peu. Nous pourrions avoir, disons, un produit Alts1 ≀ Alts2 , agissant sur une orbite de grande taille s1 s2 ≤ m, et d'autres groupes sur des petites orbites, ou plusieurs produits agissant sur des petites orbites.)
18. Ce système peut être égal à B seulement si M = G ; voir la deuxième note de pied de page dans l'énoncé du Théorème 3.1. Dans ce cas-là, le passage de G à M est bien sûr gratuit.
Dans le cas intransitif sans grandes orbites, nous procédons comme dans la preuve
de Luks. (La procédure aura été plus coûteuse que dans Luks, mais grâce au manque
de grandes orbites, le gain dans la récursion est aussi plus grand.) Dans le cas de Altm′ ,
m′ ≪ √m, nous itérons la procédure. Dans le cas de Alts1 ≀ Alts2 – qui correspond à un
découpage dans des ensembles de taille r de la même couleur – nous avons une action
primitive de Alts sur un système de s blocs de taille r. Nous passons, alors, à cette
action et à ces blocs, sans oublier les blocs B ′ , auxquels nous retournons plus tard,
après avoir fini de travailler sur Alts .
Il est clair que ce type de procédure réduit complètement Altk en un nombre
d’itérations qui n’est pas supérieur à log2 m.
A.2. Récursion et temps
Examinons le temps total d’exécution de l’algorithme qui trouve les isomorphismes
entre deux chaı̂nes. Les pas individuels sont peu onéreux ; aucun ne précise plus de
nO(log n) de temps. Notre attention doit se porter avant tout sur la récursion.
Dans la procédure générale, une récursion est toujours d’une descente, soit vers des
chaı̂nes plus courtes, soit vers un groupe plus petit, ou au moins coupé dans des tranches
plus fines par une tour de systèmes de blocs ayant plus de niveaux. Dans le premier
type de descente, le groupe reste le même ou, plutôt, est remplacé par une restriction de
lui-même. Dans le deuxième cas, la longueur des chaı̂nes reste la même. (Nous pouvons
aussi avoir un mélange des deux cas – tant mieux : le groupe devient plus petit et les
chaı̂nes se raccourcissent aussi.)
La descente la moins coûteuse, et moins avantageuse, est celle du cas intransitif de
la procédure de Luks. Il pourrait arriver que G ait deux orbites sur Ω (|Ω| = n), une
de longueur n − 1 et une de longueur 1. Ceci serait même compatible avec une borne
polynomiale sur le temps, pourvu que le temps pris avant la descente soit lui-même
polynomial : (n − 1)^{c+1} + 1^{c+1} + n^c ≤ n^{c+1} pour c ≥ 1.
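Vérification numérique rapide (esquisse de notre cru) de cette inégalité pour de petites valeurs de n et de c :

for c in range(1, 6):
    for n in range(2, 1000):
        assert (n - 1) ** (c + 1) + 1 ** (c + 1) + n ** c <= n ** (c + 1), (n, c)
print("inegalite verifiee pour 1 <= c <= 5 et 2 <= n < 1000")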
D’autres types de descente sont plus coûteux, mais aussi plus avantageux : nous
descendons à des chaı̂nes de longueur ≤ n/2 (ou ≤ 2n/3), ou de Altm à Alts1 ≀ Alts2 ,
s1 s2 ≤ m, s1 , s2 ≤ m/2, par exemple. Il est clair qu’il est impossible de descendre plus
qu’un nombre logarithmique de fois de cette façon.
Il est crucial de ne pas oublier qu’un coût (considérable) peut être caché dans une
perte de canonicité. Si nos choix ne sont canoniques qu’en relation à un sous-groupe H
de notre groupe G, le coût de leur application sera multiplié par [G : H]. (Voir §5.3.1.)
***
Considérons, alors, le coût de chaque procédure. Le cas intransitif de Luks est, comme
nous l’avons déjà vu, compatible même avec une borne polynomiale. Concentrons-nous
alors sur le cas où G agit de façon primitive sur un système de blocs ; soit N le noyau.
Si nous sommes dans le cas (a) du Théorème 3.1, ou dans le cas (b), mais avec
m ≤ C log n, nous faisons appel à (m′ )O(log n) instances de la procédure principale pour
des chaı̂nes de longueur n/m′ (où m′ ≥ m). Ceci est consistant avec une borne totale
du type exp(O((log n)c )), c ≥ 2. Nous pouvons, donc, nous concentrer sur le cas où il
existe un isomorphisme φ : G/N → Alt(Γ), |Γ| = m > C log n. (La procédure du §2.8
rend cet isomorphisme explicite.)
Le premier pas à considérer est la création de certificats locaux, avec, comme objectif,
la création d’une relation k-aire sur Γ. (Si G est primitif, créer une telle relation est
trivial ; voir le début du §5.) Il y a n^k certificats locaux, où k = 2 log n (disons) ; nous devons les calculer et aussi comparer toute paire de certificats. Déjà le premier pas du calcul d'un certificat, à savoir le calcul de G_T, prend un temps n^{O(k)} (plus précisément, O((n/k)^{O(k)})). D'autres calculs prennent moins de temps. L'usage de la récursion, par contre, est relativement lourd : nous faisons appel à la procédure principale ≤ n² · k! fois pour des chaînes de longueur ≤ n/k. Ceci se passe pour chaque ensemble T de taille k, c'est-à-dire ≤ n^k/k! fois. La procédure pour comparer des paires de certificats est analogue.
Nous faisons donc appel à la procédure principale O(n^{2k+1}) fois pour des chaînes de
longueur ≤ n/k. Dans chacun de ces appels, notre tour de stabilisateurs est héritée :
notre groupe est un groupe transitif, égal à la restriction de N − à une de ses orbites,
où N − (noté N au §6) est un sous-groupe d’un sous-groupe A(W − ) de G.
Pour deux systèmes de blocs consécutifs Bi , Bi+1 , notons ri le nombre de blocs de
Bi dans chaque bloc de Bi+1 . Il est clair que ce nombre n’augmente pas quand nous
passons à la restriction d’un sous-groupe de G (par exemple, N − ) à une de ses orbites.
Examinons maintenant l’agrégation des certificats locaux (§6.2). Il y a trois cas. Dans
le premier, le temps de calcul additionnel est à peu près trivial, et nous obtenons une
réduction, soit à un groupe intransitif sans grandes orbites, soit à un produit Alts1 ≀ Alts2
sur une grande orbite et éventuellement d’autres groupes sur des orbites plus petites.
Ici, déjà, l’analyse devient délicate. Nous devons prendre en considération non seulement la taille du domaine mais aussi le groupe qui agit sur lui. Plus précisément, nous
devons borner le nombre de fois que notre tour B1 , B2 , . . . , Bk pourrait être raffiné ou
raccourci encore. Ceci sera mesuré par
ρ = Σ_{1≤i≤k−1} (2⌊log_2 r_i⌋ − 1),
où nous supposons que nous avons enlevé des systèmes répétés de la tour (donc ri > 1).
Notons F (n, r) le temps d’exécution de la procédure principale pour des chaı̂nes de
longueur n et pour une tour de systèmes de blocs pour G telle que le paramètre ρ est
≤ r. Une réduction de G/N fait décroı̂tre r par au moins 1 ; un coloriage sans aucune
grande classe de couleurs assure une descente vers des chaı̂nes de longueur ≤ n/2. Nous
devrons aussi inclure un facteur de log n2k , prenant en considération le temps requis
pour accéder à nos comparaisons de paires de certificats locaux (19) . Donc, dans le cas
que nous examinons, F (n, r) est borné par
n^{O(k)} + (n^{2k+1} F(n/k, r) + F(n_1, r − 1) + Σ_{i≥2} F(n_i, r)) · O(k log n),
où Σ n_i = n et n_i ≤ n/2 pour i ≥ 2, ou
n^{O(k)} + (n^{2k+1} F(n/k, r) + Σ_i F(n_i, r)) · O(k log n),
où Σ n_i = n et n_i ≤ n/2 pour i ≥ 1. Puisque k ≪ log n, ceci est consistant avec F(n, r) = exp(O((r + log n)^c)) pour c ≥ 3, ou même avec F(n, r) = exp(O((log n)^{c_1} + (log r)^{c_2})) pour c_1 ≥ 3 et c_2 ≥ 1, par exemple.
différents. Dans les deux cas, nous arrivons à construire une relation d-aire, avec d ≤ 5,
dans le cas 2b, et d = k ≪ log n dans le cas 3. Puis, nous appelons Weisfeiler-Leman,
suivi du Lemme des Designs pour des configurations d-aires, et, finalement, Coupe-ouJohnson.
Weisfeiler-Leman prend un temps |Γ|O(d) = mO(d) . Le Lemme des designs garantit
l’existence d’un tuple (x1 , . . . , xℓ ) ∈ Γ, ℓ ≤ d − 1, avec certaines propriétés. Nous
cherchons un tel tuple par force brute, ce qui prend un temps O(md ). Ce qui est plus
important est que ce choix n’est pas canonique. Donc, le temps d’exécution de tout ce
qui reste est multiplié par md = mO(log n) .
Coupe-ou-Johnson prend un temps O(md ). Ici, à nouveau, nous faisons des choix qui
ne sont pas complètement canoniques ; ils imposent un facteur de mO(log m) sur tout ce
qui suit. Le résultat de Coupe-ou-Johnson est soit un β-découpage, ce qui implique une
réduction à un produit du type Alts1 ≀ Alts2 et/ou à des chaı̂nes plus courtes, soit un
√
schéma de Johnson, ce qui implique une réduction à Altm′ , m′ ≪ m. Donc, soit
(13)    F(n, r) ≤ n^{O(k)} + O(k n^{2k+2} F(n/k, r) + m^{O(log n)} (1 + F(n_1, r − 1) + Σ_{i≥2} F(n_i, r))),
où Σ n_i = n, et n_i ≤ n/2 pour i ≥ 2, ou
(14)    F(n, r) ≤ n^{O(k)} + O(k n^{2k+2} F(n/k, r) + m^{O(log n)} (1 + Σ_i F(n_i, r))),
où Σ n_i = n et n_i ≤ n/2 pour i ≥ 1.
Ici m ≤ n. (Nous pourrions travailler avec une borne moins grossière, mais cela
nous servirait peu.) Donc, les inégalités (13) et (14) sont consistantes avec F (n, r) =
exp (O (r + log n)c ) pour c ≥ 3.
19. Faire ce type de comparaisons à l’avance nous aide, mais ne pas les faire à l’avance ne changerait
pas l’ordre du temps utilisé, asymptotiquement.
Comme r ≤ 2 log2 n, nous concluons que le temps total d’exécution de la procédure
pour déterminer les isomorphismes entre deux chaı̂nes de longueur n est
F(n, r) ≤ e^{O((log n)^3)}.
RÉFÉRENCES
[Ba] L. BABAI – Graph Isomorphism in Quasipolynomial time, prépublication, disponible en ligne sur arxiv.org:1512.03547.
[Ba2] L. BABAI – Graph Isomorphism in Quasipolynomial time (Extended Abstract), dans Proc. 48th ACM STOC (2016), 684–697.
[Ba3] L. BABAI – Lectures on Graph Isomorphism, University of Toronto, Dept. of Computer Science, notes polycopiées, 1979.
[BCP] L. BABAI, P. J. CAMERON et P. P. PÁLFY – On the orders of primitive groups with restricted nonabelian composition factors, Journal of Algebra 79 (1982), 161–168.
[BKL] L. BABAI, W. M. KANTOR et E. M. LUKS – Computational complexity and the classification of finite simple groups, dans Proc. 24th IEEE FOCS (1983), 162–171.
[BLS] L. BABAI, E. M. LUKS et Á. SERESS – Permutation groups in NC, dans Proc. 19th ACM STOC (1987), 409–420.
[BaPS] L. BABAI, P. P. PÁLFY et J. SAXL – On the number of p-regular elements in finite simple groups, LMS J. Comput. and Math. 12 (2009), 82–119.
[CFI] J. CAI, M. FURER et N. IMMERMAN – An optimal lower bound on the number of variables for graph identification, Combinatorica 12 (1992), 389–410.
[Cam] P. J. CAMERON – Finite permutation groups and finite simple groups, Bull. London Math Soc. 13 (1981), 1–22.
[EvP] S. EVDOKIMOV et I. N. PONOMARENKO – On highly closed cellular algebras and highly closed isomorphisms, Electr. J. Comb. 6 (1999).
[F] R. A. FISHER – An examination of the different possible solutions of a problem in incomplete blocks, Ann. of Eugenics 10 (1940), 52–75.
[FHL] M. FURST, J. HOPCROFT et E. LUKS – Polynomial-time algorithms for permutation groups, dans Proc. 21st IEEE FOCS (1980), 36–41.
[Hi] D. G. HIGMAN – Finite permutation groups of rank 3, Math. Z. 86 (1964), 145–156.
[ImL] N. IMMERMAN et E. S. LANDER – Describing graphs : a first-order approach to graph canonization, dans Complexity Theory Retrospective — in honor of Juris Hartmanis on the occasion of his 60th birthday, Springer, 1990, 59–81.
[Lu] E. LUKS – Isomorphism of graphs of bounded valence can be tested in polynomial time, J. of Comput. and Sys. Sci. 25 (1982), 42–65.
[Ma] A. MARÓTI – On the orders of primitive groups, J. Algebra 258 (2) (2002), 631–640.
[Py] L. PYBER – A CGFS-free analysis of Babai's quasipolynomial GI algorithm, disponible en ligne sur arxiv.org:1605.08266.
[Py2] L. PYBER – On the orders of doubly transitive permutation groups, elementary estimates, J. of Combin. Th. A 62 (1993), 361–366.
[RChW] D. K. RAY-CHAUDHURI et R. M. WILSON – On t-designs, Osaka J. Math. 12 (1975), 737–744.
[Sch] O. SCHREIER – Die Untergruppen der freien Gruppen, Abh. Math. Semin. Univ. Hambg. 5 (1927), 161–183.
[Si1] C. C. SIMS – Graphs and finite permutation groups, Math. Z. 95 (1967), 76–89.
[Si2] C. C. SIMS – Computational methods for permutation groups, dans « Computational Problems in Abstract Algebra », pp. 169–184, Pergamon, Oxford, 1970.
[SW] X. SUN et J. WILMES – Faster canonical forms for primitive coherent configurations, dans Proc. 47th ACM STOC (2015), 693–702.
[WL] Б. Ю. ВЕЙСФЕЙЛЕР и А. А. ЛЕМАН – Приведение графа к каноническому виду и возникающая при этом алгебра, НТИ, сер. 2, 9 (1968), 12–16.
Harald Andrés HELFGOTT
Universität Göttingen
Mathematisches Institut
Bunsenstrasse 3-5
D-37073 Göttingen
Allemagne
E-mail : [email protected]
Modified Recursive Cholesky (Rchol) Algorithm
An Explicit Estimation and Pseudo-inverse of Correlation
Matrices
Vanita Pawar
Krishna Naik Karamtot
vanita [email protected]
[email protected]
Abstract—The Cholesky decomposition plays an important role in finding the inverse of correlation matrices, as it is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (SVD), QR factorization, and LU decomposition. As different methods exist to find the Cholesky decomposition of a given matrix, this paper presents a comparative study of the proposed RChol algorithm and the conventional methods. The RChol algorithm is an explicit way to estimate the modified Cholesky factors of a dynamic correlation matrix.
Cholesky decomposition is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (SVD), QR factorization, and LU decomposition [1]. Wireless communication systems depend heavily on the inversion of correlation matrices, and such systems involve huge matrix inversions. An outdoor wireless channel is time-varying and changes dynamically for a mobile user. In the case of a narrowband channel, the channel is considered constant for a symbol duration, whereas for a broadband channel it changes within a symbol period. Such a time-varying channel gives a special structure to the channel matrix and the correlation matrix. To exploit this special structure, a novel modified recursive Cholesky (RChol) algorithm was introduced in [2]. The proposed RChol algorithm is a computationally efficient algorithm to compute the modified Cholesky factors of a known as well as an unknown covariance matrix.
In this paper, we present a comparative study of the conventional Cholesky algorithms and the RChol algorithm to demonstrate the importance of the proposed algorithm in highly dynamic wireless communication.
I. SYSTEM MODEL
In a wireless communication system, multiple transmit and/or receive antennas are used to improve the diversity of the system. The channel h between transmitter and receiver takes a different form depending on the number of antennas used at the transmitter and the receiver side: for single-input single-output (SISO), h = {h^0_n, h^1_n, . . . , h^{L−1}_n}; for single-input multiple-output (SIMO), h = {h^0_n, h^1_n, . . . , h^{L−1}_n}; and for multiple-input multiple-output (MIMO), H = {H^0_n, H^1_n, . . . , H^{L−1}_n}.
Let y(n) be the received signal with number of transmit antennas K = 1, L − 1 multipath components, and channel noise v(n), represented as
(1)    y(n) := Σ_{k=1}^{K} Σ_{l=0}^{L−1} h_k(n; l) s_k(n − l) + v(n),    n = 0, 1, . . . , T − 1.
Let y_N(n) be the received vector obtained by stacking N successive received vectors, y_N(n) = [y(n), y(n − 1), . . . , y(n − N + 1)]^T, and let the transmitted symbol vector be s_N = [s(n), s(n − 1), . . . , s(n − N + 1)]^T. Then y_N(n) can be represented in matrix form as y_N(n) = H_N s_N(n) + v_N(n), and the correlation matrix of y_N can be written as R_N(n) = E[y_N(n) y_N^H(n)]. Let r^n_{00} = E[y(n) y^H(n)] and r^n_{ij} = E[y(n − i) y^H(n − j)]; then the correlation matrices R_N(n) and R_N(n − 1) at time instants n and n − 1 can be represented as equation (2) and equation (3), respectively.
(2)    R_N(n) =
[ r^n_00        r^n_01        · · ·   r^n_0(N−2)        r^n_0(N−1)
  r^n_10        r^n_11        · · ·   r^n_1(N−2)        r^n_1(N−1)
    ⋮             ⋮                       ⋮                  ⋮
  r^n_(N−1)0    r^n_(N−1)1    · · ·   r^n_(N−1)(N−2)    r^n_(N−1)(N−1) ]

(3)    R_N(n − 1) =
[ r^n_11        r^n_12        · · ·   r^n_1(N−1)        r^n_1N
    ⋮             ⋮                       ⋮                ⋮
  r^n_(N−1)1    r^n_(N−1)2    · · ·   r^n_(N−1)(N−1)    r^n_(N−1)N
  r^n_N1        r^n_N2        · · ·   r^n_N(N−1)        r^n_NN ]
II. CHOLESKY DECOMPOSITION
The correlation matrix is a complex matrix, and the pseudo-inverse of R can be computed from its Cholesky factors: if the lower triangular matrix L is the Cholesky factor of the correlation matrix R, so that R = LL^H, then the pseudo-inverse of R can be computed as R̂† := L^{−H} L^{−1}. The sections below detail the conventional Cholesky algorithms and the RChol algorithm.
A. Cholesky Decomposition (Gaxpy version)
The Cholesky decomposition [3] factorizes a complex (or real-valued) positive-definite Hermitian symmetric matrix into a product of a lower triangular matrix and its Hermitian transpose, R = LL^H, where L is a lower triangular matrix and L^H is the Hermitian transpose of L. The matrix R must be positive definite, and this method needs square-root operations.
1) Algorithm steps:
1) Compute R at each time instant n
2) Find the square root of the diagonal element of R
3) Modify each column of R
4) Equate the lower triangular part of R to L
5) Repeat steps (1) to (4) for each time instant
Algorithm 1 Cholesky Decomposition R = LL^H
Initialization:
  [R]_{1:N,1} = [R]_{1:N,1} / sqrt([R]_{1,1})
Order updates on R's: for k = 2 to N
  [R]_{k:N,k} = [R]_{k:N,k} − [R]_{k:N,1:k−1} [R]^H_{k,1:k−1}
  [R]_{k:N,k} = [R]_{k:N,k} / sqrt([R]_{k,k})
end
L = tril(R)
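As an illustration (a sketch of ours, not part of the original letter), the gaxpy-style column update of Algorithm 1 can be written in NumPy as follows; the function name and the test matrix are chosen for the example.

import numpy as np

def cholesky_gaxpy(R):
    """Gaxpy-style Cholesky: returns lower-triangular L with R = L @ L.conj().T."""
    R = np.array(R, dtype=complex)              # work on a copy
    N = R.shape[0]
    R[:, 0] = R[:, 0] / np.sqrt(R[0, 0])
    for k in range(1, N):
        # subtract the contribution of the columns already computed
        R[k:, k] = R[k:, k] - R[k:, :k] @ R[k, :k].conj()
        R[k:, k] = R[k:, k] / np.sqrt(R[k, k])
    return np.tril(R)

# quick check on a random Hermitian positive-definite matrix
A = np.random.randn(5, 5) + 1j * np.random.randn(5, 5)
R = A @ A.conj().T + 5 * np.eye(5)
L = cholesky_gaxpy(R)
print(np.allclose(L @ L.conj().T, R))           # True up to rounding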
[4] that the Levinson recursion may be used to derive the lattice recursion for computing QR factors of data matrices, and that the lattice recursion can be used to derive the Schur recursion for computing Cholesky factors of a Toeplitz correlation matrix. The detailed algorithm is given in Algorithm 3. The Schur algorithm, like the previously mentioned algorithms, computes all N inner products to form the matrix R for initialization.
1) Algorithm steps:
1) Compute R at each time instant n
2) Initialize the first column of the Cholesky factor H as the first column of R
3) Compute the remaining columns recursively
4) Repeat steps (1) to (3) for each time instant
Algorithm 3 Schur Algorithm R = LL^H
Initialization: for k = 1
  H_1(n) = [r^n_{00}, r^n_{10}, . . . , r^n_{(N−1)0}]^T
  H̃_1(n) = [0, r^n_{10}, . . . , r^n_{(N−1)0}]^T
B. Modified Cholesky Algorithm R = LDL^H
To avoid square-root operations, a modified Cholesky algorithm [3] is used, which introduces a diagonal matrix D between the Cholesky factors. The modified Cholesky algorithm does not require R to be a positive definite matrix, but its determinant must be nonzero. R may be rank deficient to a certain degree, i.e., D may contain negative main-diagonal entries if R is not positive semidefinite.
1) Algorithm steps:
1) Compute R at each time instant n
2) Modify each column of R
3) Equate the strictly lower part of the matrix R to L, with ones on the main diagonal
4) Equate the main diagonal of R with the main diagonal of D
5) Repeat steps (1) to (4) for each time instant.
Algorithm 2 Modified Cholesky Decomposition R = LDL^H
Initialization:
  [R]_{2:N,1} = [R]_{2:N,1} / [R]_{1,1}
Order updates on R's: for k = 2 to N
  for i = 1 : k−1
    [v]_i = [R]_{1,k},               if i = 1
    [v]_i = [R]_{i,i} [R]^*_{k,i},   if i ≠ 1
  end
  [v]_k = [R]_{k,k} − [R]_{k,1:k−1} [v]_{1:k−1}
  [R]_{k,k} = [v]_k
  [R]_{k+1:N,k} = ([R]_{k+1:N,k} − [R]_{k+1:N,1:k−1} [v]_{1:k−1}) / [v]_k
end
D = diag(diag(R))
L = tril(R), with ones on the main diagonal
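A minimal NumPy sketch (ours, not from the letter) of the square-root-free LDL^H factorization of Algorithm 2, written column by column rather than in place:

import numpy as np

def ldl_hermitian(R):
    """Square-root-free factorization: returns (L, D) with R = L @ D @ L.conj().T."""
    R = np.array(R, dtype=complex)
    N = R.shape[0]
    L = np.eye(N, dtype=complex)
    d = np.zeros(N, dtype=complex)
    for k in range(N):
        v = d[:k] * L[k, :k].conj()              # v_i = d_i * conj(L[k, i])
        d[k] = R[k, k] - L[k, :k] @ v            # diagonal entry of D
        L[k+1:, k] = (R[k+1:, k] - L[k+1:, :k] @ v) / d[k]
    return L, np.diag(d)

# check that L has a unit diagonal and that L D L^H reproduces R
A = np.random.randn(6, 6) + 1j * np.random.randn(6, 6)
R = A @ A.conj().T + 6 * np.eye(6)
L, D = ldl_hermitian(R)
print(np.allclose(L @ D @ L.conj().T, R))        # True up to rounding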
C. Recursive Cholesky Algorithm (The Schur Algorithm): R_Schur := HH^H
The Schur algorithm recursively computes the columns of the lower triangular matrix H from the matrix R. It is shown in
Order updates on H's:
for k = 2 to N
  σ_k H_k = k̃^k_ref [k^k_ref (σ_{k−1} H̃_{k−1}) + Z_M (σ_{k−1} H_{k−1})]
  σ_k H̃_k = k̃^k_ref [(σ_n H̃_k) + k^k_ref Z_M (σ_k H_k)]
Scaling factors:
  k^k_ref = − (σ_k H̃_k)_k / (σ_k H_k)_{k−1}
  k̃^k_ref = (σ_k H_k)_{k−1} / (σ_{k+1} H_{k+1})_k
∗ Note: Here the notation follows [4], and H represents a vector.
D. The RChol Algorithm L̂D̂L̂^H := R
It is clear from equations (2) and (3) above that R_N(n) can be represented from a submatrix of R_N(n − 1). To utilize this special structure of the correlation matrices, we propose a modified recursive Cholesky algorithm that computes the Cholesky factors recursively. This algorithm is a modification of the Schur algorithm mentioned above. The more general approach consists of using the Schur algorithm to induce a recursion for the columns of the dynamic L. This algorithm does not need the N inner products required to compute the correlation matrix R. The Cholesky factors are computed explicitly: let L_1 = LD^{1/2}; then the pseudo-inverse can be computed as R̂† = L_1^{−H} L_1^{−1}.
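For illustration (a sketch of ours, assuming R is Hermitian positive definite so that the pseudo-inverse coincides with the inverse), the pseudo-inverse can be recovered from the factors returned by the ldl_hermitian sketch above; in practice triangular solves would replace the explicit inverse.

import numpy as np

def pinv_from_factors(L, D):
    """R^dagger = L1^{-H} L1^{-1}, where L1 = L D^{1/2}."""
    L1 = L @ np.sqrt(D)
    L1_inv = np.linalg.inv(L1)
    return L1_inv.conj().T @ L1_inv

A = np.random.randn(6, 6) + 1j * np.random.randn(6, 6)
R = A @ A.conj().T + 6 * np.eye(6)
L, D = ldl_hermitian(R)                          # sketch given after Algorithm 2
print(np.allclose(pinv_from_factors(L, D), np.linalg.pinv(R)))   # True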
1) Algorithm steps:
1) Initialize the first column of the Cholesky factor A as A_1
2) Compute the second column recursively from A_1(n) and A_1(n − 1)
3) Substitute the sub-matrix A_{2:N−1,2:N−1}(n − 1) into A_{3:N,3:N}(n)
4) Repeat steps (1) to (3) for each time instant
In the Schur algorithm, the columns of the Cholesky factors at time instant n are computed recursively from the correlation matrix at that instant. In the RChol algorithm, by contrast, the first two columns of the Cholesky factors at time instant n are computed recursively from the previous Cholesky factor, and a submatrix of the Cholesky factors is updated recursively from the previous Cholesky factor, i.e., the one at time instant n − 1. The conventional Cholesky algorithms mentioned here are introduced for ordinary matrices, whereas the proposed algorithm is well suited for block matrices, and the simulations are shown for that case only.
Algorithm 4 Recursive Cholesky Update (RChol) R = LDL^H
Initialization: for k = 1,
  D_1(n) = r^n_{00}
  A_1(n) = [r^n_{00}, r^n_{10}, . . . , r^n_{(N−1)0}]^T
  Ã_1(n) = [0, r^n_{10}, . . . , r^n_{(N−1)0}]^T
Order updates on A's:
for k = 2,
  A_k(n) := Z_M A_{k−1}(n − 1) − Ã_{k−1}(n) k̃_ref(n)
  D_k(n) = D_k(n − 1)[I_M − k_ref(n) k̃_ref(n)]
for k > 2,
  Ã_{k−1}(n) = 0
  A_k(n) = Z_M A_k(n − 1)
  D_k(n) = Z_M D_k(n − 1)
Scaling factors:
  k_ref(n) = Ã_1(n)_{(2,:)} / A_1(n − 1)_{(1,:)}
  k̃_ref(n) = k′_ref(n) D_1(n − 1) / D̃_1(n)
III. SIMULATION RESULTS
To compare the proposed RChol algorithm with the Schur algorithm, we compared the results of both algorithms with the theoretical results. Fig. 1 shows the ratio and the difference of the matrices R̂_N, R̂_RChol and R̂_Schur when the correlation matrix is unknown; this has application in blind channel and/or data estimation. Fig. 1 (a) and (b) show that the maximum error for the RChol algorithm, [R̂_N − R̂_RChol], is 0.6, while for the Schur algorithm, [R̂_N − R̂_Schur], it is 4, i.e., nearly 6 times that of the RChol algorithm. In the case of the ratio, Fig. 1 (c) and (d) show that the maximum ratio for the RChol algorithm, [R̂_N ./ R̂_RChol], is 45, while for the Schur algorithm, [R̂_N ./ R̂_Schur], it is 1500.
[Figure 1: eight panels, (a)–(h); see the caption below.]
Fig. 1: Comparison of the RChol algorithm vs. the Schur algorithm for the unknown and known correlation matrix R. (a, e): proposed algorithm (difference); (b, f): Schur algorithm (difference); (c, g): proposed algorithm (ratio); (d, h): Schur algorithm (ratio).
Fig. 1 also shows the ratio and the difference of the matrices R̂_N, R̂_RChol and R̂_Schur when the correlation matrix is known. Fig. 1 (e) and (f) show that the maximum error for the RChol algorithm, [R̂_N − R̂_RChol], is 2.5, while for the Schur algorithm, [R̂_N − R̂_Schur], it is 0.03, i.e., much smaller than for the RChol algorithm. In the case of the ratio, Fig. 1 (g) and (h) show that the maximum ratio for the RChol algorithm, [R̂_N ./ R̂_RChol], is 1.15, while for the Schur algorithm, [R̂_N ./ R̂_Schur], it is 1.
From Fig. 1 it can be concluded that the Schur algorithm is best suited when the correlation matrix is known, but it leads to huge error propagation through the columns when R is unknown and cannot be applied to blind channel estimation. Conversely, the RChol algorithm is best suited for blind channel estimation and reduces error propagation through the columns.
IV. CONCLUSION
Conventional methods of Cholesky factorization require the correlation matrix, which needs inner products. The recursive modified Cholesky (RChol) algorithm, in contrast, is an explicit way of recursively calculating the pseudo-inverse of the matrices without estimating the correlation matrix. It requires fewer iterations, which avoids error propagation through the column updates. The RChol algorithm is most useful for calculating the pseudo-inverse of a time-varying matrix, which is applicable to SIMO/MIMO, CDMA, OFDM, and other wireless communication systems.
V. Pawar and K. Naik (DIAT, Pune, India)
E-mail: [email protected]
REFERENCES
[1] G. Golub and C. Van Loan: 'Matrix Computations', 2012
[2] V. Pawar and K. Krishna Naik: 'Blind multipath time varying channel estimation using recursive Cholesky update', AEU - Int. J. Electron. Commun., 2016, 70, no. 1, pp. 113-119
[3] R. Hunger: 'Floating Point Operations in Matrix-Vector Calculus', Technical Report, 2007
[4] C. P. Rialan and L. L. Scharf: 'Fast algorithms for computing QR and Cholesky factors of Toeplitz operators', IEEE Trans. Acoust., Speech, Signal Process., 1988, 36, pp. 1740-1748
THE EUCLIDEAN CRITERION FOR IRREDUCIBLES
PETE L. CLARK
Abstract. We recast Euclid’s proof of the infinitude of prime numbers as a
Euclidean Criterion for a domain to have infinitely many atoms. We make
connections with Furstenberg’s “topological” proof of the infinitude of prime
numbers and show that our criterion applies even in certain domains in which
not all nonzero nonunits factor into products of irreducibles.
1. Introduction
This article has its genesis in a graduate VIGRE research group taught by Paul
Pollack and me in Fall 2015: Introduction to the Process of Mathematical Research.
Rather than concentrating on a fixed topic preselected by us, the goal was to guide
students through the process of selecting and performing research on their own. One
technique we tried to inculcate is exploitation of the many-to-one relation between
theorems and proofs. A good theorem has several proofs, and you will know two
proofs are different when can be used to prove further theorems the other cannot.
In our first meeting, Pollack and I presented seven proofs of Euclid’s Proposition
IX.20: there are infinitely many prime numbers. My first proof: suppose given a
domain R that is not a field, in which each nonzero nonunit factors into irreducibles
and whenever x ∈ R is a nonzero nonunit then x + 1 is not a unit; then there is
at least one irreducible element f1 , and given irreducibles f1 , . . . , fn , by factoring
f1 · · · fn +1 we get a new irreducible element. It was pointed out that this argument,
though correct, does not imply Euclid’s result: x = −2 is a problem. Some salvages
were suggested: in Z it is enough to replace f1 · · · fn by −f1 · · · fn , if necessary.
Here we present a general fix – a Euclidean Criterion for a domain to have
infinitely many nonassociate irreducibles – and explore its consequences. We soon
find ourselves on a scenic tour of 20th century mathematics, as we engage with work
of Jacobson, Furstenberg, Cohen-Kaplansky and Anderson-Mott, among others.
1.1. Acknowledgments.
Thanks to all members of the 2015-2016 Introduction to Mathematical Research
UGA VIGRE group. Conversations with Saurabh Gosavi, Noah Lebowitz-Lockard,
Robert Samalis, Lee Troupe and Lori D. Watson were helpful.
My group coleader Paul Pollack made key contributions: first, he emphasized
that the Euclidean Criterion automatically yields pairwise comaximality. Second,
Theorem 2.9 was inspired by [P, Thm. 1.16], and though I came up with the statement, I could prove it only in various special cases. The proof included here is his.
I am grateful to two anonymous referees for their careful, detail-oriented reports.
In particular, Example 4.19 was suggested by the “first” referee.
Date: May 5, 2016.
2. The Euclidean Criterion
2.1. A primer on factorization in domains.
By a ring we will mean a commutative ring with a multiplicative identity. We
denote the set of nonzero elements of R by R• . An element x ∈ R is a unit if
there is y ∈ R such that xy = 1. We denote the group of units of R by R× . For
a subset S of a ring R, we denote by (S) the ideal of R generated by S. (As is
standard, we write (x1 , . . . , xn ) for ({x1 , . . . , xn }). Ideals I and J in R are comaximal if I + J = R. Elements a, b ∈ R are comaximal if (a) and (b) are comaximal:
(a, b) = R. An indexed family of ideals {Ii } is pairwise comaximal if Ii + Ij = R
for all i ≠ j, and similarly for pairwise comaximal elements.
A domain is a nonzero ring in which x, y 6= 0 =⇒ xy 6= 0. For x, y ∈ R we
say x divides y and write x | y if there is c ∈ R such that cx = y. Elements x
and y are associates if y = ux for some u ∈ R× . An element x of a domain is
irreducible if it is a nonzero nonunit and x = yz implies y ∈ R× or z ∈ R× . A
prime element p ∈ R is an element p ∈ R• for which (p) is a prime ideal. Thus a
nonzero nonunit p is prime if and only if p | ab =⇒ p | a or p | b.
An atom in a domain R is a principal ideal (x) generated by an irreducible element x. Thus two irreducibles of a domain R determine the same atom if and only
if they are associate. (It is more common in the literature for the terms “atom”
and “irreducible” to be fully synonymous, but this minor distinction is convenient
for our purposes: usually we will want to count irreducibles in a domain up to
associates, but sometimes we will want to count irreducibles.) A Furstenberg
domain is a domain R in which every nonzero nonunit has an irreducible divisor.1
An atomic domain is a domain R in which for every nonzero nonunit x ∈ R there
are irreducible elements f1 , . . . , fn such that x = f1 · · · fn . A unique factorization domain (UFD) is an atomic domain such that if f1 , . . . , fm , g1 , . . . , gn are
irreducibles such that f1 · · · fm = g1 · · · gn , then m = n and there is a bijection
σ : {1, . . . , m} → {1, . . . , n} such that (fi ) = (gσi ) for all 1 ≤ i ≤ m.
Prime elements are irreducible. In general the converse is false! An atomic domain is a UFD iff every irreducible is prime [Cl-CA, Thm. 15.8]. The terminology
can be confusing in light of the definition of a prime number p as a positive integer
not divisible by any 1 < n < p: this means p is irreducible in Z. But Euclid showed
p | ab =⇒ p | a or p | b.
From this one can easily show the Fundamental Theorem of Arithmetic: Z is a UFD.
A principal ideal domain (PID) is a domain in which each ideal is generated
by a single element. Every PID is a UFD. It follows from the Euclidean algorithm
that Z is a PID. A Bézout domain is a domain in which every finitely generated
ideal is principal. A ring is Noetherian if all of its ideals are finitely generated.
Noetherian domains are atomic [Cl-CA, Prop. 15.3]. Thus a PID is precisely a
Noetherian Bézout domain. A Dedekind domain is a domain in which each
nonzero proper ideal factors uniquely into prime ideals. A domain is Dedekind iff it
is Noetherian, of dimension at most one – every nonzero prime ideal is maximal
– and integrally closed – every element of the fraction field which satisfies a monic
polynomial with coefficients in R lies in R [Cl-CA, Thm. 20.10].
Working in a domain rather than a general ring confers certain advantages:
1The explanation for the terminology comes in §3.1.
Fact 1. a) Every nonzero ideal in a ring contains a nonzero principal ideal.
b) If R is a domain and α ∈ R• , x ∈ R 7→ αx gives a bijection from R to (α).
c) Thus for every nonzero ideal I of a domain R we have #I = #R.
d) For nonzero ideals I and J of R, I ∩ J contains IJ and thus is nonzero.
2.2. The Euclidean Criterion.
A ring R satisfies Condition (E) if for all x ∈ R• , there is y ∈ R such that
yx + 1 ∉ R×. In other words, if x ≠ 0 then 1 + (x) ⊄ R×. By Fact 1a) this is
equivalent to: if I is a nonzero ideal of R then 1 + I ⊄ R×, though we will defer
consideration of this restatement until later on.
Example 2.1.
a) The ring Z satisfies Condition (E). Indeed, Z× = {±1}, so for x ∈ Z• , take
y = 1 if x is positive and y = −1 if x is negative; then yx ≥ 1 so yx + 1 ≥ 2.
b) For any domain R, the polynomial ring R[t] satisfies Condition (E). Indeed,
(R[t])× = R× , so for any x ∈ R[t]• , take y = t.
c) R = Z[i] satisfies Condition (E). Indeed Z[i]× = {1, i, −1, −i}, so this is geometrically clear: for any x ∈ Z[i]• , if we multiply it by a y with large enough |y|, then
yx will be much more than 1 unit away from any point on the unit circle.
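As a small illustration (our own addition, not part of the text), the following Python sketch checks Condition (E) for R = Z on a range of nonzero inputs, using the sign rule of part a) to produce a witness y; the helper name "witness" is ours.

    # Illustration of Condition (E) in Z: units are +1 and -1, and for any
    # nonzero x we exhibit y with y*x + 1 a nonzero nonunit.
    def witness(x: int) -> int:
        """Return y such that y*x + 1 is not a unit of Z (assumes x != 0)."""
        return 1 if x > 0 else -1

    for x in range(-5, 6):
        if x == 0:
            continue
        y = witness(x)
        value = y * x + 1
        assert value not in (1, -1), (x, y, value)
        print(f"x = {x:2d}, y = {y:2d}, y*x + 1 = {value}")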
Proposition 2.2. A domain R with #R > #R× satisfies Condition (E).
Proof. For x ∈ R• , the map ι : R → R given by y 7→ yx + 1 is an injection. Thus
#ι(R) = #R > #R× , so it cannot be that ι(R) ⊂ R× .
And here we go:
Theorem 2.3. (The Euclidean Criterion)
Let R be a domain, not a field, satisfying Condition (E).
a) There is an infinite sequence {a_n}_{n=1}^∞ of pairwise comaximal nonunits.
b) If R is also Furstenberg, it admits an infinite sequence {f_n}_{n=1}^∞ of pairwise
comaximal irreducibles. Thus {(f_n)}_{n=1}^∞ is a sequence of distinct atoms in R.
Proof. a) By induction on n. Let a1 ∈ R be a nonzero nonunit. Having chosen
a1 , . . . , an pairwise comaximal, by Condition (E) there is y ∈ R such that an+1 :=
ya1 · · · an + 1 ∉ R×. Clearly (ai , an+1 ) = R for all 1 ≤ i ≤ n.
b) By induction on n. Since R is Furstenberg and not a field, it has an irreducible
f1 . Having chosen pairwise comaximal irreducibles f1 , . . . , fn , by Condition (E)
there is y ∈ R such that x = yf1 · · · fn + 1 is a nonzero (since f1 ∉ R×) nonunit, so
x has an irreducible factor fn+1 . For all 1 ≤ i ≤ n we have
1 = (x/fn+1 )fn+1 − (y ∏_{j≠i} fj )fi ,
so fi , fn+1 are comaximal. Finally, if x and y are pairwise comaximal irreducibles,
then (x), (y) ⊊ R and (x) + (y) = (x, y) = R, so we must have (x) ≠ (y).
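To see the induction of part b) in action, here is a Python sketch (an added toy illustration, with R = Z, y = 1 at every step, and the smallest prime factor chosen as the irreducible divisor; the function names are ours).

    from math import gcd, prod

    def smallest_prime_factor(n: int) -> int:
        """Smallest prime divisor of n >= 2 (trial division; fine for a toy)."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    # Run the Euclidean Criterion construction in Z, always taking y = 1.
    irreducibles = [2]                      # f_1: an irreducible of Z
    for _ in range(6):
        x = prod(irreducibles) + 1          # y*f_1*...*f_n + 1 with y = 1
        irreducibles.append(smallest_prime_factor(x))

    print(irreducibles)
    # In Z, pairwise comaximality is just pairwise coprimality.
    assert all(gcd(a, b) == 1
               for i, a in enumerate(irreducibles)
               for b in irreducibles[i + 1:])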
Here are some applications of the Euclidean Criterion. The first two are immediate.
Theorem 2.4. a) For any domain R, R[t] has infinitely many atoms.
b) In particular, let D be a UFD and let R = D[t1 , . . . , tn ]. Then R is a UFD
satisfying Condition (E), so R has infinitely many nonassociate prime elements.
c) The Gaussian integers Z[i] have infinitely many atoms. Since Z[i] is a PID,
there are infinitely many nonassociate prime elements.
Theorem 2.5. Let R be a Furstenberg domain, not a field, such that #R > #R× .
Then R has infinitely many atoms.
Theorem 2.6. Let R be a Furstenberg domain, let I be the set of all irreducible
elements of R. Then I is either empty (if R is a field) or infinite (otherwise).
Proof. Assume I 6= ∅ and fix f ∈ I. If R× is finite, Theorem 2.5 yields infinitely
many atoms. If R× is infinite, then {uf | u ∈ R× } is an infinite subset of I.
2.3. Supplement: Irreducibles in Residue Classes.
We switch from an ancient theorem to matters of contemporary interest if we ask
for infinitely many primes satisfying certain additional conditions. Here is a result
along these lines, relatively modest over Z, but of a general algebraic nature.
Lemma 2.7. Let a, b, c be elements of a ring R. If (a, b) = R and c | a + b, then
(a, c) = (b, c) = R.
Proof. Let d ∈ R be such that cd = a + b. Then
(a, c) ⊃ (a, cd) = (a, a + b) = (a, b) = R.
Lemma 2.8. Let R be a domain, not a field, satisfying Condition (E). For any
at + b ∈ R[t] with a ∈ R• , there is x ∈ R such that ax + b is a nonzero nonunit.
Proof. Put P (t) = at + b. If b = 0, take any nonzero nonunit x ∈ R. If b ∈ R× , by
Condition (E) there is x ∈ R such that b^{-1}ax + 1 ∉ R×, so P(x) = b(b^{-1}ax + 1) is
a nonzero nonunit. If b ∈ R is a nonzero nonunit, take x = 0.
The proof of the following result was suggested to me by Paul Pollack.
Theorem 2.9. Let R be an atomic domain satisfying Condition (E), let I be a
nonzero ideal of R, and let H be a proper subgroup of (R/I)× . Then there are
infinitely many pairwise comaximal irreducibles f such that the class of f modulo
I lies in (R/I)× \ H.
Proof. Let r : R → R/I be the quotient map, let α ∈ R be such that r(α) ∈
(R/I)× \ H, and let β ∈ R be such that αβ − 1 ∈ I \ {0}. Inductively, assume that
we have pairwise comaximal irreducibles f1 , . . . , fn of R such that (fi , α) = (fi , I) = R for all
i and such that r(fi ) ∉ H. Let
P (t) = (αt + 1)(αβ − 1)f1 · · · fn + α ∈ R[t].
(We need to include the base case n = 0, and in this case f1 · · · fn = 1.) By Lemma
2.8 there is x ∈ R such that
y = (αx + 1)(αβ − 1)f1 · · · fn + α
is a nonzero nonunit, so we get an irreducible factorization
y = g1 · · · gs
with s ≥ 1. Then
r(g1 ) · · · r(gs ) = r(y) = r(α) ∈ (R/I)× \ H,
so (gj , I) = R for all j and there is at least one gj , say g1 , such that r(g1 ) ∉ H.
Now g1 cannot be associate to any fi ; if so g1 and hence also fi would divide α: if
α ∈ R× this contradicts the irreducibility of fi ; if not, this contradicts (fi , α) = R.
Moreover y ≡ −f1 · · · fn (mod α) so y ∈ (R/α)× , hence also g1 ∈ (R/α)× , i.e.,
(g1 , α) = R. Finally, since
(αx + 1)(αβ − 1)f1 · · · fn ≡ −f1 · · · fn (mod α),
we have ((αx + 1)(αβ − 1)f1 · · · fn , α) = R, so by Lemma 2.7 we have
(g1 , (αx + 1)(αβ − 1)f1 · · · fn ) = R
so (g1 , fi ) = R for all i. Thus we may take fn+1 = g1 , completing the induction.
When R = Z, we get: for any proper subgroup H ⊊ (Z/NZ)×, there are infinitely
many prime numbers p such that ±p (mod N) ∉ H. Moreover, in this classical case one can run the argument with positive integers only and so get rid of
the annoying ±. This is a special case of Dirichlet’s theorem on primes in arithmetic progressions. It is an observation of A. Granville – unpublished by him, but
reproduced in [P, Thm. 1.16] – that this case can be proved in an elementary
“Euclidean” way. The special case of trivial H – for all N ≥ 3 there are infinitely
many primes p 6≡ 1 (mod N ) – is older and better known. It is also simpler – just
consider N p1 · · · pn−1 − 1. This case does not use that Z is a UFD, but Granville’s
argument does. The most auspicious replacement for coprimality arguments is by
comaximality, and that is what we’ve done here.
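For concreteness, here is a Python sketch (our own toy, not Granville's argument) of the simpler case just described: starting from N, repeatedly factor N·p1 · · · pn − 1 and pick a prime factor that is not ≡ 1 (mod N); such a factor must exist because the product is ≡ −1 (mod N).

    def prime_factors(n: int) -> list:
        """Prime factors of n >= 2 by trial division (toy sizes only)."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def primes_not_1_mod(N: int, steps: int = 6) -> list:
        """Produce primes p with p % N != 1 via the classical N*p1*...*pn - 1 trick."""
        found = []
        for _ in range(steps):
            x = N
            for p in found:
                x *= p
            x -= 1                         # x = N*p1*...*pn - 1 ≡ -1 (mod N)
            # Not every prime factor of x can be ≡ 1 (mod N), since x ≡ -1 (mod N).
            q = next(p for p in prime_factors(x) if p % N != 1)
            found.append(q)
        return found

    print(primes_not_1_mod(5))   # e.g. [2, 3, 29, 79, ...]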
3. A “Topological” Interlude
3.1. Furstenberg’s Lemma.
In this section we will give several proofs of the following result.
Theorem 3.1. Let R be a Furstenberg domain with at least one and only finitely
many irreducibles f1 , . . . , fn . Then:
a) We have #R× = #R.
b) More precisely there is a nonzero ideal I of R such that 1 + I ⊂ R× .
Theorem 3.1 is the contrapositive of part b) of the Euclidean Criterion, without
the information on comaximality. The proofs that we give here are inspired by the
famous paper of H. Furstenberg [Fu55]. The essential core of his argument is the
observation that in Z the set of elements not divisible by any prime number is ±1.
Notice that this has nothing to do with the natural ordering of Z that underlies most
of the classical proofs of Euclid’s Theorem. In fact the property of Z being used is
that Z is a Furstenberg domain.
Lemma 3.2. (Furstenberg’s Lemma)
a) A domain R is a Furstenberg domain iff R× = ∩_{f irreducible} (R \ (f)).
b) In a Furstenberg domain with at least one and only finitely many irreducibles
f1 , . . . , fn , we have ∩_{i=1}^n (R \ (fi)) = R×.
The proof is virtually immediate and is left to the reader.
3.2. Following Furstenberg.
Let R be a domain. By Fact 1d), for each x ∈ R, the family
C(x) = {x + I | I is a nonzero ideal of R}
is closed under finite intersections, so {C(x)}_{x∈R} is a system of neighborhood bases
for a topology on R – let us call it the adic topology – in which U ⊂ R is open
iff for all x ∈ U there is a nonzero ideal I with x + I ⊂ U . By Fact 1c), every
nonempty open has cardinality #R.
Proof of Theorem 3.1: let R be a Furstenberg domain with at least one and only
finitely many irreducibles f1 , . . . , fn . Then each (fi ) is open, hence its complement
R \ (fi ), being a union of cosets of (fi ), is also open. By Furstenberg’s Lemma
R× = ∩_{i=1}^n (R \ (fi)) is open. Since 1 ∈ R×, we have #R× = #R. More precisely,
R× ⊃ 1 + I for some nonzero ideal of R.
3.3. Following Cass-Wildenberg.
Let R be a domain, and let F2 be the field of two elements. For an ideal I of
R, a function f : R → F2 is I-periodic if f (x + y) = f (x) for all x ∈ R and y ∈ I.
Lemma 3.3. Let R be a domain, and let I, I1 , . . . , In be nonzero ideals of R.
a) If I2 ⊂ I1 and f : R → F2 is I1 -periodic, it is also I2 -periodic.
b) If for all 1 ≤ i ≤ n, fi : R → F2 is Ii -periodic, then the pointwise product
f1 · · · fn : R → F2 is I1 · · · In -periodic.
c) If f : R → F2 is I-periodic, then for all x ∈ R, we have
#{y ∈ R | f (y) = f (x)} = #R.
Proof. a) This is immediate from the definition.
b) Certainly f1 · · · fn is ∩_{i=1}^n Ii-periodic, and ∩_{i=1}^n Ii ⊃ I1 · · · In . Apply part a).
c) Choose a nonzero α ∈ I. Then f (x + Rα) = f (x), and #Rα = #R.
Proof of Theorem 3.1:
Step 1: For 1 ≤ i ≤ n, let χi : R → F2 be the characteristic function of (fi ); put
χ = ∏_{i=1}^n (1 − χi ).
Each χi is (fi )-periodic, hence so too is 1 − χi , and thus χ is (f1 · · · fn )-periodic.
Moreover χ is the characteristic function of ∩_{i=1}^n (R \ (fi)) = R×.
Step 2: Since χ(1) = 1, #R× = #{x ∈ R | χ(x) = 1} = #R: part a).
Step 3: More precisely χ(1 + Rf1 · · · fn ) = 1, so Rf1 · · · fn + 1 ⊂ R× : part b).
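A toy numerical illustration of Steps 1-3 in R = Z (our own addition, with a finite list of primes standing in for f1 , . . . , fn): the product of the complementary characteristic functions is periodic with period p1 · · · pn and equals 1 on the whole coset 1 + (p1 · · · pn), exhibiting integers divisible by none of the listed primes.

    from math import prod

    primes = [2, 3, 5, 7]           # pretend these were all the irreducibles
    P = prod(primes)                # period of the product function chi

    def chi(x: int) -> int:
        """Product of (1 - characteristic function of (p)) over the chosen primes."""
        return 0 if any(x % p == 0 for p in primes) else 1

    # chi is P-periodic ...
    assert all(chi(x) == chi(x + P) for x in range(-50, 50))
    # ... and chi(1 + k*P) = 1, so each 1 + k*P avoids every prime in the list.
    for k in range(1, 4):
        n = 1 + k * P
        assert chi(n) == 1
        print(n, "has no prime factor in", primes)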
3.4. Following Mercer.
Let R be a domain. Call a subset X ⊂ R lovely if it is of the form x + I for
x ∈ R and a nonzero ideal I of R, i.e., if it is a coset of a nonzero ideal. Call a
subset X ⊂ R pleasant if it is a union of lovely subsets. If I is a nonzero ideal of R,
then R\ I is a union of cosets of I hence pleasant. If X, Y ⊂ R are pleasant sets and
x ∈ X ∩ Y , there are nonzero ideals I, J of R such that x + I ⊂ X and x + J ⊂ Y .
By Fact 1d) x+(I ∩J) = (x+I)∩(x+J) is a lovely subset of X ∩Y containing x. So
X ∩Y is pleasant. By Fact 1c), every nonempty pleasant subset has cardinality #R.
Proof of Theorem 3.1: let R be a Furstenberg domain with at least one and only
finitely many irreducibles f1 , . . . , fn . By Furstenberg’s Lemma, R× is the finite
intersection of complements of nonzero ideals so is pleasant. Since 1 ∈ R× , we have
#R× = #R. More precisely, R× ⊃ 1 + I for some nonzero ideal of R.
3.5. Debriefing.
The three proofs given above are generalizations of the proofs of Euclid’s Theorem given by Furstenberg [Fu55], Cass-Wildenberg [CW03] and Mercer [Me09].
The latter two works take the detopologization of Furstenberg’s proof as their goal.
Our presentation of the argument of §3.4 differs superficially from Mercer’s.
We chose the words “lovely” and “pleasant” precisely because they do not have a
commonly understood technical mathematical meaning: had we said “basic” and
“open” then the reader’s attention would have been drawn to the fact that since
the basic sets are closed under finite intersections, they form the base of a topology.
Mercer’s exposition takes pains to point out that the underlying fact here is just
that finite intersections of unions are unions of finite intersections. Of course this is
a basic logical principle: conjunctions distribute over disjunctions and conversely.
Like many basic logical principles it is completely innocuous when used in context
(as in our version of the argument). That the pleasant sets form a topology on R
is no more and no less than a crisp enunciation of the facts we need to check in the
first part of the proof. I find it quite striking (and pleasant!) that the facts can
be enunciated in this way, but I must now agree with those who have claimed that
there is no essential topological content in Furstenberg’s argument.2
The use of periodic functions involves slightly more packaging, but of a standard
kind: it is well known that the Boolean ring 2R of subsets of R can be represented as
the ring Maps(R, F2 ) with pointwise addition and multiplication. We recommend
wikipedia and Glaymann [Gl67] as references. Glaymann develops this correspondence and applies it to prove such identities as A∆B = C ⇐⇒ B∆C = A ⇐⇒
C∆A = B...in a manner intended to be used in the high school classroom. This is
an interesting snapshot of “the new math” near its zenith.
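The correspondence with Maps(R, F2) is easy to make concrete; the following Python sketch (an illustration we add, on a small finite ground set standing in for R) encodes subsets as F2-valued functions, under which symmetric difference becomes pointwise addition, and verifies Glaymann's identity A∆B = C ⇐⇒ B∆C = A ⇐⇒ C∆A = B on random pairs.

    import random

    GROUND = range(8)          # a small ground set standing in for R

    def to_fn(A):
        """Characteristic function of A, an element of Maps(GROUND, F2)."""
        return tuple(1 if x in A else 0 for x in GROUND)

    def add(f, g):
        """Pointwise addition in F2 = symmetric difference of the encoded sets."""
        return tuple((a + b) % 2 for a, b in zip(f, g))

    random.seed(0)
    for _ in range(100):
        A = {x for x in GROUND if random.random() < 0.5}
        B = {x for x in GROUND if random.random() < 0.5}
        C = A ^ B                               # A delta B
        assert add(to_fn(A), to_fn(B)) == to_fn(C)
        assert add(to_fn(B), to_fn(C)) == to_fn(A)
        assert add(to_fn(C), to_fn(A)) == to_fn(B)
    print("symmetric-difference identities verified on 100 random pairs")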
3.6. The Ubiquitous Theorem.
Here is a result that complements Theorem 3.1. It is not deep, but it will play a
recurring role for us as a common intersection of various constructions and themes.
The first proof that we give follows the “topological conceit” of this section. We
will give other, simpler, proofs later on.
Theorem 3.4. Let R be a domain, not a field, with only finitely many maximal
ideals m1 , . . . , mn . Then:
a) We have #R× = #R.
b) More precisely there is a nonzero ideal I of R such that 1 + I ⊂ R× .
Proof. We endow R with the topology for which, for x ∈ R, C(x) = {x + m |
m is a maximal ideal of R} is a neighborhood subbase at x: that is, U ⊂ R is open
iff for all x ∈ U there is a subset J ⊂ {1, . . . , n} such that
∩_{i∈J} (x + mi ) = x + ∩_{i∈J} mi ⊂ U.
2Furstenberg does not claim a topological proof of the infinitude of the primes but rather a
“topological” proof of the infinitude of the primes.
Fact 1 gives ∩_{i∈J} mi ⊋ (0), so every nonempty open has cardinality #R. Each
R \ mi , being a union of cosets of mi , is also open. Therefore
R× = ∩_{i=1}^n (R \ mi )
is open. Since 1 ∈ R× we have #R× = #R. More precisely there is a subset
J ⊂ {1, . . . , n} such that 1 + ∩_{i∈J} mi ⊂ R×, and thus also 1 + ∩_{i=1}^n mi ⊂ R×.
3.7. Supplement: Further Topologies on a Domain.
Here is a common generalization of Theorems 3.1 and 3.4: let J be a family of
nonzero ideals of a domain R, and suppose there are I1 , . . . , In ∈ J such that
R× = ∩_{i=1}^n (R \ Ii ). Then 1 + ∩_{i=1}^n Ii ⊂ R×, so in particular #R× = #R.
Look again at Theorem 3.1: instead of taking J to be the family of all nonzero
ideals, we could take J = {(f1 ), . . . , (fn )} and endow R with the unique translation-invariant topology with J as a neighborhood subbase at 0. This coarsens the adic
topology3 so that R× being open yields the sharper conclusion 1 + ∩_{i=1}^n (fi ) ⊂ R×. In
particular 1 + (f1 · · · fn ) ⊂ R×. We are back to a version of Euclid’s argument.
The adic topology on Z is not very interesting as a topological space: it is countably infinite, metrizable, totally disconnected and without isolated points, hence
homeomorphic to the Euclidean topology on Q. In [Go59], Golomb proved Euclid’s
Theorem using the topology on Z+ with base the one-sided arithmetic progressions
{an + b | n ∈ Z+ } for coprime a, b ∈ Z+ . Golomb’s topology makes Z+ into a
countably infinite connected Hausdorff space...which is already interesting.
In a domain R that is not a field, we may consider the Golomb topology with
neighborhood base at x ∈ R given by
C(x) = {x + I | I is a nonzero ideal with (x, I) = R}.
In this topology every maximal ideal is closed, so in a domain that is not a field
with only finitely many maximal ideals m1 , . . . , mn , R× is open and thus contains
1 + I for some nonzero ideal I. We get another proof of Theorem 3.4.
The Golomb topology is never Hausdorff: in fact the closure of {0} is all of R. However, the induced
topology on R• can be (it is for Z). We leave further exploration to the reader.
4. Connections With Ideal Theory
For a ring R, we denote by MaxSpec R the set of all maximal ideals of R.
4.1. Comaximal Ideals.
Lemma 4.1. Let {I_n}_{n=1}^∞ be a sequence of pairwise comaximal proper ideals in a
ring R. Then MaxSpec R is infinite.
Proof. For n ∈ Z+ , let mn be a maximal ideal containing In . If for n1 6= n2 we had
mn1 = mn2 then R = In1 + In2 ⊂ mn1 , contradiction.
In particular, part a) of the Euclidean Criterion implies that a domain that is not
a field and that satisfies Condition (E) has infinitely many maximal ideals. Thus
we get another proof of Theorem 3.4....but by no means our last.
3The adic topology on a domain is always Hausdorff, but in a Furstenberg domain with finitely
many irreducibles, this new topology is not.
4.2. Euclid Meets Jacobson.
Now is the time to examine the more explicitly ideal-theoretic statement of Condition (E): for all nonzero ideals I, we have 1 + I ⊄ R×. Some readers will now see – or
will have already seen – the connection with the Jacobson radical, but we will not
assume a prior familiarity. In fact we will use the Euclidean Criterion to motivate
a self-contained discussion of this and other ideal-theoretic concepts.
Proposition 4.2. [Cl-CA, Prop. 4.14] For a ring R, let
J(R) = ∩_{m∈MaxSpec R} m,
the Jacobson radical of R. For x ∈ R, the following are equivalent:
(i) x ∈ J(R).
(ii) For all y ∈ R, yx + 1 ∈ R× .
Proof. (i) =⇒ (ii): By contraposition: suppose there is y ∈ R such that z = yx + 1 ∉ R×. Then z lies in some maximal ideal m. If also x ∈ m, then yx ∈ m and thus also z − yx = 1 ∈ m, contradiction. So x does not lie in m and thus x ∉ J(R).
(ii) =⇒ (i): Again by contraposition: suppose that there is a maximal ideal m such that x ∉ m. Then m ⊊ (m, x), so (m, x) = R. It follows that there is m ∈ m and y ∈ R such that m + yx = 1. Thus (−y)x + 1 = −m ∈ m so is not a unit.
We get immediately:
Corollary 4.3. A ring R satisfies Condition (E) iff J(R) = (0).
This gives a third proof of Theorem 3.4: if R has only finitely many maximal ideals
m1 , . . . , mn , then
J(R) = ∩_{i=1}^n mi ⊃ ∏_{i=1}^n mi ⊋ {0}.
Apply Corollary 4.3.
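To see Proposition 4.2 and Corollary 4.3 at work in a finite ring (a toy check we add, not from the text), the following Python sketch computes J(Z/nZ) both as the intersection of its maximal ideals and via the criterion "yx + 1 is a unit for all y", and confirms the two agree.

    from math import gcd

    def jacobson_by_maximal_ideals(n: int) -> set:
        """J(Z/nZ) as the intersection of the maximal ideals (p) for primes p | n."""
        primes = {p for p in range(2, n + 1) if n % p == 0
                  and all(p % d for d in range(2, p))}
        return {x for x in range(n) if all(x % p == 0 for p in primes)}

    def jacobson_by_units(n: int) -> set:
        """J(Z/nZ) via Proposition 4.2: x such that y*x + 1 is a unit for every y."""
        return {x for x in range(n)
                if all(gcd((y * x + 1) % n, n) == 1 for y in range(n))}

    for n in (12, 30, 49):
        assert jacobson_by_maximal_ideals(n) == jacobson_by_units(n)
        print(n, sorted(jacobson_by_units(n)))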
A ring with zero Jacobson radical is called semiprimitive.4
4.3. Some Questions and Some Answers.
We now raise some natural questions...and answer them.
Question 4.4. In part b) of the Euclidean Criterion, must we assume that R is a
Furstenberg domain?
Question 4.5. A semiprimitive domain, not a field, has infinitely many maximal
ideals. Must a domain with infinitely many maximal ideals be semiprimitive?
Question 4.6. Let R be a Furstenberg domain.
a) If R is not semiprimitive, can it still have infinitely many atoms?
b) Can R have finitely many maximal ideals and infinitely many atoms?
4Or Jacobson semisimple or J-semisimple.
Example 4.7. The ring Z̄ of all algebraic integers is not a Furstenberg domain. In
fact it is an antimatter domain: there are no irreducibles whatsoever: if z is an
algebraic integer then so is z^{1/2}, so we can always factor z = z^{1/2} · z^{1/2}. Moreover Z̄
is not a field: for all integers n ≥ 2, if n ∈ Z̄× then 1/n ∈ Z̄ ∩ Q = Z, contradiction.
If I is a nonzero ideal of Z̄ then the constant coefficient of the minimal polynomial
of a nonzero element α ∈ I is a nonzero integer in I. It follows that if J(Z̄) ≠ 0
then there is N ∈ Z+ that is contained in every m ∈ MaxSpec Z̄. Choose a prime
number p ∤ N. Then p is not a unit in Z̄ – otherwise 1/p ∈ Z̄ ∩ Q = Z – so there is at
least one maximal ideal mp of Z̄ containing p. (In fact the set of maximal ideals of
Z̄ containing p has continuum cardinality.) Then mp ⊃ (N, p) = Z̄: contradiction.
So the answer to Question 4.4 is yes: a semiprimitive domain that is not a field
can have no irreducibles whatsoever.
The following result answers Questions 4.5 and 4.6 for Dedekind domains and shows
that the Euclidean Criterion is, in principle, completely efficacious in determining
whether a Dedekind domain has infinitely many atoms.
Theorem 4.8.
For a Dedekind domain R that is not a field, the following are equivalent:
(i) R is semiprimitive.
(ii) R has infinitely many maximal ideals.
(iii) R has infinitely many atoms.
Proof. We know (i) =⇒ (ii) in any domain.
(ii) =⇒ (i): in a Dedekind domain, any nonzero element is contained in only
finitely many maximal ideals. So in fact for any infinite subset M ⊂ MaxSpec R
we have ∩_{m∈M} m = (0).
(i) =⇒ (iii): Dedekind domains are Noetherian, hence Furstenberg domains, so
the Euclidean Criterion applies.
(iii) =⇒ (i): By contraposition: a Dedekind domain with finitely many maximal
ideals is a PID [Cl-CA, Thm. 20.6], and in a PID there is no distinction between
maximal ideals, principal ideals generated by prime elements, and atoms.
Question 4.9. Let K be a number field, with ring of integers ZK . The set of prime
numbers is an infinite sequence of pairwise comaximal nonunits of ZK , so (as is
well known!) ZK has infinitely many prime ideals and thus is semiprimitive. When
K = Q or is imaginary quadratic, the finiteness of Z_K^× leads to a direct verification
of Condition (E). Is there a similarly direct verification for all K?
This is a question we will leave to the reader to address.
Proposition 4.10. Let R be a Noetherian domain of dimension at most one
(nonzero prime ideals are maximal). If MaxSpec R is infinite, then R is semiprimitive and thus has infinitely many pairwise comaximal irreducibles.
Proof. If R is not semiprimitive, then every maximal ideal m of R is a minimal
prime ideal of R/J(R). Since R is Noetherian, so is R/J(R), and a Noetherian ring
has only finitely many minimal prime ideals [Cl-CA, Thm. 10.13].
A Jacobson ring is a ring in which every prime ideal is the intersection of the
maximal ideals containing it. Since in a domain (0) is prime, a Jacobson domain
must be semiprimitive. Any quotient of a Jacobson ring is again a Jacobson ring.
If R is a Jacobson ring and S is a commutative, finitely generated R-algebra then
S is a Jacobson ring [Cl-CA, Thm. 12.15, 12.21]. So:
Theorem 4.11. a) A Jacobson Furstenberg domain that is not a field has infinitely
many pairwise comaximal irreducibles.
b) Let F be a field, and let p be a prime but not maximal ideal of F [t1 , . . . , tn ]. Then
the ring R = F [t1 , . . . , tn ]/p – i.e., a coordinate ring of an integral affine variety of
positive dimension – has infinitely many pairwise comaximal irreducibles.
c) A domain R that is finitely generated over Z and not a field has infinitely many
pairwise comaximal irreducibles.
To sum up: if we want to see a domain that has infinitely many maximal ideals but
is not semiprimitive, it cannot be finitely generated over a field, and if Noetherian
it must have a nonzero prime ideal that is not maximal. This cues us up for the
following example, which gives a negative answer to Question 4.5.
Example 4.12. Consider the ring Z[[t]] of formal power series with integral coefficients. It is not hard to show that Z[[t]] is an atomic domain. In fact Z[[t]] is a
Noetherian UFD [Cl-CA, Thm. 15.32]. Since 1 + (t) ⊂ Z[[t]]× , the Jacobson radical
J(Z[[t]]) contains (t) and is thus nonzero. Since J(Z[[t]]) ≠ (0), the hypotheses
of the Euclidean Criterion do not apply. Nevertheless there are infinitely many
pairwise comaximal prime elements, namely the prime numbers! Hence there are
infinitely many maximal ideals.
Here we could have replaced Z with any PID with infinitely many maximal ideals.
Thus the answer to Question 4.6a) is yes: moreover a nonsemiprimitive domain
can have infinitely many comaximal irreducibles.
Example 4.13. Let k be a field. Recall that k[x, y] is a UFD, and let K = k(x, y)
be its fraction field. Let R be the subring of k(x, y) consisting of rational functions
f(x, y)/g(x, y) that, when written in lowest terms, have g(0, 0) ≠ 0. Then R is itself a
UFD – factorization in R proceeds as in k[x, y] except that the prime elements
p(x, y) ∈ k[x, y] such that p(0, 0) ≠ 0 become units in R – in which an element
f(x, y)/g(x, y) is a unit iff f(0, 0) ≠ 0. Thus m = {f(x, y)/g(x, y) | f(0, 0) = 0} is the unique maximal
ideal, so J(R) = m and R is very far from being semiprimitive. Nevertheless it has
infinitely many prime elements, e.g. {y − x^n}_{n=1}^∞. In more geometric language, the
irreducibles are the irreducible curves in the affine plane passing through (0, 0).
Thus the answer to Question 4.6b) is yes. However, there is more to say. The
preceding example can be vastly generalized using the following striking result.
Theorem 4.14. (Cohen-Kaplansky [CK46])
Let R be an atomic domain with finitely many atoms. Then:
a) R has only finitely many prime ideals.
b) R is Noetherian.
c) Every nonzero prime ideal of R is maximal.
Proof. a) In an atomic domain R, whenever a prime ideal p of R contains a nonzero
element x, we may factor x = f1 · · · fr into irreducibles and thus see that p contains
some irreducible element f dividing x. Thus, given any set of generators of a prime
ideal p we can replace it with a set of irreducible generators. In a set of generators
of an ideal, replacing each element by any one of its associates does not change the
ideal generated, and thus if we have only finitely many nonassociate irreducibles
we can only generate finitely many prime ideals.
b) It follows from the proof of part a) that every prime ideal of R is finitely generated. By a famous result of Cohen [Cl-CA, Thm. 4.26], all ideals are finitely generated. (This is an instance of the prime ideal principle of Lam-Reyes [LR08].)
c) If not, there are prime ideals (0) ( p1 ( p2 . Since R is Noetherian, this implies
there are infinitely many prime ideals between (0) and p2 [Cl-CA, Cor. 8.46].
A Cohen-Kaplansky domain is an atomic domain with finitely many atoms.
The work [CK46] does not give a complete classification: we are left with the case
of a Noetherian domain R with finitely many nonzero prime ideals, all of which
are maximal. If R is a Dedekind domain, then by Theorem 4.8 there are only
finitely many atoms. So the remaining case is when R is not integrally closed in
its fraction field, in which case the integral closure R̄ is a Dedekind domain with
finitely many prime ideals [Cl-CA, Cor. 18.8]. One might expect that this forces R
to be Cohen-Kaplansky. This need not be the case!
Example 4.15. Let k be a field, and consider the subring
R = k[[t^2, t^3]] = k + t^2 k[[t]]
of the formal power series ring k[[t]]. For 0 ≠ f = ∑_{n=0}^∞ a_n t^n ∈ k[[t]], we define
v(f) to be the least n such that a_n ≠ 0. Then v is a discrete valuation on k[[t]],
and the only nonzero prime ideal of k[[t]] is (t) = {f ∈ k[[t]] | v(f) > 0} ∪ {0}. In
particular, k[[t]] is a PID. So is the (isomorphic!) subring k[[t^2]], and {1, t^3} is a
generating set for R as a k[[t^2]]-module, so by standard PID structure theory, every
ideal of R can be generated by two elements. Thus R is Noetherian, hence atomic.
For f = a_0 + ∑_{n=2}^∞ a_n t^n ∈ R, we have f ∈ R× ⇐⇒ a_0 ≠ 0, and thus
m = {∑_{n=2}^∞ a_n t^n} = (t^2, t^3)
is the unique maximal ideal of R. We will give a complete description of the atoms
of R. First we claim that f ∈ R is irreducible iff v(f) ∈ {2, 3}. Indeed a nontrivial
factorization f = xy involves v(x), v(y) ≥ 2 hence v(f) ≥ 4; conversely, if v(f) ≥ 4
then f = t^2 · (f/t^2) is a nontrivial factorization. Since k× ⊂ R×, every irreducible is
associate to one of the form
t^2 + ∑_{n≥3} a_n t^n   (v(f) = 2 case)
or one of the form
t^3 + ∑_{n≥4} a_n t^n   (v(f) = 3 case).
Associate elements have the same valuation, so certainly no irreducible of the first
type is associate to an irreducible of the second type. We claim that t^2 + ∑_{n≥3} a_n t^n
is associate to t^2 + ∑_{n≥3} b_n t^n iff a_3 = b_3, and that t^3 + ∑_{n≥4} a_n t^n is associate to
t^3 + ∑_{n≥4} b_n t^n iff a_4 = b_4. This can be done by direct computation:
(t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5 + . . .)(1 + u_2 t^2 + u_3 t^3 + . . .)
= t^2 + a_3 t^3 + (a_4 + u_2) t^4 + (a_5 + a_3 u_2 + u_3) t^5 + . . . ,
so a_3 = b_3 and there is a unique choice of u_2, u_3, . . . leading to a_n = b_n for all
n ≥ 4. The v(f) = 3 case is similar. Thus there are precisely 2 · #k atoms, and R
is Cohen-Kaplansky iff k is finite.
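The direct computation above is easy to verify by machine; the following Python sketch (our own, using sympy and truncating at degree 5) multiplies a valuation-2 series by a generic unit of R and reads off the coefficient constraints.

    from sympy import symbols, simplify

    a3, a4, a5, u2, u3 = symbols("a3 a4 a5 u2 u3")
    DEG = 6                      # work modulo t**DEG

    def mul(f, g):
        """Multiply truncated power series given as coefficient lists [c0, c1, ...]."""
        h = [0] * DEG
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                if i + j < DEG:
                    h[i + j] += a * b
        return [simplify(c) for c in h]

    f = [0, 0, 1, a3, a4, a5]          # t^2 + a3 t^3 + a4 t^4 + a5 t^5
    u = [1, 0, u2, u3, 0, 0]           # a unit of R = k[[t^2, t^3]]: no t term
    print(mul(f, u))
    # Coefficients in degrees 2..5 are 1, a3, a4 + u2, a5 + a3*u2 + u3:
    # the t^3-coefficient a3 is unchanged, while u2, u3 can be chosen to match
    # any prescribed coefficients in degrees >= 4, as claimed in the example.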
Example 4.16. (Anderson-Mott [AM92, Cor. 7.2]) For a prime power q and
d, e ∈ Z+, the ring R = F_q + t^e F_{q^d}[[t]] is a Cohen-Kaplansky domain with exactly
one nonzero prime ideal and exactly e · ((q^d − 1)/(q − 1)) · q^{d(e−1)} irreducibles, none of which are
prime unless (d, e) = (1, 1).
The paper [CK46] was mostly forgotten for many years, until the breakthrough
work of Anderson and Mott [AM92] gave a complete characterization of Cohen-Kaplansky domains. In fact they give 14 characterizations! Here is one:
Theorem 4.17. (Anderson-Mott [AM92])
For an atomic domain R, the following are equivalent:
(i) R is a Cohen-Kaplansky domain.
(ii) R is Noetherian of dimension at most one (nonzero prime ideals are maximal),
has finitely many prime ideals, the integral closure R̄ of R is finitely generated
as an R-module, # MaxSpec R̄ = # MaxSpec R, and for all nonprincipal ideals
m ∈ MaxSpec R, R/m is finite.
Example 4.18. Let k be a field of characteristic different from 2 or 3, and consider:
• R1 : the localization of k[x, y]/(y 2 − x3 − x) at m0 = (x, y).
• R2 : the localization of k[x, y]/(y 2 − x3 − x2 ) at m0 = (x, y).
• R3 : the localization of k[x, y]/(y 2 − x3 ) at m0 = (x, y).
Then:
• R1 is always Cohen-Kaplansky (it is a Dedekind domain with one maximal ideal).
• R2 is never Cohen-Kaplansky (# MaxSpec R̄2 = 2 > 1 = # MaxSpec R2 ).
• R3 is Cohen-Kaplansky iff k is finite.
4.4. Euclid Beyond Atomicity.
In the case of an atomic domain, the part of the Euclidean Criterion that yields infinitely many maximal ideals is much weaker than the Cohen-Kaplansky Theorem.
However, there is life beyond atomic domains.
Example 4.19. Let Hol(C) be the ring of entire functions f : C → C. For f ∈
Hol(C), put Z(f) = {z ∈ C | f(z) = 0}. If f, g ∈ Hol(C)•, then Z(f) and Z(g)
are countable sets, hence so is Z(fg) = Z(f) ∪ Z(g), so fg ≠ 0. Thus Hol(C) is
a domain. The map z_0 ∈ C ↦ (z − z_0) gives a bijection from C to the atoms of
Hol(C). An element f ∈ Hol(C) is a unit iff Z(f) = ∅, and a nonzero nonunit f
is a (finite!) product of atoms iff Z(f) is finite and nonempty.
So Hol(C) is not atomic – consider e.g. f(z) = sin z – but it is Furstenberg: if f
is a nonzero nonunit, then f vanishes at some z_0 ∈ C and thus is divisible by the
irreducible element z − z_0. Moreover Hol(C) satisfies Condition (E): if f ∈ Hol(C)•
then there is w ∈ C such that f(w) ≠ 0. Let g = (z − w) − 1/f(w). Then (gf + 1)(w) = 0,
so gf + 1 ∉ Hol(C)×. Thus the Euclidean Criterion applies in Hol(C).
Theorem 4.20. Let 1 ≤ α ≤ β ≤ γ be cardinal numbers. There is a domain R
satisfying all of the following properties:
(i) R is a Bézout domain: every finitely generated ideal is principal.
(ii) R has exactly α atoms, each of which is a maximal ideal.
(iii) R has exactly β maximal ideals.
(iv) R has exactly γ nonzero prime ideals.
(v) R is an atomic domain iff α = β = γ < ℵ0 .
(vi) R is a Furstenberg domain iff α = β.
(vii) R is semiprimitive iff β ≥ ℵ0 .
We postpone the proof of Theorem 4.20 in order to discuss its significance. By
taking α = β and γ ≥ ℵ0 we get Furstenberg domains with any number α ≥ 1 of
irreducibles and any number γ ≥ max(α, ℵ0 ) nonzero prime ideals. In particular,
a Furstenberg domain can have any finite, positive number of irreducibles and any
infinite number of prime ideals, so the Cohen-Kaplansky Theorem does not extend
from atomic domains to Furstenberg domains. For any α = β ≥ ℵ0 and γ ≥ α we
get a semiprimitive Furstenberg domain that is not an atomic domain.
Now we come to the proof of Theorem 4.20, which requires somewhat more specialized results. A completely self-contained presentation would require more space
than we want to devote here. So we will make use of the material of [FS, Ch. II
and III], and our treatment will be at the level of a detailed sketch.
Let R be a domain with fraction field K. To x ∈ K • we attach the principal
fractional ideal (x) = {ax | a ∈ R}. When x ∈ R, this coincides with the usual
notion of a principal ideal. For x, y ∈ K • we have (x) = (y) iff there is u ∈ R×
such that y = ux. The principal fractional ideals of K form a commutative group
under pointwise multiplication: we have (x)(y) = (xy). We call this the group of
divisibility of R and denote it G(R). It is partially ordered by reverse inclusion:
that is, for x, y ∈ K • we put (x) ≤ (y) iff (y) ⊃ (x). This order reversal is actually
rather familiar: for x, y ∈ K×, we write x | y ⇐⇒ y/x ∈ R, and then we have x | y
iff (x) ⊃ (y): to contain is to divide.
Let {G_i}_{i∈I} be an indexed family of nonzero totally ordered commutative groups,
and let G = ⊕_{i∈I} G_i be the direct sum endowed with the pointwise partial ordering: x ≤ y iff x_i ≤ y_i for all i ∈ I. Let π_i : G → G_i be projection onto
the ith coordinate. By the Kaplansky-Jaffard-Ohm Theorem [FS, Thm. III.5.3]
there is a Bézout domain R and an isomorphism ϕ : G(R) ≅ G of partially
ordered commutative groups. See [FS, Example III.5.4]. Let v be the composite K× → K×/R× → ⊕_{i∈I} G_i, the second map being ϕ. Then the maximal ideals of R are precisely
m_i = {x ∈ R | (π_i ∘ v)(x) > 0} ∪ {0} for i ∈ I. Thus no element of R• lies in
infinitely many maximal ideals, so R is semiprimitive iff I is infinite.
An atom in a partially ordered commutative group is a minimal positive element. This is a direct generalization of our previous use of the term: if R is a
domain, the minimal positive elements of the group of divisibility G(R) are precisely the principal fractional ideals (x) for an irreducible element x ∈ R. For every
atom x ∈ G, there is i ∈ I such that xi is an atom of Gi and xj = 0 for all j 6= i,
and conversely all such elements give atoms of G. Since each Gi is totally ordered,
it has at most one atom, the least positive element of Gi if such an element exists.
It follows that R is Furstenberg iff each Gi has a least positive element. Similarly,
a nonzero nonunit x ∈ R factors into irreducibles iff v(x) ∈ G is a sum of atoms, iff
for each i ∈ I with v_i(x) > 0, G_i has a least positive element a_i and v_i(x) = n a_i for some n ∈ Z+, where v_i = π_i ∘ v.
Thus R is an atomic domain iff each G_i ≅ Z.
The domain R is h-local: each nonzero prime ideal is contained in a unique
maximal ideal [FS, loc. cit.]. The nonzero prime ideals contained in mi correspond
bijectively to the proper convex subgroups of Gi . (A subset Y of a totally ordered
set X is convex if for all x < y < z ∈ X, if x, z ∈ Y then also y ∈ Y .) We will take
each Gi to be a lexicographic product of copies of subgroups of (R, +) indexed by
an ordinal η. Then the convex subgroups of Gi are precisely {Hδ}_{0≤δ≤η}, where
Hδ is the set of all elements of Gi with j-coordinate zero for all j < δ. So there are
#η nonzero prime ideals in mi .
We will take a family of nonzero totally ordered commutative groups Gi parameterized by i ∈ β: this gives us β maximal ideals, and R is semiprimitive iff β ≥ ℵ0 .
We are left to choose the groups Gi in terms of α and γ so as to attain the other
assertions. We define an ordinal η: if γ is finite, it is the positive integer γ − β + 1;
if γ is infinite, it is the successor ordinal to γ (what matters in this case is that η
is a well-ordered set of cardinality γ and with a largest element). There are cases:
• If α = β = γ < ℵ0 , we take R to be a PID with γ nonzero prime ideals.
• If α = β and γ ≥ min(β + 1, ℵ0 ) we take Gi = Z for all 0 < i ∈ β. We take
G0 to be the Cartesian product of copies of Z indexed by η, endowed with the
lexicographic ordering. Then G0 has a least positive element: the element that is
0 in all factors but the last and 1 in the last factor. So all Gi have least positive elements
and R is a Furstenberg domain. Moreover η ≥ 2, so G0 ≇ Z and R is not an atomic
domain. It has (β − 1) + #η = γ nonzero prime ideals.
• If α < β, we take G0 to be the Cartesian product of copies of Z indexed by η, for
1 ≤ i < α we take Gi = Z, and for i ≥ α we take Gi = (R, +).
4.5. Supplement: Rings With Infinitely Many Maximal Ideals.
Let us briefly consider the case of an arbitrary commutative ring. Though others
have done so (see e.g. [AVL96]), it is beyond our ambitions to pursue a factorization
theory in the presence of zero divisors. But we can still ask for criteria under which
there are infinitely many maximal ideals. In this more general context J(R) = (0)
is no longer sufficient: e.g. J(C × C) = 0 and there are only two maximal ideals.
Nevertheless both Euclid and Jacobson have a role to play.
Proposition 4.21. [Cl-CA, Prop. 4.15] Let I be an ideal of R contained in the
Jacobson radical. Then for all x ∈ R, if the image of x in R/I is a unit, then x is
a unit. In particular the natural map R× → (R/I)× is surjective.
Proof. If the image of x in R/I is a unit, then there is y ∈ R such that xy ≡ 1
(mod I), i.e., xy − 1 ∈ I ⊂ J(R). Thus for every maximal ideal m of R, xy − 1 ∈ m
so we cannot have x ∈ m. So x lies in no maximal ideal of R and thus x ∈ R× .
Theorem 4.22. (Dubuque [Du10]) Let R be an infinite ring. If #R > #R× , then
MaxSpec R is infinite.
Proof. We will show by induction on n that for all n ∈ Z+ , R has n maximal ideals.
Base Case: Since R is infinite, it is nonzero and thus it has a maximal ideal m1 .
Induction Step: Let m1 , . . . , mn be distinct maximal ideals, and put
I = ∏_{i=1}^n mi .
Case 1: Suppose I + 1 ⊂ R×. Then #I ≤ #R×. Moreover I ⊂ J(R), so by
Proposition 4.21 R× → (R/I)× is surjective. It follows that #(R/I)× ≤ #R× <
#R: by the Chinese Remainder Theorem, R/I ≅ ∏_{i=1}^n R/mi , hence there is an
injection (R/mi )× → (R/I)×. Putting the last two sentences together we conclude
#(R/mi )× < #R, and thus, since R/mi is a field and R is infinite, #(R/mi ) =
#(R/mi )× + 1 < #R. Finally this gives the contradiction
#R = #I · #(R/I) = #I · ∏_{i=1}^n #(R/mi ) < (#R)^{n+1} = #R.
Case 2: So there is x ∈ (I + 1) \ R×. Let mn+1 be a maximal ideal containing x. For
all 1 ≤ i ≤ n we have x − 1 ∈ I ⊂ mi , so
1 = x + (1 − x) ∈ mn+1 + mi .
So mn+1 is an (n + 1)st maximal ideal of R, completing the induction step.
A special case of Theorem 4.22 appears in [K, § 1.1, Exc. 8].
For a ring R, consider the quotient R/J(R). The maximal ideals of R/J(R) correspond to the maximal ideals of R containing J(R) – that is, to the maximal ideals of
R. Thus R/J(R) is semiprimitive. Thus we can replace any ring with a semiprimitive ring without changing its MaxSpec. However this “Jacobson semisimplification” need not carry domains to domains: e.g. if R is a domain with 2 ≤ n < ℵ0
maximal ideals m1 , . . . , mn , then R/J(R) ≅ ∏_{i=1}^n R/mi . Here is a generalization.
Theorem 4.23. a) For a ring R, the following are equivalent.
(i) R has only finitely many maximal ideals.
(ii) R/J(R) is a finite product of fields.
(iii) R/J(R) has only finitely many ideals.
(iv) R/J(R) is Artinian (i.e., there are no infinite descending chains of ideals).
b) A semiprimitive ring with finitely many maximal ideals has finitely many ideals.
Proof. a) (i) =⇒ (ii): If the maximal ideals of R are m1 , . . . , mn , then by the
Chinese Remainder Theorem [Cl-CA, Thm. 4.18] we have
R/J(R) = R/∩_{i=1}^n mi ≅ ∏_{i=1}^n R/mi .
(ii) =⇒ (iii) =⇒ (iv) immediately. (iv) =⇒ (i): Maximal ideals of R/J(R)
correspond bijectively to maximal ideals of R. And an Artinian ring has only
finitely many maximal ideals [Cl-CA, Thm. 8.31]. b) This follows from part a).
5. But What About Primes?
Our take on Euclid’s argument has been as a criterion for the existence of irreducibles. The distinction evaporates in a UFD. A PID with only finitely many
prime ideals is a UFD with only finitely many principal prime ideals. It turns out
that the converse is also true.5
Theorem 5.1. Let R be a UFD, not a field, with only finitely many atoms. Then
R is a PID with finitely many prime ideals and #R = #R× .
5Theorem 5.1 is known to the experts: see e.g. [Za08].
Proof. A UFD with finitely many nonassociate prime elements is a Cohen-Kaplansky
domain, so MaxSpec R is finite and #R = #R× by Theorem 3.4. By Theorem 4.14
every nonzero prime ideal of R is maximal. The proof of Theorem 4.14a) shows:
every nonzero prime ideal p contains a prime element p. Since (p) is maximal, we
have p = (p). Thus every prime ideal is principal, so R is a PID [Cl-CA, Thm.
4.25]. (This is another case of the Lam-Reyes Prime Ideal Principle.)
Let us now move away from UFDs. From Example 4.15, we deduce:
Theorem 5.2. Let κ ≥ ℵ0 be a cardinal. There is a Noetherian domain R with
exactly one nonzero prime ideal, exactly κ irreducibles and no prime elements.
Proof. Let k be a field of cardinality κ, e.g. k = Q({tα | α ∈ κ}). By Example 4.15,
R = k[[t2 , t3 ]] is a Noetherian domain with one nonzero prime ideal m = (t2 , t3 )
and 2 · κ = κ irreducibles. Since m is not principal, R has no prime elements.
Cohen-Kaplansky showed that an atomic domain that is neither a field nor a UFD
must have at least 3 atoms [CK46, p. 469]. Their argument is a nice one: we must
have at least one nonprime irreducible f1 . Since (f1 ) is not prime, it is properly
contained in some prime ideal p, which must therefore contain a nonassociate irreducible f2 . Since f1 + f2 ∈ p, f1 + f2 is not a unit and therefore it is divisible by
an irreducible f3 , which cannot be associate to either f1 or f2 .
Finally, we consider Dedekind domains.
Question 5.3. Let R be a Dedekind domain with infinitely many prime ideals.
Must R have infinitely many atoms?
In an important classical case the answer is yes, as most number theorists know.
Theorem 5.4. For each number field K, the ring of integers ZK has infinitely
many nonassociate prime elements.
Proof. Step 1: For any number field L, the number of rational primes that split
completely in L is infinite. This is a special case of the Chebotarev Density Theorem, which however can be proved in a more elementary way, as was shown in
[Po10]. Using some basic algebraic number theory which we omit here, it comes
down to showing that for every nonconstant polynomial f ∈ Z[t], the set of prime
numbers p dividing f (n) for some n ∈ Z is infinite. If f (0) = 0 this is trivial. If
f (0) 6= 0, let p1 , . . . , pk be the prime divisors of f (0) (we allow k = 0) and let
q1 , . . . , qℓ be any finite set of primes not dividing f (0). For 1 ≤ i ≤ k, let ai be such
that p_i^{a_i} | f (0) and p_i^{a_i+1} ∤ f (0). For N ∈ Z+ consider
x_N = f (N p_1^{a_1+1} · · · p_k^{a_k+1} q_1 · · · q_ℓ ).
Then for all 1 ≤ i ≤ k, p_i^{a_i+1} ∤ x_N and for all 1 ≤ j ≤ ℓ, q_j ∤ x_N , so the set of N for
which xN is not divisible by some prime other than p1 , . . . , pk , q1 , . . . , qℓ is finite.
Step 2: A prime ideal p of a number field K is principal iff it splits completely in the
Hilbert class field K^1 of K. So every prime ideal p of K lying above any one of the
infinitely many prime numbers p that split completely in K^1 is principal.
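The key fact used in Step 1 is easy to observe numerically; the following Python sketch (an added illustration with the hypothetical choice f(t) = t² + 1) collects the primes dividing some value f(n) and shows the set growing in practice.

    def prime_divisors(n: int) -> set:
        """Set of prime divisors of |n| (trial division; toy sizes only)."""
        n, divisors, d = abs(n), set(), 2
        while d * d <= n:
            while n % d == 0:
                divisors.add(d)
                n //= d
            d += 1
        if n > 1:
            divisors.add(n)
        return divisors

    f = lambda t: t * t + 1          # a nonconstant polynomial with f(0) != 0
    seen = set()
    for n in range(1, 200):
        seen |= prime_divisors(f(n))
    print(len(seen), "distinct primes divide some f(n) with n <= 199")
    print(sorted(seen)[:12])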
Looking at the above argument, one wonders: were we working too hard?
Perhaps some simple argument gives a general affirmative answer to Question 5.3.
In fact Question 5.3 was answered negatively by Claborn [Cl65, Example 1.5].
The construction is impressively direct: start with a Dedekind domain A that is
not a PID, let P be the set of prime elements of A and pass to R = A[{1/p}_{p∈P}]. The
prime ideals of R are precisely the nonprincipal prime ideals of A, which remain
nonprincipal in R! This prime-killing construction also appears in a work of
Samuel [Sa64, p. 17, Thm. 6.3] and is therein attributed to Nagata (cf. [N57,
Lemma 2]). For a Dedekind domain A, write Cl A for its ideal class group: the
quotient of the monoid of nonzero ideals of A under the equivalence relation I ∼ J
iff there are α, β ∈ A• with (α)I = (β)J. In the setting of the prime-killing
construction – i.e., R is the localization of A at the multiplicative subset generated
by the prime elements – we have [Sa64], [Cl65] that Cl R ≅ Cl A.
Theorem 5.5. Let κ be an infinite cardinal. There is a Dedekind domain R with
exactly κ atoms and no prime elements.
Proof. We will use some properties of “elliptic Dedekind domains”: for more details, see [Cl09, §2.4]. Let k be an algebraically closed field of characteristic 0
and cardinality κ, and put R = k[x, y]/(y² − x³ − x). Then R is a Dedekind
domain, and by the Nullstellensatz the nonzero prime ideals of R are all of the
form p_{(x_0,y_0)} = (x − x_0, y − y_0, y² − x³ − x) for pairs (x_0, y_0) ∈ k² such that
y_0² = x_0³ + x_0. In other words, they are the k-rational points on the projective elliptic
curve E : y²z = x³ + xz², excluding the point at infinity O = [0 : 1 : 0]. Moreover,
by the Riemann-Roch Theorem, since [x_0 : y_0 : 1] ≠ O, the prime ideal p_{(x_0,y_0)} is
not principal. Thus R is a Dedekind domain with # MaxSpec R = #R = κ and
without prime elements. Because R is Dedekind, every ideal can be generated by
two elements [Cl-CA, Thm. 20.12]. This, together with the fact that Dedekind domains are atomic domains, implies that for all p ∈ MaxSpec R there are irreducibles
pp , qp such that p = (pp , qp ). Thus if λ is the number of irreducibles of R we have
κ = # MaxSpec R ≤ λ² ≤ (#R)² = κ² = κ,
so λ² = κ. Since κ is infinite, so is λ and thus λ = λ² = κ.
References
[AM92] D.D. Anderson and J.L. Mott, Cohen-Kaplansky domains: integral domains with a finite number of irreducible elements. J. Algebra 148 (1992), 17–41.
[AVL96] D.D. Anderson and S. Valdes-Leon, Factorization in commutative rings with zero divisors. Rocky Mountain J. Math. 26 (1996), 439–480.
[Cl-CA] P.L. Clark, Commutative Algebra, http://math.uga.edu/~pete/integral.pdf.
[Cl09] P.L. Clark, Elliptic Dedekind domains revisited. Enseignement Math. 55 (2009), 213–225.
[CK46] I.S. Cohen and I. Kaplansky, Rings with a finite number of primes. I. Trans. Amer. Math. Soc. 60 (1946), 468–477.
[Cl65] L.E. Claborn, Dedekind domains and rings of quotients. Pacific J. Math. 15 (1965), 59–64.
[Co73] P.M. Cohn, Unique factorization domains. Amer. Math. Monthly 80 (1973), 1–18.
[CS12] J. Coykendall and C. Spicer, Cohen-Kaplansky domains and the Goldbach conjecture. Proc. Amer. Math. Soc. 140 (2012), 2227–2233.
[CW03] D. Cass and G. Wildenberg, Math Bite: A Novel Proof of the Infinitude of Primes, Revisited. Mathematics Magazine 76 (2003), 203.
[Du10] W.G. Dubuque, http://math.stackexchange.com/questions/201
[FS] L. Fuchs and L. Salce, Modules over non-Noetherian domains. Mathematical Surveys and Monographs, 84. American Mathematical Society, Providence, RI, 2001.
[Fu55] H. Furstenberg, On the infinitude of primes. Amer. Math. Monthly 62 (1955), 353.
[Gl67] M. Glaymann, Characteristic Functions and Sets. Mathematics Teacher 60 (1967), 775–778.
[Go59] S.W. Golomb, A connected topology for the integers. Amer. Math. Monthly 66 (1959), 663–665.
[K] I. Kaplansky, Commutative rings. Allyn and Bacon, Inc., Boston, Mass., 1970.
[LR08] T.Y. Lam and M. Reyes, A prime ideal principle in commutative algebra. J. Algebra 319 (2008), 3006–3027.
[Me09] I.D. Mercer, On Furstenberg’s Proof of the Infinitude of Primes. Amer. Math. Monthly 116 (2009), 355–356.
[N57] M. Nagata, A remark on the unique factorization theorem. J. Math. Soc. Japan 9 (1957), 143–145.
[P] P. Pollack, Not always buried deep. A second course in elementary number theory. American Mathematical Society, Providence, RI, 2009.
[Po10] B. Poonen, http://mathoverflow.net/q/15221
[Sa64] P. Samuel, Lectures on unique factorization domains. Notes by M. Pavman Murthy. Tata Institute of Fundamental Research Lectures on Mathematics, No. 30, Tata Institute of Fundamental Research, Bombay, 1964.
[Za08] M. Zafrullah, http://mathforum.org/kb/message.jspa?messageID=6451774
Sufficient Conditions for the Tightness of Shannon’s
Capacity Bounds for Two-Way Channels
Jian-Jia Weng† , Lin Song‡, Fady Alajaji† , and Tamás Linder†
arXiv:1801.03163v1 [cs.IT] 9 Jan 2018
Abstract—New sufficient conditions for determining in closed
form the capacity region of point-to-point memoryless two-way
channels (TWCs) are derived. The proposed conditions not only
relax Shannon’s condition which can identify only TWCs with
a certain symmetry property but also generalize other existing
results. Examples are given to demonstrate the advantages of the
proposed conditions.
Index Terms—Network information theory, two-way channels,
capacity region, inner and outer bounds, channel symmetry.
I. INTRODUCTION
Finding the capacity region of point-to-point discrete memoryless two-way channels (TWCs) in single-letter form is a
long-standing open problem. The difficulty lies in the causality
of transmission, since the senders are allowed to generate
channel inputs by adapting to previously received channel
outputs. In [1], Shannon gave an (uncomputable) multi-letter
expression for the capacity region. Another multi-letter expression, using directed information [2], was given in [3]. The
capacity region of TWCs is known only for some special
channels such as TWCs with additive white Gaussian noise
[4], deterministic TWCs [5], TWCs with discrete additive
noise [6], and injective semi-deterministic TWCs [7]. Thus,
Shannon’s inner and outer bounds [1] still play an important
role in characterizing the capacity region.
In the literature, Shannon’s symmetry condition [1] and
a condition established by Chaaban, Varshney, and Alouini
(CVA) [7] are two known sufficient conditions under which
Shannon’s inner and outer bounds coincide, thus directly characterizing the capacity region. Shannon’s condition focuses on
a certain symmetry structure for the channel transition probabilities, while the CVA condition focuses on the existence
of independent inputs which achieve Shannon’s outer bound.
Although the two conditions can be used to determine the
capacity region of a large class of TWCs, it is of interest to
establish new conditions for wider families of channels.
In this paper, four sufficient conditions guaranteeing that
Shannon’s inner and outer bounds coincide are derived. Similar to the CVA condition, our conditions identify independent
inputs which achieve Shannon’s outer bound based on the
approach that a TWC can be viewed as two one-way channels
with state. Two of the derived results are shown to be substantial generalizations of the Shannon and CVA conditions.
Moreover, our simplest condition can be easily verified by observing the channel marginal distributions.
† The authors are with the Department of Mathematics and Statistics, Queen’s University, Kingston, ON K7L 3N6, Canada (Emails: [email protected], {fady, linder}@mast.queensu.ca).
‡ The author was with the Department of Mathematics and Statistics, Queen’s University, Kingston, ON K7L 3N6, Canada. She is now with Contextere Ltd., Ottawa, ON K1Y 2C5, Canada (Email: [email protected]).
This work was supported in part by NSERC of Canada.
Fig. 1. Block diagram of two-way transmission: user 1 sends X_1^N and receives Y_1^N, user 2 sends X_2^N and receives Y_2^N over the TWC, and each user decodes the other user's message (M̂_2 at user 1, M̂_1 at user 2).
The rest of this paper is organized as follows. In Section II,
the system model and prior results are reviewed. New conditions for finding the capacity region are provided in Section III.
A discussion of the connections between the new conditions
and prior results is given in Section IV along with illustrative
examples. Concluding remarks are given in Section V.
II. PRELIMINARIES
In a two-way communication system as shown in Fig. 1,
two users want to exchange their own messages M1 and M2
via N uses of a TWC. Here, the messages M1 and M2 are
assumed to be mutually independent and uniformly distributed
on M1 ≜ {1, 2, . . . , 2^{NR_1}} and M2 ≜ {1, 2, . . . , 2^{NR_2}},
respectively, where NR_1 and NR_2 are non-negative integers.
For j = 1, 2, let Xj and Yj respectively denote the finite
channel input and output alphabets for user j. The joint
distribution of the inputs and outputs of a memoryless TWC is
governed by the channel transition probability PY1 ,Y2 |X1 ,X2 .
A channel code for a TWC is defined as follows.
Definition 1: An (N, R1 , R2 ) code for a TWC consists
of two message sets M1 = {1, 2, . . . , 2^{NR_1}} and M2 =
{1, 2, . . . , 2^{NR_2}}, two sequences of encoding functions f_1^N ≜
(f_{1,1}, f_{1,2}, . . . , f_{1,N}) and f_2^N ≜ (f_{2,1}, f_{2,2}, . . . , f_{2,N}), with
f_{1,1} : M1 → X1 , f_{1,n} : M1 × Y_1^{n−1} → X1 , f_{2,1} : M2 → X2 ,
and f_{2,n} : M2 × Y_2^{n−1} → X2 for n = 2, 3, . . . , N , and two decoding
functions g1 : M1 × Y_1^N → M2 and g2 : M2 × Y_2^N → M1 .
When messages M1 and M2 are encoded, the channel inputs
at time n = 1 are only functions of the messages, i.e., Xj,1 =
fj,1 (Mj ) for j = 1, 2, but all the other channel inputs are
generated by also adapting to the previous channel outputs
Y_j^{n−1} ≜ (Y_{j,1}, Y_{j,2}, . . . , Y_{j,n−1}) via X_{j,n} = f_{j,n}(Mj , Y_j^{n−1})
for j = 1, 2 and n = 2, 3, . . . , N . After receiving N channel
outputs, user j reconstructs Mi as M̂i = gj (Mj , Y_j^N ) for
i, j = 1, 2 with i ≠ j, and the probability of decoding error
is defined as P_e^{(N)}(f_1^N , f_2^N , g1 , g2 ) = Pr{M̂1 ≠ M1 or M̂2 ≠
M2 }. Based on this performance index, we define achievable
rate pairs and the capacity region.
Definition 2: A rate pair (R1 , R2 ) is said to be achievable
if there exists a sequence of (N, R1 , R2 ) codes such that
lim_{N→∞} P_e^{(N)} = 0. The capacity region C of a TWC is the
closure of the convex hull of all achievable rate pairs.
To date, a computable single-letter expression for the capacity region of general memoryless TWCs has not been
found. In [1], Shannon established inner and outer bounds
for the capacity region. Let R(PX1 ,X2 , PY1 ,Y2 |X1 ,X2 ) denote
the set of rate pairs (R1 , R2 ) with R1 ≤ I(X1 ; Y2 |X2 )
and R2 ≤ I(X2 ; Y1 |X1 ), where the joint distribution of all
random variables is given by PX1 ,X2 PY1 ,Y2 |X1 ,X2 . Then, the
capacity region of a discrete memoryless TWC with transition
probability PY1 ,Y2 |X1 ,X2 is inner bounded by [1]
C_I(P_{Y1,Y2|X1,X2}) ≜ co ∪_{P_{X1}P_{X2}} R(P_{X1}P_{X2}, P_{Y1,Y2|X1,X2}),
and outer bounded by
C_O(P_{Y1,Y2|X1,X2}) ≜ co ∪_{P_{X1,X2}} R(P_{X1,X2}, P_{Y1,Y2|X1,X2}),
where co denotes taking the closure of the convex hull. In
general, CI and CO are different, but if they coincide, then the
exact capacity region is obtained and independent inputs can
be used to achieve any point of the capacity region. We note
that there exist other improved bounds for TWCs [4], [8]-[11].
However, those bounds are either restricted for the particular
case of the binary multiplier TWC or expressed with auxiliary
random variables, which do not match our approach.
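As a numerical companion (a sketch we add, not from [1]), the following Python code samples independent input distributions and evaluates the corresponding rate pairs in R(P_{X1}P_{X2}, P_{Y1,Y2|X1,X2}), i.e., points of Shannon's inner bound; the channel used, Y1 = Y2 = X1 ⊕ X2, is a noiseless binary additive TWC chosen purely for illustration.

    import itertools
    import numpy as np

    def mutual_information(p_x, p_y_given_x):
        """I(X;Y) in bits for input pmf p_x and channel matrix p_y_given_x[x, y]."""
        p_xy = p_x[:, None] * p_y_given_x
        p_y = p_xy.sum(axis=0)
        mask = p_xy > 0
        return float((p_xy[mask] * np.log2(p_xy[mask] /
                      (p_x[:, None] * p_y[None, :])[mask])).sum())

    # Channel marginals P[y2 | x1, x2] and P[y1 | x1, x2] for Y1 = Y2 = X1 xor X2.
    P_y2 = np.zeros((2, 2, 2))      # indexed [x1, x2, y2]
    P_y1 = np.zeros((2, 2, 2))      # indexed [x1, x2, y1]
    for x1, x2 in itertools.product(range(2), repeat=2):
        P_y2[x1, x2, x1 ^ x2] = 1.0
        P_y1[x1, x2, x1 ^ x2] = 1.0

    pairs = []
    for a in np.linspace(0.01, 0.99, 25):       # P_X1 = (1-a, a)
        for b in np.linspace(0.01, 0.99, 25):   # P_X2 = (1-b, b)
            p1, p2 = np.array([1 - a, a]), np.array([1 - b, b])
            # For independent inputs, I(X1;Y2|X2) = sum_x2 P(x2) I(X1;Y2|X2=x2).
            R1 = sum(p2[x2] * mutual_information(p1, P_y2[:, x2, :]) for x2 in range(2))
            R2 = sum(p1[x1] * mutual_information(p2, P_y1[x1, :, :]) for x1 in range(2))
            pairs.append((R1, R2))

    print("best sampled sum-rate pair:", max(pairs, key=lambda r: r[0] + r[1]))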
We next review the Shannon [1] and CVA [7] conditions
that imply the coincidence of CI and CO . For a finite set A,
let π A : A → A be a permutation (bijection), and for any
two symbols a′ and a′′ in A, let τaA′ ,a′′ : A → A denote the
transposition which swaps a′ and a′′ in A, but leaves the other
symbols unaffected. Moreover, let PX,Z,Y = PX PZ|X PY |X,Z
denote a probability distribution defined on finite sets X , Y,
and Z. We define two functionals for conditional entropies:
H(P_{X,Z}, P_{Y|X,Z}) ≜ ∑_{x,z,y} P_{X,Z}(x, z) P_{Y|X,Z}(y|x, z) log [1/P_{Y|X,Z}(y|x, z)]
and
H̄(P_X, P_{Z|X}, P_{Y|X,Z}) ≜ ∑_{x,y} P_X(x) P_{Y|X}(y|x) log [1/P_{Y|X}(y|x)],
where P_{Y|X}(y|x) = ∑_z P_{Y|X,Z}(y|x, z) P_{Z|X}(z|x). In particular, if P_{Z|X=x′} = P_{Z|X=x′′} for any x′, x′′ ∈ X , we let P_Z(z) = ∑_x P_{X,Z}(x, z) and define
H̄⊥(P_X, P_Z, P_{Y|X,Z}) ≜ ∑_{x,y} P_X(x) Q_{Y|X}(y|x) log [1/Q_{Y|X}(y|x)],
where Q_{Y|X}(y|x) = ∑_z P_{Y|X,Z}(y|x, z) P_Z(z).
Note that, given any PX1 ,X2 = PX2 PX1 |X2 = PX1 PX2 |X1 ,
we have H(Yj |X1 , X2 ) = H(PX1 ,X2 , PYj |X1 ,X2 ),
H(Y1 |X1 ) = H̄(PX1 , PX2 |X1 , PY1 |X1 ,X2 ), and H(Y2 |X2 ) =
H̄(PX2 , PX1 |X2 , PY2 |X1 ,X2 ), where PYj |X1 ,X2 is a marginal
of the channel probability PY1 ,Y2 |X1 ,X2 and j = 1, 2.
Furthermore, for any PX1 ,X2 which can be factorized as
PX1 PX2 , we have H(Y1 |X1 ) = H̄⊥ (PX1 , PX2 , PY1 |X1 ,X2 )
and H(Y2 |X2 ) = H̄⊥ (PX2 , PX1 , PY2 |X1 ,X2 ). Finally, let
P(Xj ) denote the set of all probability distributions on Xj
for j = 1, 2.
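For later numerical experiments it is convenient to have the three conditional-entropy functionals in code; the sketch below (our own, with distributions stored as NumPy arrays and entropies in bits) follows the definitions above directly.

    import numpy as np

    def H(P_xz, P_y_given_xz):
        """H(P_{X,Z}, P_{Y|X,Z}): conditional entropy H(Y|X,Z) in bits."""
        w = P_xz[:, :, None] * P_y_given_xz          # joint P(x, z, y)
        mask = w > 0
        return float(-(w[mask] * np.log2(P_y_given_xz[mask])).sum())

    def H_bar(P_x, P_z_given_x, P_y_given_xz):
        """H̄(P_X, P_{Z|X}, P_{Y|X,Z}): H(Y|X) under the input law P_X * P_{Z|X}."""
        P_y_given_x = np.einsum('xz,xzy->xy', P_z_given_x, P_y_given_xz)
        w = P_x[:, None] * P_y_given_x
        mask = w > 0
        return float(-(w[mask] * np.log2(P_y_given_x[mask])).sum())

    def H_bar_perp(P_x, P_z, P_y_given_xz):
        """H̄⊥(P_X, P_Z, P_{Y|X,Z}): as H̄ but with Z independent of X, Z ~ P_Z."""
        P_z_given_x = np.tile(P_z, (len(P_x), 1))
        return H_bar(P_x, P_z_given_x, P_y_given_xz)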
Proposition 1 (Shannon’s Symmetry Condition [1]): For a
memoryless TWC with transition probability PY1 ,Y2 |X1 ,X2 ,
we have C = CI = CO if for any pair of distinct input
symbols x′1 , x′′1 ∈ X1 , there exists a pair of permutations
(π Y1 [x′1 , x′′1 ], π Y2 [x′1 , x′′1 ]) on Y1 and Y2 (which depend on
x′1 and x′′1 ) such that for all x1 , x2 , y1 , y2 ,
P_{Y1,Y2|X1,X2}(y1 , y2 |x1 , x2 ) =
P_{Y1,Y2|X1,X2}(π^{Y1}[x′1 , x′′1](y1 ), π^{Y2}[x′1 , x′′1](y2 ) | τ^{X1}_{x′1,x′′1}(x1 ), x2 ).   (1)
Proposition 2 (CVA Condition [7]): For a memoryless
TWC with transition probability PY1 ,Y2 |X1 ,X2 , we have C =
CI = CO if for any PX1 ,X2 = PX2 PX1 |X2 = PX1 PX2 |X1 ,
H(PX2 P̃X1 |X2 , PYj |X1 ,X2 ) does not depend on P̃X1 |X2 for
given PX2 and there exists P̃X1 ∈ P(X1 ) such that
H̄⊥ (P̃X1 , PX2 , PY1 |X1 ,X2 ) ≥ H̄(PX1 , PX2 |X1 , PY1 |X1 ,X2 ) and
H̄⊥ (PX2 , P̃X1 , PY2 |X1 ,X2 ) ≥ H̄(PX2 , PX1 |X2 , PY2 |X1 ,X2 ).
We remark that Proposition 1 describes a channel symmetry property with respect to the channel input of user 1,
but an analogous condition can be obtained by exchanging the roles of users 1 and 2. Also, the invariance of
H(PX2 P̃X1 |X2 , PYj |X1 ,X2 ) in Proposition 2 in fact imposes
a certain symmetry constraint on the channel marginal distribution PYj |X1 ,X2 . In the literature, a TWC with independent
q-ary additive noise [6] is an example that satisfies both the
Shannon and CVA conditions.
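For small alphabets, Shannon's symmetry condition can be checked by brute force; the following Python sketch (our own checker, with function names that are not from the literature) searches over permutations of Y1 and Y2 for relation (1), and confirms the condition for the noiseless binary additive TWC Y1 = Y2 = X1 ⊕ X2.

    import itertools
    import numpy as np

    def satisfies_relation(P, xp, xpp, perm1, perm2):
        """Check relation (1) for the pair (xp, xpp) under output permutations perm1, perm2."""
        n1, n2, m1, m2 = P.shape
        swap = {xp: xpp, xpp: xp}
        for x1, x2, y1, y2 in itertools.product(range(n1), range(n2), range(m1), range(m2)):
            tx1 = swap.get(x1, x1)
            if not np.isclose(P[x1, x2, y1, y2], P[tx1, x2, perm1[y1], perm2[y2]]):
                return False
        return True

    def shannon_symmetric(P):
        """Brute-force Proposition 1 check; P[x1, x2, y1, y2] = P(y1, y2 | x1, x2)."""
        n1, _, m1, m2 = P.shape
        return all(
            any(satisfies_relation(P, xp, xpp, p1, p2)
                for p1 in itertools.permutations(range(m1))
                for p2 in itertools.permutations(range(m2)))
            for xp, xpp in itertools.combinations(range(n1), 2))

    # Example: the noiseless binary additive TWC Y1 = Y2 = X1 xor X2.
    P = np.zeros((2, 2, 2, 2))
    for x1, x2 in itertools.product(range(2), repeat=2):
        P[x1, x2, x1 ^ x2, x1 ^ x2] = 1.0
    print(shannon_symmetric(P))   # expected: True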
III. CONDITIONS FOR THE TIGHTNESS OF SHANNON’S INNER AND OUTER BOUNDS
In this section, we present four results regarding the tightness of Shannon’s inner and outer bounds. We adopt the
viewpoint that a two-way channel consists of two one-way
channels with state. For example, the one-way channel from
user 1 to user 2 is governed by the marginal distribution
PY2 |X1 ,X2 (derived from the channel probability distribution
PY1 ,Y2 |X1 ,X2 ), where X1 and Y2 are respectively the input and
the output of the channel with state X2 .
Let PX and PY |X be probability distributions on finite sets
X and Y. To simplify the presentation, we define
I(P_X , P_{Y|X}) = ∑_{x,y} P_X(x) P_{Y|X}(y|x) log [ P_{Y|X}(y|x) / ∑_{x′} P_X(x′) P_{Y|X}(y|x′) ],
which is the mutual information I(X; Y ) between input
X (governed by PX ) and corresponding output Y of a
channel with transition probability PY |X . A useful fact
is that I(·, ·) is concave in the first argument when the
second argument is fixed. Moreover, the conditional mutual information I(X1 ; Y2 |X2 = x2 ) and I(X2 ; Y1 |X1 =
x1 ) can be expressed as I(PX1 |X2 =x2 , PY2 |X1 ,X2 =x2 ) and
I(PX2 |X1 =x1 , PY1 |X1 =x1 ,X2 ), respectively.
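The functional I(P_X, P_{Y|X}) and its conditional versions translate directly into code; the following sketch (our own helper functions, base-2 logarithms assumed) can be reused when checking the conditions of the theorems below.

    import numpy as np

    def I_func(P_X, W):
        # I(P_X, P_{Y|X}); W[x, y] = P_{Y|X}(y|x)
        P_Y = P_X @ W
        joint = P_X[:, None] * W
        with np.errstate(divide='ignore', invalid='ignore'):
            ratio = np.where(joint > 0, W / P_Y[None, :], 1.0)
            terms = np.where(joint > 0, joint * np.log2(ratio), 0.0)
        return terms.sum()

    def I_X1_Y2_given_X2(P_X1X2, W2):
        # I(X1; Y2 | X2) = sum_{x2} P_{X2}(x2) I(P_{X1|X2=x2}, P_{Y2|X1,X2=x2});
        # W2[x1, x2, y2] = P_{Y2|X1,X2}(y2 | x1, x2)
        P_X2 = P_X1X2.sum(axis=0)
        total = 0.0
        for x2 in range(P_X1X2.shape[1]):
            if P_X2[x2] > 0:
                total += P_X2[x2] * I_func(P_X1X2[:, x2] / P_X2[x2], W2[:, x2, :])
        return total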
By viewing a TWC as two one-way channels with state, each of the following four theorems comprises two conditions, one for each direction of the two-way transmission. By symmetry, these theorems are also valid if the roles of users 1 and 2 are swapped. For simplicity, we will use I^{(k)}(Xi;Yj|Xj) and H^{(k)}(Yj|X1,X2) to denote the conditional mutual information and conditional entropy evaluated under the input distribution P^{(k)}_{X1,X2} for i, j = 1, 2 with i ≠ j. For P^{(k)}_{X1,X2} = P^{(k)}_{Xi} P^{(k)}_{Xj|Xi}, the conditional entropy H^{(k)}(Yi|Xi) is evaluated under the marginal distribution P^{(k)}_{Yi|Xi}(yi|xi) = Σ_{xj} P^{(k)}_{Xj|Xi}(xj|xi) P_{Yi|Xj,Xi}(yi|xj,xi).
Theorem 1: For a given memoryless TWC, if both of the following conditions are satisfied, then CI = CO:
(i) There exists P*_{X1} ∈ P(X1) such that for all x2 ∈ X2 we have arg max_{P_{X1|X2=x2}} I(X1;Y2|X2 = x2) = P*_{X1}.
(ii) I(P_{X2}, P_{Y1|X1=x1,X2}) does not depend on x1 ∈ X1 for any fixed P_{X2} ∈ P(X2).
Proof: For any P^{(1)}_{X1,X2} = P^{(1)}_{X2} P^{(1)}_{X1|X2}, let P^{(2)}_{X1,X2} = P*_{X1} P^{(1)}_{X2}, where P*_{X1} is given by (i). In light of (i), we have

I^{(1)}(X1;Y2|X2)
 = Σ_{x2} P^{(1)}_{X2}(x2) · I^{(1)}(X1;Y2|X2 = x2)                          (2)
 ≤ Σ_{x2} P^{(1)}_{X2}(x2) · max_{P_{X1|X2=x2}} I(X1;Y2|X2 = x2)              (3)
 = Σ_{x2} P^{(1)}_{X2}(x2) · I(P*_{X1}, P_{Y2|X1,X2=x2})                      (4)
 = Σ_{x2} P^{(1)}_{X2}(x2) · I^{(2)}(X1;Y2|X2 = x2)                           (5)
 = I^{(2)}(X1;Y2|X2).                                                         (6)

Moreover,

I^{(1)}(X2;Y1|X1)
 = Σ_{x1} P^{(1)}_{X1}(x1) · I^{(1)}(X2;Y1|X1 = x1)                           (7)
 = Σ_{x1} P^{(1)}_{X1}(x1) · I(P^{(1)}_{X2|X1=x1}, P_{Y1|X1=x1,X2})            (8)
 = Σ_{x1} P^{(1)}_{X1}(x1) · I(P^{(1)}_{X2|X1=x1}, P_{Y1|X1=x′1,X2})           (9)
 ≤ I( Σ_{x1} P^{(1)}_{X1}(x1) P^{(1)}_{X2|X1=x1}, P_{Y1|X1=x′1,X2} )           (10)
 = I(P^{(1)}_{X2}, P_{Y1|X1=x′1,X2})                                          (11)
 = Σ_{x′1} P*_{X1}(x′1) · I(P^{(1)}_{X2}, P_{Y1|X1=x′1,X2})                   (12)
 = I^{(2)}(X2;Y1|X1),                                                         (13)

where (9) holds by the invariance assumption in (ii), (10) holds since the functional I(·,·) is concave in the first argument, and (12) is obtained from the invariance assumption in (ii). Combining the above yields R(P^{(1)}_{X1,X2}, P_{Y1,Y2|X1,X2}) ⊆ R(P*_{X1} P^{(1)}_{X2}, P_{Y1,Y2|X1,X2}), which implies that CO ⊆ CI and hence CI = CO.
Theorem 2: For a given memoryless TWC, if for any P_{X1,X2} = P_{X2} P_{X1|X2} = P_{X1} P_{X2|X1}, both of the following conditions are satisfied, then CI = CO:
(i) There exists P*_{X1} ∈ P(X1) such that for all x2 ∈ X2 we have arg max_{P_{X1|X2=x2}} I(X1;Y2|X2 = x2) = P*_{X1}.
(ii) H(P_{X2} P̃_{X1|X2}, P_{Y1|X1,X2}) does not depend on P̃_{X1|X2} given P_{X2} and P_{Y1|X1,X2}, and the common maximizer P*_{X1} in (i) also satisfies H̄⊥(P*_{X1}, P_{X2}, P_{Y1|X1,X2}) ≥ H̄(P_{X1}, P_{X2|X1}, P_{Y1|X1,X2}).

Proof: Given any P^{(1)}_{X1,X2} = P^{(1)}_{X2} P^{(1)}_{X1|X2}, let P^{(2)}_{X1,X2} = P*_{X1} P^{(1)}_{X2}. By the same argument as in (2)-(6), we obtain via (i) that I^{(1)}(X1;Y2|X2) ≤ I^{(2)}(X1;Y2|X2). Moreover,

I^{(1)}(X2;Y1|X1)
 = H^{(1)}(Y1|X1) − H^{(1)}(Y1|X1,X2)
 = H̄(P^{(1)}_{X1}, P^{(1)}_{X2|X1}, P_{Y1|X1,X2}) − H(P^{(1)}_{X2} P^{(1)}_{X1|X2}, P_{Y1|X1,X2})   (14)
 ≤ H̄⊥(P*_{X1}, P^{(1)}_{X2}, P_{Y1|X1,X2}) − H(P^{(1)}_{X2} P*_{X1}, P_{Y1|X1,X2})                   (15)
 = H^{(2)}(Y1|X1) − H^{(2)}(Y1|X1,X2)
 = I^{(2)}(X2;Y1|X1),                                                                                  (16)

where (14) and (16) follow from the definitions in Section II and (15) is due to condition (ii). Consequently, R(P^{(1)}_{X1,X2}, P_{Y1,Y2|X1,X2}) ⊆ R(P*_{X1} P^{(1)}_{X2}, P_{Y1,Y2|X1,X2}), and hence CO ⊆ CI, so that CI = CO.
Theorem 3: For a given memoryless TWC, if both of the
following conditions are satisfied, then CI = CO :
(i) I(PX1 , PY2 |X1 ,X2 =x2 ) does not depend on x2 ∈ X2 for
any fixed PX1 ∈ P(X1 ).
(ii) I(PX2 , PY1 |X1 =x1 ,X2 ) does not depend on x1 ∈ X1 for
any fixed PX2 ∈ P(X2 ).
Proof: From conditions (i) and (ii), we know that max_{P_{X1|X2=x2}} I(X1;Y2|X2 = x2) has a common maximizer P*_{X1} for all x2 ∈ X2 and max_{P_{X2|X1=x1}} I(X2;Y1|X1 = x1) has a common maximizer P*_{X2} for all x1 ∈ X1. For any P^{(1)}_{X1,X2} = P^{(1)}_{X1} P^{(1)}_{X2|X1}, let P^{(2)}_{X1,X2} = P*_{X1} P*_{X2}. By the same argument as in (2)-(6), we conclude that I^{(1)}(X1;Y2|X2) ≤ I^{(2)}(X1;Y2|X2) and I^{(1)}(X2;Y1|X1) ≤ I^{(2)}(X2;Y1|X1). Thus, R(P^{(1)}_{X1,X2}, P_{Y1,Y2|X1,X2}) ⊆ R(P*_{X1} P*_{X2}, P_{Y1,Y2|X1,X2}), which yields CI = CO.
Similar to the CVA condition, complex computations are often inevitable for checking the above conditions. We next present a useful condition which needs little computational effort. Let [P_{Y2|X1,X2}(·|·,x2)] (resp. [P_{Y1|X1,X2}(·|x1,·)]) denote the marginal transition probability matrix obtained from P_{Y1,Y2|X1,X2=x2} (resp. P_{Y1,Y2|X1=x1,X2}), whose columns and rows are indexed according to a fixed order on the symbols in Y2 and X1 (resp. Y1 and X2).
Theorem 4: For a given memoryless TWC, if both of the
following conditions are satisfied, then CI = CO :
(i) The matrices [PY2 |X1 ,X2 (·|·, x2 )], x2 ∈ X2 , are column
permutations of each other.
(ii) The matrices [PY1 |X1 ,X2 (·|x1 , ·)], x1 ∈ X1 , are column
permutations of each other.
Since the proof is similar to the second part of the proof of
Theorem 5 in the next section, the details are omitted.
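For finite alphabets, the conditions of Theorem 4 reduce to comparing the multisets of columns of the marginal matrices. The following sketch is our own helper (matrices given as lists of rows), not part of the paper.

    def same_up_to_column_permutation(A, B, tol=1e-9):
        # True iff A and B are column permutations of each other
        if len(A) != len(B) or len(A[0]) != len(B[0]):
            return False
        cols_A = sorted(tuple(round(row[j] / tol) for row in A) for j in range(len(A[0])))
        cols_B = sorted(tuple(round(row[j] / tol) for row in B) for j in range(len(B[0])))
        return cols_A == cols_B

    def check_theorem4(PY2_by_x2, PY1_by_x1):
        # each argument is the list of marginal matrices indexed by the state symbol
        cond_i = all(same_up_to_column_permutation(PY2_by_x2[0], M) for M in PY2_by_x2)
        cond_ii = all(same_up_to_column_permutation(PY1_by_x1[0], M) for M in PY1_by_x1)
        return cond_i and cond_ii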
IV. DISCUSSION AND EXAMPLES
A. Comparison with Other Conditions
As already noted, the relationship between Propositions 1
and 2 is unclear as examples that satisfy the Shannon condition
but not the CVA condition seem hard to construct. In this
section, we show that Theorems 1 and 2 in fact generalize
the Shannon and CVA results, respectively. To see this, it
suffices to show that the Shannon and CVA conditions imply
the conditions in Theorems 1 and 2, respectively.
Theorem 5: A TWC satisfying Shannon’s symmetry condition in Proposition 1 must satisfy the conditions in Theorem 1.
Proof: For a TWC satisfying the condition of Proposition 1, the optimal input probability distribution that achieves capacity is of the form P_{X1,X2} = P_{X2}/|X1| for some P_{X2} ∈ P(X2) [1]. This result implies that condition (i) of Theorem 1 is satisfied because a common maximizer exists for all x2 ∈ X2 and is given by P*_{X1}(x1) = 1/|X1|. To prove that condition (ii) is also satisfied, we consider the two (marginal) matrices [P_{Y1|X1,X2}(·|x′1,·)] and [P_{Y1|X1,X2}(·|x′′1,·)] for some fixed x′1, x′′1 ∈ X1 and show that these matrices are column permutations of each other and hence I(P_{X2}, P_{Y1|X1=x′1,X2}) = I(P_{X2}, P_{Y1|X1=x′′1,X2}). The former claim is true because

P_{Y1|X1,X2}(y1|x′1,x2)
 = P_{Y1|X1,X2}(π^{Y1}[x′1,x′′1](y1) | τ^{X1}_{x′1,x′′1}(x′1), x2)   (17)
 = P_{Y1|X1,X2}(π^{Y1}[x′1,x′′1](y1) | x′′1, x2),                    (18)

where (17) is obtained by marginalizing Y2 on both sides of (1) and (18) follows from the definition of transposition. The second claim can then be verified by a direct computation of I(P_{X2}, P_{Y1|X1=x1,X2}) using the above result, and hence the details are omitted.
Remark 1: Example 1 in the next subsection demonstrates
that a TWC that satisfies the conditions in Theorem 1 may not
satisfy Shannon’s symmetry condition in Proposition 1 since
the common maximizer is not necessarily the uniform input
distribution. Hence, Theorem 1 is a more general result than
Proposition 1.
Theorem 6: A TWC satisfying the CVA condition in Proposition 2 must satisfy the conditions in Theorem 2.
Proof: Suppose that the condition of Proposition 2 is satisfied. To prove the theorem, we first claim that for j = 1, 2, H(Yj|X1=x′1, X2=x′2) = H(Yj|X1=x′′1, X2=x′2) for all x′1, x′′1 ∈ X1 and x′2 ∈ X2. Given arbitrary pairs (x′1, x′2) and (x′′1, x′2) with x′1 ≠ x′′1, consider the two probability distributions

P^{(1)}_{X1,X2}(a,b) = 1 if a = x′1 and b = x′2, and 0 otherwise,

and

P^{(2)}_{X1,X2}(a,b) = 1 if a = x′′1 and b = x′2, and 0 otherwise.

Noting that P^{(1)}_{X2} = P^{(2)}_{X2}, we have

H(Yj|X1=x′1, X2=x′2) = H^{(1)}(Yj|X1,X2)                            (19)
 = H(P^{(1)}_{X2} P^{(1)}_{X1|X2}, P_{Yj|X1,X2})
 = H(P^{(1)}_{X2} P^{(2)}_{X1|X2}, P_{Yj|X1,X2})                    (20)
 = H^{(2)}(Yj|X1,X2)                                                (21)
 = H(Yj|X1=x′′1, X2=x′2),                                           (22)

where (19) and (22) are due to the definitions of P^{(1)}_{X1,X2} and P^{(2)}_{X1,X2}, respectively, (20) follows from the CVA condition, and (21) holds since P^{(2)}_{X2} = P^{(1)}_{X2}. The claim is proved. Since H(Yj|X1,X2=x2) = Σ_{x1} P_{X1|X2}(x1|x2) H(Yj|X1=x1,X2=x2) and H(Yj|X1=x1,X2=x2) does not depend on x1 ∈ X1 for fixed x2 ∈ X2, H(Yj|X1,X2=x2) does not depend on P_{X1|X2=x2}.

Next, we show that condition (i) of Theorem 2 holds by constructing the common maximizer from the CVA condition. For each x2 ∈ X2, let

P*_{X1|X2=x2} = arg max_{P_{X1|X2=x2}} I(X1;Y2|X2=x2) = arg max_{P_{X1|X2=x2}} [H(Y2|X2=x2) − H(Y2|X1,X2=x2)]

and define P^{(1)}_{X1,X2} = P^{(1)}_{X2} P*_{X1|X2} for some P^{(1)}_{X2} ∈ P(X2). Since H(Yj|X1,X2=x2) does not depend on P_{X1|X2=x2}, P*_{X1|X2=x2} is in fact a maximizer for H(Y2|X2=x2). Note that the maximizer P*_{X1|X2=x2} may not be unique, but any choice works for our purposes. Now for P^{(1)}_{X1,X2}, by the CVA condition, there exists P̃_{X1} ∈ P(X1) such that

H̄(P_{X2}, P*_{X1|X2}, P_{Y2|X1,X2}) ≤ H̄⊥(P_{X2}, P̃_{X1}, P_{Y2|X1,X2}).

Set P^{(2)}_{X1,X2} = P̃_{X1} P^{(1)}_{X2}. Since P*_{X1|X2=x2} is the maximizer for H(Y2|X2=x2), we have

H̄(P^{(1)}_{X2}, P*_{X1|X2}, P_{Y2|X1,X2})
 = H^{(1)}(Y2|X2)
 = Σ_{x2} P^{(1)}_{X2}(x2) · H^{(1)}(Y2|X2=x2)
 = Σ_{x2} P^{(1)}_{X2}(x2) · max_{P_{X1|X2=x2}} H(Y2|X2=x2)
 ≥ Σ_{x2} P^{(1)}_{X2}(x2) · H^{(2)}(Y2|X2=x2)
 = H^{(2)}(Y2|X2)
 = H̄⊥(P^{(1)}_{X2}, P̃_{X1}, P_{Y2|X1,X2}).

Thus, H̄(P^{(1)}_{X2}, P*_{X1|X2}, P_{Y2|X1,X2}) = H̄⊥(P^{(1)}_{X2}, P̃_{X1}, P_{Y2|X1,X2}), i.e.,

Σ_{x2} P^{(1)}_{X2}(x2) · H^{(1)}(Y2|X2=x2) = Σ_{x2} P^{(1)}_{X2}(x2) · H^{(2)}(Y2|X2=x2).

Since H^{(2)}(Y2|X2=x2) ≤ H^{(1)}(Y2|X2=x2) for each x2 ∈ X2, we obtain H^{(1)}(Y2|X2=x2) = H^{(2)}(Y2|X2=x2), i.e., P̃_{X1} achieves the same value of H(Y2|X2=x2) as P*_{X1|X2=x2} for all x2 ∈ X2. Consequently, P̃_{X1} is a common maximizer and thus condition (i) of Theorem 2 is satisfied. Moreover, since the common maximizer P̃_{X1} is provided by the CVA condition, condition (ii) of Theorem 2 automatically holds.
Remark 2: Example 1 below shows that a TWC that satisfies
the conditions in Theorem 2 does not necessarily satisfy
the condition in Proposition 2 because our conditions allow
H(PX2 P̃X1 |X2 , PY2 |X1 ,X2 ) to depend on P̃X1 |X2 for given
PX2 . Hence, Theorem 2 is more general than Proposition 2.
B. Examples
We next illustrate the effectiveness of our conditions via two
examples in which X1 = X2 = Y1 = Y2 = {0, 1}. The TWC
in Example 1 satisfies the conditions of Theorems 1-4 and the
capacity region is rectangular. The TWC in Example 2 satisfies the conditions of Theorems 1 and 2 and has a non-rectangular capacity region. However, neither of the constructed TWCs satisfies the Shannon or the CVA conditions.
Example 1: Consider the TWC with

[P_{Y1,Y2|X1,X2}] =
  (y1,y2):          00       01       10       11
  (x1,x2)=00:     0.783    0.087    0.117    0.013
  (x1,x2)=01:     0.0417   0.3753   0.0583   0.5247
  (x1,x2)=10:     0.261    0.609    0.039    0.091
  (x1,x2)=11:     0.2919   0.1251   0.4081   0.1749
The corresponding one-way channel marginal distributions are given by

[P_{Y2|X1,X2}(·|·,0)] = [0.9 0.1; 0.3 0.7],    [P_{Y1|X1,X2}(·|0,·)] = [0.87 0.13; 0.417 0.583],
[P_{Y2|X1,X2}(·|·,1)] = [0.1 0.9; 0.7 0.3],    [P_{Y1|X1,X2}(·|1,·)] = [0.87 0.13; 0.417 0.583].
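These marginal matrices can be recomputed directly from the joint transition matrix; the short check below (our own code, with the row and column orderings 00, 01, 10, 11 used above) reproduces the values listed here.

    import numpy as np

    # rows: (x1, x2) in order 00, 01, 10, 11; columns: (y1, y2) in the same order
    P = np.array([[0.783,  0.087,  0.117,  0.013],
                  [0.0417, 0.3753, 0.0583, 0.5247],
                  [0.261,  0.609,  0.039,  0.091],
                  [0.2919, 0.1251, 0.4081, 0.1749]])
    W = P.reshape(2, 2, 2, 2)                     # indices: x1, x2, y1, y2

    def PY2_given_x2(x2):                         # rows x1, columns y2
        return W[:, x2, :, :].sum(axis=1)

    def PY1_given_x1(x1):                         # rows x2, columns y1
        return W[x1, :, :, :].sum(axis=2)

    print(PY2_given_x2(0))                        # ≈ [[0.9, 0.1], [0.3, 0.7]]
    print(PY1_given_x1(0))                        # ≈ [[0.87, 0.13], [0.417, 0.583]]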
For this TWC, Shannon’s symmetry condition in Proposition 1
does not hold since there are no permutations on Y1 and
Y2 which can result in (1). Furthermore, since H(Y2 |X1 =
0, X2 = 0) = Hb (0.1) and H(Y2 |X1 = 1, X2 = 0) =
Hb (0.3), where Hb (·) denotes the binary entropy function,
H(PX2 P̃X1 |X2 , PY2 |X1 ,X2 ) depends on P̃X1 |X2 for given PX2 .
Thus, the CVA condition in Proposition 2 does not hold, either.
However by Theorem 4, Shannon’s inner and outer bounds
coincide since [PY2 |X1 ,X2 (·|·, 0)] (resp. [PY1 |X1 ,X2 (·|0, ·)]) can
be obtained by permuting the columns of [PY2 |X1 ,X2 (·|·, 1)]
(resp. [PY1 |X1 ,X2 (·|1, ·)]). Since the conditions in Theorem 4
imply the conditions in Theorem 3 and the conditions in
Theorem 3 further imply the conditions in Theorem 1, the
conditions of Theorems 1 and 3 are also satisfied. Moreover,
the optimal input distribution for this TWC can be obtained by searching for the common maximizer for each of the two one-way channels via the Blahut-Arimoto algorithm, yielding P*_{X1}(0) = P*_{X2}(0) = 0.471. Thus, the capacity region is achieved by the input distribution P*_{X1,X2} = P*_{X1} P*_{X2}, i.e., C = {(R1, R2) : 0 ≤ R1 ≤ 0.2967, 0 ≤ R2 ≤ 0.1715}.
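A standard Blahut-Arimoto iteration for a one-way DMC suffices for this search: applying it to the marginal [P_{Y2|X1,X2}(·|·,0)], say, locates the maximizer of I(X1;Y2|X2=0), which by condition (i) of Theorem 1 is also the common maximizer. The code below is a generic sketch in our own notation, not code from the paper, and the reported values match the rates stated above (up to the labelling of the two input symbols).

    import numpy as np

    def blahut_arimoto(W, iters=2000):
        # W[x, y] = P(y|x); returns (capacity in bits, capacity-achieving input distribution)
        n = W.shape[0]
        p = np.full(n, 1.0 / n)
        cap = 0.0
        for _ in range(iters):
            q = p @ W                                     # current output distribution
            with np.errstate(divide='ignore', invalid='ignore'):
                D = np.where(W > 0, W * np.log2(W / q[None, :]), 0.0).sum(axis=1)
            c = 2.0 ** D
            cap = np.log2(p @ c)                          # lower bound, tight at convergence
            p = p * c / (p @ c)
        return cap, p

    # one-way channel from user 1 to user 2 with the state fixed to x2 = 0 (Example 1)
    C1, p_opt = blahut_arimoto(np.array([[0.9, 0.1], [0.3, 0.7]]))
    print(C1, p_opt)   # ≈ 0.2967 bits; the maximizer weights the two inputs ≈ 0.53 / 0.47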
Finally, we note that this TWC also satisfies the conditions of Theorem 2, in which the first condition is already implied by the conditions of Theorem 1. To verify the second condition, we consider

[P_{Y1|X1,X2}(·|0,·)] = [0.87 0.13; 0.417 0.583],    [P_{Y1|X1,X2}(·|1,·)] = [0.87 0.13; 0.417 0.583].

Here, for all x1 ∈ {0,1}, H(Y1|X1=x1, X2=0) = Hb(0.13) and H(Y1|X1=x1, X2=1) = Hb(0.417), where Hb(·) denotes the binary entropy function. Thus, H(P_{X2} P̃_{X1|X2}, P_{Y1|X1,X2}) does not depend on P̃_{X1|X2} given P_{X2}. Together with the substitutions P^{(1)}_{X1,X2} = P_{X1,X2} and P^{(2)}_{X1,X2} = P*_{X1} P_{X2} into (7)-(13), we then obtain that H̄⊥(P*_{X1}, P_{X2}, P_{Y1|X1,X2}) ≥ H̄(P_{X1}, P_{X2|X1}, P_{Y1|X1,X2}). Therefore, the second condition of Theorem 2 holds.

Example 2: Consider the TWC with

[P_{Y1,Y2|X1,X2}] =
  (y1,y2):          00         01         10         11
  (x1,x2)=00:     0.783      0.087      0.117      0.013
  (x1,x2)=01:     0.36279    0.05421    0.50721    0.07579
  (x1,x2)=10:     0.261      0.609      0.039      0.091
  (x1,x2)=11:     0.173889   0.243111   0.243111   0.339889

where two one-way channel marginal distributions are

[P_{Y2|X1,X2}(·|·,0)] = [0.9 0.1; 0.3 0.7],    [P_{Y2|X1,X2}(·|·,1)] = [0.87 0.13; 0.417 0.583],

and [P_{Y1|X1,X2}(·|0,·)] = [P_{Y1|X1,X2}(·|1,·)] = [P_{Y2|X1,X2}(·|·,1)]. Using the same arguments as in Example 1, one can easily see that this TWC satisfies neither the Shannon nor the CVA conditions. However, it satisfies the conditions in Theorem 1 since a common maximizer exists for the one-way channel from user 1 to user 2, i.e., P*_{X1}(0) = 0.471, and condition (ii) trivially holds. To verify that this channel also satisfies the conditions in Theorem 2, the same argument as in the previous example is used. Finally, by considering all input distributions of the form P_{X1,X2} = P*_{X1} P_{X2}, the capacity region of this channel is determined as shown in Fig. 2.

[Figure] Fig. 2. The capacity region of the TWC in Example 2 (axes: R1, R2).
V. CONCLUSIONS
In this paper, four conditions on the coincidence of Shannon’s capacity inner and outer bounds were derived. These
invariance conditions were shown to generalize existing results, thus enlarging the class of TWCs whose capacity region
can be exactly determined. Numerical examples illustrate the
applications of the new conditions in situations where prior
results do not apply.
REFERENCES
[1] C. E. Shannon, "Two-way communication channels," in Proc. 4th Berkeley Symp. Math. Stat. Probab., Chicago, IL, USA, Jun. 1961, pp. 611-644.
[2] J. Massey, "Causality, feedback and directed information," in Proc. Int. Symp. Information Theory and Its Applications (ISITA-90), Waikiki, HI, USA, Nov. 1990, pp. 303-305.
[3] G. Kramer, "Directed information for channels with feedback," Ph.D. dissertation, Swiss Federal Institute of Technology Zurich, 1998.
[4] T. Han, "A general coding scheme for the two-way channel," IEEE Trans. Inf. Theory, vol. IT-30, no. 1, pp. 35-44, Jan. 1984.
[5] Z. Cheng and N. Devroye, "Two-way networks: when adaptation is useless," IEEE Trans. Inf. Theory, vol. 60, no. 3, pp. 1793-1813, Mar. 2014.
[6] L. Song, F. Alajaji, and T. Linder, "Adaptation is useless for two discrete additive-noise two-way channels," in Proc. IEEE Int. Symp. Inf. Theory, Barcelona, Spain, Jul. 2016, pp. 1854-1858.
[7] A. Chaaban, L. R. Varshney, and M. Alouini, "The capacity of injective semi-deterministic two-way channels," in Proc. IEEE Int. Symp. Inf. Theory, Aachen, Germany, Jun. 2017, pp. 431-436.
[8] J. Schalkwijk, "The binary multiplying channel - a coding scheme that operates beyond Shannon's inner bound region," IEEE Trans. Inf. Theory, vol. IT-28, no. 1, pp. 107-110, Jan. 1982.
[9] J. Schalkwijk, "On an extension of an achievable rate region for the binary multiplying channel," IEEE Trans. Inf. Theory, vol. IT-29, no. 3, pp. 445-448, May 1983.
[10] Z. Zhang, T. Berger, and J. Schalkwijk, "New outer bounds to capacity regions of two-way channels," IEEE Trans. Inf. Theory, vol. IT-32, no. 3, pp. 383-386, May 1986.
[11] A. Hekstra and F. Willems, "Dependence balance bounds for single output two-way channels," IEEE Trans. Inf. Theory, vol. 35, no. 1, pp. 44-53, Jan. 1989.
| 7 |
On κ-reducibility of pseudovarieties
of the form V ∗ D
J. C. Costa
C. Nogueira
M. L. Teixeira
arXiv:1602.03020v1 [math.GR] 9 Feb 2016
February 8, 2016
Abstract
This paper deals with the reducibility property of semidirect products of the form
V ∗ D relatively to graph equation systems, where D denotes the pseudovariety of definite
semigroups. We show that, if the pseudovariety V is reducible with respect to the canonical
signature κ consisting of the multiplication and the (ω − 1)-power, then V ∗ D is also
reducible with respect to κ.
Keywords. Pseudovariety, definite semigroup, semidirect product, implicit signature,
graph equations, reducibility.
1
Introduction
A semigroup (resp. monoid) pseudovariety is a class of finite semigroups (resp. monoids)
closed under taking subsemigroups (resp. submonoids), homomorphic images and finite direct
products. It is said to be decidable if there is an algorithm to test membership of a finite
semigroup (resp. monoid) in that pseudovariety. The semidirect product of pseudovarieties has
been getting much attention, mainly due to the Krohn-Rhodes decomposition theorem [18]. In
turn, the pseudovarieties of the form V∗D, where D is the pseudovariety of all finite semigroups
whose idempotents are right zeros, are among the most studied semidirect products [23, 25, 3,
1, 4]. For a pseudovariety V of monoids, LV denotes the pseudovariety of all finite semigroups
S such that eSe ∈ V for all idempotents e of S. We know from [17, 23, 24, 25] that V ∗ D
is contained in LV and that V ∗ D = LV if and only if V is local in the sense of Tilson [25].
In particular, the equalities Sl ∗ D = LSl and G ∗ D = LG hold for the pseudovarieties Sl of
semilattices and G of groups.
J. C. Costa & M. L. Teixeira: CMAT, Dep. Matemática e Aplicações, Universidade do Minho, Campus
de Gualtar, 4710-057 Braga, Portugal; e-mail: [email protected], [email protected]
C. Nogueira: CMAT, Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Leiria, Campus 2,
Morro do Lena, Alto Vieiro, 2411-901 Leiria, Portugal; e-mail: [email protected]
It is known that the semidirect product operator does not preserve decidability of pseudovarieties [20, 11]. The notion of tameness was introduced by Almeida and Steinberg [7, 8] as
a tool for proving decidability of semidirect products. The fundamental property for tameness
is reducibility. This property was originally formulated in terms of graph equation systems
and later extended to any system of equations [2, 21]. It is parameterized by an implicit
signature σ (a set of implicit operations on semigroups containing the multiplication), and we
speak of σ-reducibility. For short, given an equation system Σ with rational constraints, a
pseudovariety V is σ-reducible relatively to Σ when the existence of a solution of Σ by implicit
operations over V implies the existence of a solution of Σ by σ-words over V and satisfying
the same constraints. The pseudovariety V is said to be σ-reducible if it is σ-reducible with
respect to every finite graph equation system. The implicit signature which is most commonly
encountered in the literature is the canonical signature κ = {ab, aω−1 } consisting of the multiplication and the (ω − 1)-power. For instance, the pseudovarieties D [9], G [10, 8], J [1, 2]
of all finite J -trivial semigroups, LSl [16] and R [6] of all finite R-trivial semigroups are
κ-reducible.
In this paper, we study the κ-reducibility property of semidirect products of the form V∗D.
This research is essentially inspired by the papers [15, 16] (see also [13] where a stronger form
of κ-reducibility was established for LSl). We prove that, if V is κ-reducible, then V ∗ D is κ-reducible. In particular, this gives a new and simpler proof (though with the same basic idea)
of the κ-reducibility of LSl and establishes the κ-reducibility of the pseudovarieties LG, J ∗ D
and R ∗ D. Combined with the recent proof that the κ-word problem for LG is decidable [14],
this shows that LG is κ-tame, a problem proposed by Almeida a few years ago. This also
extends part of our work in the paper [15], where we proved that under mild hypotheses
on an implicit signature σ, if V is σ-reducible relatively to pointlike systems of equations
(i.e., systems of equations of the form x1 = · · · = xn ) then V ∗ D is pointlike σ-reducible as
well. As in [15], we use results from [5], where various kinds of σ-reducibility of semidirect
products with an order-computable pseudovariety were considered. More specifically, we know
from [5] that a pseudovariety of the form V ∗ Dk is κ-reducible when V is κ-reducible, where
Dk is the order-computable pseudovariety defined by the identity yx1 · · · xk = x1 · · · xk . As V ∗ D = ∪_k (V ∗ Dk), we utilize this result as a way to achieve our property concerning the
pseudovarieties V ∗ D. The method used in this paper is similar to that of [15]. However,
some significant changes, inspired by [16], had to be introduced in order to deal with the much
more intricate graph equation systems.
2
Preliminaries
The reader is referred to the standard bibliography on finite semigroups, namely [1, 21],
for general background and undefined terminology. For basic definitions and results about
combinatorics on words, the reader may wish to consult [19].
2.1  Words and pseudowords
Throughout this paper, A denotes a finite non-empty set called an alphabet. The free semigroup
and the free monoid generated by A are denoted respectively by A+ and A∗ . The empty word
is represented by 1 and the length of a word w ∈ A∗ is denoted by |w|. A word is called
primitive if it cannot be written in the form un with n > 1. Two words u and v are said to
be conjugate if u = w1 w2 and v = w2 w1 for some words w1 , w2 ∈ A∗ . A Lyndon word is a
primitive word which is minimal in its conjugacy class, for the lexicographic order on A+ .
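These combinatorial notions are straightforward to test by machine; the following sketch (our own illustration) checks primitivity, conjugacy and the Lyndon property of finite words.

    def is_primitive(w):
        # w is primitive iff it is not a proper power, i.e. w occurs in w + w
        # only at positions 0 and len(w)
        return (w + w).find(w, 1) == len(w)

    def are_conjugate(u, v):
        # u and v are conjugate iff |u| = |v| and v is a factor of uu
        return len(u) == len(v) and v in u + u

    def is_lyndon(w):
        # a Lyndon word is primitive and minimal in its conjugacy class
        rotations = [w[i:] + w[:i] for i in range(len(w))]
        return is_primitive(w) and w == min(rotations)

    assert not is_primitive("abab") and is_primitive("aab")
    assert are_conjugate("abc", "cab") and is_lyndon("aab")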
A left-infinite word on A is a sequence w = (an )n of letters of A indexed by −N also
written w = · · · a−2 a−1 . The set of all left-infinite words on A will be denoted by A−N and
we put A−∞ = A+ ∪ A−N . The set A−∞ is endowed with a semigroup structure by defining
a product as follows: if w, z ∈ A+ , then wz is already defined; left-infinite words are right
zeros; finally, if w = · · · a−2 a−1 is a left-infinite word and z = b1 b2 · · · bn is a finite word,
then wz is the left-infinite word wz = · · · a−2 a−1 b1 b2 · · · bn . A left-infinite word w of the form
u−∞ v = · · · uuuv, with u ∈ A+ and v ∈ A∗ , is said to be ultimately periodic. In case v = 1, the
word w is named periodic. For a periodic word w = u−∞ , if u is a primitive word, then it will
be called the root of w and its length |u| will be said to be the period of w.
For a pseudovariety V of semigroups, we denote by ΩA V the relatively free pro-V semigroup generated by the set A: for each pro-V semigroup S and each function ϕ : A → S,
there is a unique continuous homomorphism ϕ : ΩA V → S extending ϕ. The elements of
ΩA V are called pseudowords (or implicit operations) over V. A pseudovariety V is called
order-computable when the subsemigroup ΩA V of ΩA V generated by A is finite, in which case
ΩA V = ΩA V, and effectively computable. Recall that, for the pseudovariety S of all finite
semigroups, ΩA S is (identified with) the free semigroup A+ . The elements of ΩA S \ A+ will
then be called infinite pseudowords.
A pseudoidentity is a formal equality π = ρ of pseudowords π, ρ ∈ ΩA S over S. We say that
V satisfies the pseudoidentity π = ρ, and write V |= π = ρ, if ϕπ = ϕρ for every continuous
homomorphism ϕ : ΩA S → S into a semigroup S ∈ V, which is equivalent to saying that
pV π = pV ρ for the natural projection pV : ΩA S → ΩA V.
2.2
Pseudoidentities over V ∗ Dk
For a positive integer k, let Dk be the pseudovariety of all finite semigroups satisfying the identity yx1 · · · xk = x1 · · · xk. Denote by A^k the set of words over A with length k and by A^{≤k} the set {w ∈ A+ : |w| ≤ k} of non-empty words over A with length at most k. We notice that ΩA Dk may be identified with the semigroup whose support set is A^{≤k} and whose multiplication is given by u · v = tk(uv), where tk w denotes the longest suffix of length at most k of a given (finite or left-infinite) word w. Then, the Dk are order-computable pseudovarieties such that D = ∪_k Dk. Moreover, it is well-known that ΩA D is isomorphic to the semigroup A−∞.
For each pseudoword π ∈ ΩA S, we denote by tk π the unique smallest word (of A^{≤k}) such that Dk |= π = tk π. Symmetrically, we denote by ik π the smallest word (of A^{≤k}) such that Kk |= π = ik π, where Kk is the dual pseudovariety of Dk defined by the identity x1 · · · xk y = x1 · · · xk. Let Φk be the function A+ → (A^{k+1})∗ that sends each word w ∈ A+ to the sequence of factors of length k + 1 of w, in the order they occur in w. We still denote by Φk (see [3] and [1, Lemma 10.6.11]) its unique continuous extension ΩA S → (Ω_{A^{k+1}} S)^1. This function Φk is a k-superposition homomorphism, with the meaning that it verifies the conditions:
i) Φk w = 1 for every w ∈ A^{≤k};
ii) Φk(πρ) = Φk(π) Φk((tk π)ρ) = Φk(π (ik ρ)) Φk(ρ) for every π, ρ ∈ ΩA S.
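On finite words these operators are immediate to implement; the sketch below (our own illustration) realizes t_k, i_k, the multiplication of Ω_A D_k, and Φ_k, and checks condition ii) on an example.

    def t_k(w, k):
        # longest suffix of w of length at most k
        return w[-k:] if len(w) > k else w

    def i_k(w, k):
        # longest prefix of w of length at most k
        return w[:k] if len(w) > k else w

    def mult_Dk(u, v, k):
        # multiplication of Omega_A D_k on words of length at most k: u . v = t_k(uv)
        return t_k(u + v, k)

    def Phi_k(w, k):
        # sequence of factors of length k + 1 of w, in the order they occur
        return [w[i:i + k + 1] for i in range(len(w) - k)]

    u, v, k = "abca", "bcb", 2
    assert Phi_k(u + v, k) == Phi_k(u, k) + Phi_k(t_k(u, k) + v, k)   # condition ii)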
Throughout the paper, V denotes a non-locally trivial pseudovariety of semigroups. For any pseudowords π, ρ ∈ ΩA S, it is known from [1, Theorem 10.6.12] that

V ∗ Dk |= π = ρ  ⇐⇒  ik π = ik ρ, tk π = tk ρ and V |= Φk π = Φk ρ.   (2.1)

2.3  Implicit signatures and σ-reducibility
By an implicit signature we mean a set σ of pseudowords (over S) containing the multiplication.
In particular, we represent by κ the implicit signature {ab, aω−1 }, usually called the canonical
signature. Every profinite semigroup has a natural structure of a σ-algebra, via the natural
interpretation of pseudowords on profinite semigroups. The σ-subalgebra of ΩA S generated
by A is denoted by ΩσA S. It is freely generated by A in the variety of σ-algebras generated by
the pseudovariety S and its elements are called σ-words (over S). To a (directed multi)graph Γ = V(Γ) ⊎ E(Γ), with vertex set V(Γ), edge set E(Γ), and edges e : αe → ωe, we associate
the system ΣΓ of all equations of the form (αe) e = ωe, with e ∈ E(Γ). Let S be a finite
A-generated semigroup, δ : ΩA S → S be the continuous homomorphism respecting the choice
of generators and ϕ : Γ → S 1 be an evaluation mapping such that ϕE(Γ) ⊆ S. We say that
a mapping η : Γ → (ΩA S)1 is a V-solution of ΣΓ with respect to (ϕ, δ) when δη = ϕ and
V |= ηu = ηv for all (u = v) ∈ ΣΓ . Furthermore, if ηΓ ⊆ (ΩσA S)1 for an implicit signature σ,
then η is called a (V, σ)-solution. The pseudovariety V is said to be σ-reducible relatively to
the system ΣΓ if the existence of a V-solution of ΣΓ with respect to a pair (ϕ, δ) entails the
existence of a (V, σ)-solution of ΣΓ with respect to the same pair (ϕ, δ). We say that V is
σ-reducible, if it is σ-reducible relatively to ΣΓ for all finite graphs Γ.
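Concretely, Σ_Γ attaches one equation (αe)e = ωe to each edge. A minimal sketch of this bookkeeping (our own illustration, with edges given as labelled pairs of vertices) is:

    def graph_equations(edges):
        # edges: dict mapping an edge name e to the pair (alpha_e, omega_e);
        # returns Sigma_Gamma as formal equations ((alpha_e, e), omega_e),
        # read as: (value of alpha_e) times (value of e) equals (value of omega_e)
        return [((alpha, e), omega) for e, (alpha, omega) in edges.items()]

    # a loop edge e2 at vertex v2 contributes the equation (v2) e2 = v2
    print(graph_equations({"e1": ("v1", "v2"), "e2": ("v2", "v2")}))
    # [(('v1', 'e1'), 'v2'), (('v2', 'e2'), 'v2')]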
3
κ-reducibility of V ∗ D
Let V be a given κ-reducible non-locally trivial pseudovariety. The purpose of this paper is
to prove the κ-reducibility of the pseudovariety V ∗ D. So, we fix a finite graph Γ and a finite
A-generated semigroup S and consider a V ∗ D-solution η : Γ → (ΩA S)1 of the system ΣΓ
with respect to a pair (ϕ, δ), where ϕ : Γ → S 1 is an evaluation mapping such that ϕE(Γ) ⊆ S
and δ : ΩA S → S is a continuous homomorphism respecting the choice of generators. We have
to construct a (V ∗ D, κ)-solution η ′ : Γ → (ΩκA S)1 of ΣΓ with respect to the same pair (ϕ, δ).
On κ-reducibility of pseudovarieties of the form V ∗ D
3.1
5
Initial considerations
Suppose that g ∈ Γ is such that ηg = u with u ∈ A∗ . Since η and η ′ are supposed to be
V ∗ D-solutions of the system ΣΓ with respect to (ϕ, δ), we must have δη = ϕ = δη ′ and so, in
particular, δη ′ g = δu. As the homomorphism δ : ΩA S → S is arbitrarily fixed, it may happen
that the equality δη ′ g = δu holds only when η ′ g = u. In that case we would be obliged to
define η ′ g = u. Since we want to describe an algorithm to define η ′ that should work for any
given graph and solution, we will then construct a solution η ′ verifying the following condition:
∀g ∈ Γ, (ηg ∈ A∗ =⇒ η′g = ηg).   (C1(Γ, η, η′))
Suppose next that a vertex v ∈ V (Γ) is such that D |= ηv = uω with u ∈ A+ , that is,
suppose that pD ηv = u−∞ . Because Γ is an arbitrary graph, it could include, for instance,
an edge e such that αe = ωe = v and the labeling η could be such that ηe = u. Since D
is a subpseudovariety of V ∗ D, η is a D-solution of ΣΓ with respect to (ϕ, δ). Hence, as
by condition C1 (Γ, η, η ′ ) we want to preserve finite labels, it would follow in that case that
D |= (η ′ v)u = η ′ v and, thus, that D |= η ′ v = uω = ηv. This observation suggests that we
should preserve the projection into ΩA D of labelings of vertices v such that pD ηv = u−∞ with
u ∈ A+ . More generally, we will construct the (V ∗ D, κ)-solution η ′ in such a way that the
following condition holds:
∀v ∈ V(Γ), (pD ηv = u−∞ z with u ∈ A+ and z ∈ A∗ =⇒ pD η′v = pD ηv).   (C2(Γ, η, η′))
Let ℓη = max{|u| : u ∈ A∗ and ηg = u for some g ∈ Γ} be the maximum length of finite
labels under η of elements of Γ. To be able to make some reductions on the graph Γ and
solution η, described in Section 3.2, we want η ′ to verify the extra condition below, where
L ≥ ℓη is a non-negative integer to be specified later, on Section 3.3:
∀v ∈ V(Γ), (ηv = uπ with u ∈ A^L =⇒ η′v = uπ′ with δπ = δπ′).   (C3(Γ, η, η′))

3.2  Simplifications on the solution η
We begin this section by reducing to the case in which all vertices of Γ are labeled by infinite
pseudowords under η. Suppose first that there is an edge e : v → w such that ηv = uv and ηe = ue
with uv ∈ A∗ and ue ∈ A+ , so that ηw = uv ue . Drop the edge e and consider the restrictions
η1 and ϕ1 , of η and ϕ respectively, to the graph Γ1 = Γ\{e}. Then η1 is a V∗D-solution of the
system ΣΓ1 with respect to the pair (ϕ1 , δ). Assume that there is a (V ∗ D, κ)-solution η1′ of
ΣΓ1 with respect to (ϕ1 , δ) verifying condition C1 (Γ1 , η1 , η1′ ). Then η1′ v = uv and η1′ w = uv ue .
Let η ′ be the extension of η1′ to Γ obtained by letting η ′ e = ue . Then η ′ is a (V ∗ D, κ)-solution
of ΣΓ with respect to (ϕ, δ). By induction on the number of edges labeled by finite words
under η beginning in vertices also labeled by finite words under η, we may therefore assume
that there are no such edges in Γ.
Now, we remove all vertices v of Γ labeled by finite words under η such that v is not the
beginning of an edge, thus obtaining a graph Γ1 . As above, if η1′ is a (V ∗ D, κ)-solution of
ΣΓ1 , then we build a (V ∗ D, κ)-solution η ′ of ΣΓ by letting η ′ coincide with η1′ on Γ1 and
letting η ′ v = ηv for each vertex v ∈ Γ \ Γ1 . So, we may assume that all vertices of Γ labeled
by finite words under η are the beginning of some edge.
Suppose next that e : v → w is an edge such that ηv = u and ηe = π with u ∈ A∗ and π ∈ ΩA S \ A+. Notice that, since it is an infinite pseudoword, π can be written as π = π1 π2 with both π1 and π2 being infinite pseudowords. Drop the edge e (and the vertex v in case e is the only edge beginning in v) and let v1 be a new vertex and e1 : v1 → w be a new edge, thus
obtaining a new graph Γ1 . Let η1 and ϕ1 be the labelings of Γ1 defined as follows:
• η1 and ϕ1 coincide, respectively, with η and ϕ on Γ′ = Γ1 ∩ Γ;
• η1 v1 = uπ1 , η1 e1 = π2 , ϕ1 v1 = δη1 v1 and ϕ1 e1 = δη1 e1 .
Then η1 is a V ∗ D-solution of the system ΣΓ1 with respect to the pair (ϕ1 , δ). Assume that
there is a (V ∗D, κ)-solution η1′ of ΣΓ1 with respect to (ϕ1 , δ) verifying conditions C1 (Γ1 , η1 , η1′ )
and C3 (Γ1 , η1 , η1′ ). In particular, since L is chosen to be greater than ℓη , η1′ v1 = uπ1′ with
′
′
′
′
δπ1 = δπ1′ . Let η ′ be the extension of η1|Γ
′ to Γ obtained by letting η e = π1 (η1 e1 ) (and
η ′ v = u in case v 6∈ Γ′ ). As one can easily verify, η ′ is a (V ∗ D, κ)-solution of ΣΓ with respect
to (ϕ, δ). By induction on the number of edges beginning in vertices labeled by finite words
under η, we may therefore assume that all vertices of Γ are labeled by infinite pseudowords
under η.
Suppose at last that an edge e ∈ Γ is labeled under η by a finite word u = a1 · · · an , where
n > 1 and ai ∈ A. Denote v0 = αe and vn = ωe. In this case, we drop the edge e and, for each
i ∈ {1, . . . , n − 1}, we add a new vertex vi and a new edge ei : vi−1 → vi to the graph Γ. Let Γ1
be the graph thus obtained and let η1 and ϕ1 be the labelings of Γ1 defined as follows:
• η1 and ϕ1 coincide, respectively, with η and ϕ on Γ′ = Γ \ {e};
• for each i ∈ {1, . . . , n − 1}, η1 vi = (ηv)a1 · · · ai , η1 ei = ai , ϕ1 vi = δη1 vi and ϕ1 ei = δη1 ei .
Hence, η1 is a V ∗ D-solution of the system ΣΓ1 with respect to the pair (ϕ1 , δ). Suppose there
exists a (V ∗ D, κ)-solution η1′ of ΣΓ1 with respect to (ϕ1 , δ) verifying condition C1 (Γ1 , η1 , η1′ ).
Let η′ be the extension of η1′|Γ′ to Γ obtained by letting η′e = u. Then η′ is a (V ∗ D, κ)-solution of ΣΓ with respect to (ϕ, δ). By induction on the number of edges labeled by finite
words under η, we may further assume that each edge of Γ labeled by a finite word under η
is, in fact, labeled by a letter of the alphabet.
3.3
Borders of the solution η
The main objective of this section is to define a certain class of finite words, called borders of
the solution η. Since the equations (of ΣΓ ) we have to deal with are of the form (αe) e = ωe,
these borders will serve to signalize the transition from a vertex αe to the edge e.
For each vertex v of Γ, denote by dv ∈ A−N the projection pD ηv of ηv into ΩA D and let
Dη = {dv | v ∈ V (Γ)}. We say that two left-infinite words v1 , v2 ∈ A−N are confinal if they
have a common prefix y ∈ A−N , that is, if v1 = yz1 and v2 = yz2 for some words z1 , z2 ∈ A∗ .
As one easily verifies, the relation ∝ defined, for each dv1 , dv2 ∈ Dη , by
dv 1 ∝ dv 2
if and only if
dv1 and dv2 are confinal
is an equivalence on Dη . For each ∝-class ∆, we fix a word y∆ ∈ A−N and words zv ∈ A∗ , for
each vertex v with dv ∈ ∆, such that
dv = y∆ zv .
Moreover, when dv is ultimately periodic, we choose y∆ of the form u−∞ , with u a Lyndon
word, and fix zv not having u as a prefix. The word u and its length |u| will be said to be,
respectively, a root and a period of the solution η. Without loss of generality, we assume that
η has at least one root (otherwise we could, easily, modify the graph and the solution in order
to include one).
η′ .
We fix a few of the integers that will be used in the construction of the (V ∗ D, κ)-solution
They depend only on the mapping η and on the semigroup S.
Definition 3.1 (constants nS, pη, L, E and Q) We let:
• nS be the exponent of S which, as one recalls, is the least integer such that s^{nS} is idempotent for every element s of the finite A-generated semigroup S (a small computational sketch of nS and pη is given after this definition);
• pη = lcm{|u| : u ∈ A+ is a root of η};
• L = max{ℓη, |zv| : v ∈ V(Γ)};
• E be an integer such that E ≥ nS pη and, for each word w ∈ A^E, there is a factor e ∈ A+ of w for which δe is an idempotent of S. Notice that, for each root u of η, |u^{nS}| ≤ E and δ(u^{nS}) is an idempotent of S;
• Q = L + E.
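The following is the small sketch announced above (our own illustration, with S given by a multiplication table over the indices 0, …, |S| − 1): it finds the exponent n_S by brute force and computes p_η from a list of root lengths.

    from math import gcd
    from functools import reduce

    def exponent(mult):
        # mult[s][t] = product st in S; returns the least n with s^n idempotent for all s
        size = len(mult)
        def power(s, n):
            p = s
            for _ in range(n - 1):
                p = mult[p][s]
            return p
        n = 1
        while True:
            if all(mult[power(s, n)][power(s, n)] == power(s, n) for s in range(size)):
                return n
            n += 1

    def p_eta(root_lengths):
        # lcm of the lengths of the roots of eta
        return reduce(lambda a, b: a * b // gcd(a, b), root_lengths, 1)

    Z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]   # cyclic group of order 3
    assert exponent(Z3) == 3 and p_eta([2, 3]) == 6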
For each positive integer m, we denote by Bm the set
Bm = {tm y∆ ∈ Am | ∆ is a ∝-class}.
If y∆ = u−∞ is a periodic left-infinite word, then the element y = tm y∆ of Bm will be said to
be periodic (with root u and period |u|). For words y1 , y2 ∈ Bm , we define the gap between y1
and y2 as the positive integer
g(y1 , y2 ) = min{|u| ∈ N : u ∈ A+ and, for some v ∈ A+ , y1 u = vy2 or y2 u = vy1 },
and notice that g(y1 , y2 ) = g(y2 , y1 ) ≤ m.
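The gap of two borders can be computed by scanning overlaps; a small sketch (our own illustration) for words y1, y2 of the same length m is:

    def gap(y1, y2):
        # g(y1, y2) = min{ |u| : u in A+ and, for some v in A+, y1 u = v y2 or y2 u = v y1 }
        m = len(y1)
        assert len(y2) == m
        for ell in range(1, m + 1):
            # y1 u = v y2 with |u| = |v| = ell forces y2 to begin with the suffix y1[ell:]
            if y2.startswith(y1[ell:]) or y1.startswith(y2[ell:]):
                return ell

    assert gap("abab", "abab") == 2 and gap("aaab", "baaa") == 1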
Proposition 3.2 Consider the constant Q introduced in Definition 3.1. There exists qQ ∈ N
such that for all integers m ≥ qQ the following conditions hold:
(a) If y1 and y2 are distinct elements of Bm , then g(y1 , y2 ) > Q;
(b) If y is a non-periodic element of Bm , then g(y, y) > Q.
Proof.
Suppose that, for every qQ ∈ N there is an integer m ≥ qQ and elements ym,1 and
ym,2 of Bm such that g(ym,1 , ym,2 ) ≤ Q. Hence, there exist a strictly increasing sequence (mi )i
of positive integers and an integer r ∈ {1, . . . , Q} such that g(ymi ,1 , ymi ,2 ) i is constant and
equal to r. Moreover, since the graph Γ is finite, we may assume that ymi ,1 = tmi y∆1 and
ymi ,2 = tmi y∆2 for every i and some ∝-classes ∆1 and ∆2 . It then follows that y∆1 u = y∆2
or y∆2 u = y∆1 for some word u ∈ Ar . Hence, y∆1 and y∆2 are confinal left-infinite words,
whence ∆1 and ∆2 are the same ∝-class ∆. Therefore, for every m, ym,1 and ym,2 have the
same length and are suffixes of the word y∆ and, so, ym,1 and ym,2 are the same word. This
proves already (a). Now, notice that y∆ u = y∆ , meaning that y∆ is the periodic left-infinite
word u−∞ . This shows (b) and completes the proof of the proposition.
We now fix two more integers.
Definition 3.3 (constants M and k) We let:
• M be an integer such that M is a multiple of pη and M is greater than or equal to the
integer qQ of Proposition 3.2, and notice that M > Q;
• k = M + Q.
The elements of the set BM will be called the borders of the solution η. We remark that the
borders of η are finite words of length M such that, by Proposition 3.2, for any two distinct
occurrences of borders y1 and y2 in a finite word, either these occurrences have a gap of size
at least Q between them, or y1 and y2 are the same periodic border y. In this case, y is a
power of its root u, since M is a multiple of the period |u|, and g(y, y) is |u|.
3.4
Getting a (V ∗ Dk , κ)-solution
As V ∗ Dk is a subpseudovariety of V ∗ D, η is a V ∗ Dk -solution of ΣΓ with respect to (ϕ, δ).
The given pseudovariety V was assumed to be κ-reducible. So, by [5, Corollary 6.5], V ∗ Dk is
κ-reducible too. Therefore, there is a (V ∗ Dk , κ)-solution ηk′ : Γ → (ΩκA S)1 of ΣΓ with respect
to the same pair (ϕ, δ). Moreover, as observed in [6, Remark 3.4], one can constrain the
values ηk′ g of each g ∈ Γ with respect to properties which can be tested in a finite semigroup.
Since the prefixes and the suffixes of length at most k can be tested in the finite semigroup
ΩA Kk × ΩA Dk , we may assume further that ηk′ g and ηg have the same prefixes and the same
suffixes of length at most k. We then denote
ig = ik ηk′ g = ik ηg
and tg = tk ηk′ g = tk ηg,
for each g ∈ Γ. Notice that, by the simplifications introduced in Section 3.2, if ηg is a finite
word, then g is an edge and ηg is a letter ag and so ig = tg = ag . Otherwise, ig and tg are
length k words. In particular, condition C1 (Γ, η, ηk′ ) holds. That is, ηk′ e = ηe for every edge
e such that ηe is a finite word. On the other hand, Lemma 2.3 (ii) of [12], which is stated
only for edges, can be extended easily to vertices, so that ηk′ g can be assumed to be an infinite
pseudoword for every g ∈ Γ such that ηg is infinite. Thus, in particular, ηk′ v is an infinite
pseudoword for all vertices v.
Notice that, for each vertex v, there exists a border yv of η such that the finite word yv zv
is a suffix of ηv. On the other hand, by Definitions 3.1 and 3.3, |zv | ≤ L < Q and k = M + Q.
So, as |yv | = M ,
tv = xv yv zv   and   ηk′ v = πv tv   (3.1)
for some infinite κ-word πv and some word xv ∈ A+ with |xv | = Q − |zv |.
3.5
Basic transformations
The objective of this section is to introduce the basic steps that will allow to transform the
(V ∗ Dk , κ)-solution ηk′ into a (V ∗ D, κ)-solution η ′ . The process of construction of η ′ from ηk′
is close to the one used in [15] to handle with systems of pointlike equations. Both procedures
are supported by (basic) transformations of the form
a1 · · · ak 7→ a1 · · · aj (ai · · · aj )ω aj+1 · · · ak ,
which replace words of length k by κ-words. Those procedures differ in the way the indices
i ≤ j are determined. In the pointlike case, the only condition that a basic transformation
had to comply with was that j had to be minimum such that the value of the word a1 · · · ak
under δ is preserved. In the present case, the basic transformations have to preserve the value
under δ as well, but the equations (αe)e = ωe impose an extra restriction that is not required
by pointlike equations. Indeed, we need η ′ to verify, in particular, δη ′ αe = δηk′ αe(= δηαe)
and δη ′ e = δηk′ e(= δηe). So, somewhat informally, for a word a1 · · · ak that has an occurrence
overlapping both the factors ηk′ αe and ηk′ e of the pseudoword (ηk′ αe)(ηk′ e), the introduction
of the factor (ai · · · aj )ω by the basic transformation should be done either in ηk′ αe or in ηk′ e,
and not in both simultaneously. The borders of the solution η were introduced to help us to
deal with this extra restriction. Informally speaking, the borders will be used to detect the
“passage” from the labeling under ηk′ of a vertex αe to the labeling of the edge e and to avoid
that the introduction of (ai · · · aj )ω affect the labelings under δ of ηk′ αe or ηk′ e.
Consider an arbitrary word w = a1 · · · an ∈ A+ . An integer m ∈ {M, . . . , n} will be called
a bound of w if the factor w[m] = am′ · · · am of w is a border, where m′ = m − M + 1. The
bound m will be said to be periodic or non-periodic according to the border w[m] is periodic
or not. If w admits bounds, then there is a maximum one that we name the last bound of w.
In this case, if ℓ is the last bound of w, then the border w[ℓ] will be called the last border of
w. Notice that, by Proposition 3.2 and the choice of M , if m1 and m2 are two bounds of w
with m1 < m2 , then either m2 − m1 > Q or w[m1 ] and w[m2 ] are the same periodic border.
Let w = a1 · · · ak ∈ A+ be a word of length k. Notice that, since k = M + Q, if w has a
non-periodic last bound ℓ, then ℓ is the unique bound of w. We split the word w in two parts,
lw (the left-hand of w) and rw (the right-hand of w), by setting
l w = a1 · · · as
and rw = as+1 · · · ak
where s (the splitting point of w) is defined as follows: if w has a last bound ℓ then s = ℓ;
otherwise s = k. In case w has a periodic last bound ℓ, the splitting point s will be said to be
periodic. Then, s is not periodic in two situations: either w has a non-periodic last border or
w has not a last border. The factorization
w = lw rw
will be called the splitting factorization of w. We have s ≥ M > Q ≥ E. So, by definition of
E, there exist integers i and j such that s − E < i < j ≤ s and the factor e = ai · · · aj of lw
verifies δe = (δe)2 . We begin by fixing the maximum such j and, for that j, we fix next an
integer i and a word ew = ai · · · aj , called the essential factor of w, as follows. Notice that,
if the splitting point s is periodic and u is the root of the last border of w, then δ(unS ) is
idempotent and the left-hand of w is of the form lw = l′w unS . Hence, in this case, j = s and
we let ew = unS , thus defining i as j − nS |u| + 1. Suppose now that the splitting point is not
periodic. In this case we let i be the maximum integer such that δ(ai · · · aj ) is idempotent.
The word w can be factorized as w = l′w ew l′′w rw , where l′w = a1 · · · ai−1 . We then denote by
ŵ the following κ-word

ŵ = l′w ew ew^ω l′′w rw = a1 · · · aj (ai · · · aj)^ω aj+1 · · · ak

and notice that δŵ = δw. Moreover |ew l′′w| ≤ E and so |l′w| ≥ M − E > Q − E = L. It is also convenient to introduce two κ-words derived from ŵ:

λk w = a1 · · · aj (ai · · · aj)^ω,   ̺k w = (ai · · · aj)^ω aj+1 · · · ak.   (3.2)
This defines two mappings λk , ̺k : Ak → ΩκA S that can be extended to ΩA S as done in [15].
Although they are not formally the same mappings used in that paper, because of the different
choice of the integers i and j, we keep the same notation since the selection process of those
integers is absolutely irrelevant for the purpose of the mappings. That is, with the above
adjustment the mappings maintain the properties stated in [15].
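The passage from a length-k word w to the κ-word ŵ can be sketched computationally as follows (our own illustration: it covers only the non-periodic branch of the definition, takes the set of borders and the constants M and E as given, and uses a toy homomorphism δ into a semigroup of transformations of {0, 1, 2}, words acting from left to right).

    from itertools import product

    LETTER_ACTION = {"a": (1, 2, 0), "b": (0, 0, 2)}     # toy choice of delta on letters

    def delta(word):
        f = (0, 1, 2)
        for letter in word:
            g = LETTER_ACTION[letter]
            f = tuple(g[x] for x in f)                   # apply the current letter after f
        return f

    def is_idempotent(f):
        return tuple(f[x] for x in f) == f

    def splitting_point(w, borders, M):
        # largest m with M <= m <= |w| such that w[m-M:m] is a border; |w| if there is none
        bounds = [m for m in range(M, len(w) + 1) if w[m - M:m] in borders]
        return bounds[-1] if bounds else len(w)

    def hat(w, borders, M, E):
        # returns a1..aj (ai..aj)^omega aj+1..ak as a string (non-periodic case only)
        s = splitting_point(w, borders, M)
        # choose the maximum j, and for it the maximum i, with s-E < i < j <= s and
        # delta(ai..aj) idempotent; existence is guaranteed in the paper by the choice of E
        candidates = [(j, i) for j, i in product(range(s, s - E, -1), range(s, s - E, -1))
                      if i < j and is_idempotent(delta(w[i - 1:j]))]
        j, i = max(candidates)
        return w[:j] + "(" + w[i - 1:j] + ")^w" + w[j:]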
The next lemma presents a property of the b-operation that is fundamental to our purposes.
Lemma 3.4 For a word w = a1 · · · ak+1 ∈ A+ of length k + 1, let w1 = a1 · · · ak and w2 = a2 · · · ak+1 be the two factors of w of length k. If ŵ1 = a1 · · · aj1 (ai1 · · · aj1)^ω aj1+1 · · · ak and ŵ2 = a2 · · · aj2 (ai2 · · · aj2)^ω aj2+1 · · · ak+1, then a1 lw2 = lw1 x for some word x ∈ A∗. In particular, j1 ≤ j2.
Proof.
Write w2 = b1 · · · bk with bi = ai+1 . Let s1 and s2 be the splitting points of w1
and w2 respectively, whence lw1 = a1 · · · as1 and lw2 = b1 · · · bs2 = a2 · · · as2 +1 . To prove that
there exists a word x such that a1 lw2 = lw1 x, we have to show that s1 ≤ s2 + 1. Under this
hypothesis, we then deduce that ai1 · · · aj1 is an occurrence of the essential factor ew1 in lw2
which proves that j1 ≤ j2 .
Assume first that w1 has a last bound ℓ1 , in which case s1 = ℓ1 . By definition, ℓ1 ≥ M .
If ℓ1 > M , then the last border of w1 occurs in w2 , one position to the left relatively to w1 .
Hence ℓ1 − 1 is a bound of w2 and, so, w2 has a last bound ℓ2 such that ℓ2 ≥ ℓ1 − 1. It follows
in this case that s2 = ℓ2 and s1 ≤ s2 + 1. Suppose now that ℓ1 = M . Since s2 ≥ M by
definition, the condition s1 ≤ s2 + 1 holds trivially in this case. Suppose now that w1 has not
a last bound. Then s1 = k. Moreover, either w2 does not have a last bound or k is its last
bound. In both circumstances s2 = k, whence s1 = s2 ≤ s2 + 1. This concludes the proof of
the lemma.
In the conditions of the above lemma and as in [15], we define ψk : (ΩAk+1 S)1 → (ΩA S)1
as the only continuous monoid homomorphism which extends the mapping
Ak+1 → ΩκA S
a1 · · · ak+1 7→ (ai1 · · · aj1 )ω aj1 +1 · · · aj2 (ai2 · · · aj2 )ω
and let θk = ψk Φk . The function θk : ΩA S → (ΩA S)1 is a continuous k-superposition homomorphism since it is the composition of the continuous k-superposition homomorphism Φk
with the continuous homomorphism ψk . We remark that a word w = a1 · · · an of length n > k
has precisely r = n − k + 1 factors of length k and
θk (w) = ψk (a1 · · · ak+1 , a2 · · · ak+2 , . . . , ar−1 · · · an )
= ψk (a1 · · · ak+1 )ψk (a2 · · · ak+2 ) · · · ψk (ar−1 · · · an )
= (eω1 f1 eω2 )(eω2 f2 eω3 ) · · · (eωr−1 fr−1 eωr )
= eω1 f1 eω2 f2 · · · eωr−1 fr−1 eωr
where, for each p ∈ {1, . . . , r}, ep is the essential factor ewp = aip · · · ajp of the word wp =
ap · · · ak+p−1 and fp = ajp +1 · · · ajp+1 (p 6= r). Above, for each p ∈ {2, . . . , r − 1}, we have
replaced each expression e_p^ω e_p^ω with e_p^ω since, indeed, these expressions represent the same κ-word. More generally, one can certainly replace an expression of the form x^ω x^n x^ω with x^ω x^n.
Using this reduction rule as long as possible, θk (w) can be written as
θk (w) = eωn1 f¯1 eωn2 f¯2 · · · eωnq f¯q ,
called the reduced form of θk (w), where q ∈ {1, . . . , r}, 1 = n1 < n2 < · · · < nq ≤ r,
f¯p = fnp · · · fnp+1−1 (for p ∈ {1, . . . , q − 1}) and f¯q is fnq · · · fr−1 if nq 6= r and it is the empty
word otherwise.
3.6  Definition of the (V ∗ D, κ)-solution η′
We are now in conditions to describe the procedure to transform the (V ∗ Dk , κ)-solution ηk′
into the (V ∗ D, κ)-solution η ′ . The mapping η ′ : Γ → (ΩκA S)1 is defined, for each g ∈ Γ, as
η ′ g = (τ1 g)(τ2 g)(τ3 g),
where, for each i ∈ {1, 2, 3}, τi : Γ → (ΩκA S)1 is a function defined as follows.
First of all, we let
τ2 = θk ηk′ .
That τ2 is well-defined, that is, that τ2 g is indeed a κ-word for every g ∈ Γ, follows from the
fact that ηk′ g is a κ-word and θk transforms κ-words into κ-words (see [15]). Next, for each
vertex v, consider the length k words iv = ik ηk′ v = ik ηv and tv = tk ηk′ v = tk ηv. We let
τ1 v = λk iv
and τ3 v = ̺k tv ,
where the mappings λk and ̺k were defined in (3.2). Note that, by (3.1), tv = xv yv zv .
Moreover, the occurrence of yv shown in this factorization is the last occurrence of a border
in tv . Hence, the right-hand rtv of tv is precisely zv . Therefore, one has
τ1 v = λk iv = l′iv eiv eωiv
and
τ3 v = ̺k tv = eωtv l′′tv zv .
Consider now an arbitrary edge e. Suppose that ηe is a finite word. Then, ηe is a letter
ae and ηk′ e is also ae in this case. Then τ2 e = θk ae = 1 because θk is a k-superposition
homomorphism. Since we want η ′ e to be ae , we then define, for instance,
τ 1 e = ae
and
τ3 e = 1.
Suppose at last that ηe (and so also ηk′ e) is an infinite pseudoword. We let
τ3 e = ̺k te
and notice that τ3 e = τ3 ωe. Indeed, as ηk′ is a V ∗ Dk -solution of ΣΓ , it follows from (2.1) that
te = tk ηk′ e = tk ηk′ ωe = tωe . The definition of τ1 e is more elaborate. Let v be the vertex αe
and consider the word tv ie = a1 · · · a2k . This word has r = k + 1 factors of length k. Suppose
that θk (tv ie ) is eω1 f1 eω2 f2 · · · eωr−1 fr−1 eωr and consider its reduced form
θk (tv ie ) = eω1 f¯1 eωn2 f¯2 · · · eωnq f¯q .
Notice that tv ie = f¯0 f¯1 · · · f¯q f¯q+1 for some words f¯0 , f¯q+1 ∈ A∗ . Hence, there is a (unique)
′ and f¯ = f¯′ f¯′′ with f¯′ ∈ A∗ and f¯′′ ∈
index m ∈ {1, . . . , q} such that tv = f¯0 f¯1 · · · f¯m−1 f¯m
m
m m
m
m
ω
ω
′′
′
+
ω
ω
ω
¯
¯
¯
¯
¯
A . Then θk (tv ie ) = β1 β2 , where β1 = e1 f1 en2 f2 · · · enm fm and β2 = fm enm+1 fm+1 · · · enq f¯q
and we let
′′ ω
τ1 e = β2 = f¯m
enm+1 f¯m+1 · · · eωnq f¯q .
′′ f¯
′ ω
¯
Note that the word β2′ = f¯m
m+1 · · · fq is ak+1 · · · ajr , whence β2 er = λk ie .
The next lemma is a key result that justifies the definition of the ˆ-operation.
Lemma 3.5 Let e be an edge such that ηe is infinite. Then, with the above notation, β1 = τ3 v
and so θk (tv ie ) = (τ3 v)(τ1 e). Moreover, δτ1 e = δλk ie .
Proof.
We begin by recalling that tv ie = a1 · · · a2k and
θk (tv ie ) = eω1 f1 eω2 f2 · · · eωr−1 fr−1 eωr = eω1 f¯1 eωn2 f¯2 · · · eωnq f¯q ,
where ep is the essential factor ewp = aip · · · ajp of the word wp = ap · · · ak+p−1 and fp =
ajp +1 · · · ajp+1 for each p. Note also that λk ie = β2′ eωr , er is a suffix of β2′ and δer is idempotent.
So, to prove the equality δτ1 e = δλk ie it suffices to show that δτ1 e = δβ2′ . We know from (3.1)
that tv = xv yv zv with 1 ≤ |xv | ≤ Q. So, xv = a1 · · · ah−1 , yv = ah · · · aM +h−1 and zv =
aM +h · · · ak for some h ∈ {2, . . . , Q + 1}. There are two cases to verify.
Case 1. yv is a non-periodic border. Consider the factor wh = ah · · · ak+h−1 of tv ie . By the
choice of M and k, the prefix yv is the only occurrence of a border in wh . Hence, M is the
last bound of wh and, so, its splitting point. It follows that wh = yv ·zv ak+1 · · · ak+h−1 is the
splitting factorization of wh . Therefore, as one can verify for an arbitrary p ∈ {1, . . . , h},
there is only one occurrence of a border in wp , precisely yv , and the splitting factorization
of wp is
wp = ap · · · ah−1 yv · zv ak+1 · · · ak+p−1 ,
whence ep = e1 with jp = j1 ≤ M + h − 1 and, so, fp = 1 for p < h. So, the prefix
eω1 f1 eω2 · · · fh−1 eωh of θk (tv ie ) reduces to eω1 . Consider now the factor wh+1 = ah+1 · · · ak+h .
Hence, either wh+1 does not have a last bound or k is its last bound. In both situations,
the splitting point of wh+1 is k and its splitting factorization is wh+1 = wh+1 · 1. Therefore,
one deduces from Lemma 3.4 that, for every p ∈ {h + 1, . . . , r}, the occurrence aip · · · ajp of
the essential factor ewp in wp is, in fact, an occurrence in the suffix w′ = ak+h−E · · · a2k =
aM +L+h · · · a2k of tv ie . Since |xv yv | = M +h−1 and |zv | ≤ L, it follows that k = |xv yv zv | <
M + L + h, whence w′ is a suffix of ie and so k < ip < jp for all p ∈ {h + 1, . . . , r}. This
means, in particular, that the ω-power eωh+1 is introduced at the suffix ie of tv ie . Hence
β1 = eω1 f1 eω2 · · · fh−1 eωh ajh +1 · · · ak and its reduced form is eω1 aj1 +1 · · · ak = τ3 v, which proves
that β1 and τ3 v are the same κ-word. Moreover, from k < ip , one deduces that the word
ep is a suffix of ak+1 · · · ajp , which proves that δτ1 e = δβ2′ .
Case 2. yv is a periodic border. Let u be the root of yv. Then, since M was fixed as a multiple of |u|, yv = u^{Mu} where Mu = M/|u|.
border in wh , then one deduces the lemma as in Case 1 above. So, we assume that there
is another occurrence of a border y in wh . Hence, by Proposition 3.2 and the choice of
M and k, y is precisely yv . Furthermore, since u is a Lyndon word and k = M + Q with
Q < M , wh = yv ud wh′ for some positive integer d and some word wh′ ∈ A∗ such that u is
not a prefix of wh′ . Notice that, since u is not a prefix of zv by definition of this word, zv
is a proper prefix of u. On the other hand wh = ud yv wh′ and the occurrence of yv shown in
this factorization is the last occurrence of yv in wh . Thus,
wh = ud yv · wh′
is the splitting factorization of wh. Therefore ŵh = u^d yv (u^{nS})^ω w′h and eh = u^{nS}. More
generally, for any p ∈ {1, . . . , h}, yv is a factor of wp and it is the only border that occurs
in wp . Hence, the splitting point of wp is periodic and ep = unS . Moreover, as one can
verify, j1 = M + h − 1 and the prefix eω1 f1 eω2 · · · fh−1 eωh of θk (tv ie ) is eω1 (u(eω1 )|u| )d and
so, analogously to Case 1, it reduces to eω1 ud . Since zv is a proper prefix of u and d ≥ 1,
k < jh . This allows already deduce that the reduced form of β1 is (unS )ω zv = τ3 v, thus
concluding the proof of the first part of the lemma. Now, there are two possible events.
′′ = β ′ , in which case δτ e = δβ ′ is trivially verified. Or m 6= q
Either m = q and β2 = f¯m
1
2
2
ω
and the ω-power enm+1 was not eliminated in the reduction process of θk (tv ie ). This means
that the splitting point of the word wnm+1 is not determined by one of the occurrences of
the border yv in the prefix a1 · · · ak+h−1 of tv ie . Then, as in Case 1 above, one deduces
that k < ip for each p ∈ {nm+1 , . . . , r} and, so, that δτ1 e = δβ2′ .
In both cases β1 = τ3 v and δτ1 e = δλk ie . Hence, the proof of the lemma is complete.
Notice that, as shown in the proof of Lemma 3.5 above, if a vertex v is such that yv is
a periodic border with root u, then τ3 v = (unS )ω zv . So, the definition of the mapping τ3 on
vertices assures condition C2 (Γ, η, η ′ ).
3.7
Proof that η ′ is a (V ∗ D, κ)-solution
This section will be dedicated to showing that η ′ is a (V ∗ D, κ)-solution of ΣΓ with respect
to the pair (ϕ, δ) verifying conditions C1 (Γ, η, η ′ ) and C3 (Γ, η, η ′ ).
We begin by noticing that η ′ g is a κ-word for every g ∈ Γ. Indeed, as observed above, each
τ2 g is a κ-word. That both τ1 g and τ3 g are κ-words too, is easily seen by their definitions.
Let us now show the following properties.
Proposition 3.6 Conditions δη ′ = ϕ, C1 (Γ, η, η ′ ) and C3 (Γ, η, η ′ ) hold.
Proof.
As ηk′ is a V ∗ Dk -solution of ΣΓ with respect to (ϕ, δ) and, so, the equality δηk′ = ϕ
holds, to deduce that δη ′ = ϕ holds it suffices to establish the equality δη ′ = δηk′ . Consider
first a vertex v ∈ Γ. Then τ1 v = λk iv = l′iv eiv eωiv and τ3 v = ̺k tv = eωtv l′′tv zv . In this case, the
equality δηk′ v = δη ′ v is a direct application of [15, Proposition 5.3], where the authors proved
that

δπ = δ((λk ik π)(θk π)(̺k tk π))   (3.3)

for every pseudoword π. Moreover, by definition of the ˆ-operation, |l′iv| > L. Therefore, ηv
and η ′ v are of the form ηv = uπ and η ′ v = uπ ′ with u ∈ AL and δπ = δπ ′ . So, condition
C3 (Γ, η, η ′ ) holds.
Consider next an edge e ∈ Γ. If ηk′ e is a finite word ae , then η ′ e = (τ1 e)(τ2 e)(τ3 e) = ae ·1·1 =
ae = ηk′ e, whence δη ′ e = δηk′ e holds trivially. Moreover, since ηk′ e = ηe in this case and every
vertex is labeled under η by an infinite pseudoword, it follows that condition C1 (Γ, η, η ′ )
holds. Suppose at last that ηk′ e is infinite and let v = αe. Then τ3 e = ̺k te . On the other
hand, by Lemma 3.5, δτ1 e = δλk ie . Hence, by (3.3) and since δ is a homomorphism, δη ′ e =
δ (τ1 e)(τ2 e)(τ3 e) = δ (λk ie )(θk ηk′ e)(̺k te ) = δηk′ e. This ends the proof of the proposition.
Consider an arbitrary edge e : v −→ w of Γ. To achieve the objectives of this section it
remains to prove that V ∗ D satisfies (η ′ v)(η ′ e) = η ′ w. Since ηk′ is a V ∗ Dk -solution of ΣΓ ,
V ∗ Dk satisfies (ηk′ v)(ηk′ e) = ηk′ w. Hence, by (2.1), iv = ik (ηk′ v)(ηk′ e) = ik (ηk′ w) = iw and
tk (ηk′ v)(ηk′ e) = tk (ηk′ w) = tw . Thus, τ1 v = λk iv = l′iv eiv eωiv = l′iw eiw eωiw = λk iw = τ1 w
and τ3 w = ̺k tw = eωtw l′′tw zw . As shown in the proof of [15, Proposition 5.4], it then follows
that V ∗ D satisfies eωiw θk (ηk′ v)(ηk′ e) eωtw = eωiw θk (ηk′ w)eωtw and, so,
V ∗ D |= (τ1 v)θk (ηk′ v)(ηk′ e) (τ3 w) = (τ1 w)θk (ηk′ w)(τ3 w) = η ′ w.
(3.4)
On the other hand, from the fact that θk is a k-superposition homomorphism one deduces
θk (ηk′ v)(ηk′ e) = θk (ηk′ v)θk tv (ηk′ e) = θk (ηk′ v)θk (tv ie )θk (ηk′ e).
(3.5)
Suppose that ηk′ e is an infinite pseudoword. In this case te = tw , whence τ3 e = τ3 w.
Moreover, by Lemma 3.5, θk (tv ie ) = (τ3 v)(τ1 e). Therefore, by conditions (3.4) and (3.5),
V ∗ D satisfies (η ′ v)(η ′ e) = η ′ w. Assume now that ηk′ e is a finite word, whence ηk′ e = ae ∈ A
and η ′ e = ae . Since η is a D-solution of ΣΓ , D |= (η ′ v)ae = η ′ w and, thus, dv ae = dw .
Hence the left-infinite words dv and dw are confinal and, so, ∝-equivalent. Hence dv = y∆ zv ,
dw = y∆ zw and yv = yw = tk y∆ , where ∆ is the ∝-class of dv and dw . It follows that
y∆ zv ae = y∆ zw and tk (tv ae ) = tw . In this case, θk (ηk′ v)(ηk′ e) = θk (ηk′ v)θk (tv ae ). On the
other hand, tv ae = a1 · · · ak ak+1 = a1 tw is a word of length k + 1 and, so, θk (tv ae ) = ψk (tv ae )
is of the form
θk (tv ae ) = eω1 f eω2 .
The splitting factorizations of tv and tw are, respectively, tv = xv yv · zv and tw = xw yw · zw .
Since yv = yw , it follows that e1 = etv = etw = e2 .
Suppose that zv ae = zw . In this case it is clear that f = 1, so that θk (tv ae ) = eωtv .
Since θk (ηk′ v) ends with eωtv , it then follows that θk (ηk′ v)(ηk′ e) = θk ηk′ v = τ2 v. Therefore,
(τ1 v)θk (ηk′ v)(ηk′ e) (τ3 w) = (τ1 v)(τ2 v)(τ3 w). On the other hand,
τ3 w = ̺k tw = eωtw l′′tw zw = eωtv l′′tv zv ae = (τ3 v)ae .
So, by (3.4), one has that V ∗ D satisfies (η ′ v)ae = (τ1 v)(τ2 v)(τ3 v)ae = (τ1 v)(τ2 v)(τ3 w) = η ′ w.
Suppose now that zv ae 6= zw . In this case, one deduces from the equality y∆ zv ae = y∆ zw ,
that y∆ is a periodic left-infinite word. Let u be its root, so that y∆ = u−∞ , etv = unS and l′′tv =
l′′tw = 1. Since, by definition, u is a primitive word which is not a prefix of zv nor a prefix of
zw , we conclude that zv ae = u and zw = 1. In this case f = u, whence θk (tv ae ) = eωtv u. Then,
θk (ηk′ v)(ηk′ e) = (θk ηk′ v)u = (τ2 v)u. Therefore, (τ1 v)θk (ηk′ v)(ηk′ e) (τ3 w) = (τ1 v)(τ2 v)u(τ3 w).
Moreover,
u(τ3 w) = ueωtw l′′tw zw = u(unS )ω = (unS )ω u = eωtv l′′tv zv ae = (τ3 v)ae .
Therefore, using (3.4), one deduces as above that V ∗ D satisfies (η ′ v)ae = η ′ w.
We have proved the main theorem of the paper.
Theorem 3.7 If V is κ-reducible, then V ∗ D is κ-reducible.
This result applies, for instance, to the pseudovarieties Sl, G, J and R. Since the κ-word
problem for the pseudovariety LG of local groups is already solved [14], we obtain the following
corollary.
Corollary 3.8 The pseudovariety LG is κ-tame.
Final remarks. In this paper we fixed our attention on the canonical signature κ, while
in [15] we dealt with a more generic class of signatures σ verifying certain undemanding
conditions. Theorem 3.7 is still valid for such generic signatures σ but we preferred to treat
only the instance of the signature κ to keep the proofs clearer and a little less technical.
References
[1] J. Almeida, Finite Semigroups and Universal Algebra, (World Scientific, Singapore, 1995).
English translation.
[2] J. Almeida, Finite semigroups: an introduction to a unified theory of pseudovarieties,
in Semigroups, Algorithms, Automata and Languages (Coimbra, 2001), World Scientific,
2002, pp. 3–64.
[3] J. Almeida and A. Azevedo, On regular implicit operations, Portugaliæ Mathematica 50
(1993), 35–61.
[4] J. Almeida, A. Azevedo and M. L. Teixeira, On finitely based pseudovarieties of the forms
V ∗ D and V ∗ Dn , J. Pure Appl. Algebra 146 (2000), 1–15.
[5] J. Almeida, J. C. Costa and M. L. Teixeira, Semidirect product with an order-computable
pseudovariety and tameness, Semigroup Forum 81 (2010), 26–50.
[6] J. Almeida, J. C. Costa and M. Zeitoun, Tameness of pseudovariety joins involving R,
Monatsh. Math. 146 (2005), 89–111.
[7] J. Almeida and B. Steinberg, Syntactic and global semigroup theory: a synthesis approach, in Algorithmic Problems in Groups and Semigroups (Lincoln, NE, 1998), Trends
Math. (Birkhäuser Boston, Boston, MA, 2000), pp. 1–23.
[8] J. Almeida and B. Steinberg, On the decidability of iterated semidirect products and
applications to complexity, Proc. London Math. Soc. 80 (2000), 50–74.
[9] J. Almeida and M. Zeitoun, Tameness of some locally trivial pseudovarieties, Comm.
Algebra 31 (2003), 61–77.
[10] C. Ash, Inevitable graphs: a proof of the type II conjecture and some related decision
procedures, Int. J. Algebra Comput. 1 (1991), 127–146.
[11] K. Auinger and B. Steinberg, On the extension problem for partial permutations, Proc.
Amer. Math. Soc. 131 (2003), 2693–2703.
[12] J. C. Costa, Reducibility of joins involving some locally trivial pseudovarieties, Comm.
Algebra 32 (2004), 3517–3535.
[13] J. C. Costa and C. Nogueira, Complete reducibility of the pseudovariety LSl, Int. J.
Algebra Comput. 19 (2009), 247–282.
[14] J. C. Costa, C. Nogueira and M. L. Teixeira, The word problem for κ-terms over the pseudovariety of local groups, submitted, preprint available at
http://arxiv.org/abs/1509.01533.
[15] J. C. Costa, C. Nogueira and M. L. Teixeira, Pointlike reducibility of pseudovarieties of
the form V ∗ D, Int. J. Algebra Comput., DOI: 10.1142/S0218196716500090, to appear,
preprint available at http://arxiv.org/abs/1509.04088.
[16] J. C. Costa and M. L. Teixeira, Tameness of the pseudovariety LSl, Int. J. Algebra
Comput. 14 (2004), 627–654.
[17] S. Eilenberg, Automata, Languages and Machines, vol. B, (Academic Press, New York,
1976).
[18] K. Krohn and J. Rhodes, Algebraic theory of machines. I. Prime decomposition theorem
for finite semigroups and machines, Trans. Amer. Math. Soc. 116 (1965), 450–464.
[19] M. Lothaire, Algebraic Combinatorics on Words, (Cambridge University Press, 2002).
[20] J. Rhodes, Undecidability, automata and pseudovarieties of finite semigroups, Int. J.
Algebra Comput. 9 (1999), 455–473.
[21] J. Rhodes and B. Steinberg, The q-theory of Finite Semigroups: A New Approach,
(Springer Monographs in Mathematics, 2009).
[22] B. Steinberg, A delay theorem for pointlikes, Semigroup Forum 63 (2001), 281–304.
[23] H. Straubing, Finite semigroup varieties of the form V ∗ D, J. Pure Appl. Algebra 36
(1985), 53–94.
[24] D. Thérien and A. Weiss, Graph congruences and wreath products, J. Pure Appl. Algebra
36 (1985), 205–215.
[25] B. Tilson, Categories as algebra: an essential ingredient in the theory of monoids, J. Pure
Appl. Algebra 48 (1987), 83–198.
| 4 |
arXiv:1403.4349v1 [math.AC] 18 Mar 2014
LINEARLY RELATED POLYOMINOES
VIVIANA ENE, JÜRGEN HERZOG, TAKAYUKI HIBI
Abstract. We classify all convex polyomino ideals which are linearly related
or have a linear resolution. Convex stack polyominoes whose ideals are extremal
Gorenstein are also classified. In addition, we characterize, in combinatorial terms,
the distributive lattices whose join-meet ideals are extremal Gorenstein or have a
linear resolution.
Introduction
The ideal of inner minors of a polyomino, a so-called polyomino ideal, is generated by certain subsets of 2-minors of an m × n-matrix X of indeterminates. Such
ideals have first been studied by Qureshi in [17]. They include the two-sided ladder
determinantal ideals of 2-minors which may also be viewed as the join-meet ideal of
a planar distributive lattice. It is a challenging problem to understand the graded
free resolution of such ideals. In [7], Ene, Rauf and Qureshi succeeded to compute
the regularity of such join-meet ideals. Sharpe [19, 20] showed that the ideal I2 (X)
of all 2-minors of X is linearly related, which means that the first syzygy module of I2 (X) is generated by linear relations.
Moreover, he described these relations explicitly and conjectured that the relations of the ideals
of t-minors It (X) are also generated by a certain type of linear relations. This conjecture
was then proved by Kurano [13]. In the case that the base field over which It (X) is
defined contains the rational numbers, Lascoux [14] gives the explicit free resolution
of all ideals of t-minors. Unfortunately, the resolution of It (X) in general may depend on the characteristic of the base field. Indeed, Hashimoto [8] showed that for
2 ≤ t ≤ min(m, n) − 3, the second Betti number β2 of It (X) depends on the characteristic. On the other hand, by using squarefree divisor complexes [2] as introduced
by Bruns and the second author of this paper, it follows from [2, Theorem 1.3] that
β2 for t = 2 is independent of the characteristic.
In this paper we use as a main tool squarefree divisor complexes to study the
first syzygy module of a polyomino ideal. In particular, we classify all convex polyominoes which are linearly related; see Theorem 2.1. This is the main result of
this paper. In the first section we recall the concept of polyomino ideals and show
that the polyomino ideal of a convex polyomino has a quadratic Gröbner basis.
The second section of the paper is devoted to state and to prove Theorem 2.1. As
mentioned before, the proof heavily depends on the theory of squarefree divisor
complexes which allow to compute the multi-graded Betti numbers of a toric ideal.
To apply this theory, one observes that the polyomino ideal of a convex polyomino
2010 Mathematics Subject Classification. 13C05, 05E40, 13P10.
Key words and phrases. Binomial ideals, Linear syzygies, Polyominoes.
The first author was supported by the grant UEFISCDI, PN-II-ID-PCE- 2011-3-1023.
may be naturally identified with a toric ideal. The crucial conclusion deduced from
this observation, formulated in Corollary 2.5, is then that the Betti numbers of a
polyomino ideal is bounded below by the Betti numbers of the polyomino ideal of
any induced subpolyomino. Corollary 2.5 allows to reduce the study of the relation
of polyomino ideals to that of a finite number of polyominoes with a small number
of cells which all can be analyzed by the use of a computer algebra system.
In the last section, we classify all convex polyominoes whose polyomino ideal has a
linear resolution (Theorem 3.1) and all convex stack polyominoes whose polyomino
ideal is extremal Gorenstein (Theorem 3.4). Since polyomino ideals overlap with
join-meet ideals, it is of interest which of the ideals among the join-meet ideals have
a linear resolution or are extremal Gorenstein. The answers are given in Theorem 3.2
and Theorem 3.5. It turns out that the classifications for both classes of ideals almost
lead to the same result.
1. Polyominoes
In this section we consider polyomino ideals. This class of ideals of 2-minors was
introduced by Qureshi [17]. To this end, we consider on N2 the natural partial order
defined as follows: (i, j) ≤ (k, l) if and only if i ≤ k and j ≤ l. The set N2 together
with this partial order is a distributive lattice.
If a, b ∈ N2 with a ≤ b, then the set [a, b] = {c ∈ N2 | a ≤ c ≤ b} is an interval of
N2 . The interval C = [a, b] with b = a + (1, 1) is called a cell of N2 . The elements of
C are called the vertices of C and a is called the left lower corner of C. The edges
of the cell C are the sets {a, a + (1, 0)}, {a, a + (0, 1)}, {a + (1, 0), a + (1, 1)} and
{a + (0, 1), a + (1, 1)}.
Let P be a finite collection of cells and C, D ∈ P. Then C and D are connected,
if there is a sequence of cells of P given by C = C1 , . . . , Cm = D such that Ci ∩ Ci+1
is an edge of Ci for i = 1, . . . , m − 1. If, in addition, Ci ≠ Cj for all i ≠ j, then the sequence
C1 , . . . , Cm is called a path (connecting C and D). The collection of cells P is called a polyomino
if any two cells of P are connected; see Figure 1. The set of vertices of P, denoted
V (P), is the union of the vertices of all cells belonging to P. Two polyominoes are
called isomorphic if they are mapped to each other by a composition of translations,
reflections and rotations.
Figure 1. A polyomino
We call a polyomino P row convex, if for any two cells C, D of P with left lower
corner a = (i, j) and b = (k, j) respectively, and such that k > i, it follows that
all cells with left lower corner (l, j) with i ≤ l ≤ k belong to P. Similarly, one
defines column convex polyominoes. The polyomino P is called convex if it is row
and column convex.
The polyomino displayed in Figure 1 is not convex, while Figure 2 shows a convex
polyomino. Note that a convex polyomino need not be convex in the common geometric
sense.
Figure 2. A convex polyomino
Now let P be any collection of cells. We may assume that the vertices of all
the cells of P belong to the interval [(1, 1), (m, n)]. Fix a field K and let S be the
polynomial ring over K in the variables xij with (i, j) ∈ V (P). The ideal of inner
minors IP ⊂ S of P, is the ideal generated by all 2-minors xil xkj − xkl xij for which
[(i, j), (k, l)] ⊂ V (P). Furthermore, we denote by K[P] the K-algebra S/IP . If P
happens to be a polyomino, then IP will also be called a polyomino ideal.
For example, the polyomino P displayed in Figure 2 may be embedded into the
interval [(1, 1), (4, 4)]. Then, in these coordinates, IP is generated by the 2-minors
x22 x31 − x32 x21 , x23 x31 − x33 x21 , x24 x31 − x34 x21 , x23 x32 − x33 x22 ,
x24 x32 − x34 x22 , x24 x33 − x34 x23 , x13 x22 − x12 x23 , x13 x32 − x12 x33 ,
x13 x42 − x12 x43 , x23 x42 − x22 x43 , x33 x42 − x32 x43 .
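Since the generators of IP are exactly the 2-minors whose four corner vertices lie in V (P), such a list can be produced mechanically. The following Python sketch (using sympy) implements this definition; the vertex set in the example is a hypothetical 3 × 3 grid chosen for illustration, not the polyomino of Figure 2.

```python
from itertools import combinations
import sympy as sp

def inner_minors(V):
    """2-minors x[i,l]*x[k,j] - x[k,l]*x[i,j] over all intervals [(i,j),(k,l)] inside V."""
    V = set(V)
    x = {(i, j): sp.Symbol(f"x{i}{j}") for (i, j) in V}
    gens = []
    for (i, j), (k, l) in combinations(sorted(V), 2):
        if i < k and j < l and {(i, l), (k, j)} <= V:
            gens.append(x[(i, l)] * x[(k, j)] - x[(k, l)] * x[(i, j)])
    return gens

# Hypothetical example: V = all vertices of the interval [(1,1),(3,3)] (a 2 x 2 square of cells).
V = [(i, j) for i in range(1, 4) for j in range(1, 4)]
for g in inner_minors(V):
    print(g)  # prints the nine inner 2-minors of a generic 3 x 3 matrix
```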
The following result has been shown by Qureshi in [17, Theorem 2.2].
Theorem 1.1. Let P be a convex polyomino. Then K[P] is a normal Cohen–
Macaulay domain.
The proof of this theorem is based on the fact that IP may be viewed as follows as a
toric ideal: with the assumptions and notation as introduced before, we may assume
that V (P) ⊂ [(1, 1), (m, n)]. Consider the K-algebra homomorphism ϕ : S → T
with ϕ(xij ) = si tj for all (i, j) ∈ V (P). Here T = K[s1 , . . . , sm , t1 , . . . , tn ] is the
polynomial ring over K in the variables si and tj . Then, as observed by Qureshi,
IP = Ker ϕ. It follows that K[P] may be identified with the edge ring of the
bipartite graph GP on the vertex set {s1 , . . . , sm } ∪ {t1 , . . . , tn } and edges {si , tj }
with (i, j) ∈ V (P). With this interpretation of K[P] in mind and by using [16], we
obtain
Proposition 1.2. Let P be a convex polyomino. Then IP has a quadratic Gröbner
basis.
Proof. We use the crucial fact, proved in [16], that the toric ideal which defines the
edge ring of a bipartite graph has a quadratic Gröbner basis if and only if each
2r-cycle with r ≥ 3 has a chord. By what we explained before, a 2k-cycle, after
identifying the vertices of P with the edges of a bipartite graph, is nothing but a
sequence of vertices a1 , . . . , a2r of P with
a2k−1 = (ik , jk ) and a2k = (ik+1 , jk ) for k = 1, . . . , r
such that ir+1 = i1 , ik ≠ iℓ and jk ≠ jℓ for all k, ℓ ≤ r and k ≠ ℓ.
A typical such sequence of pairs of integers is the following:
3 2 2 4 4 5 5 3
1 1 3 3 2 2 4 4
Here the first row is the sequence of the first component and the second row the
sequence of the second component of the vertices ai . This pair of sequences represents an 8-cycle. It follows from Lemma 1.3 that there exist integers s and t with
1 ≤ t, s ≤ r and t ≠ s, s + 1 such that either is < it < is+1 or is+1 < it < is . Suppose
that is < it < is+1 . Since a2s−1 = (is , js ) and a2s = (is+1 , js ) are vertices of P and
since P is convex, it follows that (it , js ) ∈ P. This vertex corresponds to a chord of
the cycle a1 , . . . , a2r . Similarly one argues if is+1 < it < is .
Lemma 1.3. Let r ≥ 3 be an integer and f : [r + 1] → Z a function such that
f (i) ≠ f (j) for 1 ≤ i < j ≤ r and f (r + 1) = f (1). Then there exist 1 ≤ s, t ≤ r
such that one has either f (s) < f (t) < f (s + 1) or f (s + 1) < f (t) < f (s).
Proof. Let, say, f (1) < f (2). Since f (r + 1) = f (1), there is 2 ≤ q ≤ r with
f (1) < f (2) < · · · < f (q) > f (q + 1).
• Let q = r. Then, since q = r ≥ 3, one has (f (1) =) f (r + 1) < f (2) < f (r).
• Let q < r and f (q + 1) > f (1). Since f (q + 1) ∉ {f (1), f (2), . . . , f (q)}, it
follows that there is 1 ≤ s < q with f (s) < f (q) < f (s + 1).
• Let q < r and f (q + 1) < f (1). Then one has f (q + 1) < f (1) < f (q).
The case of f (1) > f (2) can be discussed similarly.
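Lemma 1.3 can also be sanity-checked by brute force for small r, since up to order isomorphism one may take f (1), . . . , f (r) to be a permutation of {0, . . . , r − 1}. The following Python sketch is only such a verification aid and is not part of the proof.

```python
from itertools import permutations

def conclusion_holds(f):
    """Conclusion of Lemma 1.3 for f given as the tuple (f(1), ..., f(r+1))."""
    r = len(f) - 1
    for s in range(r):            # s = 1, ..., r   (0-based indices here)
        lo, hi = sorted((f[s], f[s + 1]))
        if any(lo < f[t] < hi for t in range(r)):   # t = 1, ..., r
            return True
    return False

for r in range(3, 7):
    # up to order isomorphism, (f(1), ..., f(r)) is a permutation of 0, ..., r-1 and f(r+1) = f(1)
    assert all(conclusion_holds(p + (p[0],)) for p in permutations(range(r)))
print("Lemma 1.3 verified by brute force for r = 3, ..., 6")
```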
We denote the graded Betti numbers of IP by βij (IP ).
Corollary 1.4. Let P be a convex polyomino. Then β1j (IP ) = 0 for j > 4.
Proof. By Proposition 1.2, there exists a monomial order < such that in< (IP ) is
generated in degree 2. Therefore, it follows from [10, Corollary 4] that β1j (in< (IP )) =
0 for j > 4. Since β1j (IP ) ≤ β1j (in< (IP )) (see, for example, [9, Corollary 3.3.3]), the
desired conclusion follows.
2. The first syzygy module of a polyomino ideal
Let P be a convex polyomino and let f1 , . . . , fm be the minors generating IP . In
this section we study the relation module Syz1 (IP ) of IP which is the kernel of the
L
S-module homomorphism m
i=1 Sei → IP with ei 7→ fi for i = 1, . . . , m. The graded
module Syz1 (IP ) has generators in degree 3 and no generators in degree > 4, as
we have seen in Corollary 1.4. We say that IP (or simply P) is linearly related if
Syz1 (IP ) is generated only in degree 3.
Let fi and fj be two distinct generators of IP . Then the Koszul relation fi ej −fj ei
belongs to Syz1 (IP ). We call fi , fj a Koszul relation pair if fi ej − fj ei is a minimal
generator of Syz1 (IP ). The main result of this section is the following.
Theorem 2.1. Let P be a convex polyomino. The following conditions are equivalent:
(a) P is linearly related;
(b) IP admits no Koszul relation pairs;
(c) Let, as we may assume, [(1, 1), (m, n)] be the smallest interval with the property that V (P) ⊂ [(1, 1), (m, n)]. We refer to the elements (1, 1), (m, 1), (1, n)
and (m, n) as the corners. Then P has the shape as displayed in Figure 5,
and one of the following conditions hold:
(i) at most one of the corners does not belong to V (P);
(ii) two of the corners do not belong to V (P), but they are not opposite
to each other. In other words, the missing corners are not the corners
(1, 1), (m, n), or the corners (m, 1), (1, n).
(iii) three of the corners do not belong to V (P). If the missing corners are
(m, 1), (1, n) and (m, n) (which one may assume without loss of generality), then referring to Figure 5 the following conditions must be satisfied:
either i2 = m − 1 and j4 ≤ j2 , or j2 = n − 1 and i4 ≤ i2 .
As an essential tool in the proof of this theorem we recall the so-called squarefree
divisor complex, as introduced in [9]. Let K be a field, H ⊂ Nn an affine semigroup and
K[H] the semigroup ring attached to it. Suppose that h1 , . . . , hm ∈ Nn is the unique
minimal set of generators of H. We consider the polynomial ring T = K[t1 , . . . , tn ]
in the variables t1 , . . . , tn . Then K[H] = K[u1 , . . . , um ] ⊂ T where ui = ∏_{j=1}^{n} tj^{hi (j)}
and where hi (j) denotes the jth component of the integer vector hi . We choose
a presentation S = K[x1 , . . . , xm ] → K[H] with xi ↦ ui for i = 1, . . . , m. The
kernel IH of this K-algebra homomorphism is called the toric ideal of H. We assign
a Zn -grading to S by setting deg xi = hi . Then K[H] as well as IH become Zn graded S-modules. Thus K[H] admits a minimal Zn -graded S-resolution F with
Fi = ⊕_{h∈H} S(−h)^{βih (K[H])} .
In the case that all ui are monomials of the same degree, one can assign to K[H]
the structure of a standard graded K-algebra by setting deg ui = 1 for all i. The
degree of h with respect to this standard grading will be denoted |h|.
Given h ∈ H, we define the squarefree divisor complex ∆h as follows: ∆h is the
simplicial complex whose faces F = {i1 , . . . , ik } are the subsets of [n] such that
ui1 · · · uik divides t1^{h(1)} · · · tn^{h(n)} in K[H]. We denote by H̃i (Γ, K) the ith reduced
simplicial homology of a simplicial complex Γ.
Proposition 2.2 (Bruns-Herzog [2]). With the notation and assumptions introduced
one has Tori (K[H], K)h ≅ H̃i−1 (∆h , K). In particular,
βih (K[H]) = dimK H̃i−1 (∆h , K).
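To illustrate the definition of ∆h and Proposition 2.2 on a toy case, the following Python sketch lists the faces of the squarefree divisor complex for the semigroup generated by uij = si tj on a 2 × 2 grid; divisibility in K[H] is tested by checking that the complementary exponent vector again lies in H. The example data are hypothetical and chosen only so that the answer can be checked by hand.

```python
from itertools import combinations

# Hypothetical toy semigroup: generators u_ij = s_i t_j for a 2 x 2 grid,
# recorded as exponent vectors on (s1, s2, t1, t2).
gens = {
    (1, 1): (1, 0, 1, 0), (1, 2): (1, 0, 0, 1),
    (2, 1): (0, 1, 1, 0), (2, 2): (0, 1, 0, 1),
}

def in_H(v):
    """Is the exponent vector v a sum of generators, i.e. an element of H?"""
    if all(c == 0 for c in v):
        return True
    for g in gens.values():
        w = tuple(a - b for a, b in zip(v, g))
        if all(c >= 0 for c in w) and in_H(w):
            return True
    return False

def squarefree_divisor_complex(h):
    """Nonempty faces F: subsets of generators whose product divides t^h in K[H]."""
    faces = []
    for k in range(1, len(gens) + 1):
        for F in combinations(gens, k):
            rest = tuple(hc - sum(gens[f][i] for f in F) for i, hc in enumerate(h))
            if all(c >= 0 for c in rest) and in_H(rest):
                faces.append(F)
    return faces

h = (1, 1, 1, 1)  # degree of the binomial x11*x22 - x12*x21
print(squarefree_divisor_complex(h))
# facets {(1,1),(2,2)} and {(1,2),(2,1)}: two disjoint edges, so dim H~_0(Delta_h) = 1 = beta_{1,h}
```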
Let H ′ be a subsemigroup of H generated by a subset of the set of generators of
H, and let S ′ be the polynomial ring over K in the variables xi with hi generator
of H ′ . Furthermore, let F′ be the Zn -graded free S ′ -resolution of K[H ′ ]. Then, since
S is a flat S ′ -module, F′ ⊗S ′ S is a Zn -graded free S-resolution of S/IH ′ S. The
inclusion K[H ′ ] → K[H] induces a Zn -graded complex homomorphism F′ ⊗S ′ S → F.
Tensoring this complex homomorphism with K = S/m, where m is the graded
maximal ideal of S, we obtain the following sequence of isomorphisms and natural
maps of Zn -graded K-modules
Tor_i^{S ′} (K[H ′ ], K) ≅ Hi (F′ ⊗S ′ K) ≅ Hi ((F′ ⊗S ′ S) ⊗S K) → Hi (F ⊗S K) ≅ Tor_i^S (K[H], K).
For later applications we need
Corollary 2.3. With the notation and assumptions introduced, let H ′ be a subsemigroup of H generated by a subset of the set of generators of H, and let h be
an element of H ′ with the property that hi ∈ H ′ whenever h − hi ∈ H. Then the
natural K-vector space homomorphism Tor_i^{S ′} (K[H ′ ], K)h → Tor_i^S (K[H], K)h is an
isomorphism for all i.
Proof. Let ∆′h be the squarefree divisor complex of h where h is viewed as an element
of H ′ . Then we obtain the following commutative diagram
Tori (K[H ′ ], K)h −−−→ Tori (K[H], K)h
↓                                      ↓
H̃i−1 (∆′h , K) −−−→ H̃i−1 (∆h , K).
The vertical maps are isomorphisms, and also the lower horizontal map is an isomorphism, simply because ∆′h = ∆h , due to assumptions on h. This yields the desired
conclusion.
Let H ⊂ Nn be an affine semigroup generated by h1 , . . . , hm . An affine subsemigroup H ′ ⊂ H generated by a subset of {h1 , . . . , hm } will be called a homologically
pure subsemigroup of H if for all h ∈ H ′ and all hi with h − hi ∈ H it follows that
hi ∈ H ′ .
As an immediate consequence of Corollary 2.3 we obtain
Corollary 2.4. Let H ′ be a homologically pure subsemigroup of H. Then
Tor_i^{S ′} (K[H ′ ], K) → Tor_i^S (K[H], K)
is injective for all i. In other words, if F′ is the minimal Zn -graded free S ′ -resolution
of K[H ′] and F is the minimal Zn -graded free S-resolution of K[H], then the complex
homomorphism F′ ⊗S → F induces an injective map F′ ⊗K → F ⊗K. In particular,
any minimal set of generators of Syzi (K[H ′]) is part of a minimal set of generators
of Syzi (K[H]). Moreover, βij (IH ′ ) ≤ βij (IH ) for all i and j.
We fix a field K and let P ⊂ [(1, 1), (m, n)] be a convex polyomino. Let as before
S be the polynomial ring over K in the variables xij with (i, j) ∈ V (P) and K[P]
the K-subalgebra of the polynomial ring T = K[s1 , . . . , sm , t1 , . . . , tn ] generated by
the monomials uij = si tj with (i, j) ∈ V (P). Viewing K[P] as a semigroup ring
K[H], it is convenient to identify the semigroup elements with the monomial they
represent.
Given sets {i1 , i2 , . . . , is } and {j1 , j2 , . . . , jt } of integers with ik ∈ [m] and jk ∈ [n]
for all k, we let H ′ be the subsemigroup of H generated by the elements sik tjl with
(ik , jl ) ∈ V (P). Then H ′ is a homologically pure subsemigroup of H. Note that H ′ is
also a combinatorially pure subsemigroup of H in the sense of [15].
A collection of cells P ′ will be called a collection of cells of P induced by the
columns i1 , i2 , . . . , is and the rows j1 , j2 , . . . , jt , if the following holds: (k, l) ∈ V (P ′ )
if and only if (ik , jl ) ∈ V (P). Observe that K[P ′ ] is always a domain, since it is a
K-subalgebra of K[P]. The map V (P ′ ) → V (P), (k, l) ↦ (ik , jl ) identifies IP ′ with
the ideal contained in IP generated by those 2-minors of I(P) which only involve
the variables xik ,jl . In the following we always identify IP ′ with this subideal of IP .
If the induced collection of cells of P ′ is a polyomino, we call it an induced polyomino. Any induced polyomino P ′ of P is again convex.
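The vertex set of an induced collection of cells is determined directly by the chosen columns and rows, as in the following Python sketch of the relabeling (k, l) ↦ (ik , jl ); the vertex set used in the example is hypothetical and is not the polyomino of Figure 3.

```python
def induced_vertices(V, cols, rows):
    """V(P'): (k, l) is a vertex of the induced collection iff (cols[k-1], rows[l-1]) lies in V."""
    V = set(V)
    return {(k, l)
            for k in range(1, len(cols) + 1)
            for l in range(1, len(rows) + 1)
            if (cols[k - 1], rows[l - 1]) in V}

# Hypothetical example: V = all vertices of the interval [(1,1),(4,3)].
V = {(i, j) for i in range(1, 5) for j in range(1, 4)}
print(sorted(induced_vertices(V, cols=[1, 3, 4], rows=[1, 2, 3])))
# -> the nine vertices of [(1,1),(3,3)]; here the induced collection is again a convex polyomino
```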
Consider for example the polyomino P on the left side of Figure 3 with left lower
corner (1, 1). Then the induced polyomino P ′ shown on the right side of Figure 3 is
induced by the columns 1, 3, 4 and the rows 1, 2, 3, 4.
P′
P
Figure 3.
Obviously Corollary 2.4 implies
Corollary 2.5. Let P ′ be an induced collection of cells of P. Then βij (IP ′ ) ≤ βij (IP )
for all i and j, and each minimal relation of IP ′ is also a minimal relation of IP .
We will now use Corollary 2.5 to isolate step by step the linearly related polyominoes.
Lemma 2.6. Suppose P admits an induced collection of cells P ′ isomorphic to one
of those displayed in Figure 4. Then IP has a Koszul relation pair.
Proof. We may assume that V (P ′ ) ⊂ [(1, 1), (4, 4)]. By using CoCoA [3] or Singular [4] to compute Syz1 (IP ′ ) we see that the minors fa = [12|12] and fb = [34|34]
form a Koszul relation pair of IP ′ . Thus the assertion follows from Corollary 2.5.
(a)
(b)
Figure 4. P ′
Corollary 2.7. Let P be a convex polyomino, and let [(1, 1), (m, n)] be the smallest
interval with the property that V (P) ⊂ [(1, 1), (m, n)]. We assume that m, n ≥ 4. If
one of the vertices (2, 2), (m − 1, 2), (m − 1, n − 1) or (2, n − 1) does not belong to
V (P), then IP has a Koszul relation pair, and, hence, IP is not linearly related.
Proof. We may assume that (2, 2) 6∈ V (P). Then the vertices of the interval
[(1, 1), (2, 2)] do not belong to V (P). Since [(1, 1), (m, n)] is the smallest interval
containing V (P), there exist, therefore, integers i and j with 2 < i ≤ m − 1 and
2 < j ≤ n − 1 such that the cells [(i, 1), (i + 1, 2)] and [(1, j), (2, j + 1)] belong to
P. Then the collection of cells induced by the rows 1, 2, i, i + 1 and the columns
1, 2, j, j + 1 is isomorphic to one of the collections P ′ of Figure 4. Thus the assertion
follows from Lemma 2.6 and Corollary 2.5.
Corollary 2.7 shows that the convex polyomino P should contain all the vertices
(2, 2), (m − 1, 2), (m − 1, n − 1) and (2, n − 1) in order to be linearly related. Thus
a polyomino which is linearly related must have the shape as indicated in Figure 5.
The number i1 is also allowed to be 1 in which case also j1 = 1. In this case
the polyomino contains the corner (1, 1). A similar convention applies to the other
corners. In Figure 5 all four corners (1, 1), (1, n), (m, 1) and (m, n) are missing.
The convex polyomino displayed in Figure 6 however is not linearly related, though
it has the shape as shown in Figure 5. Thus there must still be other obstructions
for a polyomino to be linearly related.
Now we proceed further in eliminating those polyominoes which are not linearly
related.
Lemma 2.8. Let P be a convex polyomino, and let [(1, 1), (m, n)] be the smallest
interval with the property that V (P) ⊂ [(1, 1), (m, n)]. If P misses only two opposite
corners, say (1, 1) and (m, n), or P misses all four corners (1, 1), (1, n), (m, 1) and
(m, n), then IP admits a Koszul pair and hence is not linearly related.
Figure 5. Possible shape
Figure 6. Not linearly related
Proof. Let us first assume that (1, 1) and (m, n) do not belong to V (P), but (1, n)
and (m, 1) belong to V (P). The collection of cells P1 induced by the rows 1, 2, m −
1, m and the columns 1, 2, n − 1, n is shown in Figure 7. All the light colored cells,
some of them or none of them are present according to whether or not all, some or
none of the equations i1 = 2, j1 = 2, i4 = m − 1 and j4 = n − 1 hold. For example,
if i1 = 2, j1 6= 2, i4 = m − 1 and j4 6= n − 1, then the light colored cells [(2, 1), (3, 2)]
and [(2, 3), (3, 4)] belong to P1 and the other two light colored cells do not belong to
P1 .
It can easily be checked that the ideal IP1 of the collection of cells displayed in Figure 7 has a Koszul
relation pair in all possible cases, and so does IP by Corollary 2.5.
Next, we assume that none of the four corners (1, 1), (1, n), (m, 1) and (m, n)
belong to P. In the following arguments we refer to Figure 5. In the first case
suppose [i3 , i4 ] ⊂ [i1 , i2 ] and [j3 , j4 ] ⊂ [j1 , j2 ]. Then the collection of cells induced
by the columns 2, i3 , i4 , m − 1 and the rows 1, j3 , j4 , n is the polyomino displayed
in Figure 2 which has a Koszul relation pair as can be verified by computer. Thus
P has a Koszul relation pair. A similar argument applies if [i1 , i2 ] ⊂ [i3 , i4 ] or
[j1 , j2 ] ⊂ [j3 , j4 ].
Next assume that [i3 , i4 ] 6⊂ [i1 , i2 ] or [j3 , j4 ] 6⊂ [j1 , j2 ]. By symmetry, we may
discuss only [i3 , i4 ] 6⊂ [i1 , i2 ]. Then we may assume that i3 < i1 and i4 < i2 .
Figure 7.
We choose the columns i1 , i2 , i3 , i4 and the rows 1, 2, n − 1, n. Then the induced
polyomino by these rows and columns is P1 if i1 < i4 , P2 if i4 = i1 and P3 if i4 < i1 ;
see Figure 8. In all three cases the corresponding induced polyomino ideal has a
Koszul relation pair, and hence so does IP .
i1 < i4
i4 = i1
i4 < i1
Figure 8.
Lemma 2.9. Let P be a convex polyomino, and let [(1, 1), (m, n)] be the smallest
interval with the property that V (P) ⊂ [(1, 1), (m, n)]. Suppose P misses three
corners, say (1, n), (m, 1), (m, n), and suppose that i2 < m − 1 and j2 < n − 1, or
i2 = m − 1 and j2 < j4 , or j2 = n − 1 and i2 < i4 . Then IP has a Koszul relation
pair and hence is not linearly related.
Proof. We proceed as in the proofs of the previous lemmata. In the case that
i2 < m − 1 and j2 < n − 1, we consider the collection of cells P ′ induced by the
columns 1, 2, m − 1 and the rows 1, 2, n − 1. This collection of cells P ′ is depicted
in Figure 9. It is easily seen that IP ′ is generated by a regular sequence of length
2, which is a Koszul relation pair. In the case that i2 = m − 1 and j2 < j4 we
choose the columns 1, 2, m − 1, m and the rows 1, 2, j4 − 1, j4 . The polyomino P ′′
induced by this choice of rows and columns has two opposite missing corners, hence,
by Lemma 2.8, it has a Koszul pair. The case j2 = n − 1 and i2 < i4 is symmetric.
In both cases the induced polyomino ideal has a Koszul relation pair. Hence in all
three cases IP itself has a Koszul relation pair.
Figure 9.
Proof of Theorem 2.1. Implication (a)⇒(b) is obvious. Implication (b)⇒(c) follows
by Corollary 2.7, Lemma 2.8, and Lemma 2.9.
It remains to prove (c)⇒(a). Let P be a convex polyomino which satisfies one of
the conditions (i)–(iii). We have to show that P is linearly related. By Corollary 1.4,
we only need to prove that β14 (IP ) = 0. Viewing K[P] as a semigroup ring K[H], it
follows that one has to check that β1h (IP ) = 0 for all h ∈ H with |h| = 4. The main
idea of this proof is to use Corollary 2.3.
Let h = h1 h2 h3 h4 with hq = siq tjq for 1 ≤ q ≤ 4, and i = minq {iq }, k =
maxq {iq }, j = minq {jq }, and ℓ = maxq {jq }. Therefore, all the points hq lie in the
(possibly degenerate) rectangle Q of vertices (i, j), (k, j), (i, ℓ), (k, ℓ). If Q is degenerate, that is, all the vertices of Q are contained in a vertical or horizontal line segment
in P, then β1h (IP ) = 0 since in this case the simplicial complex ∆h is just a simplex.
Let us now consider Q non-degenerate. If all the vertices of Q belong to P, then the
rectangle Q is an induced subpolyomino of P. Therefore, by Corollary 2.3, we have
β1h (IP ) = β1h (IQ ) = 0, the latter equality being true since Q is linearly related.
Next, let us assume that some of the vertices of Q do not belong to P. As P has
one of the forms (i)–(iii), it follows that at most three vertices of Q do not belong to
P. Consequently, we have to analyze the following cases.
Case 1. Exactly one vertex of Q does not belong to P. Without loss of generality,
we may assume that (k, ℓ) ∉ P, which implies that k = m and ℓ = n. In this case,
any relation in degree h of P is a relation of same degree of one of the polyominoes
displayed in Figure 10.
One may check with a computer algebra system that all polyominoes displayed
in Figure 10 are linearly related, hence they do not have any relation in degree h.
Actually, one has to check only the shapes (a), (b), and (d) since the polyomino
displayed in (c) is isomorphic to that one from (b). Hence, β1h (IP ) = 0.
Case 2. Two vertices of Q do not belong to P. We may assume that the missing
vertices from P are (i, ℓ) and (k, ℓ). Hence, we have i = 1, k = m, and ℓ = n. In
this case, any relation in degree h of P is a relation of same degree of one of the
polyominoes displayed in Figure 11 (a)–(c). Note that the polyominoes (b) and (c)
are isomorphic. One easily checks with the computer that all these polyominoes are
linearly related, thus β1h (IP ) = 0.
Case 3. Finally, we assume that there are three vertices of Q which do not belong
to P. We may assume that these vertices are (i, ℓ), (k, ℓ), and (k, j). In this case,
any relation in degree h of P is a relation of same degree of the polyomino displayed
in Figure 11 (d) which is linearly related as one may easily check with the computer.
Therefore, we get again β1h (IP ) = 0.
(a)
(b)
(c)
(d)
Figure 10.
(a)
(b)
(c)
(d)
Figure 11.
3. Polyomino ideals with linear resolution
In this final section, we classify all convex polyominoes which have a linear resolution and the convex stack polyominoes which are extremal Gorenstein.
Theorem 3.1. Let P be a convex polyomino. Then the following conditions are
equivalent:
(a) IP has a linear resolution;
(b) there exists a positive integer m such that P is isomorphic to the polyomino
with cells [(i, i), (i + 1, i + 1)], i = 1, . . . , m − 1.
Proof. (b) ⇒ (a): If the polyomino is of the shape as described in (b), then IP is just
the ideal of 2-minors of a 2×m-matrix. It is well-known that the ideal of 2-minors of
such a matrix has a linear resolution. Indeed the Eagon-Northcott complex, whose
chain maps are described by matrices with linear entries, provides a free resolution
of the ideal of maximal minors of any matrix of indeterminates, see for example [6,
Page 600].
(a) ⇒ (b): We may assume that [(1, 1), (m, n)] is the smallest interval containing
V (P). We may further assume that m ≥ 4 or n ≥ 4. The few remaining cases
can easily be checked with the computer. So let us assume that m ≥ 4. Then we
have to show that n = 2. Suppose that n ≥ 3. We first assume that all the corners
(1, 1), (1, n), (m, 1) and (m, n) belong to V (P). Then the polyomino P ′ induced by
the columns 1, 2, m and the rows 1, 2, n is the polyomino which is displayed on the
right of Figure 15. The ideal IP ′ is a Gorenstein ideal, and hence it does not have
a linear resolution. Therefore, by Corollary 2.5, the ideal IP does not have a linear
resolution as well, a contradiction.
Next assume that one of the corners, say (1, 1), is missing. Since IP has a
linear resolution, IP is linearly related and hence has a shape as indicated in Figure 5.
Let i1 and j1 be the numbers as shown in Figure 5, and let P ′ be the polyomino of P
induced by the columns 1, 2, 3 and the rows a, j1 , j1 + 1 where a = 1 if i1 = 2 and
a = 2 if i1 > 2, j1 > 2. If j1 = 2 and i1 > 2, we let P ′ be the polyomino induced by
the columns 1, i1 , i1 + 1 and the rows 1, 2, 3. In any case, P ′ is isomorphic to that one
displayed on the left of Figure 15. Since IP ′ is again a Gorenstein ideal, we conclude,
as in the first case, that IP does not have a linear resolution, a contradiction.
As mentioned in the introduction, polyomino ideals overlap with join-meet ideals
of planar lattices. In the next result we show that the join-meet ideal of any lattice
has a linear resolution if and only if it is a polyomino as described in Theorem 3.1.
With methods different from those which are used in this paper, the classification
of join-meet ideals with linear resolution was first given in [7, Corollary 10].
Let L be a finite distributive lattice [11, pp. 118]. A join-irreducible element of L
is an element α ∈ L which is not the unique minimal element and which possesses the
property that α ≠ β ∨ γ for all β, γ ∈ L \ {α}. Let P be the set of join-irreducible
elements of L. We regard P as a poset (partially ordered set) which inherits its
ordering from that of L. A subset J of P is called an order ideal of P if a ∈ J,
b ∈ P together with b ≤ a imply b ∈ J. In particular, the empty set of P is an order
ideal of P . Let J (P ) denote the set of order ideals of P , ordered by inclusion. It
then follows that J (P ) is a distributive lattice. Moreover, Birkhoff’s fundamental
structure theorem of finite distributive lattices [11, Proposition 37.13] guarantees
that L coincides with J (P ).
Let L = J (P ) be a finite distributive lattice and K[L] = K[ xα : α ∈ L ] the
polynomial ring in |L| variables over K. The join-meet ideal IL of L is the ideal of
K[L] which is generated by those binomials
xα xβ − xα∧β xα∨β ,
where α, β ∈ L are incomparable in L. It is known [12] that IL is a prime ideal and
the quotient ring K[L]/IL is normal and Cohen–Macaulay. Moreover, K[L]/IL is
Gorenstein if and only if P is pure. (A finite poset is pure if every maximal chain
(totally ordered subset) of P has the same cardinality.)
Now, let P = {ξ1 , . . . , ξd } be a finite poset, where i < j if ξi < ξj , and L = J (P ).
A linear extension of P is a permutation π = i1 · · · id of [d] = {1, . . . , d} such that
j < j ′ if ξij < ξij′ . A descent of π = i1 · · · id is an index j with ij > ij+1 . Let
D(π) denote the set of descents of π. The h-vector of L is the sequence h(L) =
(h0 , h1 , . . . , hd−1 ), where hi is the number of linear extensions π of P with |D(π)| = i.
Thus, in particular, h0 = 1. It follows from [1] that the Hilbert series of K[L]/IL is
of the form
(h0 + h1 λ + · · · + hd−1 λ^{d−1}) / (1 − λ)^{d+1} .
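For small posets the h-vector can be computed directly from this description by enumerating linear extensions and counting descents. The following Python sketch does this; the posets in the example are hypothetical, and for the three-element antichain the computation returns (1, 4, 1), in agreement with the value quoted in the proof of Theorem 3.5 below.

```python
from itertools import permutations

def h_vector(d, relations):
    """h-vector of L = J(P) for P = {xi_1, ..., xi_d}, where (a, b) in relations means xi_a < xi_b."""
    h = [0] * d
    for pi in permutations(range(1, d + 1)):
        pos = {e: idx for idx, e in enumerate(pi)}
        # pi is a linear extension iff xi_a < xi_b implies that a occurs before b in pi
        if all(pos[a] < pos[b] for (a, b) in relations):
            descents = sum(1 for j in range(d - 1) if pi[j] > pi[j + 1])
            h[descents] += 1
    return tuple(h)

print(h_vector(3, relations=[]))                 # three-element antichain: (1, 4, 1)
print(h_vector(2, relations=[]))                 # two-element antichain:   (1, 1)
print(h_vector(3, relations=[(1, 2), (2, 3)]))   # a three-element chain:   (1, 0, 0)
```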
We say that a finite distributive lattice L = J (P ) is simple if L has no elements
α and β with β < α such that each element γ ∈ L \ {α, β} satisfies either γ < β or
γ > α. In other words, L is simple if and only if P possesses no element ξ for which
every µ ∈ P satisfies either µ ≤ ξ or µ ≥ ξ.
Theorem 3.2. Let L = J (P ) be a simple finite distributive lattice. Then the
join-meet ideal IL has a linear resolution if and only if L is of the form shown in
Figure 12.
Figure 12.
Proof. Since IL is generated in degree 2, it follows that IL has a linear resolution
if and only if the regularity of K[L]/IL is equal to 1. We may assume that K is
infinite. Since K[L]/IL is Cohen–Macaulay, we may divide by a regular sequence of
linear forms to obtain a 0-dimensional K-algebra A with reg A = reg K[L]/IL whose
h-vector coincides with that of K[L]/IL . Since reg A = max{i : Ai ≠ 0} (see for
example [6, Exercise 20.18]), it follows that IL has a linear resolution if and only if
the h-vector of L is of the form h(L) = (1, q, 0, . . . , 0), where q ≥ 0 is an integer.
Clearly, if P is a finite poset of Figure 12, then |D(π)| ≤ 1 for each linear extension
π of P . Thus IL has a linear resolution.
Conversely, suppose that IL has a linear resolution. In other words, one has
|D(π)| ≤ 1 for each linear extension π of P . Then P has no three-element clutter.
(A clutter of P is a subset A of P with the property that no two elements belonging
to A are comparable in P .) Since L = J (P ) is simple, it follows that P contains a
two-element clutter. Hence Dilworth’s theorem [5] says that P = C ∪ C ′ , where C
and C ′ are chains of P with C ∩ C ′ = ∅. Let |C| ≥ 2 and |C ′ | ≥ 2. Let ξ ∈ C and
µ ∈ C ′ be minimal elements of P . Let ξ ′ ∈ C and µ′ ∈ C ′ be maximal elements of
P . Since L = J (P ) is simple, it follows that ξ 6= µ and ξ ′ 6= µ′ . Thus there is a
linear extension π of P with |D(π)| ≥ 2. Thus IL cannot have a linear resolution.
Hence either |C| = 1 or |C ′ | = 1, as desired.
A Gorenstein ideal can never have a linear resolution, unless it is a principal ideal.
However, if the resolution is as much linear as possible, then it is called extremal
Gorenstein. Since polyomino ideals are generated in degree 2 we restrict ourselves
in the following definition of extremal Gorenstein ideals to graded ideals generated
in degree 2.
Let S be a polynomial ring over a field, and I ⊂ S a graded ideal which is not
principal and is generated in degree 2. Following [18] we say that I is an extremal
Gorenstein ideal if S/I is Gorenstein and if the shifts of the graded minimal free
resolution are
−2 − p − 1, −2 − (p − 1), −2 − (p − 2), . . . , −3, −2,
where p is the projective dimension of I.
With similar arguments as in the proof of Theorem 3.2, we see that I is an
extremal Gorenstein ideal if and only if I is a Gorenstein ideal and reg S/I = 2, and
that this is the case if and only if S/I is Cohen–Macaulay and the h-vector of S/I
is of the form
h(L) = (1, q, 1, 0, . . . , 0),
where q > 1 is an integer.
In the following theorem we classify all convex stack polyominoes P for which IP
is extremal Gorenstein. Convex stack polyominoes have been considered in [17]. In
that paper Qureshi characterizes those convex stack polyominoes P for which IP is
Gorenstein.
Let P be a polyomino. We may assume that [(1, 1), (m, n)] is the smallest interval
containing V (P). Then P is called a stack polyomino if it is column convex and for
i = 1, . . . , m − 1 the cells [(i, 1), (i + 1, 2)] belong to P. Figure 13 displays stack
polyominoes – the right polyomino is convex, the left is not. The number of cells
of the bottom row is called the width of P and the number of cells in a maximal
column is called the height of P.
Figure 13. Stack polyominoes
Let P be a convex stack polyomino. Removing the first k bottom rows of cells of
P we obtain again a convex stack polyomino which we denote by Pk . We also set
P0 = P. Let h be the height of the polyomino, and let 1 ≤ k1 < k2 < · · · < kr < h
be the numbers with the property that width(Pki ) < width(Pki−1 ). Furthermore,
we set k0 = 0. For example, for the convex stack polyomino in Figure 13 we have
k1 = 1, k2 = 2 and k3 = 3.
With the terminology and notation introduced, the characterization of Gorenstein
convex stack polyominoes is given in the following theorem.
Theorem 3.3 (Qureshi). Let P be a convex stack polyomino of height h. Then the
following conditions are equivalent:
(a) IP is a Gorenstein ideal.
(b) width(Pki ) = height(Pki ) for i = 0, . . . , r.
According to this theorem, the convex stack polyomino displayed in Figure 13
is not Gorenstein, because width(Pk0 ) = 5 and height(Pk0 ) = 4. An example of a
Gorenstein stack polyomino is shown in Figure 14.
Figure 14. A Gorenstein stack polyomino
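The criterion of Theorem 3.3 is easy to test once a convex stack polyomino is encoded by its column heights. The following Python sketch does so under the reading that k0 = 0 and that k1 < · · · < kr are the levels at which the width strictly drops; the column heights in the examples are hypothetical and are not meant to reproduce Figures 13 or 14.

```python
def check_theorem_3_3(heights):
    """Report (k, width(P_k), height(P_k)) for k = k_0, ..., k_r and test condition (b)."""
    hmax = max(heights)
    def width(k):   # number of columns surviving in P_k
        return sum(1 for c in heights if c > k)
    def height(k):  # height of P_k
        return max(c - k for c in heights)
    ks = [0] + [k for k in range(1, hmax) if width(k) < width(k - 1)]
    report = [(k, width(k), height(k)) for k in ks]
    return all(w == h for (_, w, h) in report), report

# Hypothetical column heights (bottom row full, heights unimodal).
print(check_theorem_3_3([2, 3, 2]))        # (True,  [(0, 3, 3), (2, 1, 1)])
print(check_theorem_3_3([4, 4, 4, 4, 1]))  # (False, [(0, 5, 4), (1, 4, 3)])
```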
Combining Theorem 3.3 with the results of Section 2, we obtain
Theorem 3.4. Let IP be convex stack polyomino. Then IP is extremal Gorenstein
if and only if P is isomorphic to one of the polyominoes in Figure 15.
Figure 15. Extremal convex stack polyominoes
Proof. It can be easily checked that IP is extremal Gorenstein, if P is isomorphic to
one of the two polyominoes shown in Figure 15.
Conversely, assume that IP is extremal Gorenstein. Without loss of generality
we may assume that [(1, 1), (m, n)] is the smallest interval containing V (P). Then
Theorem 3.3 implies that m = n. Suppose first that V (P) = [(1, 1), (n, n)]. Then,
by [7, Theorem 4] of Ene, Rauf and Qureshi, it follows that the regularity of IP is
equal to n. Since IP is extremal Gorenstein, its regularity is equal to 3. Thus n = 3.
Next, assume that V (P) is properly contained in [(1, 1), (n, n)]. Since IP is linearly
related, Corollary 2.7 together with Theorem 3.3 imply that the top row of P consists
of only one cell and that [(2, 1), (n − 1, n − 1)] ⊂ V (P). Let P ′ be the polyomino
induced by the rows 2, 3, . . . , n − 1 and the columns 1, 2, . . . , n − 1. Then P ′ is the
polyomino with V (P ′ ) = [(1, 1), (n − 2, n − 1)]. By applying again [7, Theorem 4] it
follows that reg IP′ = n − 2. Corollary 2.5 then implies that reg IP ≥ reg IP ′ = n − 2,
and since reg IP = 3 we deduce that n ≤ 5. If n = 5, then IP′ is the ideal of 2-minors
of a 3 × 4-matrix which has Betti numbers β35 ≠ 0 and β36 ≠ 0. Since P ′ is an
induced polyomino of P and since IP is extremal Gorenstein, Corollary 2.5 yields a
contradiction.
Up to isomorphism there exist for n = 4 precisely the Gorenstein polyominoes
displayed in Figure 16. They are all not extremal Gorenstein as can be easily checked
with CoCoA or Singular. For n = 3 any Gorenstein polyomino is isomorphic to one
of the two polyominoes shown in Figure 15. This yields the desired conclusion.
Figure 16. Gorenstein polyominoes of width 3
The following theorem shows that besides of the two polyominoes listed in Theorem 3.4 whose polyomino ideal is extremal Gorenstein, there exist precisely two
more join-meet ideals having this property.
Theorem 3.5. Let L = J (P ) be a simple finite distributive lattice. Then the join-meet ideal IL is an extremal Gorenstein ideal if and only if L is one of the following
displayed in Figure 17.
Figure 17.
Proof. Suppose that L = J (P ) is simple and that K[L]/IL is Gorenstein. It then
follows that P is pure and there is no element ξ ∈ P for which every µ ∈ P satisfies
either µ ≤ ξ or µ ≥ ξ. Since h(L) = (1, q, 1, 0, . . . , 0), no 4-element clutter is
contained in P .
Suppose that a three-element clutter A is contained in P . If none of the elements
belonging to A is a minimal element of P , then, since L = J (P ) is simple, there
exist at least two minimal elements. Hence there exists a linear extension π of P
with |D(π)| ≥ 3, a contradiction. Thus at least one of the elements belonging to A
is a minimal element of P . Similarly, at least one of the elements belonging to A
is a maximal element. Suppose first that there is an element x ∈ A which is both minimal and maximal.
Then, since P is pure, one has P = A. Now let A = {ξ1 , ξ2 , ξ3 } with A ≠ P , where ξ1 is
a minimal element and ξ2 is a maximal element. Let µ1 be a maximal element with
ξ1 < µ1 and µ2 a minimal element with µ2 < ξ2 . Then neither µ1 nor µ2 belongs
to A. If ξ3 is either minimal or maximal, then there exists a linear extension π of
P with |D(π)| ≥ 3, a contradiction. Hence ξ3 can be neither minimal nor maximal.
Then since P is pure, there exist ν1 with ξ1 < ν1 < µ1 and ν2 with µ2 < ν2 < ξ2 such
that {ν1 , ν2 , ξ3} is a three-element clutter. Hence there exists a linear extension π
of P with |D(π)| ≥ 4, a contradiction. Consequently, if P contains a three-element
clutter A, then P must coincide with A. Moreover, if P is a three-element clutter,
then h(L) = (1, 4, 1) and IL is an extremal Gorenstein ideal.
Now, suppose that P contains no clutter A with |A| ≥ 3. Let a chain C with
|C| ≥ 3 be contained in P . Let ξ, ξ ′ be the minimal elements of P and µ, µ′ the
maximal elements of P with ξ < µ and ξ ′ < µ′ . Since L = J (P ) is simple and since
P is pure, it follows that there exist maximal chains ξ < ν1 < · · · < νr < µ and
ξ ′ < ν1′ < · · · < νr′ < µ′ such that νi ≠ νi′ for 1 ≤ i ≤ r. Then one has a linear
extension π of P with |D(π)| = 2 + r ≥ 3, a contradiction. Hence the cardinality of all
maximal chains of P is at most 2. However, if the cardinality of all maximal chains
of P is equal to 1, then h(L) = (1, 1). Thus IL cannot be an extremal Gorenstein
ideal. If the cardinality of all maximal chains of P is equal to 2, then P is the posets
displayed in Figure 18. For each of them the join-meet ideal IL is an extremal
Gorenstein ideal.
h(L) = (1, 2, 1)
h(L) = (1, 3, 1)
h(L) = (1, 4, 1)
Figure 18.
References
[1] A. Björner, A. M. Garsia and R. P. Stanley, An introduction to Cohen–Macaulay partially
ordered sets, In: “Ordered Sets” (I. Rival, Ed.), Springer Netherlands, 1982, pp. 583–615.
[2] W. Bruns, J. Herzog, Semigroup rings and simplicial complexes, J. Pure Appl. Algebra 122
(1997), 185–208.
[3] CoCoATeam, CoCoA: a system for doing Computations in Commutative Algebra. Available at
http://cocoa.dima.unige.it
[4] W. Decker, G.-M. Greuel, G. Pfister, H. Schönemann, Singular 3-1-6 — A computer algebra
system for polynomial computations. http://www.singular.uni-kl.de (2012).
[5] R. P. Dilworth, A decomposition theorem for partially ordered sets, Annals of Math. 51 (1950),
161–166.
[6] D. Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts
in Mathematics 150, Springer, 1995.
[7] V. Ene, A. A. Qureshi, A. Rauf, Regularity of join-meet ideals of distributive lattices, Electron.
J. Combin. 20 (3) (2013), #P20.
[8] M. Hashimoto, Determinantal ideals without minimal free resolutions, Nagoya Math. J. 118
(1990), 203–216.
[9] J. Herzog, T. Hibi, Monomial ideals, Graduate Texts in Mathematics 260, Springer, 2010.
[10] J. Herzog, H. Srinivasan, A note on the subadditivity problem for maximal shifts in free resolutions, to appear in MSRI Proc., arxiv: 1303:6214
[11] T. Hibi, Algebraic Combinatorics on Convex Polytopes, Carslaw Publications, Glebe, N.S.W.,
Australia, 1992.
[12] T. Hibi, Distributive lattices, affine semigroup rings and algebras with straightening laws, In:
“Commutative Algebra and Combinatorics” (M. Nagata and H. Matsumura, Eds.), Adv. Stud.
Pure Math. 11, North–Holland, Amsterdam, 1987, pp. 93–109.
[13] K. Kurano, The first syzygies of determinantal ideals, J. Algebra 124 (1989), 414–436.
[14] A. Lascoux, Syzygies des variétés determinantales, Adv. in Math. 30 (1978), 202–237.
[15] H. Ohsugi, J. Herzog, T. Hibi, Combinatorial pure subrings, Osaka J. Math. bf 37 (2000),
745–757.
[16] H. Ohsugi, T. Hibi, Koszul bipartite graphs, Adv. in Appl. Math. 22 (1999), 25-28.
[17] A. Qureshi, Ideals generated by 2-minors, collections of cells and stack polyominoes, J. Algebra
357 (2012), 279–303.
[18] P. Schenzel, Uber die freien Auflösungen extremaler Cohen-Macaulay Ringe, J. Algebra 64
(1980), 93–101.
[19] D. W. Sharpe, On certain polynomial ideals defined by matrices, Quart. J. Math. Oxford (2)
15 (1964), 155–175.
[20] D. W. Sharpe, The syzygies and semi-regularity of certain ideals defined by matrices, Proc.
London Math. Soc. 15 (1965), 645–679.
Viviana Ene, Faculty of Mathematics and Computer Science, Ovidius University,
Bd. Mamaia 124, 900527 Constanta, Romania, and
Simion Stoilow Institute of Mathematics of the Romanian Academy, Research
group of the project ID-PCE-2011-1023, P.O.Box 1-764, Bucharest 014700, Romania
E-mail address: [email protected]
Jürgen Herzog, Fachbereich Mathematik, Universität Duisburg-Essen, Campus
Essen, 45117 Essen, Germany
E-mail address: [email protected]
Takayuki Hibi, Department of Pure and Applied Mathematics, Graduate School
of Information Science and Technology, Osaka University, Toyonaka, Osaka 5600043, Japan
E-mail address: [email protected]
| 0 |
A Class of MSR Codes
for Clustered Distributed Storage
Jy-yong Sohn, Beongjun Choi and Jaekyun Moon
arXiv:1801.02014v1 [cs.IT] 6 Jan 2018
KAIST
School of Electrical Engineering
Email: {jysohn1108, bbzang10}@kaist.ac.kr, [email protected]
Abstract—Clustered distributed storage models real data centers where intra- and cross-cluster repair bandwidths are different. In this paper, exact-repair minimum-storage-regenerating
(MSR) codes achieving capacity of clustered distributed storage
are designed. Focus is given on two cases: ε = 0 and ε = 1/(n−k),
where ε is the ratio of the available cross- and intra-cluster repair
bandwidths, n is the total number of distributed nodes and k
is the number of contact nodes in data retrieval. The former
represents the scenario where cross-cluster communication is not
allowed, while the latter corresponds to the case of minimum
cross-cluster bandwidth that is possible under the minimum
storage overhead constraint. For the ε = 0 case, two types of
locally repairable codes are proven to achieve the MSR point. As
for ε = 1/(n − k), an explicit MSR coding scheme is suggested
for the two-cluster situation under the specific condition of
n = 2k.
I. I NTRODUCTION
Distributed Storage Systems (DSSs) have been deployed by
various enterprises to reliably store massive amounts of data
under the frequent storage node failure events. A failed node
is regenerated (repaired) by collecting information from other
survived nodes with the regeneration process guided by a predefined network coding scheme. Under this setting, Dimakis
et al. [1] obtained the expression for the maximum reliably
storable file size, denoted as capacity C(α, γ), as a function
of given system parameters: the node capacity α and the
bandwidth γ required for repairing a failed node. The capacity
analysis in [1] underscores the following key messages. First,
there exists a network coding scheme which utilizes the (α, γ)
resources and enables a reliable storage of a file of size
C(α, γ). Second, it is not feasible to find a network coding
scheme which can reliably store a file larger than C(α, γ),
given the available resources of (α, γ). In subsequent research
efforts, the authors of [2]–[4] proposed explicit network coding
schemes which achieve the capacity of DSSs. These coding
schemes are optimal in the sense of efficiently utilizing (α, γ)
resources for maintaining the reliable storage systems.
Focus on the clustered nature of distributed storage has been
a recent research direction taken by several researchers [5]–[8].
According to these recent papers, storage nodes dispersed into
multiple racks in real data centers are seen as forming clusters.
In particular, the authors of the present paper proposed a
system model for clustered DSSs in [5] that reflects the
difference between intra- and cross-cluster bandwidths. In
the system model of [5], the file to be stored is coded and
distributed into n storage nodes, which are evenly dispersed
into L clusters. Each node has storage capacity of α, and
the data collector contacts arbitrary k out of n existing nodes
to retrieve the file. Since nodes are dispersed into multiple
clusters, the regeneration process involves utilization of both
intra- and cross-cluster repair bandwidths, denoted by βI and
βc , respectively. In this proposed system model, the authors
of [5] obtained the closed-form expression for the maximum
reliably storable file size, or capacity C(α, βI , βc ), of the
clustered DSS. Furthermore, it has been shown that network
coding exists that can achieve the capacity of clustered DSSs.
However, explicit constructions of capacity-achieving network
coding schemes for clustered DSSs have yet to be found.
This paper proposes a network coding scheme which
achieves capacity of the clustered DSS, with a minimum
required node storage overhead. In other words, the suggested code is shown to be a minimum-storage-regenerating
(MSR) code of the clustered DSS. This paper focuses on
two important cases of ε = 0 and ε = 1/(n − k), where
ε := βc /βI represents the ratio of cross- to intra-cluster repair
bandwidths. The former represents the system where cross-cluster communication is not possible. The latter corresponds
to the minimum ε value that can achieve the minimum storage
overhead of α = M/k, where M is the file size. When
ε = 0, it is shown that appropriate application of locally
repairable codes suggested in [9], [10] achieves the MSR point
for general n, k, L settings with the application rule depending
on the parameter setting. For the ε = 1/(n−k) case, an explicit
coding scheme is suggested which is proven to be an MSR
code under the conditions of L = 2 and n = 2k. There have
been some previous works [7], [8], [11], [12] on code construction for DSS with clustered storage nodes, but to a limited
extent. The works of [8], [11] suggested a coding scheme
which can reduce the cross-cluster repair bandwidth, but these
schemes are not proven to be an MSR code that achieves
capacity of clustered DSSs with minimum storage overhead.
The authors of [12] provided an explicit coding scheme which
reduces the repair bandwidth of a clustered DSS under the
condition that each failed node can be exactly regenerated by
contacting any one of other clusters. However, the approach
of [12] is different from that of the present paper in the sense
that it does not consider the scenario with unequal intra- and
cross-cluster repair bandwidths. Moreover, the coding scheme
proposed in [12] is shown to be a minimum-bandwidth-
regenerating (MBR) code for some limited parameter setting,
while the present paper deals with an MSR code. An MSR
code for clustered DSSs has been suggested in [7], but this
paper has the data retrieval condition different from the present
paper. The authors of [7] considered the scenario where data
can be collected by contacting arbitrary k out of n clusters,
while data can be retrieved by contacting arbitrary k out of
n nodes in the present paper. Thus, the two models have
the identical condition only when each cluster has one node.
The difference in data retrieval conditions results in different
capacity values and different MSR points. In short, the code in
[7] and the code in this paper achieve different MSR points.
A data collector (DC) retrieves the original file M by contacting arbitrary k out of n nodes - this property is called the
maximum-distance-separable (MDS) property. The clustered
distributed storage system with parameters n, k, L is called
an [n, k, L]-clustered DSS. In an [n, k, L]-clustered DSS with
given parameters of α, βI , βc , capacity C(α, γ) is defined in
[5] as the maximum data that can be reliably stored. The
closed-form expression for C(α, γ) is obtained in Theorem 1
of [5]. Aiming at reliably storing file M, the set of (α, γ) pair
values is said to be feasible if C(α, γ) ≥ M holds. According
to Corollaries 1 and 2 of [6], the set of feasible (α, γ) points
shows the optimal trade-off relationship between α and γ, as
illustrated in Fig. 1. In the optimal trade-off curve, the point
with minimum node capacity α is called the minimum-storageregenerating (MSR) point. Explicit regenerating codes that
achieve the MSR point are called the MSR codes. According
to Theorem 3 of [6], node capacity of the MSR point satisfies
αmsr = M/k,   if ε ≥ 1/(n − k),
αmsr > M/k,   if 0 ≤ ε < 1/(n − k).      (2)
Fig. 1: The optimal trade-off relationship between α and γ in the
clustered distributed storage modeled in [6]
A given file of M symbols is encoded and distributed into n
nodes, each of which has node capacity α. The storage nodes
are evenly distributed into L ≥ 2 clusters, so that each cluster
contains nI := n/L nodes. A failed node is regenerated by
obtaining information from other survived nodes: nI −1 nodes
in the same cluster help by sending βI each, while n−nI nodes
in other clusters help by sending βc each. Thus, repairing each
node requires the overall repair bandwidth of
γ = (nI − 1)βI + (n − nI)βc.      (1)

II. BACKGROUNDS AND NOTATIONS
Fig. 2: Two-dimensional representation of clustered distributed storage (n = 12, L = 3, nI = n/L = 4).
Note that α = M/k is the minimum storage overhead to satisfy the MDS property, as stated in [1]. Thus, ε = 1/(n − k) is the scenario with minimum cross-cluster communication when the minimum storage overhead constraint α = M/k is imposed.
Here we introduce some useful notations used in the paper. For a positive integer n, [n] represents the set {1, 2, · · · , n}. For natural numbers a and b, we use the notation a | b if a divides b. Similarly, we write a ∤ b if a does not divide b. For given k and nI, we define
q := ⌊k/nI⌋,      (4)
m := mod(k, nI) = k − qnI.      (5)
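The integer parameters q and m of (4) and (5) are simply the quotient and remainder of k by nI; the two-line helper below (an illustration, not from the paper) makes the nI | k and nI ∤ k cases easy to distinguish.

```python
def cluster_params(k: int, n: int, L: int):
    """Return (q, m) from (4)-(5); m == 0 exactly when n_I divides k."""
    n_I = n // L                      # nodes per cluster, assuming L divides n
    return k // n_I, k % n_I

print(cluster_params(k=3, n=6, L=2))  # (1, 0): n_I | k
print(cluster_params(k=4, n=6, L=2))  # (1, 1): n_I does not divide k
```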
For vectors we use bold-faced lower case letters. For a given
vector a, the transpose of a is denoted as aT . For natural
numbers m and n ≥ m, the set {ym , ym+1 , · · · , yn } is
represented as {yi }ni=m . For a matrix G, the entry of G at
the ith row and j th column is denoted as Gi,j . We also
express the nodes in a clustered DSS using a two-dimensional
representation: in the structure illustrated in Fig. 2, N (l, j)
represents the node at the lth row and the j th column. Finally,
we recall definitions on the locally repairable codes (LRCs)
in [9], [10]. As defined in [10], an (n, k, r)−LRC represents
a code of length n, which is encoded from k information
symbols. Every coded symbol of the (n, k, r)−LRC can be
regenerated by accessing at most r other symbols. As defined
in [9], an (n, r, d, M, α)−LRC takes a file of size M and
encodes it into n coded symbols, where each symbol is
composed of α bits. Moreover, any coded symbol can be
regenerated by contacting at most r other symbols, and the
code has the minimum distance of d.
III. MSR CODE DESIGN FOR ε = 0
In this section, MSR codes for ε = 0 (i.e., βc = 0) are designed. Under this setting, no cross-cluster communication is allowed in the node repair process. First, the system parameters for the MSR point are examined. Second, two types of locally repairable codes (LRCs) suggested in [9], [10] are proven to achieve the MSR point, under the settings of nI | k and nI ∤ k, respectively.
A. Parameter Setting for the MSR Point
We consider the MSR point (α, γ) = (αmsr , γmsr ) which
can reliably store file M. The following property specifies the
system parameters for the ε = 0 case.
Proposition 1. Consider an [n, k, L] clustered DSS to reliably store file M. The MSR point for ε = 0 is
(αmsr, γmsr) = ( M/(k − q), (nI − 1) M/(k − q) ),      (6)
where q is defined in (4). This point satisfies α = βI.
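For concreteness, the ε = 0 MSR point of Proposition 1 can be evaluated directly from n, k, L and the file size M. The short sketch below is an illustration (not part of the paper) that uses q from (4).

```python
def msr_point_zero_eps(n: int, k: int, L: int, M: float):
    """Evaluate the epsilon = 0 MSR point of Proposition 1.

    Uses q = floor(k / n_I) from (4) and returns (alpha_msr, gamma_msr)
    as given by (6). Illustrative sketch only.
    """
    n_I = n // L                        # nodes per cluster
    q = k // n_I                        # definition (4)
    alpha = M / (k - q)                 # minimum node storage, eq. (6)
    gamma = (n_I - 1) * M / (k - q)     # repair bandwidth, eq. (6); beta_c = 0
    return alpha, gamma

# Example of Section III-B: n = 6, k = 3, L = 2, M = 6 gives alpha = 3.
print(msr_point_zero_eps(6, 3, 2, 6))   # (3.0, 6.0)
```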
Proof. See Appendix D-A.
B. Code Construction for nI | k
We now examine how to construct an MSR code for the nI |
k case. The following theorem shows that a locally repairable
code constructed in [9] with locality r = nI − 1 is a valid
MSR code for nI | k.
Theorem 1 (Exact-repair MSR Code Construction for ε = 0, nI | k). Let C be the (n, r, d, M, α)−LRC explicitly constructed in [9] for locality r = nI − 1. Consider allocating the coded symbols of C in an [n, k, L]−clustered DSS, where the r + 1 = nI nodes within the same repair group of C are located in the same cluster. Then, the code C is an MSR code for the [n, k, L]−clustered DSS under the conditions of ε = 0 and nI | k.
Proof. See Appendix A.
Fig. 3 illustrates an example of the MSR code for the ε = 0 and nI | k case, which is constructed using the LRC in [9]. In the [n, k, L] = [6, 3, 2] clustered DSS scenario, the parameters are set to α = nI = n/L = 3 and M = (k − q)α = (k − ⌊k/nI⌋)α = 6.
Fig. 3: MSR code for ε = 0 with nI | k (n = 6, k = 3, L = 2). The construction rule follows the instruction in [9], while the concept of the repair group in [9] can be interpreted as the cluster in the present paper.
Thus, each storage node contains α = 3 symbols, while the
[n, k, L] clustered DSS aims to reliably store a file of size
M = 6. This code has two properties, 1) exact regeneration
and 2) data reconstruction:
1) Any failed node can be exactly regenerated by contacting
nI − 1 = 2 nodes in the same cluster,
2) Contacting any k = 3 nodes can recover the original file
{xi(j) : i ∈ [3], j ∈ [2]} of size M = 6.
The first property is obtained from the fact that yi(1), yi(2) and si = yi(1) + yi(2) form a (3, 2) MDS code for i ∈ [6]. The second property is obtained as follows. For contacting arbitrary k = 3 nodes, three distinct coded symbols {yi1(1), yi2(1), yi3(1)} having superscript one and three distinct coded symbols {yj1(2), yj2(2), yj3(2)} having superscript two can be obtained for some i1, i2, i3 ∈ [6] and j1, j2, j3 ∈ [6]. From Fig. 3a, the information {yi1(1), yi2(1), yi3(1)} suffices to recover x1(1), x2(1), x3(1). Similarly, the information {yj1(2), yj2(2), yj3(2)} suffices to recover x1(2), x2(2), x3(2). This completes the proof for the second property. Note that this coding scheme is already suggested by the authors of [9], while the present paper proves that this code also achieves the MSR point of the [n, k, L] clustered DSS, in the case of ε = 0 and nI | k.
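The intra-cluster repair of this example is just decoding a tiny MDS code. The toy sketch below (an illustration, not from the paper) treats each repair group {yi(1), yi(2), si = yi(1) ⊕ yi(2)} as integer symbols in the same cluster and regenerates any lost member from the two survivors, so no cross-cluster traffic is needed (βc = 0).

```python
def repair_group(y1_i, y2_i, s_i, lost):
    """Repair one member of the (3, 2) MDS group (y_i^(1), y_i^(2), s_i).

    Since s_i = y_i^(1) XOR y_i^(2), any lost member equals the XOR of the
    two survivors. Toy sketch with integer symbols.
    """
    survivors = {'y1': y1_i, 'y2': y2_i, 's': s_i}
    survivors.pop(lost)
    a, b = survivors.values()
    return a ^ b

y1, y2 = 5, 9
s = y1 ^ y2                      # parity symbol, as in Fig. 3
assert repair_group(y1, y2, s, 'y1') == y1
assert repair_group(y1, y2, s, 's') == s
```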
C. Code Construction for nI ∤ k
Here we construct an MSR code when the given system parameters satisfy nI ∤ k. The theorem below shows that the optimal (n, k − q, nI − 1)−LRC designed in [10] is a valid MSR code when nI ∤ k holds.
Theorem 2 (Exact-repair MSR Code Construction for ε = 0, nI ∤ k). Let C be the (n0, k0, r0)−LRC constructed in [10] for n0 = n, k0 = k − q and r0 = nI − 1. Consider allocating the coded symbols of C in an [n, k, L]−clustered DSS, where the r0 + 1 = nI nodes within the same repair group of C are located in the same cluster. Then, C is an MSR code for the [n, k, L]−clustered DSS under the conditions of ε = 0 and nI ∤ k.
Proof. See Appendix B.
Fig. 4 illustrates an example of code construction for the nI ∤ k case. Without loss of generality, we consider the α = 1 case; parallel application of this code α times achieves the MSR point for general α ∈ N, where N is the set of positive integers.
Fig. 4: MSR code for ε = 0 with nI ∤ k (n = 6, k = 4, L = 2). The encoding structure follows the instruction in [10], which constructed an [n0, k0, r0]−LRC. This paper utilizes the [n, k − q, nI − 1]−LRC to construct an MSR code for the [n, k, L] clustered DSS, in the case of ε = 0 with nI ∤ k.
In the [n = 6, k = 4, L = 2] clustered DSS with ε = 0, the code and system parameters are
[n0, k0, r0] = [n, k − q, nI − 1] = [6, 3, 2],   α = 1,   M = (k − q)α = (k − ⌊k/nI⌋) = 3
from Proposition 1. The code in Fig. 4 satisfies the exact
regeneration and data reconstruction properties:
1) Any failed node can be exactly regenerated by contacting
nI − 1 = 2 nodes in the same cluster,
2) Contacting any k = 4 nodes can recover the original file
{xi : i ∈ [3]} of size M = 3.
Note that {yi }3i=1 in Fig. 4 is a set of coded symbols generated
by a (3, 2)−MDS code, and this statement also holds for
{yi }6i=4 . This proves the first property. The second property is
directly from the result of [10], which states that the minimum
distance of the [n0 , k0 , r0 ] − LRC is
d = n0 − k0 − ⌈k0/r0⌉ + 2 = 6 − 3 − ⌈3/2⌉ + 2 = 3.      (7)
Note that the [n0, k0, r0]−LRC is already suggested by the authors of [10], while the present paper proves that applying this code with n0 = n, k0 = k − q, r0 = nI − 1 achieves the MSR point of the [n, k, L]−clustered DSS, in the case of ε = 0 and nI ∤ k.
IV. MSR CODE DESIGN FOR ε = 1/(n − k)
We propose an MSR code for ε = 1/(n − k) in clustered DSSs. From (2) and (3), recall that 1/(n − k) is the minimum value of ε which allows the minimum storage of αmsr = M/k. First, we obtain the system parameters for the MSR point. Second, we design a coding scheme which is shown to be an MSR code under the conditions of n = 2k and L = 2.
A. Parameter Setting for the MSR Point
The following property specifies the system parameters for the ε = 1/(n − k) case. Without loss of generality, we set the cross-cluster repair bandwidth as βc = 1.
Proposition 2. The MSR point for ε = 1/(n − k) is
(αmsr, γmsr) = ( M/k, (M/k)( nI − 1 + (n − nI)/(n − k) ) ).      (8)
This point satisfies α = βI = n − k and M = k(n − k).
Proof. See Appendix D-B.

B. Code Construction for [n, k, L] = [2k, k, 2]
Here, we construct an MSR code under the constraints of
n = 2k and L = 2. Since we consider the n = 2k case, the
system parameters in Proposition 2 are set to
α = βI = n − k = k,   M = kα = k².      (9)
Construction 1. Suppose that we are given M = k² source symbols {mi,j : i, j ∈ [k]}. Moreover, let the encoding matrix
G = [ G1(1) G1(2) · · · G1(k) ; G2(1) G2(2) · · · G2(k) ; · · · ; Gk(1) Gk(2) · · · Gk(k) ]      (10)
be a k² × k² matrix, where each encoding sub-matrix Gi(j) is a k × k matrix. For j ∈ [k], node N(1, j) stores mj and node N(2, j) stores pj, where
mi = [mi,1, · · · , mi,k]T,      (11)
pi = [pi,1, · · · , pi,k]T = Σj∈[k] mjT Gi(j).      (12)
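As a quick illustration (not from the paper), the sketch below builds the parity vectors of Construction 1 for a small k, treating the sub-matrices Gi(j) as ordinary integer matrices rather than matrices over a finite field.

```python
import numpy as np

def encode_construction1(m, G):
    """Encode k message vectors into k parity vectors per Construction 1.

    m: k x k array whose i-th row is m_i = [m_{i,1}, ..., m_{i,k}].
    G: k^2 x k^2 encoding matrix partitioned into k x k blocks G_i^(j).
    Returns p with row i equal to sum_j m_j^T G_i^(j), as in (12).
    Illustrative sketch over the integers; the paper works over a finite field.
    """
    k = m.shape[0]
    p = np.zeros((k, k), dtype=G.dtype)
    for i in range(k):
        for j in range(k):
            Gij = G[i*k:(i+1)*k, j*k:(j+1)*k]   # block G_i^(j) of (10)
            p[i] += m[j] @ Gij                  # accumulate m_j^T G_i^(j)
    return p  # node N(2, i+1) stores p_i; node N(1, j+1) stores m_j

k = 2
m = np.array([[1, 2], [3, 4]])
G = np.arange(1, 17).reshape(4, 4)   # placeholder encoding matrix
print(encode_construction1(m, G))
```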
Remark 1. The code generated in Construction 1 satisfies the following:
(a) Every node in cluster 1 contains k message symbols.
(b) Every node in cluster 2 contains k parity symbols.
Note that this remark is consistent with (9), which states α = k. Under this construction, we have the following theorem, which specifies the MSR construction rule for the [n = 2k, k, L = 2]−DSS with ε = 1/(n − k).
Theorem 3 (Exact-repair MSR Code Construction for ε = 1/(n − k)). If all square sub-matrices of G are invertible, the code designed by Construction 1 is an MSR code for the [n, k, L] = [2k, k, 2]−DSS with ε = 1/(n − k).
Proof. See Appendix C.
The following result suggests an explicit construction of an
MSR code using the finite field.
Corollary 1. Applying Construction 1 with the encoding matrix G set to a k² × k² Cauchy matrix [13] achieves the MSR point for an [n = 2k, k, L = 2]−DSS. A finite field of size 2k² suffices to design G.
Proof. The proof follows directly from Theorem 3 and the fact that all sub-matrices of a Cauchy matrix have full rank, as stated in [14]. Moreover, a Cauchy matrix of size n × n can be
Fig. 6: Repairing a failed node in proposed MSR code example for
n = 4, k = 2, L = 2
Fig. 5: MSR example for n = 4, k = 2, L = 2
constructed using a finite field of size 2n, according to [15].
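As a side note (not from the paper), the defining property of a Cauchy matrix, Ci,j = 1/(xi + yj) with distinct xi, distinct yj and all sums nonzero, is easy to instantiate. The sketch below builds one over the rationals rather than over a finite field of size 2k².

```python
from fractions import Fraction

def cauchy_matrix(xs, ys):
    """Cauchy matrix C[i][j] = 1 / (x_i + y_j).

    Every square sub-matrix is invertible provided the x_i are distinct,
    the y_j are distinct and all sums x_i + y_j are nonzero. Built over the
    rationals here; the paper uses a finite field.
    """
    assert len(set(xs)) == len(xs) and len(set(ys)) == len(ys)
    return [[Fraction(1, x + y) for y in ys] for x in xs]

k = 2
C = cauchy_matrix(list(range(1, k*k + 1)), list(range(k*k + 1, 2*k*k + 1)))
print(C[0])
```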
An example of an MSR code designed by Construction 1 is illustrated in Fig. 5, for n = 4, k = 2, L = 2. This coding scheme utilizes the Cauchy matrix
G = [ 7 2 3 4 ; 2 7 4 3 ; 3 4 7 2 ; 4 3 2 7 ]      (13)
over the finite field GF(2³) with primitive polynomial x³ + x + 1. The element aα² + bα + c of GF(2³) is denoted by the decimal value of (abc)₂, where α is the primitive element. For example, α + 1 is denoted by 3 = (011)₂ in the generator matrix G. When [n, k, L, ε] = [4, 2, 2, 1/2], the system parameters are
α = 2, M = 4, βI = 2, βc = 1
from Proposition 2, which holds for the example in Fig. 5.
Here we show that the proposed coding scheme satisfies two
properties: 1) exact regeneration of any failed node and 2) recovery of M = 4 message symbols {m1,1 , m1,2 , m2,1 , m2,2 }
by contacting any k = 2 nodes.
1) Exact regeneration: Fig. 6 illustrates the regeneration
process. Suppose that node N (1, 1) containing the message
m1 = [m1,1 , m1,2 ] fails. Then, node N (1, 2) transmits βI = 2
symbols, m2,1 and m2,2 . Nodes N (2, 1) and N (2, 2) transmit
βc = 1 symbol each, for example p1,1 and p2,2 , respectively.
Then, from the received symbols of m2,1 , m2,2 , p1,1 , p2,2 and
matrix G, we obtain
[y1 ; y2] := [p1,1 − G1,3 m2,1 − G1,4 m2,2 ; p2,2 − G4,3 m2,1 − G4,4 m2,2] = [7 2 ; 4 3] [m1,1 ; m1,2].
Thus, the contents of the failed node can be regenerated by
[m1,1 ; m1,2] = [7 2 ; 4 3]⁻¹ [y1 ; y2] = [3 2 ; 4 7] [y1 ; y2],
where the matrix inversion is over GF(2³). Note that the
exact regeneration property holds irrespective of the contents
transmitted by N (2, 1) and N (2, 2), since the encoding matrix
is a Cauchy matrix, all submatrices of which are invertible.
2) Data recovery: First, if DC contacts two systematic
nodes, the proof is trivial. Second, contacting two parity
nodes can recover the original message since G is invertible.
Third, suppose that DC contacts one systematic node and one
parity node, for example, N (1, 1) and N (1, 4). Then, DC
can retrieve message symbols m1,1 , m1,2 and parity symbols
p2,1 , p2,2 . Using the retrieved symbols and the information on
the encoding matrix G, DC additionally obtains
[z1 ; z2] := [p2,1 − G3,1 m1,1 − G3,2 m1,2 ; p2,2 − G4,1 m1,1 − G4,2 m1,2] = [7 2 ; 2 7] [m2,1 ; m2,2].
Thus, DC obtains
[m2,1 ; m2,2] = [7 2 ; 2 7]⁻¹ [z1 ; z2] = [1 3 ; 3 1] [z1 ; z2],
which completes the data recovery property of the suggested
code.
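The GF(2³) arithmetic used above is easy to check in a few lines of code. The sketch below is an illustration (not from the paper); it implements multiplication modulo the primitive polynomial x³ + x + 1 and verifies the two 2 × 2 inverses quoted in the repair and data-recovery steps.

```python
def gf8_mul(a, b):
    """Multiply in GF(2^3) with primitive polynomial x^3 + x + 1 (0b1011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:          # reduce once the degree reaches 3
            a ^= 0b1011
    return r

def gf8_inv(a):
    """Brute-force multiplicative inverse in GF(2^3)."""
    return next(x for x in range(1, 8) if gf8_mul(a, x) == 1)

def inv2x2(M):
    """Inverse of a 2x2 matrix over GF(2^3); subtraction equals XOR."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = gf8_mul(a, d) ^ gf8_mul(b, c)
    di = gf8_inv(det)
    return [[gf8_mul(di, d), gf8_mul(di, b)],
            [gf8_mul(di, c), gf8_mul(di, a)]]

# Sub-matrices of the Cauchy matrix (13) used in the repair/recovery examples.
assert inv2x2([[7, 2], [4, 3]]) == [[3, 2], [4, 7]]
assert inv2x2([[7, 2], [2, 7]]) == [[1, 3], [3, 1]]
```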
V. C ONCLUSION
A class of MSR codes for clustered distributed storage
modeled in [5] has been constructed. The proposed coding
schemes can be applied in practical data centers with multiple
racks, where the available cross-rack bandwidth is limited
compared to the intra-rack bandwidth. Two important cases of ε = 0 and ε = 1/(n − k) are considered, where ε = βc/βI represents the ratio of available cross- to intra-cluster repair bandwidth. Under the constraint of zero cross-cluster repair bandwidth (ε = 0), appropriate application of two locally repairable codes suggested in [9], [10] is shown to achieve the MSR point of clustered distributed storage. Moreover, an explicit MSR coding scheme is suggested for ε = 1/(n − k),
when the system parameters satisfy n = 2k and L = 2. The
proposed coding scheme can be implemented in a finite field,
by using a Cauchy generator matrix.
A PPENDIX A
P ROOF OF T HEOREM 1
We focus on code C, the explicit (n, r, d, M, α)-LRC constructed in Section V of [9]. This code has the parameters
(n, r, d = n − k + 1, M, α = ((r + 1)/r)(M/k)),      (A.1)
where r is the repair locality and d is the minimum distance, and the other parameters (n, M, α) have physical meanings identical to those in the present paper. By setting r = nI − 1, the code has node capacity
α = (nI/(nI − 1))(M/k) = M/(k(1 − 1/nI)) = M/(k − q),      (A.2)
where the last equality holds from the nI | k condition and the definition of q in (4).
We first prove that any node failure can be exactly regenerated by using the system parameters in (6). According to
the description in Section V-B of [9], any node is contained
in a unique corresponding repair group of size r + 1 = nI ,
so that a failed node can be exactly repaired by contacting
r = nI − 1 other nodes in the same repair group. This implies
that a failed node does not need to contact other repair groups
in the exact regeneration process. By setting each repair group
as a cluster (note that each cluster contains nI = n/L nodes),
we can achieve
βc = 0.      (A.3)
Moreover, Section V-B of [9] illustrates that the exact regeneration of a failed node is possible by contacting the entire
symbols contained in r = nI − 1 nodes in the same repair
group, and applying the XOR operation. This implies βI = α,
which, combined with (1) and (A.2), results in
γ = (nI − 1)βI = (nI − 1) M/(k − q).      (A.4)
From (A.2) and (A.4), we can
conclude that code C satisfies the exact regeneration of any
failed node using the parameters in (6).
Now we prove that contacting any k nodes suffices to
recover original data in the clustered DSS with code C applied.
Note that the minimum distance is d = n − k + 1 from (A.1).
Thus, the information from k nodes suffices to pick the correct
codeword. This completes the proof of Theorem 1.
A PPENDIX B
P ROOF OF T HEOREM 2
We first prove that the code C has minimum distance of d =
n−k+1, which implies that the original file of size M = k−q
can be recovered by contacting arbitrary k nodes. Second, we
prove that any failed node can be exactly regenerated under
the setting of (6). Recall that the [n0 , k0 , r0 ]−LRC constructed
in [10] has the following property, as stated in Theorem 1 of
[10]:
Lemma 1 (Theorem 1 of [10]). The code constructed in [10] has locality r0 and optimal minimum distance d = n0 − k0 − ⌈k0/r0⌉ + 2, when (r0 + 1) | n0.
Note that we consider code C of optimal [n0 , k0 , r0 ] =
[n, k − q, nI − 1]−LRC. Since r0 + 1 = nI divides n0 = n,
Lemma 1 can be applied. The result of Lemma 1 implies that
the minimum distance of C is
d = n − (k − q) − ⌈(k − q)/(nI − 1)⌉ + 2.      (B.1)
Since we consider the nI ∤ k case, we have
k = qnI + m,   (0 < m ≤ nI − 1)      (B.2)
from (5). Inserting (B.2) into (B.1), we have
d = n − (k − q) − ⌈((nI − 1)q + m)/(nI − 1)⌉ + 2 = n − (k − q) − (q + 1) + 2 = n − k + 1,      (B.3)
Fig. 7: Code construction for the ε = 0, nI ∤ k case
where the second last equality holds since 0 < m ≤ nI − 1
from (B.2). Thus, this proves that contacting arbitrary k nodes
suffices to recover the original source file.
Now, all we need to prove is that any failed node can be
exactly regenerated under the setting of system parameters
specified in Proposition 1. According to the rule illustrated
in [10], the construction of code C can be shown as in Fig. 7.
First, we have M = k − q source symbols {xi}, i = 1, . . . , k − q, to store reliably. By applying a (T, k − q) Reed–Solomon code to the source symbols, we obtain {zi}, i = 1, . . . , T, where T := L(nI − 1).
Then, we partition {zi }Ti=1 symbols into L groups, where each
group contains (nI − 1) symbols. Next, each group of {zi }
symbols is encoded by an (nI , nI − 1)−MDS code, which
result in a group of nI symbols of {yi }. Finally, we store
symbol ynI (l−1)+j in node N (l, j). By this allocation rule, yi
symbols in the same group are located in the same cluster.
Assume that N (l, j), the j th node at lth cluster, containing
ynI (l−1)+j symbol fails for l ∈ [L] and j ∈ [nI ]. From Fig.
7, we know that the (nI − 1) symbols {ynI(l−1)+s : s ∈ [nI], s ≠ j} stored in the lth cluster can decode the (nI, nI − 1)−MDS code for group l. Thus, the contents of ynI(l−1)+j can be recovered by retrieving symbols from nodes in the lth cluster (i.e.,
the same cluster where the failed node is in). This proves the
ability of exactly regenerating an arbitrary failed node. The
regeneration process satisfies
βc = 0, βI = α.
(B.4)
Moreover, note that the code in Fig. 7 has
M = (k − q)α
(B.5)
source symbols. Since the parameters obtained in (B.4) and (B.5) are consistent with Proposition 1, we can confirm that code C achieves the MSR point under the conditions ε = 0 and nI ∤ k.
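The layering of Fig. 7 (RS precoding followed by a per-cluster MDS code) is simple to prototype. The sketch below is an illustration only: a Vandermonde matrix over the reals stands in for the Reed–Solomon code, and the (nI, nI − 1) MDS code is realised as a single parity symbol.

```python
import numpy as np

def appendix_b_layout(x, L, n_I):
    """Sketch of the Fig. 7 construction for beta_c = 0.

    x: the k - q source symbols (plain reals here, not finite-field elements).
    A (T, k - q) Vandermonde code with T = L * (n_I - 1) plays the role of the
    Reed-Solomon precoding; each group of n_I - 1 symbols gets one parity.
    """
    T = L * (n_I - 1)
    V = np.vander(np.arange(1, T + 1), len(x), increasing=True)  # T x (k - q)
    z = V @ x                                    # RS-like precoding
    clusters = []
    for l in range(L):
        group = list(z[l * (n_I - 1):(l + 1) * (n_I - 1)])
        group.append(sum(group))                 # (n_I, n_I - 1) MDS parity
        clusters.append(group)                   # y-symbols stored in cluster l
    return clusters

print(appendix_b_layout(np.array([1.0, 2.0, 3.0]), L=2, n_I=3))
```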
A PPENDIX C
P ROOF OF T HEOREM 3
Recall that the code designed by Construction 1 allocates
systematic nodes at 1st cluster and parity nodes at 2nd cluster,
as illustrated in Fig. 8. Moreover, recall that the system
parameters for the [n, k, L] = [2k, k, 2]−DSS with ε = 1/(n − k) are
α = βI = k,   βc = 1,      (C.1)
Fig. 8: Code construction for the [n, k, L] = [2k, k, 2]−clustered DSS when ε = 1/(n − k)
from Proposition 2 and the definition ε = βc/βI. First, we
show that exact regeneration of systematic nodes (in the first
cluster) is possible using βI = k, βc = 1 in the [n, k, L] =
[2k, k, 2] DSS with Construction 1. We use the concept of the
projection vector to illustrate the repair process. For l ∈ [k], let vi,j(l) be the lth projection vector assigned to N(1, j) in repairing N(1, i). Similarly, let vi,j be the projection vector assigned to N(2, j) in repairing N(1, i). Assume that the node N(1, i) containing mi = [mi,1, mi,2, · · · , mi,k]T fails. Then, node N(1, j) transmits the βI = k symbols {mjT vi,j(l)}, l ∈ [k], while node N(2, j) transmits the βc = 1 symbol pjT vi,j. For simplicity, we set vi,j(l) = el and vi,j = ek, where ei is the k-dimensional standard basis vector with a 1 in the ith coordinate and 0's elsewhere. This means that node N(1, j) transmits
k symbols mj = [mj,1 , mj,2 , · · · , mj,k ]T it contains, while
N (2, j) transmits the last symbol it contains, i.e., the symbol
pj,k . Thus, the newcomer node for regenerating systematic
node N (1, i) obtains the following information
Mi := {mj,s : j ∈ [k] \ {i}, s ∈ [k]} ∪ {pj,k }kj=1 .
(C.2)
We now show how the newcomer node regenerates mi =
[mi,1 , mi,2 , · · · , mi,k ]T using information Mi . Recall that the
parity symbols and message symbols are related as in the
following k² equations:
[p1 ; p2 ; · · · ; pk] = G [m1 ; m2 ; · · · ; mk],      (C.3)
obtained from (10) and (12). Among these k² parity symbols, the k parity symbols received by the newcomer node can be expressed as
[p1,k ; p2,k ; · · · ; pk,k] = [Gk,· ; G2k,· ; · · · ; Gk²,·] [m1,1 ; m1,2 ; · · · ; mk,k],      (C.4)
where Gl,· denotes the lth row of G; that is, the matrix in (C.4) is generated by removing k(k − 1) rows from G. Since we are aware of the k(k − 1) message symbols {mj,s : j ∈ [k] \ {i}, s ∈ [k]} and the entries of G, subtracting the constant known values from (C.4) results in
[y1 ; y2 ; · · · ; yk] = [Glk,(i−1)k+t]l,t∈[k] [mi,1 ; mi,2 ; · · · ; mi,k],      (C.5)
where
yl := pl,k − Σj∈[k]\{i} Σs∈[k] Glk,(j−1)k+s mj,s      (C.6)
for l ∈ [k]. Note that the matrix in (C.5) can be obtained by
removing k(k − 1) columns from the matrix in (C.4). Since
every square sub-matrix of G is invertible, we can obtain mi =
[mi,1 , mi,2 , · · · , mi,k ]T , which completes the proof for exactly
regenerating the failed systematic node.
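To see the systematic-node repair of (C.3)–(C.6) in action, here is a small numerical sketch (not from the paper). It uses a real-valued Cauchy matrix in place of the finite-field encoding matrix, works with the stacked form (C.3), and checks that the failed node's content is recovered exactly.

```python
import numpy as np

def cauchy(n):
    x = np.arange(1, n + 1); y = np.arange(n + 1, 2 * n + 1)
    return 1.0 / (x[:, None] + y[None, :])      # all square sub-matrices invertible

k = 2
G = cauchy(k * k)                               # stands in for the k^2 x k^2 matrix G
m = np.random.rand(k, k)                        # m[j] is the content of node N(1, j+1)
p = (G @ m.reshape(-1)).reshape(k, k)           # stacked parities, as in (C.3)

i = 0                                           # repair systematic node N(1, i+1)
rows = [l * k + (k - 1) for l in range(k)]      # rows of G giving the received p_{l,k}
A = G[np.ix_(rows, [i * k + s for s in range(k)])]   # matrix of (C.5)
known = m.copy(); known[i] = 0                  # symbols sent by surviving nodes N(1, j)
y = p[:, k - 1] - G[rows] @ known.reshape(-1)   # subtraction step (C.6)
assert np.allclose(np.linalg.solve(A, y), m[i]) # exact regeneration of m_i
```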
Second, we prove that exact regeneration of the parity nodes (in the second cluster) is possible. Let ωi,j(l) be the lth projection vector assigned to N(2, j) in repairing N(2, i). Similarly, let ωi,j be the projection vector assigned to N(1, j) in repairing N(2, i). Assume that the parity node N(2, i) fails, which contains pi = [pi,1, pi,2, · · · , pi,k]T. Then, node N(2, j) transmits the βI = k symbols {pjT ωi,j(l)}, l ∈ [k], while node N(1, j) transmits the βc = 1 symbol mjT ωi,j. For simplicity, we set ωi,j(l) = el and ωi,j = ek. This means that node N(2, j) transmits the k symbols pj = [pj,1, pj,2, · · · , pj,k]T it contains, while N(1, j) transmits the last symbol it contains, i.e., the symbol mj,k. Thus, the newcomer node for regenerating parity node N(2, i) obtains the following information:
Pi := {pj,s : j ∈ [k] \ {i}, s ∈ [k]} ∪ {mj,k : j ∈ [k]}.      (C.7)
We show how the newcomer node regenerates pi = [pi,1, pi,2, · · · , pi,k]T using the information Pi. Among the k² parity symbols in (C.3), the k(k − 1) parity symbols received by the newcomer node can be expressed as
[p1 ; · · · ; pi−1 ; pi+1 ; · · · ; pk] = [G1(1) · · · G1(k) ; · · · ; Gi−1(1) · · · Gi−1(k) ; Gi+1(1) · · · Gi+1(k) ; · · · ; Gk(1) · · · Gk(k)] [m1 ; m2 ; · · · ; mk] = G′ m,      (C.8)
where Gi(j) is defined in Construction 1. Note that G′ is a
k(k − 1) × k² matrix generated by removing the lth rows from G, for l ∈ {(i − 1)k + 1, (i − 1)k + 2, · · · , ik}. Since we know the values of the k message symbols {mj,k : j ∈ [k]} and the entries of G, subtracting the constant known values from (C.8) results in
G′′ m′,      (C.9)
where G′′ is generated by removing the lth columns from G′ for l ∈ {k, 2k, · · · , k²}. Similarly, m′ is generated by removing the lth rows from m for l ∈ {k, 2k, · · · , k²}. Thus, G′′ is an invertible k(k − 1) × k(k − 1) matrix, so that we can obtain m′, which contains
P̃i = {mj,s : j ∈ [k], s ∈ [k − 1]}.      (C.10)
Since Pi ∪ P̃i contains every message symbol {mj,s : j, s ∈
[k]}, we can regenerate pi = [pi,1 , pi,2 , · · · , pi,k ]T using
(C.3). This completes the proof for exactly regenerating the
failed parity node.
Finally, we prove that the M = k² message symbols can be obtained by contacting arbitrary k nodes. In this proof, we use a slightly modified notation for representing message and parity symbols. For j, s ∈ [k], the message symbol mj,s and the parity symbol pj,s are denoted as m(j−1)k+s and p(j−1)k+s, respectively. Then, (C.3) is expressed as
[p1 ; p2 ; · · · ; pk²] = G [m1 ; m2 ; · · · ; mk²].      (C.11)
Suppose that the data collector (DC) contacts e nodes from the 1st cluster, and k − e nodes from the 2nd cluster, for e ∈ {0, 1, · · · , k}. Then, DC obtains k(k − e) parity symbols and ke message symbols. Since there exist a total of M = k² message symbols, the number of message symbols that DC cannot obtain is M − ke = k(k − e). Let the parity symbols obtained by DC be pi1, · · · , pik(k−e), and the message symbols not obtained by DC be mj1, · · · , mjk(k−e). Then, the known parities can be expressed as
[pi1 ; pi2 ; · · · ; pik(k−e)] = G′ [m1 ; m2 ; · · · ; mk²],      (C.12)
where G′ is a k(k − e) × k² matrix obtained by taking the lth rows from G, for l ∈ {it : t ∈ [k(k − e)]}. Since we know ke message symbols and the elements of G, subtracting the known constant values from (C.12) results in
G′′ [mj1 ; mj2 ; · · · ; mjk(k−e)],      (C.13)
where G′′ is a k(k − e) × k(k − e) matrix generated by taking the lth columns from G′ for l ∈ {jt : t ∈ [k(k − e)]}. Since G′′ is invertible, we obtain the unknown message symbols {mji : i ∈ [k(k − e)]}. This completes the proof.
APPENDIX D
PROOF OF PROPOSITIONS

A. Proof of Proposition 1
From Corollary 3 of [6], the MSR point for ε = 0 is given by
(α, γ) = ( M/ζnI−2 , (M/λnI−2)(1 − δnI−2/ζnI−2) ),      (D.1)
where {ζi}, {λi}, {δi} are defined in [6]. This paper does not review the explicit form of the definitions, but shows how (α, γ) looks like. From the proof of Lemma 5 of [6], we have
1) if m = nI − 1:
λnI−2 = (q + 1)/(nI − 1),      (D.2)
δnI−2 = (q + 1)(nI − 2) = qnI − 2q + nI − 2 = qnI − 2q + m − 1,      (D.3)
ζnI−2 = k − q,      (D.4)
2) else (m = 0, 1, · · · , nI − 2):
λnI−2 = q/(nI − 1),      (D.5)
δnI−2 = k − 2q,      (D.6)
ζnI−2 = k − q,      (D.7)
where q and m are defined in (4) and (5). When m = nI − 1, (D.3) can be expressed as
δnI−2 = qnI − 2q + m − 1 = k − 2q − 1,      (D.8)
where the last equality is from (5). Thus, from (D.8), (D.4) and (D.2), we have that
(ζnI−2 − δnI−2)/λnI−2 = nI − 1      (D.9)
holds for the m = nI − 1 case. Similarly, using (D.5), (D.6) and (D.7), we can confirm that (D.9) holds for the 0 ≤ m ≤ nI − 2 case. Inserting (D.4), (D.7), (D.9) into (D.1), we obtain
α = M/(k − q),      (D.10)
γ = (nI − 1) M/(k − q).      (D.11)
Since γ = (nI − 1)βI for βc = 0 from (1), we obtain
βI = α = M/(k − q),      (D.12)
which completes the proof.

B. Proof of Proposition 2
We consider the βc = 1 case without losing generality. This implies that
βI = 1/ε = n − k      (D.13)
according to the definition ε = βc/βI. Now, we observe the expressions for α and M. From Corollary 3 of [6], the MSR point for ε = 1/(n − k) is illustrated as
(α, γ) = ( M/k , (M/k)(1/sk−1) ),      (D.14)
where
sk−1 = 1/((nI − 1) + ε(n − nI)) = 1/(nI − 1 + (n − nI)/(n − k))      (D.15)
from the definition of {si} in [6] and the setting of ε = 1/(n − k). Combining (D.14) and (D.15) results in (8).
Note that γ in (1) can be expressed as
γ = (n − nI)βc + (nI − 1)βI = (n − nI) + (nI − 1)(n − k),      (D.16)
where the last equality holds due to (D.13). Combining (8) and (D.16), we obtain
γ = (M/k) · γ/(n − k),
which results in
M = k(n − k).      (D.17)
Using α = M/k in (D.14), we have
α = n − k.      (D.18)
This completes the proof.
R EFERENCES
[1] A. G. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright, and K. Ramchandran, “Network coding for distributed storage systems,” IEEE
Transactions on Information Theory, vol. 56, no. 9, pp. 4539–4551,
2010.
[2] K. Rashmi, N. B. Shah, P. V. Kumar, and K. Ramchandran, “Explicit
construction of optimal exact regenerating codes for distributed storage,”
in Communication, Control, and Computing, 2009. Allerton 2009. 47th
Annual Allerton Conference on. IEEE, 2009, pp. 1243–1249.
[3] V. R. Cadambe, S. A. Jafar, H. Maleki, K. Ramchandran, and C. Suh,
“Asymptotic interference alignment for optimal repair of mds codes in
distributed storage,” IEEE Transactions on Information Theory, vol. 59,
no. 5, pp. 2974–2987, 2013.
[4] T. Ernvall, “Codes between mbr and msr points with exact repair
property,” IEEE Transactions on Information Theory, vol. 60, no. 11,
pp. 6993–7005, 2014.
[5] J. y. Sohn, B. Choi, S. W. Yoon, and J. Moon, “Capacity of clustered
distributed storage,” in 2017 IEEE International Conference on Communications (ICC), May 2017.
[6] J. Sohn, B. Choi, S. W. Yoon, and J. Moon, “Capacity of clustered
distributed storage,” CoRR, vol. abs/1710.02821, 2017. [Online].
Available: http://arxiv.org/abs/1710.02821
[7] N. Prakash, V. Abdrashitov, and M. Médard, “The storage vs repairbandwidth trade-off for clustered storage systems,” arXiv preprint
arXiv:1701.04909, 2017.
[8] Y. Hu, X. Li, M. Zhang, P. P. Lee, X. Zhang, P. Zhou, and D. Feng,
“Optimal repair layering for erasure-coded data centers: From theory to
practice,” arXiv preprint arXiv:1704.03696, 2017.
[9] D. S. Papailiopoulos and A. G. Dimakis, “Locally repairable codes,”
IEEE Transactions on Information Theory, vol. 60, no. 10, pp. 5843–
5855, 2014.
[10] I. Tamo, D. S. Papailiopoulos, and A. G. Dimakis, “Optimal locally
repairable codes and connections to matroid theory,” IEEE Transactions
on Information Theory, vol. 62, no. 12, pp. 6661–6671, 2016.
[11] M. A. Tebbi, T. H. Chan, and C. W. Sung, “A code design framework for
multi-rack distributed storage,” in Information Theory Workshop (ITW),
2014 IEEE. IEEE, 2014, pp. 55–59.
[12] S. Sahraei and M. Gastpar, “Increasing availability in distributed storage
systems via clustering,” arXiv preprint arXiv:1710.02653, 2017.
[13] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas
(Second Edition). Princeton University Press, 2009. [Online]. Available:
http://www.jstor.org/stable/j.ctt7t833
[14] N. B. Shah, K. V. Rashmi, P. V. Kumar, and K. Ramchandran, “Explicit codes minimizing repair bandwidth for distributed storage,” in
Information Theory (ITW 2010, Cairo), 2010 IEEE Information Theory
Workshop on. IEEE, 2010, pp. 1–5.
[15] C. Suh and K. Ramchandran, “Exact-repair mds code construction using
interference alignment,” IEEE Transactions on Information Theory,
vol. 57, no. 3, pp. 1425–1442, 2011.
| 7 |
arXiv:1604.08860v4 [cs.DS] 25 Nov 2016
Designing optimal- and fast-on-average pattern
matching algorithms
Gilles Didier and Laurent Tichit
Aix-Marseille Université, CNRS, Centrale Marseille, I2M UMR7373, Marseille, France
E-mail: {gilles.didier, laurent.tichit}@univ-amu.fr
March 26, 2018
Abstract
Given a pattern w and a text t, the speed of a pattern matching
algorithm over t with regard to w, is the ratio of the length of t to the
number of text accesses performed to search w into t. We first propose a
general method for computing the limit of the expected speed of pattern
matching algorithms, with regard to w, over iid texts. Next, we show how
to determine the greatest speed which can be achieved among a large class
of algorithms, altogether with an algorithm running this speed. Since the
complexity of this determination makes it impossible to deal with patterns
of length greater than 4, we propose a polynomial heuristic. Finally, our
approaches are compared with 9 pre-existing pattern matching algorithms
from both a theoretical and a practical point of view, i.e. both in terms of
limit expected speed on iid texts, and in terms of observed average speed
on real data. In all cases, the pre-existing algorithms are outperformed.
1
Introduction
We focus on algorithms solving the online string matching problem, which consists in reporting all, and only the occurrence positions of a pattern w in a text
t (online meaning that no pre-processing of the text is allowed). As one of the
oldest problems addressed in computer science, it has been extensively studied.
We refer to [10] for a comprehensive list and an evaluation of all the pattern
matching algorithms developed so far. By the authors’ count, more than 80
algorithms have already been proposed, among which more than a half were
published during the last ten years. This fact sounds quite paradoxical, since
the Morris-Pratt algorithm, which is optimal in terms of worst case analysis,
dates back to 1970.
A possible explanation is that there is wide gap between the worst case complexity of algorithms and their computation times on real data. For instance,
there are pattern matching algorithms with non-linear worst case complexities,
which perform much better than Morris-Pratt on English texts. Basically, the
1
average case analysis is way more suited to assess the relevance of a pattern
matching algorithm from a practical point of view. The average case analysis of some pattern matching algorithms, notably Boyer-Moore-Horspool and
Knuth-Morris-Pratt, has already been carried out from various points of view
[27, 12, 4, 3, 16, 21, 22, 25]. We provide here a general method for studying
the limit average behavior of a pattern algorithm over iid texts. More precisely,
following [18], we consider the limit expectation of the ratio of the text length to
the number of text accesses performed by an algorithm for searching a pattern
w in iid texts. This limit expectation is called the asymptotic speed of the algorithm with regard to w under the iid model. The computation of the asymptotic
speed is based on w-matching machines which are automata-like structures able
to simulate the behavior of a pattern matching algorithm while searching the
pattern w. The underlying idea is the same as in [18, 19, 20, 17] and can be
seen as a generalization of the string matching automaton [7].
In the companion paper, G. Didier provided a theoretical analysis of the
asymptotic speed of pattern matching algorithms over iid texts [8]. In particular,
he showed that, for a given pattern w, the greatest asymptotic speed among a
large class of pattern matching algorithms, is achieved by a w-matching machine
in which the states are essentially subsets of positions of w. Such machines are
called strategies below.
We provide here a brute force algorithm computing the Fastest strategy for a
given pattern w and the frequencies of an iid model. The algorithm is based on
an original structure associated to the pattern w and called its position lattice,
which gives a full representation of the overlap relations between the subsets of
positions of w.
Since the brute force algorithm cannot be applied on patterns of length
greater than 4, because of its (very high) time-complexity, we propose a polynomial K-Heuristic, in which the polynomial order K may be chosen by the
user.
The Fastest and K-Heuristic approaches are finally compared with 9
pre-existing pattern matching algorithms:
• from a theoretical point of view, by computing their limit expected speeds
with regard to various patterns and iid models,
• from a practical point of view, by computing their average speeds over
two sources (an English text and a DNA sequence).
In both cases, the Fastest and K-Heuristic (with K large enough) approaches
outperform the pre-existing algorithms.
The software and the data used to perform the tests are available at https://github.com/gilles-didier/Matchines.git.
The rest of the paper is organized as follows. Section 2 presents the notations
and recalls some concepts and results from [8]. It is followed by two sections
which introduce the central objects of this work: the strategies and the position
lattice of a pattern. In particular, we provide an algorithm computing the
position lattice of a given pattern. Section 5 shows how to use the position
2
lattice of a pattern to obtain the Fastest strategy with regard to this pattern
and an iid model. In Section 6, we provide a polynomial heuristic allowing to
compute fast strategies. Section 7 presents the results of various comparisons
between 9 pre-existing pattern matching algorithms, the K-Heuristic and, each
time it is possible, the Fastest strategy. The results are discussed in the last
section.
2
Notations and definitions
2.1
Notations and general definition
For all finite sets S, P(S) is the power set of S and |S| is its cardinal. An
alphabet is a finite set A of elements called letters or symbols.
A word, a text or a pattern on A is a finite sequence of symbols of A. We put
|v| for the length of a word v. Words are indexed from 0, i.e. v = v0 v1 . . . v|v|−1 .
We write v[i,j] for the subword of v starting at its position i and ending at its
position j, i.e. v[i,j] = vi vi+1 . . . vj . The concatenate of two words u and v is
the word uv = u0 u1 . . . u|u|−1 v0 v1 . . . v|v|−1 .
For any length n ≥ 0, we note An the S
set of words of length n on A, and
∞
?
A , the set of finite words on A, i.e. A? = n=0 An .
Unless otherwise specified, all the texts and patterns considered below are
on a fixed alphabet A.
A pattern matching algorithm takes a pattern w and a text t as inputs and
reports all, and only the occurrence positions of w in t. For all patterns w, we
say that two pattern matching algorithms are w-equivalent if, for all texts t,
they access exactly the same positions of t on the input (w, t).
2.2
Matching machines and the generic algorithm [8]
For all patterns w, a w-matching machine is 6-uple (Q, o, F, α, δ, γ) where
• Q is a finite set of states,
• o ∈ Q is the initial state,
• F ⊂ Q is the subset of pre-match states,
• α : Q → N is the next-position-to-check function, which is such that for
all q ∈ F , α(q) < |w|,
• δ : Q × A → Q is the transition state function,
• γ : Q × A → N is the shift function.
By convention, the set of states of a matching machine always contains a sink
state , which is such that, for all symbols x ∈ A, δ( , x) = and γ( , x) = 0.
The order OΓ of a matching machine Γ = (Q, o, F, α, δ, γ) is defined as
OΓ = maxq∈Q {α(q)}.
3
The w-matching machines carry the same information as the Deterministic
Arithmetic Automatons defined in [19, 20].
The generic algorithm takes a w-matching machine and a text t as inputs
and outputs positions of t (Algorithm 1).
input : a w-matching machine (Q, o, F, α, δ, γ) and a text t
output: all the occurrence positions of w in t
1  (q, p) ← (o, 0)
2  while p ≤ |t| − |w| do
3      if q ∈ F and tp+α(q) = wα(q) then
4          print “occurrence at position p”
5      (q, p) ← (δ(q, tp+α(q)), p + γ(q, tp+α(q)))
Algorithm 1: The generic algorithm
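For readers who prefer running code, a direct Python transcription of the generic algorithm is sketched below. Representing the machine by plain dictionaries is a choice of this sketch, not of the paper or of its accompanying software.

```python
def generic_match(machine, w, t):
    """Run the generic algorithm (Algorithm 1) and return the reported positions.

    machine: dict with keys 'o' (initial state), 'F' (set of pre-match states),
             'alpha' (state -> next position to check),
             'delta' ((state, symbol) -> state) and
             'gamma' ((state, symbol) -> shift). Sketch representation only.
    """
    occurrences = []
    q, p = machine['o'], 0
    while p <= len(t) - len(w):
        x = t[p + machine['alpha'][q]]           # the single text access of this iteration
        if q in machine['F'] and x == w[machine['alpha'][q]]:
            occurrences.append(p)                # pre-match state and match at alpha(q)
        # simultaneous update, as in line 5: both delta and gamma use the old state
        q, p = machine['delta'][(q, x)], p + machine['gamma'][(q, x)]
    return occurrences
```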
Each component of a w-matching machine makes sense in regard to the way
it is used by the generic algorithm. The pre-match states in F are those which
lead to report an occurrence of the pattern at the current position, if the nextposition-to-check of the pattern matches the corresponding position in the text
(Line 3 of Algorithm 1). The condition α(q) < |w| for all q ∈ F in the definition
of w-matching machines, is technical and used in [8].
A w-matching machine Γ is valid if, for all texts t, the execution of the generic
algorithm on the input (Γ, t) outputs all, and only the occurrence positions of w
in t. Since one has to check all the positions of the pattern w before concluding
that it occurs somewhere in a text, the order of a valid w-matching machine is
at least |w| − 1.
We claim that for all the pattern matching algorithms developed so far and
all patterns w, there exists a w-matching machine Γ which is such that, for
all texts t, the generic algorithm and the pattern matching algorithm access
exactly the same positions of t on the inputs (Γ, t) and (w, t) respectively [8].
For instance, Figure 1 displays a abb-matching machine which accesses the same
positions as the naive algorithm while searching abb.
2.3
Full-memory expansion – standard matching machines
[8]
We present here a transformation on matching machines which split their states
according to the text positions read from the current position during an execution of the generic algorithm. The main point of this transformation is that the
average complexity of matching machines such obtained may then be computed
through algebraic methods (Sections 2.4 and 2.5).
For all n ∈ N, Rn is the set of subsets H of {0, . . . , n} × A verifying that,
for all i ∈ {0, . . . , n}, there exists at most one pair in H with i as first entry.
For all H ∈ Rn , we put f (H) for the set comprising all the first entries of
4
Figure 1: abb-matching machine of the naive algorithm. The next-position-to
check are displayed below all states S0, S1 and S2. Edges from states Si are
labelled with “x/γ(Si, x)” for all symbols x. The transition associated to a
match is blue-colored.
the pairs in H, namely
f (H) = {i | ∃x ∈ A with (i, x) ∈ H}.
For all k ∈ N and H ∈ Rn , the k-shifted of H is
k
←
−
H = {(u − k, y) | (u, y) ∈ H with u ≥ k},
i.e. the subset of Rn obtained by subtracting k from the first entries of the pairs
in H and by keeping only the pairs with non-negative first entries.
The full memory expansion of a w-matching machine Γ = (Q, o, F, α, δ, γ)
is the w-matching machine Γ? obtained by removing the unreachable states of
Γ0 = (Q0 , o0 , F 0 , α0 , δ 0 , γ 0 ), defined as:
• Q0 = Q × ROΓ
• o0 = (o, ∅)
• α0 ((q, H)) = α(q)
• γ 0 ((q, H), x) = γ(q, x)
• F 0 = F × ROΓ
•
δ 0 ((q, H), x)
γ(q,x)
←−−−−−−−−−−−
(δ(q, x), H ∪ {(α(q), x)}) if ∀a ∈ A, (α(q), a) 6∈ H
=
if ∃a 6= x s.t. (α(q), a) ∈ H
γ(q,x)
←
−
(δ(q, x), H )
if (α(q), x) ∈ H
5
Figure 2: Full memory expansion of the abb-matching machine of Figure 1.
By construction, at all iterations of the generic algorithm on the input (Γ? , t),
if the current state and position are (q, H) and p, respectively, then the positions
of {j + p | j ∈ f (H)} are exactly the positions of t greater than p accessed so far
(the second entries of the corresponding elements of H give the symbols read).
For all texts t, the generic algorithm access the same positions of t on the
inputs (Γ, t) and (Γ? , t) [8].
Let us remark that the full memory expansion of the full memory expansion
of a matching machine is equal to its full memory expansion (up to a state
isomorphism). A w-matching machine Γ is standard if each state q of Γ appears
in a unique pair/state of its full memory expansion, or, equivalently, if it is equal
to its full memory expansion. For instance the abb-matching machine of Figure
1 is not standard. Since the matching machine of Figure 2 is a full memory
expansion, it is standard. For all states q of a standard matching machine Γ,
we put hΓ (q) for the second entry of the unique pair/state of Γ? in which q
appears.
We implemented a basic algorithm computing the full-memory expansion
Γ? = (Q? , o? , F ? , α? , δ ? , γ ? ) of a w-matching machine Γ = (Q, o, F, α, δ, γ) in
O(|w|.|Q? |) time. We have |Q? | ≤ (A + 1)|w| |Q| but the size of Q? may vary a
lot with regard to the matching machine/algorithm considered.
A w-matching machine Γ is compact if it contains no state q which always
leads to the same state. Formally, Γ = (Q, o, F, α, δ, γ) is compact if there is
no q ∈ Q such that one of the following assertions holds:
1. there exists a symbol x with δ(q̇, x) 6=
y 6= x;
and δ(q̇, y) =
for all symbols
2. for all symbols x and y, we have both δ(q̇, x) = δ(q̇, y) and γ(q̇, x) =
γ(q̇, y).
Basically, a non-compact machine performs useless text accesses. In [8], it is
shown that any w-matching machine can be turned into a compact (and faster)
machine.
6
2.4
iid and Markov models
An independent identically distributed (iid) model (aka Bernoulli model) is fully
specified by a probability distribution π on the alphabet (i.e. π(x) is the probability of the symbol x in the model). Such a model will be simply referred to
as “π” below. Under π, the probability of a text t is
|t|−1
Y
pπ (t) =
π(ti ).
i=0
A Markov model M over a given set of states Q is a 2-uple (πM , δM ), where
πM is a probability distribution on Q (the initial distribution) and δM associates a pair of states (q, q 0 ) with the probability for q to be followed by q 0 (the
transition probability). Under a Markov model M = (πM , δM ), the probability
of a sequence s of states is
|s|−1
pM (s) = πM (s0 )
Y
δM (si , si+1 ).
i=0
Theorem 1 ([8]). Let Γ = (Q, o, F, α, δ, γ) be a w-matching machine. If a text
t follows an iid model and Γ is standard then the sequence of states parsed by
the generic algorithm on the input (Γ, t) follows a Markov model (πM , δM ).
Proof. Whatever the text model and the machine, the sequence of states always
starts with o with probability 1. We have πM (o) = 1 and πM (q) = 0 for all
q 6= o.
The probability δM (q, q 0 ) that the state q 0 follows the state q during an
execution of the generic algorithm, is equal to:
• 1, if there exists a symbol x such that δ(q, x) = q 0 and (α(q), x) ∈ hΓ (q),
i.e. if the relative position α(q) was already checked with x occurring at
it,
X
•
π(x), otherwise,
x s.t. δ(q, x) = q 0
independently of the previous states.
2.5
Asymptotic speed
Let M be a text model and A be an algorithm. The w-asymptotic speed of
A under M is the limit expectation, under M, of the ratio of the text length
to the number of text accesses performed by A [8]. Namely, by putting aA (t)
for the number of text accesses performed by A to parse t and pM (t) for the
probability of t with regard to M, the asymptotic speed of A under M is
ASM (A) = lim
n→∞
X
t∈An
7
|t|
pM (t).
aA (t)
The asymptotic speed ASM (Γ) of a w-matching machines Γ is that the
generic algorithm with Γ as first input. From Theorem 5 of [8], the asymptotic speed of a standard w-matching machine Γ = (Q, o, F, α, δ, γ) under an
iid model π exists and is given by
X
ASπ (Γ) =
βq E(q),
(1)
q∈Q
where (βq )q∈Q are the limit frequencies of the states of the Markov model associated to Γ and π in Theorem 1, and
γ(q, x)
if (α(q), x) ∈ hΓ (q)),
P
E(q) =
γ(q,
x)π
otherwise.
x
x
Computing the asymptotic speed of a pattern matching algorithm, with
regard to a pattern w and an iid model π is performed by following the stages
below.
1. We get a w-matching machine Γ which simulates the behavior of the algorithm while looking for w (Figure 1). The transformation of the 9 algorithms presented in Section 7 (and a few others, see our GitHub repository)
into w-matching machines, given w, has been implemented.
2. We obtain the full-memory expansion Γ? of Γ (Figure 2, Section 2.3).
3. We compute the limit frequencies of the Markov model associated to Γ?
and π in Theorem 1. This mainly needs to solve a system of linear equations of dimension |Q? |.
4. We finally obtain the asymptotic speed of the algorithm from these limit
frequencies, π and Γ? by using Equation 1.
The most time-consuming stage is the computation of the limit frequencies,
which has O(|Q? |3 ) time complexity, where |Q? |, the number of states of the
full memory expansion, is smaller than (A + 1)|w| |Q|.
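Stage 3 above amounts to computing the stationary distribution of a finite Markov chain. A standard way to do this, sketched below (this is an illustration, not code taken from the accompanying software), is to solve β P = β together with the normalisation Σq βq = 1.

```python
import numpy as np

def stationary_distribution(P):
    """Limit frequencies of an ergodic Markov chain with transition matrix P.

    Solves beta (P - I) = 0 plus the constraint sum(beta) = 1 by replacing one
    equation of the singular system with the normalisation.
    """
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                      # replace the last equation by sum(beta) = 1
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.5], [0.2, 0.8]])
print(stationary_distribution(P))       # approx [0.2857, 0.7143]
```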
3
Strategies
For all sets I ⊂ N and k ∈ N, we define the k-left-shifted of I as
∆(I, k) = {i − k | ∃i ∈ I and i ≥ k}.
A w-strategy S = (Q, o, F, α, δ, γ) is a w-matching machine such that
• Q ⊆ P({0, . . . , |w| − 1}) \ {{0, . . . , |w| − 1}} and ∅ ∈ Q,
• o = ∅,
• F = {s ∈ Q | |s| = |w| − 1},
8
b/3
{1}
0
b/0
∅
1
a/2
a/1
{0}
2
a/0
{0, 1}
2
a/2
a/1
b/0
{0, 2}
1
a/0
{0, 1}
2
b/3
b/3
{1}
0
b/0
∅
1
a/2
a/1
b/0
a/1
{0}
1
Figure 3: Two abb-strategies with the same conventions as in Figure 1.
• α : Q → {0, 1, . . . , |w| − 1} is such that for all s ∈ Q, α(s) 6∈ s and
|s| < |w| − 1 ⇒ s ∪ {α(s)} ∈ Q,
• γ : Q × A → {0, 1, . . . , |w|} is such that for all states s and all symbols x,
γ(s, x)
min{k ≥ 1 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
=
min{k ≥ 0 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
• δ : Q × A → Q is such that for all s ∈ Q and all symbols x,
δ(s, x) = ∆(s ∪ {α(s)}, γ(s, x)).
Figure 3 shows two abb-strategies which differ notably in the next-positionto-check of state {0}.
Proposition 1. A w-strategy is a standard, compact, valid and non-redundant
w-matching machine.
Proof. By construction, a w-strategy is standard, compact and non-redundant.
The validity of a w-strategy follows from Theorem 1 of [8].
Proposition 2. There is a w-strategy which achieves the greatest asymptotic
speed among all the w-matching machines of order |w| − 1.
9
if s ∈ F,
otherwise,
Proof. The Corollary 2 of [8] implies that there exists a w-matching machine
which achieves the greatest asymptotic speed among those of order |w| − 1 and
which is
1. standard,
2. compact,
3. valid,
4. in which all the states are relevant (i.e. such that they may lead to a
match without any positive shift [8]),
5. such that there is no pair of states (q, q 0 ) with q 6= q 0 and hΓπ (q) = hΓπ (q 0 ).
Let us verify that a w-matching machine Γ = (Q, o, F, α, δ, γ) of order |w| − 1
satisfying the properties above is (isomorphic to) a w-strategy. Since it verifies
in particular the properties 4 and 5, its set of states Q is in bijection with a
subset of P({0, . . . , |w| − 1}). Let us identify all states q of Q with f (hΓ (q)),
its corresponding element of P({0, . . . , |w| − 1}). Since Γ is standard, compact
and of order |w| − 1, we do not have {0, . . . , |w| − 1} ∈ Q. Moreover, since
Γ is standard, we have δ(s, x) = ∆(s ∪ {i}, γ(s, x)) for all s ∈ Q. Last, by
construction, if
γ(s, x)
min{k ≥ 1 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
>
min{k ≥ 0 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
if s ∈ F,
otherwise,
then Γ is not valid, and if
γ(s, x)
min{k ≥ 1 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
<
min{k ≥ 0 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
if q ∈ F,
otherwise,
then δ(s, x) is not relevant.
4
Position lattices
[w]
[w]
The position lattice of a pattern w is the 3-uple L[w] = (Q[w] , (δs )s∈Q[w] , (γs )s∈Q[w] )
where, by putting s for {0, . . . , |w| − 1} \ s,
• Q[w] = P({0, . . . , |w| − 1}) \ {{0, . . . , |w| − 1}}, i.e. the set made of all the
subsets of positions of w but {0, . . . , |w| − 1},
[w]
is a map from s × A to {0, . . . , |w|},
[w]
is a map from s × A to Q[w] ,
• for all s ∈ Q[w] , γs
• for all s ∈ Q[w] , δs
10
0, b|1
∅
0, a|0
2, b|3
1, b|0
0, b|2
2, b|0
0, b|3
2, a|2
0, a|3
1, a|1
1, a|1
{0}
{1}
2, a|2
{2}
0, b|1
1, b|3
2, a|2
2, b|0
0, a|0
2, a|2
1, b|0
1, b|0
1, a|1
2, b|0
0, a|0
{0, 1}
1, a|1
{0, 2}
{1, 2}
Figure 4: Position lattice of the pattern abb. Vertices represent the states
of L[abb] . For all states s, there is an outgoing edge for all pairs (i, x) with
i ∈ {0, . . . , |abb| − 1} \ s and x ∈ A. This outgoing edge is labeled with
[abb]
[abb]
“i, x|γs (i, x)”, is colored according to i, and goes to δs (i, x).
where, for all s ∈ Q[w] , all i ∈ s and all x ∈ A, we have
γs[w] (i, x)
min{k ≥ 1 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
=
min{k ≥ 0 | wα(s)−k = x if α(s) ≥ k and wj = wj+k for all j ∈ ∆(s, k)}
and
δs[w] (i, x) = ∆(s ∪ {i}, γs[w] (i, x)).
[w]
In particular, if x = wi and |s| < |w| − 1 then we have γs (i, x) = 0 and
[w]
δs (i, x) = s ∪ {i}.
Let us remark that, since max(s) ≤ |w| − 1 for all s ∈ Q[w] , we have, for all
[w]
i ∈ s and all x ∈ A, ∆(s ∪ {i}, |w|) = ∅, thus γs (i, x) ≤ |w| which is consistent
[w]
with the definition of γs .
[w]
The edges of L[w] are the pairs (s, δs (i, x)) for all s ∈ Q[w] , all i ∈ s and all
x ∈ A (see Figure 4).
11
if |s| = |w| − 1,
otherwise,
Remark 1. The position lattice of w contains 2|w| − 1 states and |A|.|w|.2|w|−1
edges.
Remark 2. Let s be a state of Q[w] , i and j be two positions in s such that
i 6= j and x and y be two symbols of A. We have
γ
[w]
[w]
δs (i,x)
(j − γs[w] (i, x), y) + γs[w] (i, x) = γ
δ
[w]
[w]
δs (i,x)
[w]
[w]
δs (j,y)
(j − γs[w] (i, x), y) = δ
(i − γs[w] (j, y), x) + γs[w] (j, y), and
[w]
[w]
δs (j,y)
(i − γs[w] (j, y), x).
By considering the particular case where x = wi , we get
[w]
γs∪{i} (j, y) = γ
[w]
δs∪{i} (j, y) = δ
[w]
[w]
δs (j,y)
[w]
[w]
δs (j,y)
(i − γs[w] (j, y), wi ) + γs[w] (j, y),
and
(i − γs[w] (j, y), wi ).
Let precw be the table indexed on {0, . . . , |w| − 1} × A and in which, for all
positions i of w and all symbols x of A, the entry precw [i, x] is defined as
precw[i, x] = max{j ≤ i | wj = x} if {j ≤ i | wj = x} ≠ ∅, and precw[i, x] = NULL otherwise.
For instance, the table precabb is
        a      b
0       0    NULL
1       0      1
2       0      2
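A direct way to build precw for any pattern is the small helper below (an illustrative sketch, not code from the paper's repository).

```python
def prec_table(w, alphabet):
    """prec[i][x] = max{j <= i | w[j] == x}, or None when no such j exists."""
    prec, last = [], {x: None for x in alphabet}
    for i, c in enumerate(w):
        last[c] = i
        prec.append(dict(last))
    return prec

print(prec_table("abb", "ab"))
# [{'a': 0, 'b': None}, {'a': 0, 'b': 1}, {'a': 0, 'b': 2}]
```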
Lemma 1. Let s be a state of Q[w] , i a position in s and x a symbol of A.
1. If x = wi then
[w]
[w]
• if |s| = |w|−1 then δs (i, x) = {0, . . . , B−1} and γs (i, x) = |w|−B,
where B is the length of the longest proper suffix of w which is a prefix
of w;
[w]
[w]
• otherwise δs (i, x) = s ∪ {i} and γs (i, x) = 0.
2. If x 6= wi ,
(a) if s = ∅ then
{precw [i, x]}
[w]
δ∅ (i, x) =
∅
[w]
γ∅ (i, x) =
i − precw [i, x]
i+1
if precw [i, x] 6= NULL,
otherwise,
if precw [i, x] 6= NULL,
otherwise,
12
(b) if s 6= ∅ then for all ` ∈ s, we have
δs[w] (i, x) = δ
[w]
[w]
[w]
δs\{`} (i,x)
γs[w] (i, x) = γ
(` − γs\{`} (i, x), w` ),
[w]
[w]
[w]
δs\{`} (i,x)
[w]
(` − γs\{`} (i, x), w` ) + γs\{`} (i, x).
Proof. The only case which does not immediately follow from the definition of
L[w] , is when x 6= wi and s 6= ∅ which is given by Remark 2.
The relation 4 on Q[w] is defined as follows. For all sets s and s0 in Q[w] , we
have s 4 s0 if one of the following properties holds:
• |s| < |s0 |,
• |s| = |s0 |, s 6= s0 and min(s
difference of s and s0 ,
s0 ) ∈ s, where s
s0 is the symmetric
• s = s0 .
The relation 4 defines a total order on Q[w] . We write “ s ≺ s0 ” for “ s 4 s0 and
s 6= s0 ”.
Lemma 2. Let s be a state of Q^[w] with |s| > 1, i a position not in s and x a symbol of A. If x ≠ w_i then δ_{s\{max s}}^[w](i, x) ≺ s.

Proof. Under the assumption that |s| > 1, we have min s = min(s \ {max s}). By construction, the fact that x ≠ w_i implies that γ_{s\{max s}}^[w](i, x) > 0.

If we have min((s \ {max s}) ∪ {i}) < γ_{s\{max s}}^[w](i, x), then |δ_{s\{max s}}^[w](i, x)| < |s|, thus δ_{s\{max s}}^[w](i, x) ≺ s.

Otherwise, we have |δ_{s\{max s}}^[w](i, x)| = |s| but, since necessarily

    min δ_{s\{max s}}^[w](i, x) ≤ min s − γ_{s\{max s}}^[w](i, x) < min s,

we get again δ_{s\{max s}}^[w](i, x) ≺ s.
Theorem 2. Algorithm 2 computes the position lattice of the pattern w in O(|w| · 2^|w|) time by using the same amount of memory.

Proof. Let us first show that Algorithm 2 determines the shifts and the transitions of the state s before those of the state s′ if and only if s ≺ s′. The loop at Lines 2-8 computes the shifts and the transitions of ∅. Next, the loop at Lines 9-23 computes the shifts and the transitions of the singletons from {0} to {|w| − 1}. The last loop (Lines 24-41) determines the shifts and the transitions of the states corresponding to the subsets of increasing cardinality ℓ from 2 to
 1   B ← length of the longest proper suffix of w which is also a prefix;
 2   for x ∈ A do last[x] ← NULL;
 3   for i = 0 to |w| − 1 do
 4       last[w_i] ← i;
 5       for x ∈ A do
 6           if last[x] ≠ NULL then
 7               γ_∅^[w](i, x) ← i − last[x];  δ_∅^[w](i, x) ← {last[x]};
 8           else γ_∅^[w](i, x) ← i + 1;  δ_∅^[w](i, x) ← ∅;
 9   for i = 0 to |w| − 1 do
10       for j = 0 to i − 1 do
11           for x ∈ A do
12               γ_{i}^[w](j, x) ← γ_∅^[w](j, x) + γ_{δ_∅^[w](j,x)}^[w](i − γ_∅^[w](j, x), w_i);
13               δ_{i}^[w](j, x) ← δ_{δ_∅^[w](j,x)}^[w](i − γ_∅^[w](j, x), w_i);
14       for j = i + 1 to |w| − 1 do
15           for x ∈ A do
16               if x = w_j then
17                   if |w| = 2 then
18                       γ_{i}^[w](j, x) ← |w| − B;  δ_{i}^[w](j, x) ← {0, . . . , B − 1};
19                   else
20                       γ_{i}^[w](j, x) ← 0;  δ_{i}^[w](j, x) ← {i, j};
21               else
22                   γ_{i}^[w](j, x) ← γ_{δ_∅^[w](i−1,w_i)}^[w](j − γ_∅^[w](i − 1, w_i), x);
23                   δ_{i}^[w](j, x) ← δ_{δ_∅^[w](i−1,w_i)}^[w](j − γ_∅^[w](i − 1, w_i), x);
24   for ℓ = 2 to |w| − 1 do
25       for j = 0 to ℓ − 1 do S[j] ← j;
         repeat
26           s ← {S[0], . . . , S[ℓ − 1]};  s′ ← {S[0], . . . , S[ℓ − 2]};
27           for i ∈ {0, . . . , |w| − 1} \ s do
28               for x ∈ A do
29                   if x = w_i then
30                       if ℓ = |w| − 1 then
31                           γ_s^[w](i, x) ← |w| − B;  δ_s^[w](i, x) ← {0, . . . , B − 1};
32                       else
33                           γ_s^[w](i, x) ← 0;  δ_s^[w](i, x) ← s ∪ {i};
34                   else
35                       γ_s^[w](i, x) ← γ_{s′}^[w](i, x) + γ_{δ_{s′}^[w](i,x)}^[w](S[ℓ − 1] − γ_{s′}^[w](i, x), w_{S[ℓ−1]});
36                       δ_s^[w](i, x) ← δ_{δ_{s′}^[w](i,x)}^[w](S[ℓ − 1] − γ_{s′}^[w](i, x), w_{S[ℓ−1]});
37           j ← ℓ − 1;
38           while j ≥ 0 and S[j] ≥ |w| − ℓ + j do j ← j − 1;
39           if j ≥ 0 then
40               S[j] ← S[j] + 1;
41               for k = j + 1 to ℓ − 1 do S[k] ← S[k − 1] + 1;
         until j < 0;

Algorithm 2: Computation of the position lattice. Value B is the last entry of the Partial Match table of the KMP algorithm. Its computation takes a time linear with |w| [15].
|w| − 1. Inside the last loop, the way in which the next subset s′ is computed from the current subset s, both of cardinality ℓ, ensures that s ≺ s′ (Lines 37-41).

For all iterations i of the loop at Lines 2-8 and all symbols x, we have last[x] = prec_w[i, x] at the beginning of the inner loop (Line 4). From Lemma 1 (Cases 1 and 2a), the transitions δ_∅^[w](i, x) and the shifts γ_∅^[w](i, x), for all positions i of w and all symbols x, are correctly computed at the end of the loop.

The loop at Lines 9-23 computes the shifts and the transitions from the singleton states. For all pairs of positions (i, j) and all symbols x, determining δ_{i}^[w](j, x) and γ_{i}^[w](j, x) is performed by distinguishing between two cases.

• If i > j, then δ_∅^[w](j, x) ≺ {i} and its shifts and transitions were already computed. The formula of Remark 2 gives us those of {i} (Lines 12-13).

• If i < j, we distinguish between two subcases according to the symbol x considered. If x = w_j then the shift and the transition state are given in Lemma 1, Case 1. Otherwise, we remark that, since γ_{i}^[w](j, x) is positive, we have that γ_{i}^[w](j, x) = min{k ≥ 1 | w_{j−k} = x if j ≥ k and w_{i−k} = w_i}. This implies that γ_{i}^[w](j, x) = γ_{δ_∅^[w](i−1,w_i)}^[w](j, x). We have δ_∅^[w](i − 1, w_i) ≺ s, thus both the shifts and the transitions of the state δ_∅^[w](i − 1, w_i) are computed before s (Lines 22-23).

The last loop, Lines 24-41, computes the shifts and the transitions of the states corresponding to the subsets of cardinality 2 to |w| − 1. For all states s with 2 ≤ |s| ≤ |w| − 1, all positions i ∉ s and all symbols x ≠ w_i, the corresponding shift and transition γ_s^[w](i, x) and δ_s^[w](i, x) are computed from the shifts and transitions of the state δ_{s\{max s}}^[w](i, x) following Lemma 1, Case 2b (in Algorithm 2, we put s′ for s \ {max s}). Lemma 2 ensures that δ_{s\{max s}}^[w](i, x) ≺ s, thus that the shifts and transitions of δ_{s\{max s}}^[w](i, x) are computed before those of s. For all states s with 2 ≤ |s| ≤ |w| − 1 and all positions i ∉ s, the shift and transition γ_s^[w](i, w_i) and δ_s^[w](i, w_i) are given in Lemma 1, Case 1.

The time complexity is O( Σ_{k=0}^{|w|−1} (k + |w| − k) · C(|w|, k) ) (loop at Lines 24-41), i.e. O(|w| · 2^|w|). We do not use more memory than needed to store the lattice, which is, from Remark 1, O(|w| · 2^|w|).
5   The Fastest w-strategy
Determining the fastest w-strategy, which, from Proposition 2, has the greatest
asymptotic speed among all the w-matching machines of order |w| − 1, may be
performed by computing the asymptotic speed of all the w-strategies and by
returning the fastest one.
In order to enumerate all the w-strategies, let us remark that they are all
contained in the position lattice of w in the sense that:
• the set of states of a w-strategy is included in that of the position lattice;
• all the w-strategies Γ = (Q, o, F, α, δ, γ) verify δ(s, x) = δ_s^[w](α(s), x) and γ(s, x) = γ_s^[w](α(s), x) for all s ∈ Q and all symbols x.

Reciprocally, to any map φ from Q^[w] to {0, . . . , |w| − 1} such that φ(s) ∉ s for all states s ∈ Q^[w], there corresponds the unique w-strategy S = (Q, o, F, α, δ, γ) for which the next-position-to-check function α coincides with φ on Q.
Finally, our brute force algorithm

1. takes as input a pattern w and an iid model π,
2. computes the position lattice of w,
3. enumerates all the maps φ such that φ(s) ∉ s for all states s ∈ Q^[w],
4. for each φ, gets the corresponding w-strategy by keeping only the states of Q^[w] reachable from ∅, with the next-position-to-check function φ,
5. computes the asymptotic speed of all the w-strategies under π,
6. returns the w-strategy with the greatest speed.
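The enumeration in step 3 is a plain Cartesian product over the states of the lattice. The Python sketch below illustrates it; the list-of-frozensets representation and the function name are ours, and it is only practical for the very small pattern lengths considered here.

```python
from itertools import product

def enumerate_next_position_maps(states, m):
    """Yield every map phi with phi(s) not in s, for states over positions {0,...,m-1}.
    Each map is returned as a dict from state (frozenset) to the position it checks next."""
    choices = [sorted(set(range(m)) - s) for s in states]
    for combo in product(*choices):
        yield dict(zip(states, combo))

# Example for |w| = 3: the lattice states have cardinality < 3.
states = [frozenset(s) for s in [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]]
maps = list(enumerate_next_position_maps(states, 3))   # 3 * 2*2*2 * 1*1*1 = 24 maps
```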
The time complexity of the brute force algorithm is

    O( ( ∏_{k=1}^{|w|−1} (|w| − k)^{C(|w|,k)} ) · 2^{3|w|} ),

where the first factor stands for the number of functions φ and the second one for the computation of the asymptotic speed of a w-strategy, which needs to solve a linear system of size equal to the number of states, which is O(2^|w|). Its memory space complexity is |w| · 2^{|w|−1}, i.e. what is needed to store the position lattice of w.

Under its current implementation, the brute force determination of the fastest w-strategy is unfeasible for patterns of length greater than 4.
6   A polynomial heuristic
There are two points which make the complexity of the brute force algorithm
given in Section 5 that high:
1. the size of the position lattice, which is exponential with the length of the
pattern,
2. determining the fastest strategy in the position lattice, which needs a time
exponential with its size.
Our heuristic is based on two independent stages, each one aiming to overcome one of these two points. Both of them start from the general idea that,
since, for any current position of the text, the probability that no mismatch
occurs until the nth text access decreases geometrically with n, the first relative
positions accessed by a strategy (or more generally by a pattern algorithm) are
those which have the greatest influence on its asymptotic speed.
6.1   n-sets sublattices
A sufficient condition for a sublattice U ⊆ Q^[w] to contain a w-strategy is that, for all s ∈ U, there exists at least one position i ∉ s with δ_s^[w](i, x) ∈ U for all x ∈ A. A sublattice U verifying this condition will be said to be complete. Figure 5 displays four complete sublattices extracted from the position lattice of abb (Figure 4).
Let us introduce some additional notations here. For all sets S of positions, the prefix of S is defined as P(S) = max{i ∈ S | j ∈ S for all 0 ≤ j ≤ i} and its rest is R(S) = S \ {0, . . . , P(S)}.

For all positive integers n, the n-sets sublattice of w is the sublattice U of Q^[w] which contains all and only the subsets of Q^[w] with a rest containing less than n positions, i.e. the subsets of the form {0, . . . , p} ∪ X with p < |w| − 1 and |X| ≤ n.
By construction, the n-sets sublattice of w is complete. It contains O(|w|^n) states and O(|w|^{n+1}) transitions.

We adapted Algorithm 2 to compute the n-sets sublattice of w in O(|w|^{n+1}) time with the same amount of memory space.
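As an illustration, here is a small Python sketch of the prefix/rest decomposition and of the membership test for the n-sets sublattice; the function names are ours, and the strictness of the bound on the rest follows the "less than n positions" wording above.

```python
def prefix_rest(s):
    """Split a state (set of positions) into its prefix {0,...,p} and its rest."""
    p = -1
    while p + 1 in s:
        p += 1
    return p, {i for i in s if i > p}

def in_n_sets_sublattice(s, n):
    """A state belongs to the n-sets sublattice when its rest has fewer than n positions."""
    _, rest = prefix_rest(s)
    return len(rest) < n
```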
6.2   ℓ-shift expectation
We are now interested in a fast way for finding an efficient w-strategy in a given
complete sublattice.
For all integers ℓ and all states s of a sublattice U, the ℓ-shift expectation of s is defined as the greatest shift expectation one could possibly get in ℓ steps in U by starting from s, conditioned on starting from s, while parsing a text following an iid model π. Namely, the ℓ-shift expectation is computed following the recursive formula:

• ES_0^[w][s] = 0,
• for all ℓ > 0,

    ES_ℓ^[w][s] = max_{i ∈ Tr(s)} Σ_{x∈A} π(x) ( γ_s^[w](i, x) + ES_{ℓ−1}^[w][δ_s^[w](i, x)] ),

where Tr(s) = {i ∉ s | δ_s^[w](i, x) ∈ U for all x ∈ A}.

The ℓ-shift expectation of a complete sublattice U is well defined and can be computed in O(ℓT) time, where T is the number of transitions of the sublattice U, and by using O(|U|) memory space.
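The recursion above is a straightforward dynamic program over the sublattice. The following Python sketch assumes the lattice labels are available as dictionaries keyed by (state, position, symbol); these names and this representation are assumptions of ours.

```python
def shift_expectations(states, tr, gamma, delta, pi, ell):
    """Return ES_ell[s] for every state s of the sublattice.
    tr[s] lists the positions of Tr(s); gamma/delta map (s, i, x) to the shift and target state;
    pi maps each symbol to its probability under the iid model."""
    es = {s: 0.0 for s in states}                       # ES_0[s] = 0
    for _ in range(ell):                                # build ES_l from ES_{l-1}
        es = {s: max(sum(pi[x] * (gamma[(s, i, x)] + es[delta[(s, i, x)]]) for x in pi)
                     for i in tr[s])
              for s in states}
    return es
```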
Figure 5: Four complete sublattices extracted from the position lattice of abb. Sublattice a (resp. b) leads to the strategy where the next-position-to-check is always the smallest (resp. the greatest) relative position unchecked. Sublattice c (resp. d) leads to the strategy at the top (resp. at the bottom) of Figure 3.
We finally extract a w-strategy from U by setting the next-position-to-check of all states s ∈ U to

    arg max_{i ∈ Tr(s)} Σ_{x∈A} π(x) ( γ_s^[w](i, x) + ES_{ℓ−1}^[w][δ_s^[w](i, x)] ).
6.3   K-Heuristic
The K-Heuristic combines the two approaches above in order to compute a w-strategy in a time polynomial with the length of the pattern.

Given an order K ≥ 1, we start by computing the K-sets sublattice of w, in O(|w|^{K+1}) time. In order to select a w-strategy from the K-sets sublattice, we next compute the (K + 1)-shift expectation of all its states and extract a w-strategy as described just above. This computation is performed in O(K · |w|^{K+1}) time, since the number of transitions of the sublattice is O(|w|^{K+1}), by using O(|w|^K) memory space.

Let us remark that the order ℓ of the ℓ-shift expectation does not have, a priori, to be strongly related to the order K of the K-sets sublattice on which it is computed. By experimenting with various situations, we observed that considering an order greater than K + 1 generally does not improve the performance much, whereas the strategies obtained from ℓ-expectations with ℓ smaller than K may be significantly slower.

The K-Heuristic returns a w-strategy in O(K · |w|^{K+1}) time by using O(|w|^K) memory space. We insist on the fact that the K-Heuristic generally does not return the fastest strategy, even if K > |w|. However, we will see in the next section that it performs quite well in practice.
7   Evaluation
We shall compare the approaches introduced in Sections 5 and 6 with selected
pattern matching algorithms. The comparison is performed, first, from a theoretical point of view, by computing their asymptotic speeds under iid models,
and second, in practical situations, by measuring their average speed over real
data. The average speed, with regard to a pattern w, of an algorithm or a matching machine on a text t, is the ratio of |t| to the number of text accesses performed by the algorithm to search w in t.
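Measuring the average speed therefore amounts to counting text accesses. A possible instrumentation is sketched below in Python; the wrapper class and the generic search callback are ours, not part of the evaluated implementations.

```python
class CountingText:
    """Wrap a text so that every character access is counted."""
    def __init__(self, text):
        self._text = text
        self.accesses = 0
    def __getitem__(self, i):
        self.accesses += 1
        return self._text[i]
    def __len__(self):
        return len(self._text)

def average_speed(search, w, text):
    """Average speed = |t| / (number of text accesses made while searching w in t)."""
    t = CountingText(text)
    search(w, t)                      # any matching routine reading the text through t[i]
    return len(t) / max(t.accesses, 1)
```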
We are also interested in the extent to which taking into account the frequencies of the letters of an iid model or a text, for determining the Fastest and the K-Heuristic strategies, actually improves their asymptotic or average speeds. For this purpose, we compute the Fastest and the K-Heuristic strategies from
the uniform iid model. Next, we test their efficiency in terms of asymptotic
speed under a non-uniform iid model and in terms of average speeds on data
with non-uniform frequencies of letters.
7.1   Pre-existing pattern matching algorithms
More than forty years of research have already led to the development of dozens of algorithms. We selected the nine below for our evaluation:
1. Naive [6],
2. Morris-Pratt [6],
3. Knuth-Morris-Pratt [15],
4. Quicksearch [23],
5. Boyer-Moore-Horspool [13],
6. TVSBS [24], a "right-to-left" algorithm in which shifts are given by a bad-character rule [5, 23] taking into account the two letters at distances |w| − 1 and |w| from the current position of the text,
7. EBOM [9], a version of the Backward Oracle Matching algorithm [1] which
also uses a “bad two-characters” rule,
8. HASHq [26], which implements the Boyer-Moore algorithm on blocks of
length q by using efficient hashing techniques [14] (our tests are performed
with q = 3),
9. FJS [11], which combines the ideas of Knuth-Morris-Pratt [15] and Sunday
[23] algorithms.
Algorithms 1 to 5 are classics. The last four ones were chosen for being
known to be efficient on short patterns and small alphabets [10], a situation in
which the determination of the fastest strategy is feasible.
Let us remark that the order of the w-matching machine associated to
TVSBS is equal to |w|, thus greater than that of the Fastest strategy that
we compute.
The transformation into matching machines was implemented for a few other pattern matching approaches, for instance the SA algorithm (the Baeza-Yates-Gonnet algorithm) based on bitwise operations [2], or the string-matching automaton [7]. Since the asymptotic and average speeds of these two algorithms
are exactly 1, whatever the pattern, the model and the text, there is no point
in displaying them.
7.2   Results

We shall evaluate:
• the pre-existing pattern matching algorithms presented in Section 7.1,
• the 1-, 2- and 3-Heuristics, and
• the Fastest strategy (each time it is possible).
Table 1: Asymptotic speeds for the patterns of length 4 on {a, b} under the uniform model. (Rows: the sixteen patterns aaaa through bbbb; columns: Naive, Morris-Pratt, Knuth-Morris-Pratt, Quicksearch, Horspool, FJS, TVSBS, EBOM, Hashq, the 1-, 2- and 3-Heuristics and the Fastest strategy.)
7.2.1   Asymptotic speed
The asymptotic speeds are computed for texts and patterns on the binary alphabet {a, b}.
Table 1 displays the asymptotic speeds for all the patterns of length 4 on iid
texts drawn from the uniform distribution. As expected, the strategy computed
with the brute force algorithm (last column) is actually the fastest, but the
speeds of the 1-,2- and 3-Heuristics are very close. The pre-existing algorithms
are outperformed by all our approaches (even by the 1-Heuristic) for all the
patterns. We observe that the Naive, Morris-Pratt and Knuth-Morris-Pratt
algorithms have asymptotic speeds always smaller than 1. One cannot expect
them to be faster since, by construction, they access all the positions of a text
at least once. In the following, we will not display their speeds, nor that of
Quicksearch, for they are always smaller than at least one of the other preexisting algorithms. The full tables can easily be re-computed by using our
software.
Table 2 displays the asymptotic speeds with regard to the same patterns as Table 1, but under the iid model (πa, πb) = (0.1, 0.9). This table shows the asymptotic speeds of the K-Heuristics and the Fastest strategies computed with regard to a uniform iid model (the columns starting with "Unif."). The strategies thus obtained are not optimized according to the letter probabilities of the model. They may be used as general purpose approaches, while the strategies obtained from the model probabilities will be called adapted below. Overall,
Table 2: Asymptotic speeds for the patterns of length 4 on {a, b} under the iid model (πa, πb) = (0.1, 0.9). (Rows: the sixteen patterns aaaa through bbbb; columns: Horspool, FJS, TVSBS, Hashq, EBOM, then the uniform ("Unif.") and adapted versions of the 1-, 2- and 3-Heuristics and of the Fastest strategy.)
our methods are faster than the pre-existing algorithms, with a few exceptions: Horspool is faster than the 1-Heuristic for two patterns ending with the rare letter a: aaaa and baaa. And EBOM is faster than the 1-Heuristic for searching baba. The K-Heuristics and the Fastest strategies computed with regard to a uniform iid model have asymptotic speeds smaller than their counterparts obtained from the actual probabilities of the text model (here highly unbalanced). Nevertheless, the uniform approaches still perform quite well, notably better than the pre-existing algorithms, except for the uniform 1-Heuristic and the same patterns as above.

Considering longer patterns leads to similar observations. Table 3 shows the asymptotic speeds obtained for random patterns of length 10. The 3-Heuristic outperforms all the other approaches (the Fastest strategy cannot be computed for this length). The (uniform) 1-Heuristic is slower than algorithms such as EBOM or Hashq. But both the uniform 2- and 3-Heuristics overall perform better than the pre-existing algorithms, though they are slightly slower for a few patterns.
7.3   Average speed
Our data benchmark consists of the Wigglesworthia glossinidia genome, known for its bias in nucleotide composition (78% of {a, t}), and the Bible in English from [10].

Table 4 displays the average speeds of patterns randomly picked from the data. Let us remark that we are now dealing with real texts, which are not iid. In particular, the Fastest strategy could possibly be outperformed (this is
Table 3: Asymptotic speeds for some patterns of length 10 on {a, b} (drawn from the uniform distribution) under the iid model (πa, πb) = (0.1, 0.9). (Rows: ten random patterns of length 10; columns: Horspool, FJS, TVSBS, EBOM, Hashq, then the uniform and adapted 1-, 2- and 3-Heuristics.)

Table 4: Average speeds for some patterns of length 4 picked from the benchmark data (the Wigglesworthia glossinidia complete genome and the Bible in English). (Rows: patterns drawn from each of the two texts; columns: Horspool, FJS, TVSBS, EBOM, Hashq, then the uniform and adapted versions of the 1-, 2- and 3-Heuristics and of the Fastest strategy.)
Table 5: Average speeds for some patterns of length 30 picked from the benchmark data (the Wigglesworthia glossinidia complete genome and the Bible in English). (Rows: patterns of length 30 drawn from each of the two texts; columns: Horspool, FJS, TVSBS, EBOM, Hashq, then the uniform and adapted 1-, 2- and 3-Heuristics.)
not observed on the benchmark data). The 2- and 3-Heuristics, uniform and adapted, are faster than the pre-existing algorithms for all the patterns, whereas the 1-Heuristic is sometimes slightly outperformed by Horspool. Horspool is almost as fast as our approaches on the Bible while being sometimes significantly outperformed on the Wigglesworthia glossinidia genome. The average speeds are overall greater on the Bible than on the DNA sequence. In both cases, we do not observe a wide performance gap between the uniform and the adapted approaches, though our benchmark data are far from following a uniform iid model. Let us remark that the 2- and 3-Heuristics have almost the same performance in both the uniform and the adapted cases.
Table 5 shows the average speeds with regard to patterns of length 30. The average speeds on the Bible are about twice those on the Wigglesworthia glossinidia genome. One actually expects the speed to be greater on average on texts with large alphabets, since the less likely the match between two symbols, the greater the shift expectation per iteration. Again the 3-Heuristic, uniform or adapted, outperforms the pre-existing algorithms. The speeds of the 3-Heuristic and of the 2-Heuristic differ by a larger amount than with patterns of length 4 for the Wigglesworthia glossinidia genome, and, to a smaller extent, for the Bible.

8   Discussion
In practical situations and though they do not take into account the letter frequencies, the uniform K-Heuristics and the uniform Fastest strategy perform
generally almost as well as their adapted counterparts. The greatest difference observed is for the patterns of length 30 on the Wigglesworthia glossinidia
genome (Table 5) and is relatively small. We do observe a notable amount
of difference for the quite extreme case of the asymptotic speed under the iid
model (πa , πb ) = (0.1, 0.9). But even for these frequencies, the uniform approaches show greater asymptotic speeds than any of the selected pre-existing
algorithms.
The 3-Heuristic has very good results whatever the pattern or the text.
There is no situation for which the performances of the 2-heuristic are far from
the best. On the contrary, the performance ranking of the pre-existing algorithms depends heavily on the patterns and on the texts or the model. For
instance, Horspool may perform very well, even almost optimally, for some patterns and texts or models while its speed may completely plummet in other
situations.
The question of selecting the most efficient order of K-Heuristic still deserves further investigation. A basic answer could be "the greater, the better", but we should take into consideration that a higher order of heuristic comes with an increased computational cost. After some experiments, we observed that the asymptotic speed of the K-Heuristic tends to stop improving beyond a certain rank. For instance, the difference in average speed between the 2- and 3-Heuristics for patterns of length 4, both on the genome and on the Bible, probably does not justify the computational cost of the 3-Heuristic, while it is worth using the 3-Heuristic rather than the 2-Heuristic for searching patterns of length 30 in the Bible (not that much for the Wigglesworthia glossinidia genome). The best trade-off for the order of the K-Heuristic depends on the pattern (notably its length) and on the text features (in particular the alphabet size and the letter frequencies).

It is certainly possible to obtain efficient heuristics with a lower computational cost than that of the K-Heuristic. Since, in standard situations, the length of the text is much greater than that of the pattern, there is no real reason for considering only pattern matching algorithms with linear pre-processing of the pattern. In the extreme case where the texts are arbitrarily long with regard to the patterns, any pre-processing, i.e. whatever its computation time, would be beneficial as soon as it improves the overall speed.
Authors’ contributions
Gilles Didier provided the initial idea, led the software development and wrote
all the manuscript but the section Evaluation. Laurent Tichit collaborated on
the software development, ran the tests and wrote the section Evaluation. Both
authors read, edited and approved the final manuscript.
References
[1] C. Allauzen, M. Crochemore, and M. Raffinot. Efficient experimental string
matching by weak factor recognition. In Combinatorial Pattern Matching,
pages 51–72. Springer, 2001.
[2] R. Baeza-Yates and G. H. Gonnet. A new approach to text searching.
Communications of the ACM, 35(10):74–82, 1992.
[3] R. A. Baeza-Yates and M. Régnier. Average running time of the Boyer-Moore-Horspool algorithm. Theoretical Computer Science, 92(1):19–31, 1992.
[4] G. Barth. An analytical comparison of two string searching algorithms.
Information Processing Letters, 18(5):249 – 256, 1984.
[5] R. S. Boyer and J. S. Moore. A fast string searching algorithm. Communications of the ACM, 20(10):762–772, 1977.
[6] C. Charras and T. Lecroq. Handbook of Exact String Matching Algorithms.
King’s College Publications, 2004.
[7] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. MIT
Press, 1990.
[8] G. Didier. Optimal pattern matching algorithms. http://arxiv.org/abs/
1604.08437, 2016.
[9] S. Faro and T. Lecroq. Efficient variants of the Backward-Oracle-Matching
algorithm. International Journal of Foundations of Computer Science,
20(06):967–984, 2009.
[10] S. Faro and T. Lecroq. The Exact Online String Matching Problem: A
Review of the Most Recent Results. ACM Comput. Surv., 45(2):13:1–13:42,
Mar. 2013.
[11] F. Franek, C. G. Jennings, and W. F. Smyth. A simple fast hybrid pattern-matching algorithm. In Combinatorial Pattern Matching, pages 288–297.
Springer, 2005.
[12] L. Guibas and A. Odlyzko. String overlaps, pattern matching, and nontransitive games. Journal of Combinatorial Theory, Series A, 30(2):183 –
208, 1981.
[13] R. N. Horspool. Practical fast searching in strings. Software: Practice and
Experience, 10(6):501–506, 1980.
[14] R. M. Karp and M. O. Rabin. Efficient randomized pattern-matching algorithms. IBM Journal of Research and Development, 31(2):249–260, 1987.
[15] D. E. Knuth, J. H. Morris, Jr, and V. R. Pratt. Fast pattern matching in
strings. SIAM journal on computing, 6(2):323–350, 1977.
[16] H. M. Mahmoud, R. T. Smythe, and M. Régnier. Analysis of Boyer-Moore-Horspool string-matching heuristic. Random Struct. Algorithms,
10(1-2):169–186, 1997.
[17] T. Marschall, I. Herms, H. Kaltenbach, and S. Rahmann. Probabilistic
Arithmetic Automata and Their Applications. IEEE/ACM Trans. Comput.
Biol. Bioinformatics, 9(6):1737–1750, Nov. 2012.
[18] T. Marschall and S. Rahmann. Probabilistic Arithmetic Automata and
Their Application to Pattern Matching Statistics. In P. Ferragina and
G. M. Landau, editors, Combinatorial Pattern Matching, volume 5029 of
Lecture Notes in Computer Science, pages 95–106. Springer Berlin Heidelberg, 2008.
[19] T. Marschall and S. Rahmann. Exact Analysis of Horspool's and Sunday's Pattern Matching Algorithms with Probabilistic Arithmetic Automata. In A.-H. Dediu, H. Fernau, and C. Martín-Vide, editors, Language and Automata Theory and Applications, volume 6031 of Lecture Notes in Computer Science, pages 439–450. Springer Berlin Heidelberg, 2010.
[20] T. Marschall and S. Rahmann. An Algorithm to Compute the Character
Access Count Distribution for Pattern Matching Algorithms. Algorithms,
4(4):285, 2011.
[21] M. Régnier and W. Szpankowski. Complexity of Sequential Pattern Matching Algorithms. In M. Luby, J. D. Rolim, and M. Serna, editors, Randomization and Approximation Techniques in Computer Science, volume 1518
of Lecture Notes in Computer Science, pages 187–199. Springer Berlin Heidelberg, 1998.
[22] R. T. Smythe. The Boyer-Moore-Horspool heuristic with Markovian input.
Random Struct. Algorithms, 18(2):153–163, 2001.
[23] D. M. Sunday. A very fast substring search algorithm. Communications of
the ACM, 33(8):132–142, 1990.
[24] R. Thathoo, A. Virmani, S. Sai Lakshmi, N. Balakrishnan, and K. Sekar.
TVSBS: A fast exact pattern matching algorithm for biological sequences.
Current Science, 91(1):47–53, 2006.
[25] T.-H. Tsai. Average Case Analysis of the Boyer-Moore Algorithm. Random
Struct. Algorithms, 28(4):481–498, July 2006.
[26] S. Wu, U. Manber. A fast algorithm for multi-pattern searching. Tech.
Report TR-94-17, CS Dept., University of Arizona, 1994.
[27] A. C.-C. Yao. The complexity of pattern matching for a random string.
SIAM Journal on Computing, 8(3):368–387, 1979.
Intrinsic Point of Interest Discovery from Trajectory Data

Matthew Piekenbrock, Dept. of Computer Science & Engineering, Kno.e.sis Research Center, Wright State University, Dayton, OH, USA, [email protected]
Derek Doran, Dept. of Computer Science & Engineering, Kno.e.sis Research Center, Wright State University, Dayton, OH, USA, [email protected]

arXiv:1712.05247v1 [cs.AI] 14 Dec 2017
Abstract
This paper presents a framework for intrinsic point of interest discovery from trajectory databases. Intrinsic points of interest are
regions of a geospatial area innately defined by the spatial and temporal aspects of trajectory data, and can be of varying size, shape,
and resolution. Any trajectory database exhibits such points of
interest, and hence are intrinsic, as compared to most other point
of interest definitions which are said to be extrinsic, as they require
trajectory metadata, external knowledge about the region the trajectories are observed, or other application-specific information.
Spatial and temporal aspects are qualities of any trajectory database, making the framework applicable to data from any domain
and of any resolution. The framework is developed under recent
developments on the consistency of nonparametric hierarchical
density estimators and enables the possibility of formal statistical
inference and evaluation over such intrinsic points of interest. Comparisons of the POIs uncovered by the framework in synthetic truth
data to thousands of parameter settings for common POI discovery
methods show a marked improvement in fidelity without the need
to tune any parameters by hand.
ACM Reference format:
Matthew Piekenbrock and Derek Doran. 2016. Intrinsic Point of Interest Discovery from Trajectory Data. In Proceedings of ACM Conference, Washington,
DC, USA, July 2017 (Conference’17), 10 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1   Introduction
The development and deployment of location acquisition systems
have enabled large scale capturing of ‘movement’ or ‘trajectory’
data from people, cars, and other objects. Technologies like global
positioning systems (GPS), global system for mobile communications (GSM), wide area motion imagery (WAMI), and radio-frequency
identification (RFID) allow organizations and governments to collect and exploit trajectory patterns in many scenarios. More recent initiatives (e.g. Uber's Movement1 and IBM's Smarter Cities2 programs) have even made such data available to either the public or city planning experts at large. With the rise in importance of this
1 https://movement.uber.com/cities
2 https://www.ibm.com/smarterplanet/us/en/smarter
cities/overview/
data comes prevalent use of Geographic Information Systems (GIS)
and related platforms such as ArcGIS3 and Mapbox4 . Other related
use cases of GIS information have also emerged for surveillance [9]
and location-based service (LBS) applications [36]. In many of these
applications, trajectory data is exploited for knowledge acquisition
tasks [17], the integration of movement patterns to uncover “patterns of life” over a region [43], to expand situational awareness in
crises [40], and to support the value added by a LBS application [13].
In many of these knowledge acquisition tasks, the notion of
a “location” or “point of interest” (POI) is foundational to understanding the entirety of the common space in which the data are
observed [31]. For example, mapping systems must know the position and geometry of locations for navigation and automated
guidance control purposes. In LBS applications, the POIs and metadata such as their popularity (e.g. ‘star-rating’) are necessary to
provide useful location recommendations [13, 30, 44]. Because POIs
are not available from ‘raw’ trajectory data captured by location acquisition systems, they are often extrinsically defined by gazetteers
such as Google Places, FourSquare, GeoNames, or OpenStreetMap.
Yet external sources of location data present many difficulties when
faced with the problem of understanding how a given trajectory
dataset relates to the underlying geographical area where it was observed. For example, many gazetteers store varying types of either
POI metadata or POI relational data, allowing gazetteer-derived
information to present a source of bias. Furthermore, relying on
gazetteers explicitly defines the set of POIs that exist in a given
geographical region. When there is disagreement on this definition,
analysis becomes difficult. Furthermore, with POIs defined a priori,
one is faced with the problem of “fitting” observed trajectory data
to models defined by such POIs, many of which may or may not be
relevant to the given data at hand. For example, it may be desirable
for a city-planner gathering movement (trajectory) data following a
public event to discover ‘bottleneck’ congestion areas like parking
lots, roads, or sidewalk segments for the purpose of traffic analysis.
In this situation, it would more useful to discover POIs directly
from the data itself during the event, but such geographical POIs
may not be available in a gazetteer.
To overcome these challenges, this paper investigates the POI discovery problem in the most generic context possible. We ask:
given only trajectory data, without access to gazetteers, can we infer subregions within a geospace that are "interesting" enough to call them POIs? We seek intrinsic POIs, which are POIs recoverable without the
use of a gazetteer, are completely defined by observed movement
patterns, and can be used for any domain-specific application and
at any scale (e.g. from movements within a building to movements
across an entire city region). To make such a definition meaningful,
3 https://www.arcgis.com/
4 https://www.mapbox.com/
we build off recent theoretical work in density-based clustering
and introduce a data-driven, statistically rigorous definition of a
POI applicable to trajectory data of any (and even mixed) resolution. The definition follows from a recent minimax analysis of
the consistency of hierarchical density superlevel set estimators.
We use this definition to present a parameter-free framework for
extracting intrinsic POIs, i.e. one that yields an optimal unsupervised solution without ad hoc parameter-tuning.5 A comparative analysis
is performed on realistic simulations involving both vehicle and
pedestrian traffic. Validation results show marked improvements in
fidelity against several state-of-the-art (SOTA) algorithms. Of interest to the authors, the simulation settings, the resulting traffic data, the validation code, and the framework itself are all completely reproducible and open source, available online.6
2   Point of Interest Discovery

This section provides preliminary information about the POI discovery problem, and provides context and definitions for this work. It then formally defines a POI and (subsequently) an intrinsic POI, and the framework for their discovery.
2.1   Preliminaries
We consider a trajectory database of discrete, time-indexed spatial
data having at least the 3-tuple of attributes
(<object id>, <spatial component>, <temporal component>)
This minimal amount of information implies a trajectory for an
object of the form:
    T = p1 --(∆t1)--> p2 --(∆t2)--> · · · --(∆t_{n−1})--> pn    (1)
where p1 , p2 , ..., pn are chronologically ordered spatial coordinates.
In the geographical sense, these spatial components are often defined by a <latitude> and <longitude> pair, but in practice could
be from any coordinate system. Such representations require trajectory pattern mining techniques [42], or techniques that seek to
mine common spatiotemporal patterns across trajectories to assert
significance over areas where trajectory patterns emerge. Mined
patterns in trajectories are often referred to as mobility patterns,
characterizing some specific trajectory quality of interest, such as
heading, stopping rate, velocity, rotation, curvature, or shape [5].
Such mobility patterns exhibit properties that make the formal retrieval of significant areas challenging. For example, if the timespan
of an observed trajectory is long, the processes driving the mobility
pattern may be non-stationary (e.g. road traffic that changes due to
construction, or congestion effects due to time of day shifts in the
work schedule). There may also be paths of objects that are transient
(some areas are never traveled to more than once). Furthermore,
the spatial components in trajectory data often have a high degree
of autocorrelation, breaking assumptions of independence [13]. A
variety of models have been proposed to handle thee situations,
largely focusing on estimating individual trajectory statistics under
these assumptions. This includes, for examples, adaptive Kalman
5 We
see this as a necessary form of usability, an important feature to have in the
modern clustering era. It is well-known that having several sensitive, real-valued
parameters results in combinatorial explosion of the parameter space of an algorithm,
resulting in the need for the user to use one or more parameter-tuning methods to
arrive at a solution that befits the application.
6 <Anonymized for review purposes.>
filters for vehicle navigation [20], state-space models [16], and
trajectory path uncertainty models [34].
The knowledge mined from individual trajectories says little of
the macroscopic patterns driving such trajectory observations. Rather
than focusing on the statistics of individual trajectories, collective models preprocess the trajectory data to extract characteristics
across a swath of trajectories. Such preprocessing is desirable,
as it discards highly autocorrelated data representing redundant
information in favor of aggregating trajectory positions into observations of significance. Examples of this preprocessing scheme
include extracting “semantically enriched” points that intersect
known geographical regions [1], aggregating trajectory positions as
stay points using supplied spatial and/or temporal thresholds [43],
or processing trajectory data into groups using some convex combination of spatial, temporal, and semantic similarity kernels [25, 39].
In a collective model, we refer to the ‘important’ or semantically meaningful data samples aggregated from trajectory points
as exemplar positions, or simply, exemplars:
Definition 1. Exemplar
Consider a sample of n discrete points X n ⊂ Rd that constitute a
trajectory T . Define an aggregation function α : P(X n ) 7→ Rd that
maps any subset of points (e.g. a trajectory segment) in T to a set of
exemplar positions Σ ⊂ Rd .
The aggregation function of choice depends on the intent of the
analysis. For example, consider an urban environmental study
that defines α as a mapping of some isolated trajectory segment
{pk , pk +1 , ..., pk +l } to the mean coordinate of the segment if the
speed of the object traveling from pk to pk+l exceeds a certain threshold. Groups of these exemplar positions may determine "high-emission" zones in a city [4]. Alternatively, if the traffic is made of
pedestrians, such groups may represent tourist attraction areas, the
popularity of which are useful for LBS applications [44]. It is not
difficult to find this type of trajectory preprocessing in geospatial
applications, and the grouping of them is foundational to countless
tasks in trajectory mining [1, 24, 39, 43–45]. We generalize this
preprocessing step by referring to it as “exemplar extraction.”
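To make the notion of an aggregation function concrete, the sketch below gives one possible α in Python: it maps a trajectory segment to its mean coordinate whenever the segment's average speed exceeds a threshold, in the spirit of the high-emission example above. The record layout, the function name and the threshold are illustrative assumptions, not part of the framework.

```python
from math import hypot

def segment_exemplar(points, speed_threshold):
    """points: chronologically ordered (t, x, y) samples of one trajectory segment.
    Returns the mean (x, y) coordinate if the segment's average speed exceeds the
    threshold, otherwise None (the segment contributes no exemplar)."""
    if len(points) < 2:
        return None
    dist = sum(hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(points, points[1:]))
    duration = points[-1][0] - points[0][0]
    if duration <= 0 or dist / duration <= speed_threshold:
        return None
    xs = [x for _, x, _ in points]
    ys = [y for _, _, y in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```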
An important aspect of exemplar extraction is to choose an
aggregation function that befits the intent of the analyst and thus
satisfies a study’s interpretation of “interesting.” This is inevitably
application-specific, and the proposed framework is agnostic to the
specific form of aggregation used, thus it is irrelevant to bestow a
particular interpretation of what “interesting” means. We consider
a more concrete and practical definition using a popular type of
aggregation in Section 3.
2.2   Defining a point of interest
Under the premise that exemplars represent meaningful aggregations of observations from a trajectory data source, it is natural
to define a POI as a region of exemplars. We seek a definition of
such regions with a statistical (rather than heuristic) foundation
as a means of reflecting the naturally occurring structure within
the data. Towards this end, we define a POI as a contiguous, high
density region of exemplars.
To formalize this definition, we follow the notation of Chaudhuri
et. al [10]. Let X be a subset of Rd and define a path as a function
P : [0, 1] → S where S ⊂ X. Also denote the equivalence relation C
Figure 1: Illustrating the cluster tree hierarchy and its interpretation of POIs. Consider an estimated density (right panel) of
exemplar positions extracted from trajectories in a geospace (middle, bottom panel). A POI is a geospatial region inhabited
by exemplar positions at some density threshold λ, with the number of the POIs extracted depend on this scale parameter
setting (left panel). Higher λ limits a POI to being specific and small, and could cause POIs to be manifested by random noise
or be overfitted to a particular set of observations. Low λ defines POIs as very broad areas of low exemplar position density.
The cluster tree hierarchy (left panel) summarizes the set of exemplar positions representing a POI at every density threshold,
thus capturing the entire collection of POIs over a common area (middle panel, upper layers).
as connected, where xCy iff P(0) = x and P(1) = y. Then C partitions
S into connected components or clusters. Each component represents
an area of high density and is called a high density cluster:
Definition 2. High density clusters
For a density function f on Rd , consider a partitioning:
{x : f (x) ≥ λ}, for some λ > 0
(2)
where λ is called the level, or high density threshold, parameter. Then
all maximally connected components in this set are high density
clusters at density level λ.
We relate this formal definition to the trajectory mining domain
with the following definition of a point of interest, defined over a
extracted set of exemplars.
Definition 3. Point of interest
Given a set of m exemplars {ε 1 , ε 2 , . . . , εm } ∈ Σ and a fixed “scale”
or resolution λ, each high density cluster of such exemplars forms a
point of interest at the density level λ.
The sets of high density clusters across all values of λ form a hierarchy often referred to as the cluster tree of the density f [10,
11]. A hierarchical definition of locations is common [44] and
matches the intuitive interpretation of a POI. For example, not only
may a particular restaurant in a mall food court be a POI, but the
food court itself may also be considered a POI, as well as entire
mall may be yet another POI. The cluster tree conceptualization
formalizes a POI as a maximally connected set of exemplars falling
along a higher density area, implying such areas are ‘significant,’
and that such connected exemplars may be related.
A visualization of hierarchical POIs and a dendrogram of the
corresponding cluster tree is provided in Figure 1. The middle
figure demonstrates a high-level view of what a set of trajectories
might look like, with the colored dots in the left and middle figures
representing exemplars. The right figure demonstrates a density
estimate of the positions of these exemplars. That is, when these
exemplars are very close to each other, they’re said to have a high
density and are thought to be related, constituting a POI—the scale
of the density depends on the analysis at hand. A sufficiently low
density threshold λ 0 will designate every exemplar as one POI.
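The level-set view of Definitions 2 and 3 can be illustrated on a finite sample with the following Python sketch: it estimates a density with a Gaussian KDE, keeps the exemplars whose estimated density is at least λ, and groups them into connected components of a radius graph as a proxy for path-connectedness. The bandwidth choice, the connectivity radius and the function name are our own illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

def high_density_clusters(exemplars, lam, radius):
    """exemplars: (m, d) array. Return a list of index arrays, one per high-density
    cluster at level lam, using a radius graph as a proxy for connectedness."""
    kde = gaussian_kde(exemplars.T)                  # default (Scott) bandwidth
    keep = np.where(kde(exemplars.T) >= lam)[0]      # superlevel set {x : f_n(x) >= lam}
    if keep.size == 0:
        return []
    adj = squareform(pdist(exemplars[keep])) <= radius
    n_comp, labels = connected_components(adj, directed=False)
    return [keep[labels == c] for c in range(n_comp)]
```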
From this definition, it may seem that any arbitrary density
estimator may be used to find high-density clusters: simply estimate
the density of every point by kernel density estimation (KDE), and
then iterate through all possible values of λ that create distinct
high-density clusters. Yet not every estimation will produce the
same hierarchy—different kernels (and kernel bandwidths) may
result in a completely different hierarchy of high-density clusters,
and by extension, a different set of POIs. From the cluster tree
perspective, the ideal kernel fn is one that is uniformly consistent
(i.e. supx | fn (x) − f (x)| → 0 as n → ∞) from a given sample X n .
In this case, a model could be fitted with the appropriate kernel and bandwidth parameter, and the KDE would furnish a continuous surface from which a cluster tree and its high-density clusters can
be derived [11]. The main issue is that the set of all high-density
clusters is not easy to compute for typical density estimates of
f [10] and generally require a significant amount of memory to
store. This computational inefficiency limits usability for large
trajectory datasets, often observed over wide geographical areas
and over long periods of time. From the applied perspective, many
state-of-the-art approaches find POIs by variants of hierarchical
clustering to find groups of exemplars. This has proved useful for
application-specific problems [43–45] but they are largely heuristic,
i.e. it is common for most clustering algorithms to have unstated or
unknown statistical properties, precluding the possibility of formal
inference [14]. The framework we introduce therefore examines
density-based clustering methods as they are designed to infer a
cluster tree [11] without facing the computational hurdles of KDEs.
A desirable property of any finite-sample density estimator is
some notion of consistency.7 In 1981, Hartigan established a reasonable definition [19], often referred to as Hartigan consistency:
Definition 4. Hartigan Consistency
Let Cfn be the set of all high-density clusters from the cluster tree. For
any sets A, A0 ⊂ X, let An (respectively, An0 ) denote the smallest set
of Cfn containing A ∩ X n (respectively A0 ∩ X n ). Cfn is consistent if,
whenever A and A0 are different connected components of {x : f (x) ≥
λ} (for some λ > 0), P(An is disjoint from An0 ) → 1 as n → ∞.
This consistency definition essentially requires that two disjoint
high-density clusters from the unknown, population density (A and
A0 ) will also be disjoint components in a given empirical cluster tree
(An and An0 ), given enough samples (n). The proposed framework
for POI discovery is developed and implemented from the first
computationally tractable and provably consistent algorithm that
satisfies Hartigan consistency, as analyzed by Chaudhuri et. al [10],
to be discussed in the next section. Having a nonparametric model
satisfying this notion of consistency is important, as it transforms
the unsupervised problem of POI discovery into a formal statistical
estimation problem, not only enabling analysis driven by data, but
requiring minimal assumptions regarding the nature of the data.
Such a relation enables methods of formal statistical inference,
allowing one to quantify uncertainty, i.e. to create hypothesis tests
to discern “true” POIs as opposed to “false” POIs resulting from
random noise or artifacts of low sample sizes, or to create notions
of confidence in estimation [11].
Consistent cluster tree estimation: We next motivate a recent cluster tree estimator and discuss its relationship and applicability to POI discovery for the proposed framework. Recall that
an empirical estimate of the cluster tree, applied over exemplars,
represents a hierarchy of POIs. Viewed from this perspective, what
we propose can be seen as an extension of Chaudhuri et. al’s work
on the cluster tree [10] to a trajectory mining context.
Consider using Single-Linkage (SL) clustering, an agglomerative
scheme that creates a hierarchical representation of clusters using
the minimum pairwise distance D between all points, as a tool
for clustering exemplars. Beginning with every exemplar x as a
singleton, SL iteratively merges exemplars into clusters according
to the linkage function:
    D(x_i, x_j) = min_{x_i, x_j ∈ X} d(x_i, x_j)
SL clustering is often criticized due to its tendency to create ‘excessive chaining’, wherein two clusters which may have been seen
as generally unrelated are amalgamated by chance at a distance
threshold that does not reflect the true dissimilarity between the resulting clusters. Hartigan proved SL is a consistent estimator of the
cluster tree for densities in R (for d = 1) and is not consistent for any
d > 1 [19], implying that any SL cluster that contains all the sample
points in A will also contain nearly all sample points in A0 , in probability as n → ∞.8 This is reflected in the geospatial sense as well:
7 Recall that an estimator Θ̂n whose value θ̂ is a point estimate of θ is consistent if, as more samples are collected (n → ∞), Θ̂n converges in probability to the true value of the parameter, i.e. plim_{n→∞}(Θ̂n) = θ.
8 The condition is related to the "thin bridge" between any two population modes. Fractional consistency was shown for SL if, for any pair A and A′, the ratio of inf{f(x) : x ∈ A ∪ A′} to sup{inf{f(x) : x ∈ P} : paths P from A to A′} is sufficiently large.
Figure 2: SL excessive chaining example. The bottom panel
denotes a possible clustering using SL when pedestrians
were found to stop between buildings.
consider the case where exemplars represent aggregated ‘stops’
within a set of trajectories, a case that we will also consider later
in Section 3. If an area is observed long enough, such exemplars
should naturally form an area high density in areas where people
stop frequently, e.g. within buildings. In such cases, it may be useful
to categorize exemplars within their respective POIs (this is done in
supervised way applications extract semantic information, see [1]
for example). However, it’s also possible that there exist a few stops
just outside of such buildings, which SL has a tendency to chain
together. An example of this is shown in Figure 2. This discovery
motivated efforts to modify SL not only to reduce this chaining
to make SL more ‘robust’, but also to achieve (at least) Hartigan
consistency for d = 2 and beyond. The first provably consistent
estimator, which we consider in this effort, is a generalization of
SL referred to as ‘Robust Single Linkage’ (RSL) [10].
Robust Single Linkage: Let X be a subset of R^d. Let ‖·‖ denote the ℓ2 norm and let B(x, r) be a closed ball of radius r around the point x. The RSL algorithm is given in the listing below:
Robust Single Linkage Algorithm
(1) For each x i set r k (x i ) = inf {r : B(x i , r ) contains k data
points}.
(2) As r grows from 0 to ∞:
(a) Construct a graph G r with nodes {x i : r k (x i ) ≤ r }.
Include edge (x_i, x_j) if ‖x_i − x_j‖ ≤ αr
(b) Let Cfn (r ) be the connected components of G r .
The RSL algorithm has two free parameters which need to be set:
α and k. SL is equivalent to RSL with the setting α = 1, k = 2.
Whereas SL is equivalent to (and can be efficiently computed by)
the minimum spanning tree (MST) computed over all pairwise
distances, RSL scales these distances by a constant factor α, and
only reduces to the MST if the components are restricted from
connecting (satisfying {x i : r k (x i ) ≤ r }) within the MST computation. Chaudhuri et al. found that RSL is Hartigan consistent and established finite-sample rates of convergence for all 1 ≤ α ≤ 2, with the optimal rate of convergence attained for α ≥ √2 [10].
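The listing above can be turned into a direct, if unoptimised, implementation. The following sketch is our own illustration of the RSL procedure, not the code used in the experiments; the choice of k, α, and the radius grid are assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def rsl_components(X, k=5, alpha=np.sqrt(2), radii=None):
    """Connected components C_n(r) of the RSL graph G_r for a list of radii.
    A rough sketch of the listed algorithm, not an optimised implementation."""
    D = cdist(X, X)
    # r_k(x_i): radius of the smallest ball around x_i containing k points.
    r_k = np.sort(D, axis=1)[:, k - 1]
    if radii is None:
        radii = np.unique(D)[::10]        # coarse grid over observed distances
    levels = []
    for r in radii:
        active = r_k <= r                              # vertices present at level r
        adj = (D <= alpha * r) & np.outer(active, active)
        np.fill_diagonal(adj, False)
        n_comp, labels = connected_components(csr_matrix(adj), directed=False)
        labels = np.where(active, labels, -1)          # inactive points carry no label
        levels.append((r, labels))
    return levels

X = np.random.RandomState(1).uniform(0, 100, size=(200, 2))
hierarchy = rsl_components(X, k=10)
```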
2.3 Finding intrinsic points of interest
Using a consistent cluster tree estimator, such as RSL, on a set of
exemplars creates a hierarchical representation of POIs. However,
a nested set of multiple solutions is not always desirable, and a
‘flat’ solution (where each point is assigned a single label) may
be preferred. A traditional approach in hierarchical clustering is to “cut” the empirical cluster tree at a given density threshold value λ, yielding a set of high-density clusters Ĉn(λ) = {C1, C2, . . . , Cm} that form m POIs, a possible ‘flat’ solution. However, the choice of λ forces all POIs to be of the same scale and requires the user to know a priori which granularity to choose, affecting the size and kinds
of POIs discovered. For example, a small λ may define shops in a
mall as POIs, while a larger λ may define the mall itself as a POI. It
may not be known ahead of time what granularity level is relevant.
Furthermore, it is reasonable to expect that relevant POIs exist at
multiple levels of granularity, such that a sprawling city park and a
small restaurant could both constitute a POI. Thus, it would be useful
to have some sensible notion of “cluster quality” that can be used
(and optimized) as an objective function to discover POIs that are
not dependent on the analyst’s choice of λ, and are strongly intrinsic
to the geospace itself, i.e. are intrinsic POIs.
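For intuition, the effect of a fixed λ can be mimicked with a distance cut of a single-linkage hierarchy (a rough stand-in for a density level; the data and the two thresholds below are hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

exemplars = np.random.RandomState(4).uniform(0, 100, size=(300, 2))
Z = linkage(exemplars, method='single')

# Two different global cuts of the same hierarchy: a fine cut yields many
# small POIs ("shops in a mall"), a coarse cut yields few large ones ("the mall").
fine   = fcluster(Z, t=3.0,  criterion='distance')
coarse = fcluster(Z, t=25.0, criterion='distance')
print(len(set(fine)), len(set(coarse)))
```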
To capture POIs of any scale and hence satisfy our notion of an intrinsic POI, we first recall that high-density clusters are contiguous,
relatively dense areas of the data space, separated by contiguous
and relatively non-dense areas, defined over a working definition of
density over a set of exemplars. From a statistical point of view, we
can think of a high-density cluster as a set of points with high density around some “neighborhood” or volume of the support. Müller
et al. quantify this using a functional called excess of mass [26]:
Definition 5. Excess of Mass
For a Ci ∈ Ĉn(λ) for some value of λ > 0, the excess of mass of Ci is given by:

E(Ci) = ∫_{x ∈ Ci} ( f(x) − λmin(Ci) ) dx        (3)
where λmin(Ci) represents the lowest density level at which Ci appears. Initially, this measure seems like a reasonable definition of the “quality”
of a clustering within the cluster tree estimate. Considering the
definition of a high-density cluster from Equation 2, where a cluster
exists along a mode or local maximum of the underlying density, it’s far too likely that a finite-sample estimation may empirically find a mode at a given point x0 if the data is sparse, allowing a point x0 with arbitrarily low associated probability to be classified as a cluster. A more
interesting result would be to associate a high-density cluster with
a region that exhibits relatively high probability over a neighborhood. See Müller et al. for a visualization, along with a more in-depth description of this functional [26]. However, as Campello et al.
remark, this measure exhibits monotonic behavior in any direction
varying the density-level λ in the hierarchy, and instead propose
an alternative, local measure of cluster quality [8]:
Definition 6. Relative Excess of Mass
For a Ci ∈ Ĉn(λ) for some value of λ > 0, the relative excess of mass of Ci is given by:

E_R(Ci) = ∫_{x ∈ Ci} ( λmax(x, Ci) − λmin(Ci) ) dx        (4)
where λmax (x, Ci ) = min{ f (x), λmax (Ci )} is the density level beyond which x is no longer part of Ci , and λmax (Ci ) is the highest
density beyond which Ci either becomes disconnected (creating
separate components) or disappears (creating singleton clusters).
It is important to note that relative excess of mass is defined in
terms of λ values associated with a specific cluster, as opposed to
a specific clustering. This implies that an ‘optimal’ clustering with
respect to the relative excess of mass estimate may not occur at a
fixed, global density threshold, but rather as a result of several local
density thresholds applied to the hierarchy. Intuitively, if a given
cluster Ci contains many points that have high density relative to
λmin (Ci ), such a cluster will exist across several thresholds of λ and
is thus robust to fluctuations in the scale of analysis. For this reason,
the relative excess of mass can be thought of as a measure of cluster
‘stability’ across different density levels, which we posit reflects an
intrinsic POI that is innately defined by the dataset independent of
density level. Such intrinsic POIs are thus defined as follows: let δi
be an indicator equal to 1 if cluster Ci ∈ Ĉn represents an intrinsic
POI and 0 otherwise. Assign values to these indicators such that
the following is maximized:
maximize_{δ2, ..., δm}   J = Σ_{i=2}^{m} δi E_R(Ci)

subject to   δi ∈ {0, 1}, i ∈ {2, . . . , m}, and exactly one δ(·) = 1 per disjoint branch        (5)
where the “per disjoint branch” constraint means that the indicator
function δ (·) equals 1 exactly once for all clusters in each path
from a leaf node to the root of the cluster tree. The optimization
of this objective function is beyond the scope of this paper; we
refer to Campello et al.’s cluster extraction method for general
cluster hierarchies [7] to solve this optimization, as it was developed
alongside an estimator very similar to RSL, is capable of producing
an optimal result at several density levels, and accounts for the
density thresholds at which points become noise (fall along densities
below a given threshold).
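To make the “per disjoint branch” constraint concrete, the following toy sketch applies the bottom-up selection rule used by Campello et al. to a hypothetical cluster tree with made-up stability values; the tree and the numbers are purely illustrative:

```python
# A schematic sketch of the stability-maximising extraction (Eq. 5).
def extract_flat(tree, stability, root):
    """tree: dict node -> list of child nodes; stability: dict node -> E_R(C).
    Returns the set of nodes with delta = 1 (at most one per leaf-to-root path).
    Note: in Eq. 5 the root is excluded; here it is only kept if it dominates."""
    selected, best = {}, {}

    def visit(node):
        children = tree.get(node, [])
        if not children:                     # leaf: provisionally selected
            best[node] = stability[node]
            selected[node] = True
            return best[node]
        child_sum = sum(visit(c) for c in children)
        if stability[node] >= child_sum:     # keep the parent, discard subtree
            selected[node] = True
            for c in children:
                unselect(c)
            best[node] = stability[node]
        else:                                # keep the children's selection
            selected[node] = False
            best[node] = child_sum
        return best[node]

    def unselect(node):
        selected[node] = False
        for c in tree.get(node, []):
            unselect(c)

    visit(root)
    return {n for n, s in selected.items() if s}

# Hypothetical toy hierarchy: root 1 splits into 2 and 3; 3 splits into 4 and 5.
tree = {1: [2, 3], 3: [4, 5]}
stability = {1: 0.4, 2: 0.9, 3: 0.5, 4: 0.3, 5: 0.35}
print(extract_flat(tree, stability, root=1))   # -> {2, 4, 5} for these values
```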
3 Experiments and Discussion
We next evaluate the proposed framework for intrinsic POI discovery. Because intrinsic POIs do not rely on gazetteers and may
manifest themselves in unknown locations, evaluation on real data
validated against “ground truth” external knowledge (such as imported location data from sources such as OpenStreetMap or Google
Places) is not feasible. A common approach to evaluate clusterings
when ground truth is absent is to use an internal cluster validity index (CVI). CVIs include common indices like the Silhouette
score, the Dunn Index, and the Calinski-Harabasz criterion (see
Arbelaitz [3] for an overview of these techniques and references
therein). Recent work recommends validation using multiple CVIs,
as they each score different aspects of a clustering such as the ratio of inter- to intra-cluster distances, sum of squares distance to
centroid, or graph-theory scores based on similarity [3]. We do not
believe scores are informative for intrinsic POI evaluation, as most
of these CVIs operate on unrealistic assumptions (e.g. symmetry or
convexity of cluster shape, a notion of minimal variance, the existence of a centroid or medoid, etc.). Contrary to these widespread
concepts, we do not assume that a cluster of exemplars representing
an intrinsic POI will maintain some hyper-spherical or -elliptical
shape. Indeed, there are a number of features within a geographical
area that may be considered POIs, yet inevitably exhibit arbitrary
shapes (e.g. buildings, parks, gathering areas, etc.) and manifest at
varying densities (e.g. a busy intersection that is small and concentrated in exemplar density vs. a parking lot that is large and more
uniform in density). Following the advice of Guyon and Luxburg et
al. [18, 37], we evaluate the efficacy of the framework in the context
of its end-use. We use an external validation where “truth” can be
defined a priori over simulated data, enabling a direct evaluation of
intrinsic POIs against “truly interesting” regions, while ensuring
the latent patterns in the generated data mimic the real geospatial
dynamics of cars and pedestrians over a region.
3.1 Generating synthetic data
To generate synthetic data for evaluation, we turn to the Simulation of Urban MObility (SUMO) software [23]. SUMO is an open
source traffic simulation system capable of generating trajectories
of many objects of multiple modalities (e.g. car, truck, person, plane,
etc.). Given a shapefile that defines avenues for travel (e.g., a road
network, a map of footpaths within a university campus, or the
floor plan of a mall or large building), SUMO is able to generate
trajectories following the avenues provided. Default parameter
settings generate traffic and trajectories in ways that satisfy their measured physical properties and have been shown to be incredibly accurate [23]. We use SUMO to generate two simulations of both
pedestrian and vehicular traffic under different geographical areas:
an urban region having a mixture of vehicle and pedestrian traffic (the area surrounding The Ohio State University (OSU)), and
a suburban area where pedestrian traffic is more prominent (the
area surrounding Wright State University (WSU)). Details about
the simulation, the simulation data used in this paper, and the code
that produced the resulting evaluation are all publicly available and
reproducible online.9 The (RSL) cluster tree framework itself is part
of a larger open source effort by the author.10
Simulation configuration: SUMO requires every object to
have a trip defined by departure and destination nodes, which SUMO
refers to as junctions. Junctions are connected by edges representing
a possible travel path. Given a file containing the trip definitions of
every object, SUMO dynamically generates routes, or sequences of
edges the object travels along to get from departure junction A to its
destination junction B. We leave nearly all simulation parameters at
their default settings, only modifying simulation length and arrival
parameters (binomially distributed arrivals) to generate pedestrian
and vehicle demand.
Because pedestrian traffic within unrestricted and indoor areas
may constitute intrinsic POIs in a realistic setting, and because
SUMO can generate only outdoor pedestrian traffic, we extended
SUMO to simulate indoor pedestrian traffic as well. Figure 3 illustrates how this extension interplays with vehicular traffic generated
by SUMO.11 Shapefiles denoting the location of buildings are first
loaded into SUMO (the peach colored regions in Figure 3 inlet).
Then, within the shapefile, a random number of pedestrian-only
junctions are generated within the building and registered to nearby
pedestrian-only edges (such as sidewalks). If a generated track is
labeled as a pedestrian and its trip includes a junction contained
within the building region (Figure 3; lower right inlet), the pedestrian undergoes a random walk within the junctions generated in
the building. This random walk is emulated by choosing a random
ordered subset of the generated junctions for a random amount of
9 See the following for simulation details: <Anonymized for review purposes.>
10 See the following package: <Anonymized for review purposes.>
11 See <Anonymized for review purposes.>
Figure 3: Top-level view of extending SUMO to support indoor pedestrian traffic. Shapefiles defining buildings are loaded into SUMO and registered as junctions. If a pedestrian track visits an attached junction during a trip, the simulator chooses an ordered random set of junctions to follow within the building, exiting after a random period of time.
time. The pedestrian visits these interior junctions and then travels
to an ‘exit junction’ attached to the building polygon, continuing
along the original (outdoor) route generated by SUMO.
Defining truth: Recall that intrinsic POIs are inferred by exemplars representing the specific mobility pattern of interest. With
both vehicular and more realistic pedestrian demand generated, the
next step in data generation is to define an aggregation function
to extract meaningful exemplars. To give a concrete use-case of
the proposed framework, we align our experiment with much of
the applied literature related to this topic [28, 42, 45] and extract
exemplars representing the “stay points” of an object. A stay point
is a position where objects have stopped or significantly slowed
down. Extracting such points from simulated SUMO data is trivial,
as the true speed of any traveling object is known at any given
time. For pedestrian traffic, we extract trajectory points where
pedestrians stopped moving. For vehicular traffic, we extract either a) the points where the vehicles stopped moving or b) the
slowest point in a vehicle braking sequence using SUMOs exported
braking signals, whichever is available. From these stay points
(exemplars), we next establish a mapping between each exemplar
and its presence within a “true” intrinsic POI, allowing external
validation. Since exemplars represent objects that stopped moving, a
natural definition of an intrinsic POI is an assignment defined by
the mechanism causing such objects to stop. Specifically, we define
a building that pedestrians stop within as a “true” intrinsic POI,
as this is a very natural and useful grouping. We follow a similar
pattern for vehicular traffic, assigning exemplars a common label if
stopped at identical intersections, stop signs or stop lights, or other
junctions. This mechanistic assignment of creating “true” intrinsic
POIs has the benefit of not only being tractable (in the sense that
SUMO provides this information directly), but also being semantically meaningful in the sense that the mechanisms encouraging
objects to stop moving are intrinsic to the geospace.
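As an illustration of the aggregation step, a simple stay-point extractor over a simulated track might look as follows; the column layout, sampling rate, and speed threshold are assumptions rather than the exact rules applied to the SUMO output:

```python
import numpy as np

def stay_points(track, speed_threshold=0.5):
    """Extract exemplar 'stay points' as the positions where the object's
    speed falls below a threshold (m/s). A simplified sketch; columns are
    assumed to be (t, x, y)."""
    t, x, y = track[:, 0], track[:, 1], track[:, 2]
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / np.maximum(dt, 1e-9)
    stopped = speed < speed_threshold
    return track[1:][stopped, 1:]   # (x, y) positions of the slow samples

# Hypothetical track: 1 Hz samples, columns (time, x, y) in metres.
track = np.column_stack([np.arange(100.0),
                         np.cumsum(np.random.RandomState(2).rand(100)),
                         np.cumsum(np.random.RandomState(3).rand(100))])
exemplars = stay_points(track)
```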
3.2 Experimental Design
To evaluate the fidelity of the POIs extracted by the proposed framework under multiple settings, we run SUMO simulations over the
OSU and WSU geospaces with parameter settings reflecting differences between the two regions. These settings are shown in Table 1.

Table 1: SUMO Simulation Parameters
Region   # Buildings   # Veh.   # Ped.   Region size   Sim. Length
OSU      70            2,933    2,935    342 km²       8 hours
WSU      26            2,050    4,327    461 km²       6 hours

The OSU geospace covers a smaller area, has an equal
mix of vehicles and pedestrians, and nearly three times as many
buildings. The OSU geospace also has a larger number of roadways
and traffic intersections where intrinsic POIs involving vehicles
may materialize. Being within the main campus, the WSU geospace
has a larger proportion of pedestrian traffic, with few roadways for
vehicles to traverse and smaller number of buildings pedestrians
may visit. Figures 4(a) and 5(a) show the POI labeling this SUMO-generated truth creates for the OSU and WSU campus areas, respectively. Qualitatively, these clusters appear to be reasonable labels of intrinsic POIs. For example, the clusters representing “true” POIs across OSU in Figure 4(a) include buildings
surrounding the OSU oval quad, particular locations on the ring
road around the quad (which tend to be busy OSU intersections for
both vehicles and pedestrians), and parking lots around the OSU
recreation buildings west of the oval to represent POIs. Across WSU
in Figure 5(a), the truth POIs represent each of the major buildings
around the campus, with particularly complex, separate areas of
movement in WSU’s large student union (the yellow points in the
large building in the lower left part of the figure).
Evaluation Measures: As discussed at the beginning of this
section, the unsupervised nature of intrinsic POI discovery makes
it difficult to carry out a meaningful evaluation of POI discovery
methods using internal (not requiring ‘truth’ labels) validation measures. Instead, we consider a multifaceted approach: an external,
quantitative evaluation of whether the intrinsic POIs discovered
align with SUMO-generated POIs using the well-known Adjusted
Rand Index (ARI) [21], and qualitative evaluation of the quality of
the intrinsic POIs our approach unearths as compared to the “true”
intrinsic POIs as defined above. The Rand-family of indices were
chosen due to their transparency and simplicity—although, whereas
the traditional RI measures the proportion of pairwise agreements
between two partitions, the ARI also adjusts the score based on the
expected value of agreements under the null hypothesis that the
agreements were completely random, and thus is what we report.
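Computing the ARI itself is straightforward with scikit-learn; the two label vectors below are hypothetical stand-ins for the SUMO “truth” assignment and a clustering result:

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical label vectors over the same ordering of exemplars.
labels_true = [0, 0, 0, 1, 1, 2, 2, 2, 2]   # SUMO-derived "true" POI of each exemplar
labels_pred = [1, 1, 1, 0, 0, 2, 2, 2, 0]   # POI label assigned by a clustering algorithm

# ARI = 0 in expectation for a random labelling, 1 for a perfect match.
print(adjusted_rand_score(labels_true, labels_pred))
```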
Algorithms Compared: We further compare the fidelity of
the intrinsic POIs extracted by the proposed framework against
other clustering algorithms commonly used for POI discovery from
trajectories. We either downloaded the implementation of, or implemented ourselves, a number of these algorithms for comparison.
Aside from RSL, the selected methods include the well-known density-based algorithms DBSCAN [12] and OPTICS [2], the widespread hierarchical algorithms single linkage (SL), average linkage (AL), and Ward's criterion (WL) [27], along with the partitioning-like algorithms k-means and CLARA [22]. These algorithms were chosen due to their relevance to this problem, wide-spread availability, and known success in the clustering community.
Parameter Settings: Clustering algorithms generally require
parameter-tuning in order to ‘fit’ to a given data set, but the number
and semantics of these parameters often changes with the algorithm
used, leaving comparisons between parameter settings difficult.
Although most hierarchical algorithms carry no free parameters
to create a (hierarchical) set of solutions, they do require either a
threshold value (h) or the exact number of clusters to extract (k)
to be specified to extract a ‘flat’ clustering. Similarly, k-means and
CLARA also require k to be specified a priori. Because the k parameter has the same interpretation in multiple algorithms, we will
use k to refer to the number of clusters extracted. Density-based algorithms have multiple parameters whose interpretations differ from those of the aforementioned algorithms. For example, DBSCAN
requires a minimum cluster size parameter minPts and a distance
(or ‘scale’) threshold ϵ to be set. OPTICS, often cited as an extension to DBSCAN, is an ordering algorithm that—given a parameter setting for minPts—can be used to extract either a flat, DBSCAN-like cluster extraction or a simplified hierarchy, using either a distance threshold ϵ′ or a reachability-based threshold ξ, respectively.
The DBSCAN-like cluster extraction is reported here. RSL requires
the setting of α and k, the former relating to scaling the connection
radii used to connect components, and the latter to the saliency of
cluster estimates. Note that in RSL, k is more similar to the minPts
parameter in that it is a minimum neighborhood parameter. The
number of clusters is automatically determined by optimizing the relative excess of mass functional defined in Section 2.
Each algorithm reflects a large set of possible solutions over
its parameter setting. Choosing a single parameter setting for
evaluation would represent a source of possible bias. Rather, we
employ a more comprehensive approach by comparing a wide
range of parameter settings for each algorithm. To define these
ranges, let seq(x, y, s) denote the sequential range operator, stepping by s through the values from x to y. For example,
seq(1, n, 1) = {1, 2, . . . , n}, and seq(1, n, i) = {1, 1 + i, . . . , n − i, n}.
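A small sketch of this operator (our own helper written for illustration; nt = 50 is an assumed number of "true" POIs):

```python
import numpy as np

def seq(x, y, s):
    # Values from x to y in steps of s, always including both endpoints.
    return np.unique(np.append(np.arange(x, y, s), y))

k_range  = seq(2, 50, 1)             # flat-cluster counts for SL/AL/WL, k-means, CLARA
minpts_q = seq(0.10, 0.95, 0.025)    # quantiles used to choose minPts
eps_q    = seq(0.01, 0.20, 0.01)     # quantiles of pairwise distances for the scale thresholds
```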
For the hierarchical clustering algorithms (SL, AL, and WL) the
number of flat clusters extracted k is varied in the range seq(2, nt , 1)
where nt is the number of “true” POIs assigned by SUMO. We see
this as a reasonable strategy, as it gives a better view of how multiple levels extracted from the hierarchy matched the data set as well
as how well the merge criterion (or linkage function) collectively
captures the true POIs in the geospace. We use the same range
to vary k for the k-means and CLARA algorithms. The density
based methods DBSCAN and OPTICS are evaluated by first varying
minPts, and then (for each value of minPts) by varying the scale parameters ϵ and ϵ 0 , respectively. Recall minPts relates to a minimum
neighborhood value that constitutes a cluster. Thus, and to allow
the testing to be tractable, we set minPts to reflect the possible sizes
of the POIs, along the quantiles qnt = seq(0.10, 0.95, 0.025) corresponding to the number of exemplars per POI in the SUMO
“truth” data. The distance thresholds ϵ for DBSCAN and ϵ′ for OPTICS
are also varied along the quantiles seq(0.01, 0.20, 0.01) of the pairwise distances computed over the data set. Since all density-based
methods mark points that fall in areas of the data set not sufficiently
dense as ‘noise’ according to a scale parameter—leading to severe
overfitting if not guided with a measure like stability—all density-based solutions were deemed valid only if at least 75% of the data
is classified with a non-noise label. Finally, the RSL also contains
two parameters, a k value and an α parameter. We use Chaudhuri et al.'s analysis to determine how to set these. RSL was shown to have optimal rates of convergence when α ≥ √2, so we leave it at that constant value (√2). Similarly, the rate only holds for k at least as large as d log(n), where d is the dimensionality of the data set (d = 2 in this case). We vary k through this small set of values in a similar fashion as was performed for DBSCAN and OPTICS (k ∈ qnt for all k ≥ d log(n)). In total, 2,196 and 1,995 cluster configurations were performed for the OSU and WSU simulations respectively, totalling 4,191 reported configurations.

Figure 4: Intrinsic POI comparison, OSU. (a) “True” POIs; (b) inferred intrinsic POIs.
Figure 5: Intrinsic POI comparison, WSU. (c) “True” POIs; (d) inferred intrinsic POIs.
3.3 Validation Testing and Discussion
Qualitative comparison to truth: Figures 4 and 5 compare
the intrinsic POIs discovered by our framework against the simulation's “true” POIs. Recall that points with low density are discarded
as noise (not shown). Direct comparison of the true POIs defined by
the simulation show clear similarities. Over the OSU simulation in
Figure 4(b), the framework recovers intrinsic POIs within buildings
no matter their shape, density, or closeness to other buildings. It
also recovers intrinsic POIs over parking lots and street intersections around the OSU oval. Some buildings are decomposed into
a collection of individual intrinsic POIs. For example, the easternmost large campus building by the northeast corner of the oval
contains three separate intrinsic POIs: one at its entrance by the
road, another in the center of the building, and a third at its back entrance. Although these labels may not match what SUMO assigned,
they are in some sense more natural, i.e. it’s quite possible for large
buildings to have dense, isolated areas of people movement.
Looking at the intrinsic POIs over the WSU dataset in Figure 5(b),
we find each building in general is recovered as an intrinsic POI
and align in shape compared to the shape of the “true” POIs from
Figure 5(a). We also note that the framework determines that some movement within buildings, covering very small areas, was not significant enough to constitute an intrinsic POI. Large buildings
also showed further decomposition like the OSU simulation. For
example, in the WSU student union (large building in the lower left
corner of the figure) the framework defines intrinsic POI’s at the
center and two back exits from the building.
Quantitative comparison to other approaches: The proposed intrinsic POI framework measured ARI scores of 0.966 on
the OSU data set and 0.922 for the WSU data set. Note that this
is not the maximum ARI of any RSL solution, but the ARI of the
solution found using the highest predefined notion of stability, determined completely without any knowledge of the surrounding
geographical area. RSL performed consistently in terms of having low variability compared to the other algorithms, with overall high similarity to the semantically driven SUMO-assigned locations. Figure 6
shows the distribution of the ARI for the algorithms we compared
our method against, with the parameter settings discussed in Section 3.2. The orange line corresponds to the ARI of the proposed framework, which compares favorably with the best possible settings of other algorithms. Note that although DBSCAN (like others) performed well with very specific configurations of minPts and ϵ, the settings of these parameters are often not very intuitive in unsupervised scenarios where the truth is unknown (and thus external
measures like ARI cannot be computed). Of the hierarchical algorithms, we see the impact of the linkage criterion used and how
they are influenced by the ‘shape’ of the true clusters. For example,
SL clustering performed fairly well on the well separated WSU data
set, but substantially lower on the more density-varied OSU data
set. This is reflected in AL as well. k-means was able to capture much
of the true clustering structure with the right parameter settings
Figure 6: The distribution of Adjusted Rand Index (ARI) scores of various clustering algorithms (after varying free parameters).
The orange line corresponds to the ARI of the proposed framework.
for the WSU simulation (with max/mean ARI scores of (0.92, 0.71)),
however exhibited degraded performance when the POIs were less
separated in the OSU data set ((0.61, 0.84)). OPTICS, with a few specific
parameter settings, performed well on the WSU data set, however
again suffered on the more variable-scale OSU data set.
4 Related Research
The trajectory field has largely been advanced through “extensive and
intensive individual efforts” [42]. Nonetheless, conceptual models have been proposed for how to deal with the patterns within
trajectories and how to relate such patterns to geographical areas of interest for various purposes. One such model postulates
that trajectories and their spatiotemporal patterns are essentially
driven by the semantics the application associates with trajectory
itself [29, 33], and have contributed significantly to the “Stops
and Moves of Trajectories” (SMoT) family of classification algorithms [1, 28, 32], where the premise of the analysis is that by
partitioning trajectory data into a labeled set of ‘stop’ and ‘move’
segments, one can then annotate these segments with semantic information, derive specific mobility patterns, and as a result
discover ‘interesting’ locations. Alvares et al. developed IB-SMoT to
find interesting positions based on semantic annotations describing
the places a trajectory visited [1]. Palma et al. reduced IB-SMoT’s
reliance on prior knowledge about positions that are likely to be
interesting by incorporating the speed at which tracks are traveling
with their variation CB-SMoT [28]. DB-SMoT finds clusters of common trajectories based on similar direction changes and stopping
points [32]. Zhou et al. tackle the problem of finding positions of
interest to an individual track based on data about a track’s location
preferences, position over time, and tags of locations provided by
web services such as Google Maps [45].
Many related efforts encode or are reliant on varying notions of
an “interesting place” using, for example, techniques from natural
language processing (NLP) [15], data clustering [32, 35, 45], sequential pattern mining [38], and social network analysis [41] methods.
Zheng et al. pioneered the use of ‘stay points’, corresponding to an
aggregation of consecutive GPS points that collectively are within
a user-supplied time and distance threshold, thereby characterizing
a ‘virtual’ location [43, 44]. It is interesting to note that Zheng et al. used OPTICS to create a hierarchical clustering of ‘stay points’ for an
LBS-type application with Microsoft called ‘Geolife’ [43]. Indeed,
Zheng anticipated a number of developments in the trajectory mining field [42]; the theoretical cluster tree may be viewed as a more
statistically based conception of the ‘Tree Based Hierarchical Graph’
that is used to represent POIs in that application as well.
It’s worth noting that our definition of an intrinsic POI, having
a more theoretical foundation in density-based clustering, is both
conceptually very similar to OPTICS and computationally, more
recently, to Hierarchical DBSCAN (HDBSCAN) [6]. There exist a
number of commonalities between both OPTICS and DBSCAN and
the theory of the cluster tree. A comprehensive exposition of this
relationship is beyond the scope of this paper; see Campello et al. [8]
for a thorough review of the subject. Although it’s not mentioned
in such efforts, the usage of RSL with a relative excess of mass
functional to cluster extraction is equivalent to the flat clusters
“HDBSCAN” extracts with a setting of α = 1 and k = minPts.
However, the asymptotic consistency of the setting of the pair
(α = 1, k ∼ d log n) has not been established [10]. When α = 1, k must be much larger, exponential in the dimensionality of the data set, d. Thus, we use RSL with α ≥ √2.
5 Concluding Remarks
This paper proposed a general framework for intrinsic POI discovery, without needing to rely on external gazetteers, based on recent
theoretical advances in hierarchical, nearest neighbor density estimation. It discussed a conceptually sound basis for automated POI
discovery specifically in the context of geospatial data, and introduced a framework that provides a rigorous and usable solution
to an applied domain primarily dominated by intuitively reasonable, but heuristically-based methods. With novel extensions to
SUMO to support pedestrian movement in buildings, an evaluation
of simulated trajectory data over diverse geographical areas supports the conclusion that the proposed framework is a useful tool
for extracting intrinsic POIs. The framework has both theoretical
guarantees and practical benefits, requires no ad hoc parameter
tuning, and exhibits improved fidelity against common approaches
over thousands of parameter settings.
In future work, with the help of the asymptotic analysis done by Chaudhuri et al., we plan to develop model-selection techniques for POI extraction. This is imperative in exploratory settings, such as large urban environments where the number of POIs is not known ahead of time and there is little useful knowledge to be gained from ad hoc or heuristic-based cluster analysis, especially when the solution space is large. By relating the concept of a POI to the theory of the cluster tree, RSL and the associated estimators enable future theoretical work that may further augment models reliant on POI data, such as location recommendation systems, collaborative filtering techniques, or social networking models built from POI data, such as the ‘Location-Based Social Networks’ reviewed in [41, 42].
References
[1] Luis Otavio Alvares, Vania Bogorny, Bart Kuijpers, Jose Antonio Fernandes de
Macedo, Bart Moelans, and Alejandro Vaisman. 2007. A model for enriching
trajectories with semantic geographical information. In Proc. of the 15th annual
ACM Intl. Symposium on Advances in Geographic Information Systems. ACM.
[2] Mihael Ankerst, Markus M Breunig, Hans-Peter Kriegel, and Jörg Sander. 1999.
OPTICS: ordering points to identify the clustering structure. ACM Sigmod record
28, 2 (1999).
[3] Olatz Arbelaitz, Ibai Gurrutxaga, Javier Muguerza, JesúS M PéRez, and IñIgo
Perona. 2013. An extensive comparative study of cluster validity indices. Pattern
Recognition 46, 1 (2013).
[4] Shumeet Baluja. 2016. Reducing Vehicle Emissions via Machine Learning for
Traffic Signal Program Selection. (2016).
[5] Maike Buchin, Anne Driemel, Marc van Kreveld, and Vera Sacristán Adinolfi.
2011. Segmenting trajectories: A framework and algorithms using spatiotemporal
criteria. Journal of Spatial Information Science 3 (2011).
[6] Ricardo JGB Campello, Davoud Moulavi, and Jörg Sander. 2013. Density-based
clustering based on hierarchical density estimates. In Pacific-Asia Conference on
Knowledge Discovery and Data Mining.
[7] Ricardo JGB Campello, Davoud Moulavi, Arthur Zimek, and Jörg Sander. 2013. A
framework for semi-supervised and unsupervised optimal extraction of clusters
from hierarchies. Data Mining and Knowledge Discovery 27, 3 (2013).
[8] Ricardo JGB Campello, Davoud Moulavi, Arthur Zimek, and Joerg Sander. 2015.
Hierarchical density estimates for data clustering, visualization, and outlier
detection. ACM Trans. on Knowledge Discovery from Data 10, 1 (2015).
[9] Aileen Y Chang, Maria E Parrales, Javier Jimenez, Magdalena E Sobieszczyk,
Scott M Hammer, David J Copenhaver, and Rajan P Kulkarni. 2009. Combining
Google Earth and GIS mapping technologies in a dengue surveillance system for
developing countries. Intl. Journal of Health Geographics 8, 1 (2009).
[10] Kamalika Chaudhuri and Sanjoy Dasgupta. 2010. Rates of convergence for the
cluster tree. In Advances in Neural Information Processing Systems.
[11] Yen-Chi Chen, Jisu Kim, Sivaraman Balakrishnan, Alessandro Rinaldo, and
Larry Wasserman. 2016. Statistical Inference for Cluster Trees. arXiv preprint
arXiv:1605.06416 (2016).
[12] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In
Proc. on Intl. Conf. on Knowledge Discovery and Data Mining.
[13] Flavio Figueiredo, Bruno Ribeiro, Jussara M Almeida, and Christos Faloutsos.
2016. Tribeflow: Mining & predicting user trajectories. In Proc. of the 25th Intl.
Conference on World Wide Web.
[14] Chris Fraley and Adrian E Raftery. 2002. Model-based clustering, discriminant
analysis, and density estimation. J. Amer. Statist. Assoc. 97, 458 (2002).
[15] Lorenzo Gabrielli, Salvatore Rinzivillo, Francesco Ronzano, and Daniel Villatoro.
2014. From tweets to semantic trajectories: mining anomalous urban mobility
patterns. In Citizen in Sensor Networks. Springer.
[16] Tobias Gindele, Sebastian Brechtel, and Rüdiger Dillmann. 2010. A probabilistic
model for estimating driver behaviors and vehicle trajectories in traffic environments. In 2010 13th Intl. IEEE Conference on Intelligent Transportation Systems.
IEEE.
[17] Marta C Gonzalez, Cesar A Hidalgo, and Albert-Laszlo Barabasi. 2008. Understanding individual human mobility patterns. Nature 453, 7196 (2008).
[18] Isabelle Guyon, Ulrike Von Luxburg, and Robert C Williamson. 2009. Clustering:
Science or art. In NIPS 2009 workshop on clustering theory.
[19] John A Hartigan. 1981. Consistency of single linkage for high-density clusters.
J. Amer. Statist. Assoc. 76, 374 (1981).
[20] Congwei Hu, Wu Chen, Yongqi Chen, and Dajie Liu. 2009. Adaptive Kalman
filtering for vehicle navigation. Journal of Global Positioning Systems 1, 04 (2009).
[21] Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of
classification 2, 1 (1985).
[22] Leonard Kaufman and Peter Rousseeuw. 1987. Clustering by means of medoids.
North-Holland.
[23] Daniel Krajzewicz, Jakob Erdmann, Michael Behrisch, and Laura Bieker. 2012.
Recent Development and Applications of SUMO - Simulation of Urban MObility.
Intl. Journal On Advances in Systems and Measurements 5, 3&4 (December 2012).
[24] Liang Xu Liu, Jia Tao Song, Bo Guan, Zhao Xiao Wu, and Ke Jia He. 2012. Tradbscan: a algorithm of clustering trajectories. In Applied Mechanics and Materials,
Vol. 121. Trans Tech Publ.
[25] Siyuan Liu, Shuhui Wang, Kasthuri Jayarajah, Archan Misra, and Ramayya Krishnan. 2013. TODMIS: Mining Communities from Trajectories. In Proceedings of
the 22Nd ACM International Conference on Information & Knowledge Management
(CIKM ’13). ACM, New York, NY, USA, 2109–2118.
[26] Dietrich Werner Müller and Günther Sawitzki. 1991. Excess mass estimates and
tests for multimodality. J. Amer. Statist. Assoc. 86, 415 (1991).
[27] Fionn Murtagh and Pierre Legendre. 2014. Ward's hierarchical agglomerative clustering method: which algorithms implement Ward's criterion? Journal of
Classification 31, 3 (2014).
[28] Andrey Tietbohl Palma, Vania Bogorny, Bart Kuijpers, and Luis Otavio Alvares.
2008. A clustering-based approach for discovering interesting places in trajectories. In Proc. of the 2008 ACM symposium on Applied computing. ACM.
[29] Christine Parent, Stefano Spaccapietra, and Esteban Zimányi. 2006. Conceptual
modeling for traditional and spatio-temporal applications: The MADS approach.
Springer Science & Business Media.
[30] Moon-Hee Park, Jin-Hyuk Hong, and Sung-Bae Cho. 2007. Location-based
recommendation system using Bayesian user's preference model in mobile
devices. In Intl. Conference on Ubiquitous Intelligence and Computing. Springer.
[31] Marco Pavan, Stefano Mizzaro, Ivan Scagnetto, and Andrea Beggiato. 2015.
Finding important locations: A feature-based approach. In IEEE Intl. Conference
on Conf. on Mobile Data Management, Vol. 1.
[32] Jose Antonio MR Rocha, Valeria C Times, Gabriel Oliveira, Luis O Alvares, and
Vania Bogorny. 2010. DB-SMoT: A direction-based spatio-temporal clustering
method. In Intelligent systems, 2010 5th IEEE Intl. conference. IEEE.
[33] Stefano Spaccapietra, Christine Parent, Maria Luisa Damiani, Jose Antonio de
Macedo, Fabio Porto, and Christelle Vangenot. 2008. A conceptual view on
trajectories. Data & knowledge engineering 65, 1 (2008).
[34] Goce Trajcevski, Roberto Tamassia, Hui Ding, Peter Scheuermann, and Isabel F
Cruz. 2009. Continuous probabilistic nearest-neighbor queries for uncertain
trajectories. In Proc. of the 12th Intl. Conference on Extending Database Technology:
Advances in Database Technology. ACM.
[35] Md Reaz Uddin, Chinya Ravishankar, and Vassilis J Tsotras. 2011. Finding regions
of interest from trajectory data. In 2011 12th IEEE Intl. Conference on Mobile Data
Management, Vol. 1. IEEE.
[36] Kirsi Virrantaus, Jouni Markkula, Artem Garmash, Vagan Terziyan, Jari Veijalainen, Artem Katanosov, and Henry Tirri. 2001. Developing GIS-supported
location-based services. In 2001. Proc. of the Second Intl. Conference on Web Information Systems Engineering, Vol. 2. IEEE.
[37] Ulrike Von Luxburg, Robert C Williamson, and Isabelle Guyon. 2012. Clustering:
Science or art?. In ICML Unsupervised and Transfer Learning.
[38] Xiangye Xiao, Yu Zheng, Qiong Luo, and Xing Xie. 2014. Inferring social ties
between users with human location history. Journal of Ambient Intelligence and
Humanized Computing 5, 1 (2014).
[39] Josh Jia-Ching Ying, Wang-Chien Lee, Tz-Chiao Weng, and Vincent S Tseng.
2011. Semantic trajectory mining for location prediction. In Proc. of the 19th
ACM SIGSPATIAL Intl. Conference on Advances in Geographic Information Systems.
ACM.
[40] Ping Zhang, Qing Deng, Xiaodong Liu, Rui Yang, and Hui Zhang. 2017.
Emergency-Oriented Spatiotemporal Trajectory Pattern Recognition by Intelligent Sensor Devices. IEEE Access 5 (2017).
[41] Yu Zheng. 2011. Location-based social networks: Users. In Computing with
spatial trajectories. Springer.
[42] Yu Zheng. 2015. Trajectory data mining: an overview. ACM Trans. on Intelligent
Systems and Technology 6, 3 (2015).
[43] Yu Zheng, Xing Xie, and Wei-Ying Ma. 2010. GeoLife: A Collaborative Social
Networking Service among User, Location and Trajectory. IEEE Data Eng. Bull.
33, 2 (2010).
[44] Yu Zheng, Lizhu Zhang, Xing Xie, and Wei-Ying Ma. 2009. Mining interesting
locations and travel sequences from GPS trajectories. In Proc. of the 18th Intl.
conference on World wide web. ACM.
[45] Changqing Zhou, Dan Frankowski, Pamela Ludford, Shashi Shekhar, and Loren
Terveen. 2007. Discovering personally meaningful places: An interactive clustering approach. ACM Trans. on Information Systems 25, 3 (2007).
Critical Parameters in Particle Swarm Optimisation
arXiv:1511.06248v1 [cs.NE] 19 Nov 2015
J. Michael Herrmann∗, Adam Erskine, Thomas Joyce
Institute for Perception, Action and Behaviour
School of Informatics, The University of Edinburgh
10 Crichton St, Edinburgh EH8 9AB, Scotland, U.K.
Abstract
Particle swarm optimisation is a metaheuristic algorithm which finds reasonable solutions in
a wide range of applied problems if suitable parameters are used. We study the properties of the
algorithm in the framework of random dynamical systems which, due to the quasi-linear swarm
dynamics, yields analytical results for the stability properties of the particles. Such considerations
predict a relationship between the parameters of the algorithm that marks the edge between
convergent and divergent behaviours. Comparison with simulations indicates that the algorithm
performs best near this margin of instability.
1 PSO Introduction
Particle Swarm Optimisation (PSO, [1]) is a metaheuristic algorithm which is widely used to solve
search and optimisation tasks. It employs a number of particles as a swarm of potential solutions.
Each particle shares knowledge about the current overall best solution and also retains a memory of the best solution it has encountered itself previously. Otherwise the particles, after random
initialisation, obey a linear dynamics of the following form
vi,t+1 = ω vi,t + α1 R1 (pi − xi,t) + α2 R2 (g − xi,t)
xi,t+1 = xi,t + vi,t+1        (1)
Here xi,t and vi,t , i = 1, . . . , N , t = 0, 1, 2, . . . , represent, respectively, the d-dimensional position in
the search space and the velocity vector of the i-th particle in the swarm at time t. The velocity update
contains an inertial term parameterised by ω and includes attractive forces towards the personal best
location pi and towards the globally best location g, which are parameterised by α1 and α2,
respectively. The symbols R1 and R2 denote diagonal matrices whose non-zero entries are uniformly
distributed in the unit interval. The number of particles N is quite low in most applications, usually
amounting to a few dozens.
In order to function as an optimiser, the algorithm uses a nonnegative cost function F : Rd → R,
where without loss of generality F (x∗ ) = 0 is assumed at an optimal solution x∗ . In many problems,
where PSO is applied, there are also states with near-zero costs can be considered as good solutions.
The cost function is evaluated for the state of each particle at each time step. If F (xi,t ) is better
than F (pi ), then the personal best pi is replaced by xi,t . Similarly, if one of the particles arrives at a
state with a cost less than F (g), then g is replaced in all particles by the position of the particle that
has discovered the new solution. If its velocity is non-zero, a particle will depart from the current
best location, but it may still have a chance to return guided by the force terms in the dynamics.
Numerous modifications and variants have been proposed since the algorithm’s inception [1] and
it continues to enjoy widespread usage. Ref. [2] groups around 700 PSO papers into 26 discernible
application areas. Google Scholar reveals over 150,000 results for “Particle Swarm Optimisation” in
total and 24,000 for the year 2014.
In the next section we will report observations from a simulation of a particle swarm and move
on to a standard matrix formulation of the swarm dynamics in order to describe some of the existing
∗ corresponding author: [email protected]
analytical work on PSO. In Sect. 3 we will argue for a formulation of PSO as a random dynamical
system which will enable us to derive a novel exact characterisation of the dynamics of one-particle
system, which will then be generalised towards the more realistic case of a multi-particle swarm. In
Sect. 4 we will compare the theoretical predictions with simulations on a representative set of benchmark functions. Finally, in Sect. 5 we will discuss the assumption we have made in the theoretical
solution in Sect. 3 and address the applicability of our results to other metaheuristic algorithms and
to practical optimisation problems.
2 Swarm dynamics
2.1 Empirical properties
The success of the algorithm in locating good solutions depends on the dynamics of the particles in
the state space of the problem. In contrast to many evolution strategies, it is not straightforward to
interpret the particle swarm as following a landscape defined by the cost function. Unless the current
best positions p or g change, the particles do not interact with each other and follow an intrinsic
dynamics that does not even indirectly obtain any gradient information.
The particle dynamics depends on the parameterisation of the Eq. 1. To obtain the best result
one needs to select parameter settings that achieve a balance between the particles exploiting the
knowledge of good known locations and exploring regions of the problem space that have not been
visited before. Parameter values often need to be experimentally determined, and poor selection may
result in premature convergence of the swarm to poor local minima or in a divergence of the particles
towards regions that are irrelevant for the problem.
Empirically we can execute PSO against a variety of problem functions with a range of ω and
α1,2 values. Typically the algorithm shows performance of the form depicted in Fig. 1. The best
solutions found show a curved relationship between ω and α = α1 + α2 , with ω ≈ 1 at small α, and
α ≈ 4 at small ω. Large values of both α and ω are found to cause the particles to diverge, leading
to results far from optimality, while at small values for both parameters the particles converge to
a nearby solution which sometimes is acceptable. For other cost functions similar relationships are
observed in numerical tests (see Sect. 4) unless no good solutions are found due to problem complexity or run-time limits, see Sect. 5.3. For simple cost functions, such as a single-well potential, there are also parameter combinations with small ω and small α that will usually lead to good results. The choice
of α1 and α2 at constant α may have an effect for some cost functions, but does not seem to have a
big effect in most cases.
2.2 Matrix formulation
In order to analyse the behaviour of the algorithm it is convenient to use a matrix formulation by
inserting the velocity explicitly in the second equation (1).
zt+1 = M zt + α1 R1 (p, p)⊤ + α2 R2 (g, g)⊤,   with z = (v, x)⊤        (2)

and

M = ( ωId    −α1 R1 − α2 R2
      ωId    Id − α1 R1 − α2 R2 ),        (3)
where Id is the unit matrix in d dimensions. Note that the two occurrences of R1 in Eq. 3 refer to
the same realisation of the random variable. Similarly, the two R2 ’s are the same realisation, but
different from R1 . Since the second and third term on the right in Eq. 2 are constant most of the
time, the analysis of the algorithm can focus on the properties of the matrix M . In spite of its
wide applicability, PSO has not been subject to deeper theoretical study, which may be due to the
multiplicative noise in the simple quasi-linear, quasi-decoupled dynamics. In previous studies the
effect of the noise has largely been ignored.
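A direct transcription of Eqs. 2 and 3 for a single particle (a sketch; the dimension and parameter values are arbitrary) makes explicit that a fresh random matrix M is drawn at every step:

```python
import numpy as np

def pso_matrix_step(z, omega, alpha1, alpha2, p, g, rng):
    """One update z_{t+1} = M z_t + alpha1 R1 (p,p)^T + alpha2 R2 (g,g)^T (Eqs. 2-3)
    for a single particle with state z = (v, x)^T; R1, R2 are freshly drawn each call."""
    d = len(p)
    R1 = np.diag(rng.uniform(size=d))
    R2 = np.diag(rng.uniform(size=d))
    A = alpha1 * R1 + alpha2 * R2
    I = np.eye(d)
    M = np.block([[omega * I, -A],
                  [omega * I, I - A]])
    forcing = (alpha1 * np.concatenate([R1 @ p, R1 @ p])
               + alpha2 * np.concatenate([R2 @ g, R2 @ g]))
    return M @ z + forcing

rng = np.random.default_rng(0)
d = 2
z = rng.normal(size=2 * d)          # z = (v, x)
p = g = np.zeros(d)                 # with p = g = 0 the forcing vanishes (cf. Eq. 4)
for _ in range(100):
    z = pso_matrix_step(z, 0.7, 1.4, 1.4, p, g, rng)
```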
2.3 Analytical results
An early exploration of the PSO dynamics [4] considered a single particle in a one-dimensional space
where the personal and global best locations were taken to be the same. The random components were
Figure 1: Typical PSO performance as a function of its ω and α parameters. Here a 25 particle
swarm was run for pairs of ω and α values (α1 = α2 = α/2). The cost function was the d = 10
non-continuous rotated Rastrigin function [3]. Each parameter pair was repeated 25 times and the
minimal costs after 2000 iterations were averaged.
replaced by their averages such that apart from random initialisation the algorithm was deterministic.
Varying the parameters was shown to result in a range of periodic motions and divergent behaviour
for the case of α1 + α2 ≥ 4. The addition of the random vectors was seen as beneficial as it adds
noise to the deterministic search.
Control of velocity, not requiring the enforcement of an arbitrary maximum value as in Ref. [4],
is derived in an analytical manner by [5]. Here eigenvalues derived from the dynamic matrix of a
simplified version of the PSO algorithm are used to imply various search behaviours. Thus, again the
α1 + α2 ≥ 4 case is expected to diverge. For α1 + α2 < 4 various cyclic and quasi-cyclic motions are
shown to exist for a non-random version of the algorithm.
In Ref. [6] again a single particle was considered in a one dimensional problem space, using a
deterministic version of PSO, setting R1 = R2 = 0.5. The eigenvalues of the system were determined
as functions of ω and a combined α, which leads to three conditions: The particle is shown to converge
when ω < 1, α > 0 and 2ω − α + 2 > 0. Harmonic oscillations occur for ω² + α² − 2ωα − 2ω − 2α + 1 < 0, and a zigzag motion is expected if ω < 0 and ω − α + 1 < 0. As with the preceding papers, the discussion
of the random numbers in the algorithm views them purely as enhancing the search capabilities by
adding a drunken walk to the particle motions. Their replacement by expectation values was thus
believed to simplify the analysis with no loss of generality.
We show in this contribution that the iterated use of these random factors R1 and R2 in fact
adds a further level of complexity to the dynamics of the swarm which affects the behaviour of the
algorithm in a non-trivial way. In Ref. [7] these factors were given some consideration. Regions
of convergence and divergence separated by a curved line were predicted. This line separating these
regions (an equation for which is given in Ref. [8]) fails to include some parameter settings that lead to
convergent swarms. Our analytical solution of the stability problem for the swarm dynamics explains
why parameter settings derived from the deterministic approaches are not in line with experiences
from practical tests. For this purpose we will now formulate the PSO algorithm as a random dynamical
system and present an analytical solution for the swarm dynamics in a simplified but representative
case.
3 Critical swarm conditions for a single particle
3.1 PSO as a random dynamical system
As in Refs. [4, 6] the dynamics of the particle swarm will be studied here as well in the single-particle
case. This can be justified because the particles interact only via the global best position such that,
while g (1) is unchanged, single particles exhibit qualitatively the same dynamics as in the swarm.
For the one-particle case we have necessarily p = g, such that shift invariance allows us to set both to
zero, which leads to the following stochastic-map formulation of the PSO dynamics (2):

zt+1 = M zt        (4)
Extending earlier approaches we will explicitly consider the randomness of the dynamics, i.e. instead
of averages over R1 and R2 we consider a random dynamical system with dynamical matrices M
chosen from the set
Mα,ω = { ( ωId    −αR
           ωId    Id − αR ) : Rij = 0 for i ≠ j and Rii ∈ [0, 1] },        (5)
with R being in both rows the same realisation of a random diagonal matrix that combines the effects
of R1 and R2 (1). The parameter α is the sum α1 + α2 with α1 , α2 ≥ 0 and α > 0. As the diagonal
elements of R1 and R2 are uniformly distributed in [0, 1], the distribution of the random variable
Rii = (α1/α) R1,ii + (α2/α) R2,ii in Eq. 4 is given by a convolution of two uniform random variables, namely

Pα1,α2(r) =  α²r/(α1 α2)          if 0 ≤ r ≤ min{α1/α, α2/α}
             α/max{α1, α2}        if min{α1/α, α2/α} < r ≤ max{α1/α, α2/α}
             α²(1 − r)/(α1 α2)    if max{α1/α, α2/α} < r ≤ 1        (6)
if the variable r ∈ [0, 1] and Pα1 ,α2 (r) = 0 otherwise. Pα1 ,α2 (r) has a tent shape for α1 = α2 and a
box shape in the limits of either α1 → 0 or α2 → 0. The case α1 = α2 = 0, where the swarm does
not obtain information about the fitness function, will not be considered here.
We expect that the multi-particle PSO is well represented by the simplified version for α2 ≫ α1
or α1 ≫ α2 , the latter case being irrelevant in practice. For α1 ≈ α2 deviations from the theory may
occur because in the multi-particle case p and g will be different for most particles. We will discuss
this as well as the effects of the switching of the dynamics at discovery of better solutions in Sect. 5.2.
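The shape of Pα1,α2 can be checked empirically by sampling the weighted sum directly; the sample size, bin count and parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_r(alpha1, alpha2, n=100_000):
    # R_ii = (alpha1/alpha) U1 + (alpha2/alpha) U2 with U1, U2 ~ U[0,1].
    alpha = alpha1 + alpha2
    return (alpha1 / alpha) * rng.uniform(size=n) + (alpha2 / alpha) * rng.uniform(size=n)

tent, _ = np.histogram(sample_r(2.0, 2.0), bins=20, range=(0, 1), density=True)
box,  _ = np.histogram(sample_r(0.1, 3.9), bins=20, range=(0, 1), density=True)
print(tent.round(2))   # peaks near r = 0.5 (tent shape for alpha1 = alpha2)
print(box.round(2))    # nearly flat (box shape as alpha1 -> 0)
```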
3.2 Marginal stability
While the swarm does not discover any new solutions, its dynamical properties are determined by
an infinite product of matrices from the set M (5). Such products have been studied for several
decades [9] and have found applications in physics, biology and economics. Here they provide a
convenient way to explicitly model the stochasticity of the swarm dynamics such that we can claim
that the performance of PSO is determined by the stability properties of the random dynamical
system (4).
Since the equation (4) is linear, the analysis can be restricted to vectors on the unit sphere in the
(v, x) space, i.e. to unit vectors
a = (x, v)⊤ / ‖(x, v)⊤‖,        (7)
where ‖ · ‖ denotes the Euclidean norm. Unless the set of matrices shares the same eigenvectors
(which is not the case here) standard stability analysis in terms of eigenvalues is not applicable.
Instead we will use means from the theory of random matrix products in order to decide whether
the set of matrices is stochastically contractive. The properties of the asymptotic dynamics can be
described based on a double Lebesgue integral over the unit sphere S^{2d−1} and the set M [10, 11].
As in Lyapunov exponents, the effect of the dynamics is measured in logarithmic units in order to
account for multiplicative action.
λ(α, ω) = ∫ dνα,ω(a) ∫ dPα,ω(M) log ‖M a‖        (8)
If λ(α, ω) is negative the algorithm will converge to p with probability 1, while for positive λ arbitrarily
large fluctuations are possible. While the measure for the inner integral (8) is given by Eq. 6, we
have to determine the stationary distribution ν on the unit sphere for the outer integral. It is given
as the solution of the integral equation
να,ω(a) = ∫ dνα,ω(b) ∫ dPα,ω(M) δ(a, M b/‖M b‖),   a, b ∈ S^{2d−1}.        (9)
The existence of the invariant measure requires the dynamics to be ergodic which is ensured if at least
some of the elements of M have complex eigenvalues, such as is the case for ω² + α²/4 − ωα − 2ω − α + 1 < 0 (see above, [6]). This condition excludes a small region in the parameter space at small values of ω, such that there we have to take all ergodic components into account. There are not more than two components which due to symmetry have the same stability properties. The stationary distribution depends on the parameters α and ω and differs strongly from a homogeneous distribution; see Fig. 2 for a few examples in the
case d = 1.

Figure 2: Stationary distribution να,ω(a) on the unit circle (a ∈ [0, 2π)) in the (x, v) plane for a one-particle system (4) for ω = 0.7 and α = α2 = 0.5, 1.5, 2.5, 3.5, 4.5 (the distribution with peak near π is for α = 0.5, otherwise main peaks are highest for largest α).

Critical parameters are obtained from Eq. 8 by the relation

λ(α, ω) = 0.        (10)
Solving Eq. 10 is difficult in higher dimensions, so we rely on the linearity of the system when
considering the (d = 1)-case as representative. The curve in Fig. 3 represents the solution of Eq. 10
for d = 1 and α = α2 . For other settings of α1 and α2 the distribution of the random factors has
a smaller variance rendering the dynamics more stable such that the contour moves towards larger
parameter values (see Fig. 4). Inside the contour λ (α, ω) is negative, meaning that the state will
approach the origin with probability 1. Along the contour and in the outside region large state
fluctuations are possible. Interesting parameter values are expected near the curve where due to a
coexistence of stable and unstable dynamics (induced by different sequences of random matrices) a
theoretically optimal combination of exploration and exploitation is possible. For specific problems,
however, deviations from the critical curve can be expected to be beneficial.
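Numerically, λ(α, ω) can be estimated by iterating the random map and averaging the logarithmic growth rate, which allows the critical curve to be checked in the simplest case d = 1, α = α2. The following is a rough Monte Carlo sketch, not the code used for the figures; the step count, seed and scanned values are arbitrary:

```python
import numpy as np

def lyapunov_estimate(alpha, omega, steps=20_000, seed=0):
    """Monte Carlo estimate of lambda(alpha, omega) in Eq. 8 for d = 1 and
    alpha = alpha2 (single particle, p = g = 0): iterate z_{t+1} = M z_t with a
    fresh uniform R at every step, renormalising to stay on the unit circle."""
    rng = np.random.default_rng(seed)
    z = np.array([1.0, 0.0])
    acc = 0.0
    for _ in range(steps):
        r = rng.uniform()                       # R_ii ~ U[0,1] when alpha1 = 0
        M = np.array([[omega, -alpha * r],
                      [omega, 1.0 - alpha * r]])
        z = M @ z
        norm = np.linalg.norm(z)
        acc += np.log(norm)
        z /= norm
    return acc / steps

# Scanning alpha at fixed omega: a sign change of the estimate indicates
# the crossing of the critical curve.
for a in (2.0, 3.0, 3.5, 4.0):
    print(a, round(lyapunov_estimate(a, 0.7), 3))
```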
3.3 Personal best vs. global best
Due to linearity, the particle swarm update rule (1) is subject to a scaling invariance which was
already used in Eq. 7. We now consider the consequences of linearity for the case where personal best
and global best differ, i.e. p ≠ g. For an interval where pi and g remain unchanged, the particle i
with personal best pi will behave like a particle in a swarm where together with x and v, pi is also
scaled by a factor κ > 0. The finite-time approximation of the Lyapunov exponent (see Eq. 8)
λ(t) =
1
log hk(xt , vt )ki
t
5
(11)
Figure 3: Solution of Eq. 10 representing a single particle in one dimension with a fixed best value at
g = p = 0. The curve that has higher α-values on the right (magenta) is for α1 = α2 , the other curve
(green) is for α = α2 , α1 = 0. Except for the regions near ω = ±1, where numerical instabilities can
occur, a simulation produces an indistinguishable curve. In the simulation we tracked the probability
of a particle to either reach a small region (10−6 ) near the origin or to escape beyond a radius of
106 after starting from a random location on the unit circle. Along the curve both probabilities are
equal.
will be changed by an amount of 1t log κ by the scaling. Although this has no effect on the asymptotic
behaviour, we will have to expect an effect on the stability of the swarm for finite times which may be
relevant for practical applications. For the same parameters, the swarm will be more stable if κ < 1
and less stable for κ > 1, provided that the initial conditions are scaled in the same way. Likewise, if
‖p‖ is increased, then the critical contour will move inwards; see Fig. 5. Note that in this figure, the
low number of iterations leads to a few erroneous trials at parameter pairs outside the outer contour
which have been omitted here. We also do not consider the behaviour near α = 0 which is complex
but irrelevant for PSO. The contour (10) can be seen as the limit κ → 0 such that only an increase
of ‖p‖ is relevant for comparison with the theoretical stability result. When comparing the stability
results with numerical simulations for real optimisation problems, we will need to take into account
the effects caused by differences between p and g in a multi-particle swarm with finite runtimes.
4
Optimisation of benchmark functions
Metaheuristic algorithms are often tested in competition against benchmark functions designed to
present different problem space characteristics. The 28 functions [3] contain a mix of unimodal, basic
multimodal and composite functions. The domain of the functions in this test set are all defined to
be [−100, 100]d where d is the dimensionality of the problem. Particles were initialised within the
same domain. We use 10-dimensional problems throughout. Our implementation of PSO performed
no spatial or velocity clamping. In all trials a swarm of 25 particles was used. We repeated the
algorithm 100 times, on each occasion allowing 200, 2000, 20000 iterations to pass before recording
the best solution found by the swarm. For the competition 50000 fitness evaluation were allowed
which corresponds to 2000 iterations with 25 particles. Other iteration numbers were included for
comparison. This protocol was carried out for pairs of ω ∈ [−1.1, 1.1] and α ∈ [0, 5]. This was repeated
for all 28 functions. The averaged solution costs as a function of the two parameters showed curved
valleys similar to that in Fig. 1 for all problems. For each function we obtain different best values
along (or near) the theoretical curve (10). There appears to be no preferable location within the
valley. Some individual functions yield best performance near ω = 1. This is not the case near ω = 0,
although the global average performance over all test functions is better in the valley near ω = 0 than
near ω = 1; see Fig. 4.
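In outline, the parameter scan can be reproduced as follows. The sketch assumes the standard PSO update rule (1) with α1 = α2 = α/2 and uses the sphere function as a stand-in objective, since the CEC 2013 functions [3] are not reproduced here; it illustrates the protocol, not the code behind the reported results.

```python
import numpy as np

def pso_best_cost(f, omega, alpha, d=10, n_particles=25, iters=2000, seed=0):
    # Minimal PSO, assuming the standard update rule (1) with alpha1 = alpha2 = alpha / 2.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100, 100, (n_particles, d))      # CEC'13-style domain
    v = np.zeros_like(x)
    p = x.copy(); p_cost = np.apply_along_axis(f, 1, x)
    g = p[p_cost.argmin()].copy(); g_cost = p_cost.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = omega * v + (alpha / 2) * r1 * (p - x) + (alpha / 2) * r2 * (g - x)
        x = x + v                                      # no spatial or velocity clamping
        cost = np.apply_along_axis(f, 1, x)
        improved = cost < p_cost
        p[improved], p_cost[improved] = x[improved], cost[improved]
        if p_cost.min() < g_cost:
            g_cost = p_cost.min(); g = p[p_cost.argmin()].copy()
    return g_cost

sphere = lambda z: float(np.sum(z ** 2))               # stand-in for a benchmark function
for omega in np.linspace(-1.0, 1.0, 5):
    for alpha in np.linspace(0.5, 4.5, 5):
        print(omega, alpha, pso_best_cost(sphere, omega, alpha, iters=200))
```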
Figure 4: Best parameter regions for 200 (blue), 2000 (green), and 20000 (magenta) iterations: For
more iterations the region shifts towards the critical line. Cost averaged over 100 runs and 28 CEC
benchmark functions. The red (outer) curve represents the zero Lyapunov exponent for N = 1, d = 1,
α1 = α2 .
At medium values of ω the difference between the analytical solutions for the cases α1 = α2
and α1 = 0 is strongest, see Fig. 4. In simulations this shows up to a lesser extent, thus revealing a
shortcoming of the one-particle approximation. Because in the multi-particle case, p and g are often
different, the resulting vector will have a smaller norm than in the one-particle case, where p = g.
The case p ≠ g violates the assumption of the theory that the dynamics can be described based on unit
vectors. While a particle far away from both p and g will behave as predicted from the one-particle
case, at length scales smaller than ‖p − g‖ the retractive forces will tend to be reduced such that
the inertia becomes more effective and the particle is locally less stable which shows numerically in
optimal parameters that are smaller than predicted.
5
5.1
Discussion
Relevance of criticality
Our analytical approach predicts a locus of α and ω pairings that maintain the critical behaviour
of the PSO swarm. Outside this line the swarm will diverge unless steps are taken to constrain it.
Inside, the swarm will eventually converge to a single solution. In order to locate a solution within
the search space, the swarm needs to converge at some point, so the line represents an upper bound
on the exploration-exploitation mix that a swarm manifests. For parameters on the critical line,
fluctuations are still arbitrarily large. Therefore, subcritical parameter values can be preferable if the
settling time is of the same order as the scheduled runtime of the algorithm. If, in addition, a typical
length scale of the problem is known, then the finite standard deviation of the particles in the stable
parameter region can be used to decide about the distance of the parameter values from the critical
curve. These dynamical quantities can be approximately set, based on the theory presented here,
such that a precise control of the behaviour of the algorithm is in principle possible.
The observation of the distribution of empirically optimal parameter values along the critical
curve confirms the expectation that critical or near-critical behaviour is the main reason for the success
of the algorithm. Critical fluctuations are a plausible tool in search problems if, apart from certain
smoothness assumptions, nothing is known about the cost landscape: the majority of excursions will
exploit the smoothness of the cost function by local search, whereas the fat tails of the distribution
allow the particles to escape from local minima.
Figure 5: For p ≠ g we define neutral stability as the equilibrium between divergence and convergence. Convergence means here that the particle approaches the line connecting p and g. Curves
are for a one-dimensional problem with p = 0.1 and g = 0, scaled (see Sect. 3.3) by κ = 1 (outer
curve), κ = 0.1, and κ = 0.04 (inner curve). Results are for 200 iterations and averaged over 100000
repetitions.
5.2
Switching dynamics at discovery of better solutions
Eq. 2 shows that the discovery of a better solution affects only the constant terms of the linear
dynamics of a particle, whereas its dynamical properties are governed by the linear coefficient matrices.
However, in the time step after a particle has found a new solution the corresponding force term in the
dynamics is zero (see Eq. 1) such that the particle dynamics slows down compared to the theoretical
solution which assumes a finite distance from the best position at all (finite) times. As this affects
usually only one particle at a time and because new discoveries tend to become rarer over time, this
effect will be small in the asymptotic dynamics, although it could justify the empirical optimality of
parameters in the unstable region for some test cases.
The question is nevertheless, how often these changes occur. A weakly converging swarm can still
produce good results if it often discovers better solutions by means of the fluctuations it performs
before settling into the current best position. For cost functions that are not ‘deceptive’, i.e. where
local optima tend to be near better optima, parameter values far inside the critical contour (see
Fig. 3) may give good results, while in other cases more exploration is needed.
5.3
The role of personal best and global best
A numerical scan of the (α1 , α2 ) plane shows a valley of good fitness values, which, at small fixed
positive ω, is roughly linear and described by the relation α1 +α2 = const, i.e. only the joint parameter
α = α1 + α2 matters. For large ω, and accordingly small predicted optimal α values, the valley is less
straight. This may be because the effect of the known solutions is relatively weak, so the interaction
of the two components becomes more important. In other words if the movement of the particles is
mainly due to inertia, then the relation between the global and local best is non-trivial, while at low
inertia the particles can adjust their p vectors quickly towards the g vector such that both terms
become interchangeable.
Finally, we should mention that more particles, longer runtime as well as lower search space
dimension increase the potential for exploration. They all lead to the empirically determined optimal
parameters being closer to the critical curve.
6
Conclusion
PSO is a widely used optimisation scheme which is theoretically not well understood. Existing theory
concentrates on a deterministic version of the algorithm which does not possess useful exploration
capabilities. We have studied the algorithm by means of a product of random matrices which allows
us to predict useful parameter ranges and may allow for more precise settings if a typical length scale
of the problem is known. A weakness of the current approach is that it focuses on the standard
PSO [1], which is known to include biases [12, 13] that are not necessarily justifiable, and to be
outperformed on benchmark sets and in practical applications by many of the existing PSO variants.
Similar analyses are certainly possible and are expected to be carried out for some of the variants, even
though the field of metaheuristic search is often portrayed as largely inert to theoretical advances. If
the dynamics of particle swarms is better understood, the algorithms may become useful as efficient
particle filters which have many applications beyond heuristic optimisation.
Acknowledgments
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), grant
number EP/K503034/1.
References
[1] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings IEEE International
Conference on Neural Networks, volume 4, pages 1942–1948. IEEE, 1995.
[2] R. Poli. Analysis of the publications on the applications of particle swarm optimisation. Journal
of Artificial Evolution and Applications, 2008(3):1–10, 2008.
[3] CEC2013. http://www.ntu.edu.sg/home/EPNSugan/index files/CEC2013/CEC2013.htm.
[4] J. Kennedy. The behavior of particles. In V.W. Porto, N. Saravanan, D.Waagen, and A. E.
Eiben, editors, Evolutionary programming VII, pages 579–589. Springer, 1998.
[5] M. Clerc and J. Kennedy. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58–73, 2002.
[6] I. C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter
selection. Information Processing Letters, 85(6):317–325, 2003.
[7] M. Jiang, Y. Luo, and S. Yang. Stagnation analysis in particle swarm optimization. In Swarm
Intelligence Symposium, 2007. SIS 2007. IEEE, pages 92–99. IEEE, 2007.
[8] C. W. Cleghorn and A. P Engelbrecht. A generalized theoretical deterministic particle swarm
model. Swarm Intelligence, 8(1):35–59, 2014.
[9] H. Furstenberg and H. Kesten. Products of random matrices. Annals of Mathematical Statistics,
31(2):457–469, 1960.
[10] V. N. Tutubalin. On limit theorems for the product of random matrices. Theory of Probability
& Its Applications, 10(1):15–27, 1965.
[11] R. Z. Khas’minskii. Necessary and sufficient conditions for the asymptotic stability of linear
stochastic systems. Theory of Probability & Its Applications, 12(1):144–147, 1967.
[12] M. Clerc. Confinements and biases in particle swarm optimisation. Technical Report hal00122799, Open archive HAL, 2006.
[13] W. M. Spears, D. Green, and D. F. Spears. Biases in particle swarm optimization. International
Journal of Swarm Intelligence Research, 1(2):34–57, 2010.
| 9 |
On the detection of low rank matrices in the
high-dimensional regime.
Antoine Chevreuil and Philippe Loubaton
Gaspard Monge Computer Science Laboratory (LIGM) - UMR 8049 CNRS
Université de Paris-Est/Marne-la-Vallée
5 Bd. Descartes 77454 Marne-la-Vallée (France)
arXiv:1804.04851v1 [eess.SP] 13 Apr 2018
Abstract
We address the detection of a low rank n × n deterministic matrix X0 from the noisy observation X0 + Z when n → ∞,
where Z is a complex Gaussian random matrix with independent identically distributed Nc(0, 1/n) entries. Thanks to large random
matrix theory results, it is now well-known that if the largest singular value λ1 (X0 ) of X0 verifies λ1 (X0 ) > 1, then it is possible
to exhibit consistent tests. In this contribution, we prove a contrario that under the condition λ1 (X0 ) < 1, there are no consistent
tests. Our proof is inspired by previous works devoted to the case of rank 1 matrices X0 .
Index Terms
statistical detection tests, large random matrices, large deviation principle.
I. INTRODUCTION
The problem of testing whether an observed n1 × n2 matrix Y is either a zero-mean independent identically distributed
Gaussian random matrix Z with variance 1/n2, or X0 + Z for some low rank deterministic matrix X0 with no known structure,
also called a spike, is a fundamental problem arising in numerous applications such as the detection of low-rank multivariate
signals or the Gaussian hidden clique problem. When the two dimensions n1 , n2 converge towards ∞ in such a way that
n1 /n2 → c > 0 (the rank of X0 remaining fixed), known results on the so-called additive spiked large random matrix models
have made it possible to reconsider this fundamental detection problem (see e.g. [13], [4], [3]). It was established a long time ago (see
e.g. [1] and the references therein) that in the above asymptotic regime, the largest singular value λ1(Z) of Z converges almost
surely towards 1 + √c. More recently, under mild technical extra assumptions, [3] proved that λ1(X0 + Z) still converges
towards 1 + √c if λ1(X0) converges towards a limit strictly less than c^{1/4}. On the contrary, if the limit of λ1(X0) is strictly
greater than c^{1/4}, then λ1(X0 + Z) converges towards a limit strictly greater than 1 + √c. This result implies that the Generalized
Likelihood Ratio Test (GLRT) is consistent (i.e. both the probability of false alarm and the probability of missed detection
converge towards 0 in the above asymptotic regime) if and only if λ1(X0) is above the threshold c^{1/4}. In order to simplify
the exposition, we assume from now on that n1 = n2 = n, so that the ratio c reduces to 1.
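As an illustration of this threshold (a sketch under the stated model, with c = 1), one can compare the largest singular value of X0 + Z to the bulk edge 1 + √c = 2 for spikes below and above λ1(X0) = 1:

```python
import numpy as np

def largest_sv_of_noisy_spike(n, lam1, rank=1, seed=0):
    rng = np.random.default_rng(seed)
    # Complex Gaussian noise with i.i.d. Nc(0, 1/n) entries.
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
    # Rank-r spike; all non-zero singular values set to lam1 for simplicity.
    U, _ = np.linalg.qr(rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank)))
    V, _ = np.linalg.qr(rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank)))
    X0 = lam1 * U @ V.conj().T
    return np.linalg.svd(X0 + Z, compute_uv=False)[0]

n = 1000
for lam1 in [0.5, 0.9, 1.1, 1.5]:
    print(lam1, largest_sv_of_noisy_spike(n, lam1))
# For lam1 < 1 the largest singular value stays near 1 + sqrt(c) = 2; for lam1 > 1 it separates
# from the bulk (approximately lam1 + 1/lam1 in the square case), so a GLRT-type statistic
# reacts only above the threshold.
```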
While the detection problem was extensively addressed in the zone λ1 (X0 ) > 1, the case where λ1 (X0 ) < 1 was much
less studied. Montanari et al. [12] consider the zone λ1 (X0 ) < 1 when X0 is a rank 1 matrix. Thanks to simple information
geometry tools, [12] prove that, in this region, it is impossible to find a consistent test for the detection of the spike. Independently
of the standard random matrix tools, this approach is extended to the more general case when X0 and Z are tensors of order
d ≥ 3; namely, if the Frobenius norm of the tensor X0 is strictly less than a threshold depending on d, then the probability
distributions of the observation under the two hypotheses are asymptotically indistinguishable, so that any detection test cannot
behave better than a random guess. This property, which is stronger than the non-existence of a consistent test, does not hold
in the matrix case d = 2: see for instance [14] where a non-consistent test is exhibited that has a better performance than a
random guess.
In this paper, we extend the above methodology to the general case where X0 has rank r. Our contribution is to prove
that under λ1(X0) < 1, consistent detection is impossible. While this theoretical result is not unexpected, we believe that
it provides a better understanding of the above fundamental detection problem in large dimensions without resorting to the
machinery of large random matrices.
We mention that the works [9] (when the spike is symmetric) and [10] (non-symmetric case) are clearly related to the
above problem. However, two major differences arise: first, the detection is not addressed but rather the estimation of X0;
second, a statistical model of the spike is needed. The results are in general not explicit. However for a certain prior and
for the rank one model, it can be deduced that, in the zone λ1 (X0 ) < 1, it is impossible to find estimates of the spike that
have better performance than any dummy estimate (i.e. an estimate that does not rely on the observation). The authors rely on
the computation of the mutual information between X0 and Y: this computation involves non-obvious results extending the
approach of Talagrand for studying the Sherrington–Kirkpatrick model [15].
II. MODEL, NOTATION, ASSUMPTIONS
The set of complex-valued matrices C^{n×n} is a complex vector space endowed with the standard scalar product ⟨X, Y⟩ = Tr(XY*) and the Frobenius norm ‖X‖_F = √⟨X, X⟩. The spectral norm of a matrix X is denoted by ‖X‖_2. The spike ("the signal") is assumed to be a matrix of fixed rank r and hence admits an SVD such as
X0 = Σ_{j=1}^{r} λ_j u_j v_j* = U Λ V*    (1)
where λi = λi (X0 ) are the singular values of X0 sorted in descending order and where Λ is the diagonal matrix gathering
the (λj )j=1,...,r in the descending order. As X0 has to be defined for any n, we impose a non-erratic behavior of X0 , namely
that all its singular values (λj )j=1,...,r do not depend on n for n large enough. This hypothesis could be replaced by the
condition that (λj)_{j=1,...,r} all converge towards a finite limit at an ad hoc rate. However, this would introduce purely technical
difficulties.
The noise matrix Z is assumed to have i.i.d. entries distributed as Nc (0, 1/n). We consider the alternative H0 : Y = Z
versus H1 : Y = X0 + Z. We denote by p1,n(y) the probability density of Y under H1 and by p0,n(y) the density of Y under H0. L(Y) = p1,n(Y)/p0,n(Y) is the likelihood ratio and we denote by E0 the expectation under H0. We now recall the fundamental information geometry results used in [12] in order to address the detection problem. The following properties are
well known (see also [2] section 3):
• (i) If E0[L(Y)²] is bounded, then no consistent detection test exists.
• (ii) If moreover E0[L(Y)²] = 1 + o(1), the total variation distance between p0,n and p1,n converges towards 0, and no test performs better than a decision at random.
We mention, however, that (i) and (ii) are only sufficient conditions. In particular, E0[L(Y)²] unbounded does not imply the
existence of consistent tests.
III. PRIOR ON THE SPIKE. EXPRESSION OF THE SECOND-ORDER MOMENT.
The density of Z, seen as a collection of n² complex-valued random variables, is obviously p0,n(z) = κn exp(−n ‖z‖_F²), where κn = (n/π)^{n²}. On the one hand, we notice that the study of the second-order moment of the likelihood ratio is not suited to the deterministic model of the spike as presented previously. Indeed, in this case E0[L(Y)²] has the simple expression exp(2n ‖X0‖_F²) and always diverges. On the other hand, the noise matrix shows an invariance property: if Θ1, Θ2 are unitary n × n matrices, then the density of Θ1 Z Θ2 equals that of Z. We hence modify the data according to the following procedure: we pick two independent unitary matrices Θ1, Θ2 according to the Haar measure (which corresponds to the uniform distribution on the set of all unitary n × n matrices), and change the data matrix Y into Θ1 Y Θ2. As said above, this does not affect the distribution of the noise, but it amounts to assuming a certain prior on the spike. Indeed, it amounts to replacing ui by Θ1 ui and vi by Θ2* vi. In the following, the data and the noise matrices after this procedure are still denoted respectively by Y and Z.
We are now in a position to give a closed-form expression of the second-order moment of L(Y). We have p1,n(Y) = E_X[p0,n(Y − X)] where E_X is the mathematical expectation over the prior distribution of the spike, or equivalently over the Haar matrices Θ1, Θ2. It holds that E0[L(Y)²] = E[exp(2n R⟨X, X′⟩)] where the expectation is over independent copies X, X′ of the spike (R stands for the real part); X and X′ being respectively associated with (Θ1, Θ2) and (Θ′1, Θ′2), E0[L(Y)²] has the expression

E[ exp( 2n R Tr( Θ1 X0 Θ2 (Θ′2)* X0* (Θ′1)* ) ) ].

As Θk and Θ′k are Haar and independent, (Θ′1)* Θ1 and Θ2 (Θ′2)* are also independent and Haar distributed, and it holds that

E0[L(Y)²] = E[exp(2nη)],    (2)

where the expectation is over the independent Haar matrices Θ1, Θ2 and η = R Tr(Θ1 X0 Θ2 X0*). The ultimate simplification comes from the decomposition (1), which implies that

η = R Tr(Λ Ψ1 Λ Ψ2)    (3)
where Ψ1 = U∗ Θ1 U and Ψ2 = V∗ Θ2 V. It is clear that Ψ1 and Ψ2 are independent matrices that are both distributed as
the upper r × r diagonal block of a Haar unitary matrix.
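The prior and the random variable η can be simulated directly, which is useful as a sanity check. The sketch below samples Ψ1, Ψ2 as the upper r × r blocks of Haar unitary matrices obtained by QR of complex Ginibre matrices with phase correction (a standard construction, assumed here); note that a naive Monte Carlo average of exp(2nη) badly underestimates the heavy upper tail that drives E0[L(Y)²].

```python
import numpy as np

def haar_unitary(n, rng):
    # Haar-distributed unitary via QR of a complex Ginibre matrix (a standard construction).
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))     # phase correction

def sample_eta(Lam, n, rng):
    r = Lam.shape[0]
    Psi1 = haar_unitary(n, rng)[:r, :r]              # upper r x r block, as in the text
    Psi2 = haar_unitary(n, rng)[:r, :r]
    return float(np.real(np.trace(Lam @ Psi1 @ Lam @ Psi2)))

rng = np.random.default_rng(0)
lams, n = np.array([0.8, 0.5]), 200
Lam = np.diag(lams)
etas = np.array([sample_eta(Lam, n, rng) for _ in range(20000)])
print(etas.mean(), etas.std(), etas.max())           # eta concentrates near 0 as n grows
# The naive estimate below rarely samples the upper tail of eta and therefore underestimates
# E0[L(Y)^2]; the large deviation analysis in the following sections handles exactly that tail.
print(np.mean(np.exp(2 * n * etas)))
```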
IV. RESULT
The main result of our contribution is the following
Theorem 1. If λ1 (X0 ) < 1 then
lim sup E0[L(Y)²] ≤ ( 1 / (1 − λ1(X0)^4) )^{r²}
and it is not possible to find a consistent test.
We recall that we are looking for a condition on X0 (due to (2), (3), this is a condition on Λ) under which E[exp(2nη)] is bounded. Evidently, the divergence may occur only when η > 0. We hence consider E1 = E[exp(2nη) 1_{η>ε}] and E2 = E[exp(2nη) 1_{η≤ε}], and prove that, for a certain small enough ε > 0 to be specified later, E1 = o(1) and E2 is bounded.
V. THE E1 TERM: COMPUTATION OF THE GRF OF η.
It is clear that the boundedness of the integral E1 is achieved when η rarely deviates from 0. As remarked in [12], the
natural machinery to consider is that of the Large Deviation Principle (LDP). In essence, if η follows the LDP with rate n, there can be found a certain non-negative function, called the Good Rate Function (GRF) Iη, such that for any Borel set A of R, (1/n) log P(η ∈ A) converges towards sup_{x∈A} −Iη(x). The existence of a GRF allows one to analyze the asymptotic behaviour
of the integral E1 . In the next section, we thus justify that η follows a Large Deviation Principle with rate n, and we compute
the associated GRF.
A. Computation of the GRF of η
Eq. (3) and the Cauchy–Schwarz inequality imply that the random variable η is bounded: |η| ≤ η_max with η_max = Σ_{j=1}^{r} λ_j². We first recall that for i = 1, 2, the random matrix Ψi follows an LDP with rate n and that its GRF at the parameter ψ ∈ C^{r×r}, ‖ψ‖_2 ≤ 1, is −log det(I_r − ψ*ψ) (see Theorem 3-6 in [8]). Besides, η is a function of the i.i.d. matrices (Ψi)_{i=1,2} and therefore the contraction principle applies to η (see Theorem 4.2.1 in [7]): it ensures that η follows an LDP with rate n and that its GRF is such that, for each real |x| ≤ η_max, −Iη(x) is the solution of the following optimization problem:
Problem 2. Maximize over ψ1, ψ2 ∈ C^{r×r}

log det(I − ψ1*ψ1) + log det(I − ψ2*ψ2)    (4)

under the constraints

R Tr(Λψ1Λψ2) = x,    (5)
‖ψi‖_2 ≤ 1,  i = 1, 2.    (6)
We provide a closed-form solution of Problem 2. In this respect, we define for each k = 1, . . . , r the interval I_k defined by

∀k = 1, ..., r − 1 :  I_k = ] Σ_{i=1}^{k} λ_i² − k λ_k² ,  Σ_{i=1}^{k+1} λ_i² − (k+1) λ_{k+1}² ]    (7)

and I_r = ] Σ_{i=1}^{r} λ_i² − r λ_r² , η_max ]. It is easy to check that (I_k)_{k=1,...,r} are disjoint and that ∪_{k=1}^{r} I_k = ]0, η_max]. The following
result holds:
Theorem 3. The maximum of Problem 2 is given by
−Iη(x) = 2 Σ_{k=1}^{r} log[ ( (Σ_{i=1}^{k} λ_i² − |x|) / k )^k · ( Π_{i=1}^{k} λ_i² )^{−1} ] 1_{I_k}(|x|)    (8)
It is easy to check that the function x ↦ −Iη(x) is continuous on ]0, η_max[. The proof of Theorem 3 is provided in the
Appendix.
We illustrate Theorem 3 through the following experiment. The rank of the spike is fixed to r = 3 and the singular values
have been set to (λ1, λ2, λ3) = (1, 0.7, 0.2). We have computed millions of random samples of the matrices (ψ1, ψ2). Each pair is associated with a point (x, y) defined as x = R Tr(Λψ1Λψ2) and y = Σ_{i=1}^{2} log det(I − ψi*ψi). We obtain a cloud
of points, the upper envelope of which is expected to be −Iη (x). We have also plotted the graph of the function y = −Iη (x).
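The experiment can be reproduced in a few lines; the sketch below uses a small ambient dimension n for the Haar matrices so that the sampled blocks explore the whole range of x (the exact setup of the published experiment is not specified beyond the description above).

```python
import numpy as np

def haar_block(r, n, rng):
    # Upper r x r block of an n x n Haar unitary (QR of a complex Ginibre matrix, phase-corrected).
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return (Q * (np.diag(R) / np.abs(np.diag(R))))[:r, :r]

rng = np.random.default_rng(1)
lams = np.array([1.0, 0.7, 0.2])
Lam, r, n = np.diag(lams), 3, 6        # small n so the blocks explore the whole range of x
pts = []
for _ in range(50000):
    p1, p2 = haar_block(r, n, rng), haar_block(r, n, rng)
    x = float(np.real(np.trace(Lam @ p1 @ Lam @ p2)))
    d1 = float(np.real(np.linalg.det(np.eye(r) - p1.conj().T @ p1)))
    d2 = float(np.real(np.linalg.det(np.eye(r) - p2.conj().T @ p2)))
    pts.append((x, np.log(d1) + np.log(d2)))
# Empirical upper envelope of the cloud near x = 0.5; for these singular values x = 0.5 lies in I_1,
# so (8) gives -I_eta(0.5) = 2*log(0.5) ~ -1.39, approached from below as the sample size grows.
print(max((y for x_, y in pts if abs(x_ - 0.5) < 0.05), default=float("-inf")))
```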
In addition, we mention that, in the more general context of tensors of order d, the second-order moment of L(Y) is still
given by (2) but the random variable - call it ηd - has a more complicated form than (3), see [5]; the asymptotics of the term
E1 can still be studied by evaluating the GRF of ηd . This GRF is the solution of an optimization problem that, apparently,
cannot be solved in closed form for d ≥ 3. In [5],
bound of the opposite of the true GRF is computed; this upper
an upper
|x|
bound, valid for any d is given for d = 2 by log 1 − ηmax . We thus also represent in Figure V-A this upper bound; clearly,
it is not tight.
−6
−12
−10
−8
k
∑ log(det(I−ψkψHk )
−4
−2
0
4
−1.0
−0.5
0.0
Tr(Λψ1Λψ2)
0.5
1.0
Fig. 1. Graph of −Iη(x) seen as an upper envelope. Upper curve: the upper bound computed in [5].
B. Computation of E1
The Varadhan lemma (see Theorem 4.3.1 in [7]) states that (1/n) log E[exp(2nη) 1_{η>ε}] → sup_{x>ε}(2x − Iη(x)), and hence the E1 term converges towards 0 when sup_{x>ε}(2x − Iη(x)) < 0. Consider any of the intervals I_k defined in (7). The derivative of 2x − Iη(x) for any x ∈ I_k is 2 − 2k/(λ_1² + ... + λ_k² − x): it is decreasing on I_k and its limit at the left extremity of I_k, i.e. (Σ_{j=1}^{k−1} λ_j²) − (k − 1)λ_k², is simply 2(1 − 1/λ_k²). If λ1(X0) < 1, then for all the indices k, 1 − 1/λ_k² < 0. This shows that 2x − Iη(x) is strictly decreasing on every I_k. Hence, for every x ∈ ]0, η_max], we have 2x − Iη(x) < 0 − Iη(0) = 0. We have
proved that E1 = o(1).
VI. THE E2 TERM: CONCENTRATION OF η.
Notice that the upper r × r block Ψ of a Haar unitary matrix Θ has the same distribution as

G (G̃* G̃)^{−1/2}

where the n × r matrix G̃ has i.i.d. entries distributed as N_C(0, 1) and G is the top r × r block of G̃. Obviously, E[G̃* G̃] = nI.
It is a standard result that a random variable distributed as a χ2 (n) is concentrated around its mean. This can be easily extended
to the matrix G̃∗ G̃:
Lemma 4. For any 0 < δ < 1, there exists a constant c such that
P( ‖ (1/n) G̃* G̃ − I ‖_2 > δ ) ≤ c exp(−n δ²/2).
We take G̃1 and G̃2 independent, distributed as G̃, and consider the upper r × r blocks G1 and G2 of G̃1 and G̃2. It follows that η has the same distribution as R Tr( ΛG1 (G̃1* G̃1)^{−1/2} ΛG2 (G̃2* G̃2)^{−1/2} ). Take now any δ < 1. We may split the integral E2 into two parts:

E[ exp(2nη) 1_{{η≤ε}∩B1^c∩B2^c} ] + E[ exp(2nη) 1_{{η≤ε}∩(B1∪B2)} ] =: E2′ + E2″,

where we have defined the events Bi = { ‖ (1/n) G̃i* G̃i − I ‖_2 > δ }. Thanks to the above concentration result, we have

E2″ ≤ exp(2nε) ( P(B1) + P(B2) ) ≤ exp(2nε) · 2c exp(−nδ²/2).

As it is always possible to choose δ and ε such that δ² − 4ε > 0 and δ < 1, it follows that E2″ = o(1).
Let us now inspect the term E2′. Since we have, for i = 1, 2, ‖ (1/n) G̃i* G̃i − I ‖_2 ≤ δ, there exist ∆i for i = 1, 2 such that (G̃i* G̃i)^{−1/2} = (1/√n)(I + ∆i) with ‖∆i‖_2 ≤ δ/2. We hence have

E2′ ≤ E[ exp( 2 R Tr( ΛG1 (I + ∆1) ΛG2 (I + ∆2) ) ) ].
We expand 2 R Tr(ΛG1(I + ∆1)ΛG2(I + ∆2)) as the sum of four terms. Take for instance

T2 = 2 R Tr(ΛG1 ∆1 ΛG2).

Thanks to von Neumann's lemma [11], we have

T2 ≤ 2 Σ_{k=1}^{r} λ_k(∆1) λ_k(ΛG2ΛG1) ≤ 2 ‖∆1‖ Σ_{k=1}^{r} λ_k(ΛG2ΛG1).

As Σ_{k=1}^{r} λ_k(ΛG2ΛG1) ≤ √r √( Σ_{k=1}^{r} λ_k²(ΛG2ΛG1) ), it yields

T2 ≤ 2 ‖∆1‖ √r √( Tr(ΛG2ΛG1G1*ΛG2*Λ) ).

Invoking von Neumann's lemma three times, it holds that

T2 ≤ 2 ‖∆1‖ √r ‖Λ‖_2² √( Tr(G1G1*) Tr(G2G2*) ) ≤ √r ‖∆1‖ ‖Λ‖_2² ( Tr(G1G1*) + Tr(G2G2*) ).

Similar manipulations can be done on the other terms of the expansion, so that E2′ is less than

E[ exp( 2 R Tr(ΛG1ΛG2) + β ( Tr(G1G1*) + Tr(G2G2*) ) ) ]

with β = (√r/2) δ(2 + δ) ‖Λ‖_2². The above expectation is to be understood as the expectation over (G1, G2). As G1 and G2 are
independent, we consider first the expectation over G1 . This gives, up to the factor exp (βTr (G2 G∗2 ))
π^{−r²} ∫ exp( 2 R Tr(g1 E) + (β − 1) Tr(g1* g1) ) dg1

with E = ΛG2Λ. It is always possible to choose δ such that β < 1. With such a β, the above integral is

(1 − β)^{−r²} exp( Tr(EE*) / (1 − β) ).

As Tr(EE*) ≤ ‖Λ‖_2⁴ Tr(G2G2*), we finally obtain, after multiplying by exp(β Tr(G2G2*)) and taking the expectation over G2:

E2′ ≤ (1 − β)^{−r²} π^{−r²} ∫ exp( − [ (1 − β)² − ‖Λ‖_2⁴ ] / (1 − β) · Tr(g2* g2) ) dg2 .

If ‖Λ‖_2 < 1, it is always possible to adjust δ such that the above integral converges. In this condition, we have

E2′ ≤ ( 1 / ( (1 − β)² − ‖Λ‖_2⁴ ) )^{r²}.
This must be true for all β arbitrarily small, hence the result.
APPENDIX
We prove Theorem 3 when x > 0. As the function to be maximized converges towards −∞ if kψ 1 k → 1 or kψ 2 k → 1,
any argument (ψ 1 , ψ 2 ) of the maximization problem satisfies kψ i k < 1, i = 1, 2. Therefore, the Karush-Kuhn-Tucker (KKT)
conditions imply the existence of a scalar Lagrange multiplier µ ≥ 0 such that (ψ1, ψ2) is a stationary point of the Lagrangian ℓ(ψ1, ψ2, µ) defined by Σ_{i=1}^{2} log det(I_r − ψi*ψi) + µ R Tr(Λψ1Λψ2). As ℓ is a real valued function, a stationary point is computed by setting the differential w.r.t. the entries of ψ1 and ψ2 to zero. It can be checked that (ψ1, ψ2) is a stationary point of ℓ when

µ Λψ2Λ = ψ1* (I − ψ1ψ1*)^{−1},
µ Λψ1Λ = ψ2* (I − ψ2ψ2*)^{−1}.

In a first step, these equations can be shown to be satisfied only if ψ1 and ψ2 are diagonal up to permutations of the columns. Then, it can be deduced that there exists a diagonal matrix 0 ≤ P ≤ I and a permutation matrix Π such that
log det(Ir − ψ1*ψ1) + log det(Ir − ψ2*ψ2) = 2 log det(I − P) and R Tr(Λψ1Λψ2) = Tr(ΛΠ*ΛΠ P). This invites us to
consider the following
Problem 5. Maximize
log det(I − P)
(9)
jointly over all the r! permutations Π and over diagonal matrices P verifying 0 ≤ P ≤ I and the constraint
Tr(ΛΠ∗ ΛΠ P) = x.
(10)
In a first step, we set Π = I in the above problem and consider the
Problem 6. Maximize

Σ_{i=1}^{r} log(1 − p_i)    (11)

under the constraints that 0 ≤ p_i ≤ 1 for each i = 1, . . . , r and

Σ_{i=1}^{r} λ_i² p_i = x.    (12)
The maximum is denoted by JΛ (x).
This is a variant of the celebrated water-filling problem (see e.g. [16] and Chap. 9 of [6]) that was solved to evaluate the
capacity of a frequency selective Gaussian channel, the difference being that in the latter problem, log(1 − pi ) is replaced by
log(1 + p_i). In order to solve Problem 6, we assume that the non-zero singular values (λ_i)_{i=1,...,r} are distinct. If this is not the case, a standard perturbation argument can be used in order to address the general case. As the function to be maximized is strictly concave on the set defined by the constraints, the maximum is reached at a unique point p_* verifying p_{i,*} < 1 for each i. We consider the Lagrangian corresponding to Problem 6 given by Σ_{i=1}^{r} log(1 − p_i) + µ Σ_{i=1}^{r} λ_i² p_i + Σ_{i=1}^{r} δ_i p_i, where µ ≥ 0 and δ_i ≥ 0 for i = 1, . . . , r. The partial derivatives w.r.t. the parameters (p_i)_{i=1,...,r} are zero at p_*. This leads to

1/(1 − p_{i,*}) = µ_* λ_i² + δ_{i,*}   for i = 1, . . . , r.    (13)
The first remark is that necessarily, these equations imply that the numbers pi,∗ are sorted in decreasing order. To verify this
claim, we assume that i < j and that p_{i,*} = 0 and p_{j,*} > 0. Then, it holds that µ_* λ_i² + δ_{i,*} = 1 and that µ_* λ_j² = 1/(1 − p_{j,*}) > 1 because p_{j,*} > 0 implies δ_{j,*} = 0. Therefore, λ_i² ≤ 1/µ_* < λ_j², a contradiction because λ_i² ≥ λ_j². We denote by s(x) the number of non-zero entries of p_*. Hence, the first s(x) entries of p_* are non-zero. Moreover, the equations µ_* λ_i² = 1/(1 − p_{i,*}) for
i = 1, . . . , s(x) imply that p1,∗ ≥ . . . ≥ ps(x),∗ > 0 = ps(x)+1,∗ = . . . = pr,∗ .
We now analytically characterize s(x). On the one hand, (13) computed for i = s(x) and for i = s(x) + 1 both imply

λ_{s(x)+1}² ≤ 1/µ_* < λ_{s(x)}².    (14)

On the other hand, the constraint (12) imposes that 1/µ_* verifies

1/µ_* = ( Σ_{i=1}^{s(x)} λ_i² − x ) / s(x).

Therefore, it holds that

( Σ_{i=1}^{s(x)} λ_i² ) − s(x) λ_{s(x)}² < x ≤ ( Σ_{i=1}^{s(x)} λ_i² ) − s(x) λ_{s(x)+1}²    (15)
such that s(x) coincides with the integer k for which x ∈ Ik (see (7) for the definition of these intervals). The maximum
Σ_{i=1}^{s(x)} log(1 − p_{i,*}) is directly computed as

J_Λ(x) = log[ ( (Σ_{i=1}^{s(x)} λ_i² − x) / s(x) )^{s(x)} · ( Π_{i=1}^{s(x)} λ_i² )^{−1} ]    (16)
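For completeness, here is a small sketch that solves Problem 6 explicitly via (13)–(15) and checks the value against (16); variable names are illustrative.

```python
import numpy as np

def waterfill(x, lams):
    """Solve Problem 6: return s(x), mu_* and p_* from (13)-(15)."""
    lam2 = np.sort(np.asarray(lams, dtype=float) ** 2)[::-1]   # lambda_1^2 >= ... >= lambda_r^2
    r = len(lam2)
    for s in range(1, r + 1):
        inv_mu = (lam2[:s].sum() - x) / s          # candidate 1/mu_* from (12)
        lo = lam2[s] if s < r else 0.0
        if lo <= inv_mu < lam2[s - 1]:             # condition (14)
            p = np.zeros(r)
            p[:s] = 1.0 - inv_mu / lam2[:s]        # from mu_* lambda_i^2 = 1/(1 - p_i)
            return s, 1.0 / inv_mu, p
    raise ValueError("x must lie in ]0, eta_max]")

lams = [1.0, 0.7, 0.2]
x = 0.6
s, mu, p = waterfill(x, lams)
lam2 = np.sort(np.array(lams) ** 2)[::-1]
print(s, p, np.dot(lam2, p))                       # the constraint (12) is met: equals x
print(np.sum(np.log(1.0 - p[p > 0])),              # objective value ...
      np.log(((lam2[:s].sum() - x) / s) ** s / np.prod(lam2[:s])))   # ... matches (16)
# Recall I_eta(x) = -2 J_Lambda(x), so the GRF follows immediately from this value.
```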
In order to show that the GRF of η is Iη (x) = −2JΛ (x), it remains to show that the solution of Problem 5 is reached
when the permutation matrix Π is the identity. In this respect, we introduce a nested problem motivated by the following
observation. We denote by α and β the r-dimensional vectors whose components are respectively the diagonal entries of Λ² and of ΛΠ*ΛΠ arranged in decreasing order. Evidently, α majorizes β in the sense that

Σ_{i=1}^{k} α_i ≥ Σ_{i=1}^{k} β_i   for k = 1, . . . , r.    (17)
We thus consider the relaxed problem
Problem 7. Maximize log det(I − P) over the diagonal matrices 0 ≤ P ≤ I and over vectors β = (β1 , ..., βr ) satisfying
β1 ≥ β2 ≥ . . . ≥ βr ≥ 0, the majorization constraint (17), and the equality constraint

Σ_{i=1}^{r} β_i p_i = x.    (18)
The maximum of Problem 7 is above the maximum of Problem 5 which is itself above the maximum JΛ (x) of Problem 6.
We actually show that the maximum of Problem 7 is less than JΛ (x), and that it is reached for a vector β that coincides with
α. This will imply that the optimal permutation Π in Problem 5 is I and Iη (x) = −2JΛ (x).
We give some elements for solving Problem 7. We consider a stationary point (p∗ , β ∗ ) of the associated Lagrangian and
compute the KKT conditions. We suppose that this stationary point attains the maximum. If s denotes the number of non-zero
components in p_*, we prove that, necessarily, p_{1,*} ≥ p_{2,*} ≥ ... ≥ p_{s,*} > 0 and β_{1,*} ≥ β_{2,*} ≥ ... ≥ β_{s,*}. We let j1 be the first index such that Σ_{i=1}^{j1} α_i > Σ_{i=1}^{j1} β_i (this index exists, otherwise β_* = α and the problem is solved). This implies that β_{i,*} = α_i for all indices i = 1, ..., j1 − 1. Notice this fact: if we suppose that the condition Σ_{i=1}^{j1+k} α_i > Σ_{i=1}^{j1+k} β_i is true whatever k, then it is possible to add a small ε > 0 and update β_{j1,*} as β_{j1,*} + ε in such a way that the majorization constraints still hold, the constraint (18) holds, and the updated p_* increases the function to maximize. This is in contradiction with the definition of (p_*, β_*). This means that there exists an index j2 (we choose the smallest) such that Σ_{i=1}^{j1+j2} α_i = Σ_{i=1}^{j1+j2} β_{i,*}. It can be shown that it is necessary that all the β_{i,*} are equal for i = j1, ..., j1 + j2. After some algebraic gymnastics, it can be shown that in this case, all the inequalities (17) at β_* are saturated, hence implying that β_* = α. The value of Σ_i log(1 − p_{i,*}) equals J_Λ(x).
REFERENCES
[1] Z. Bai and J.W. Silverstein. Spectral Analysis of Large Dimensional Random Matrices. Springer-Verlag Series in Statistics, 2010.
[2] J. Banks, C. Moore, R. Vershynin, N. Verzelen, and J. Xu. Information-theoretic bounds and phase transitions in clustering, sparse PCA, and submatrix
localization. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 1137–1141, June 2017.
[3] Florent Benaych-Georges and Raj Rao Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices.
Journal of Multivariate Analysis, 111:120–135, 2012.
[4] P. Bianchi, M. Debbah, M. Maı̈da, and M. Najim. Performance of statistical tests for single source detection using random matrix theory. IEEE
Transactions on Information Theory, 57(4):2400–2419, 2011.
[5] A. Chevreuil and Ph. Loubaton. On the non-detectability of spiked large random tensors. Arxiv, 1802.07093, 2018.
[6] T.M. Cover and J.A. Thomas. Elements of Information Theory, 2nd Edition. Wiley Interscience, 2006.
[7] A. Dembo and O. Zeitouni. Large Deviations Techniques and Applications. Springer-Verlag Berlin Heidelberg, 2009.
[8] Fabrice Gamboa and Alain Rouault. Operator-valued spectral measures and large deviations. J. of Stat. Planning and Inference, 154(3):72–86, 2014.
[9] Marc Lelarge and Léo Miolane. Fundamental limits of symmetric low-rank matrix estimation. arXiv:1611.03888v3 [math.PR], 2016.
[10] Léo Miolane. Fundamental limits of low-rank matrix estimation: the non-symmetric case. arXiv:1702.00473v2, 2017.
[11] L. Mirsky. A trace inequality of John von Neumann. Monatshefte für Mathematik, 79(4):303–306, Dec 1975.
[12] Andrea Montanari, Daniel Reichman, and Ofer Zeitouni. On the limitation of spectral methods: from the gaussian hidden clique problem to rank one
perturbations of gaussian tensors. IEEE Trans. Inf. Theor., 63(3):1572–1579, March 2017.
[13] R.R Nadakuditi and A. Edelman. Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples. IEEE
Transactions on Signal Processing, 56(7):2625–2637, 2008.
[14] A. Onatski, M.J. Moreira, and M. Hallin. Asymptotic power of sphericity tests for high-dimensional data. Ann. Statistics, 41(3):1204–1231, 2013.
[15] Michel Talagrand. Mean Field Models for Spin Glasses. Volume I: Basic Examples. Springer-Verlag Berlin Heidelberg, 2010.
[16] H. S. Witsenhausen. A determinant maximization problem occurring in the theory of data communications. SIAM J. Appl. Math, 29(3):515–522, 1975.
| 10 |
Recurrent Neural Network Language Models for
Open Vocabulary Event-Level Cyber Anomaly Detection
Aaron Tuor,1 Ryan Baerwolf,2 Nicolas Knowles,2
Brian Hutchinson,1,2 Nicole Nichols1 and Robert Jasper1
1
Pacific Northwest National Laboratory
Richland, Washington
2
Western Washington University
Bellingham, Washington
Abstract
Automated analysis methods are crucial aids for monitoring
and defending a network to protect the sensitive or confidential data it hosts. This work introduces a flexible, powerful,
and unsupervised approach to detecting anomalous behavior in computer and network logs; one that largely eliminates
domain-dependent feature engineering employed by existing
methods. By treating system logs as threads of interleaved
“sentences” (event log lines) to train online unsupervised neural network language models, our approach provides an adaptive model of normal network behavior. We compare the effectiveness of both standard and bidirectional recurrent neural network language models at detecting malicious activity
within network log data. Extending these models, we introduce a tiered recurrent architecture, which provides context
by modeling sequences of users’ actions over time. Compared to Isolation Forest and Principal Components Analysis, two popular anomaly detection algorithms, we observe
superior performance on the Los Alamos National Laboratory Cyber Security dataset. For log-line-level red team detection, our best performing character-based model provides
test set area under the receiver operator characteristic curve
of 0.98, demonstrating the strong fine-grained anomaly detection performance of this approach on open vocabulary logging sources.
1
Introduction
To minimize cyber security risks, it is essential that organizations be able to rapidly detect and mitigate malicious
activity on their computer networks. These threats can originate from a variety of sources including malware, phishing,
port scanning, etc. Attacks can lead to unauthorized network
access to perpetrate further damage such as theft of credentials, intellectual property, and other business sensitive information. In a typical scenario, cyber defenders and network
administrators are tasked with sifting through vast amounts
of data from various logging sources to assess potential security risks. Unfortunately, the amount of data for even a
modestly-sized network can quickly grow beyond the ability of a single person or team to assess, leading to delayed
response. The desire for automated assistance has and continues to encourage inter-domain research in cyber security
and machine learning.
Signature-based approaches for automated detection can
be highly effective for characterizing individual threats. Despite their high precision, they suffer from low recall and
may fail to detect subtle mutations or novel attacks. Alternatively, given an unlabeled training set of typically benign activity logs, one can build a model of “normal behavior”. During online joint training and evaluation of this model, patterns of normal usage will be reinforced and atypical malicious activity will stand out as anomalous. The features used
to identify unusual behavior are typically statistical feature
vectors associated with time slices, e.g., vectors of counts for
types of activities taking place in a 24-hour window. Such
systems developed in research have been criticized as brittle
to differences in site-specific properties of real-world operational networks such as security constraints and variable
usage patterns (Sommer and Paxson 2010).
The approach we introduce aims to minimize site-specific
assumptions implicit in feature engineering, and effectively
model variability in network usage by direct online learning of language models over log lines. Language models
assign probabilities to sequences of tokens and are a core
component of speech recognition, machine translation, and
other language processing systems. Specifically, we explore
the effectiveness of several recurrent neural network (RNN)
language models for use in a network anomaly detection system. Our system dynamically updates the network language
model each day based on the previous day’s events. When
the language model assigns a low probability to a log-line it
is flagged as anomalous. There are several advantages to this
approach:
1. Reduced feature engineering: Our model acts directly
on raw string tokens, rather than hand-designed domain-specific statistics. This dramatically reduces the time to
deployment, and makes it agnostic to the specific network
or logging source configuration. It also removes the “blind
spots” introduced when tens of thousands of log-lines are
distilled down to a single aggregated feature vector, allowing our model to capture patterns that would have otherwise been lost.
2. Fine grained assessment: The response time for analysts
can be improved by providing more specific and relevant
events of interest. Baseline systems that alert to a user’s
day aggregate require sifting through tens of thousands of
actions. Our approach can provide log-line-level or even
token-level scores to the analyst, helping them quickly locate the suspicious activity.
3. Real time processing: With the ability to process events
in real time and fixed bounds on memory usage which
do not grow over time, our approach is suitable for the
common scenario in which log-line events are appearing
in a high-volume, high-velocity log stream.
We assess our models using the publicly available
Los Alamos National Laboratory (LANL) Cyber Security
Dataset, which contains real (de-identified) data with ground
truth red team attacks, and demonstrate language models
definitively outperforming standard unsupervised anomaly
detection approaches.
2
Prior work
Machine learning has been widely explored for network
anomaly detection, with techniques such as isolation forest (Gavai et al. 2015; Liu, Ting, and Zhou 2008) and principal component analysis (Novakov et al. 2013; Ringberg
et al. 2007) attracting significant interest. Machine learning classifiers ranging from decision trees to Naïve Bayes
have been used for cyber security tasks such as malware detection, network intrusion, and insider threat detection. Extensive discussion of machine learning applications in cyber security is presented in (Bhattacharyya and Kalita 2013;
Buczak and Guven 2016; Dua and Du 2016; Kumar, Kumar,
and Sachdeva 2010; Zuech, Khoshgoftaar, and Wald 2015;
Rubin-Delanchy, Lawson, and Heard 2016).
Deep learning approaches are also gaining adoption for
specialized cyber defense tasks. In an early use of recurrent
neural networks, Debar, Becker, and Siboni (1992) model
sequences of Unix shell commands for network intrusion
detection. Anomaly detection has been demonstrated using
deep belief networks on the KDD Cup 1999 dataset (Alrawashdeh and Purdy 2016), and Bivens et al. (2002) use
multi-layer perceptrons for the DARPA 1999 dataset. Both
approaches use aggregated features and synthetic network
data. Tuor et al. (2017) and Veeramachaneni et al. (2016)
both employ deep neural network autoencoders for unsupervised network anomaly detection using time aggregated
statistics as features.
Some works of note have been previously published on
the LANL data. Turcotte, Heard and Kent (2016) develop
an online statistical model for anomaly detection in network
activity using Multinomial-Dirichlet models. Similarly, Turcotte et al. (2016) use Poisson Factorization (Gopalan, Hofman, and Blei 2013) on the LANL authentication logs. A
user/computer authentication count matrix is constructed by
assuming each count comes from a Poisson distribution parameterized by latent factors for users and computers. The
learned distributions are then used to predict unlikely authentication behavior.
Several variants of tiered recurrent networks have been
explored in the machine learning and natural language processing communities (Koutnik et al. 2014; Ling et al. 2015b;
Ling et al. 2015a; Chung et al. 2015). They are often realized by a lower tier pre-processing network, whose output is
fed to an upper tier network and the separate tiers are jointly
trained. Ling et al. (2015b) use a character-level convolutional neural network to feed a word level long short-term
memory (LSTM) RNN for machine translation, with predictions made at the word-level. Both Hwang and Sung (2016)
and Ling et al. (2015a) use a character-based LSTM to feed
a second word or utterance-based LSTM for language modeling. Pascanu et al. (2015) create activity models from real
world data on a per-event (command) basis and sequences of
system calls are then modeled using RNN and echo state networks. The learned features are used to independently train
neural network and logistic regression classifiers. Max pooling is applied to hidden layers of the unsupervised RNN for
each time step in a session and the result is concatenated to
the final hidden state to produce feature vectors for the classifier. This is similar to our tiered approach, in which we use
the average of all hidden states concatenated with the final
hidden state as input to the upper-tier RNN. In contrast, our
model is completely unsupervised and all components are
jointly trained.
3
Approach
Our approach learns normal behavior for users, processing
a stream of computer and network log-lines as follows:
1. Initialize model weights randomly
2. For each day k in chronological order:
(a) Given model Mk−1 , produce log-line-level anomaly
scores for all events in day k
(b) Optionally, produce an aggregated anomaly score each
user for day k (from the log-line-level scores)
(c) Send per-user-day or per-user-event anomaly scores in
rank order to analysts for inspection
(d) Update model weights to minimize loss on all log-lines
in day k, yielding model Mk
This methodology interleaves detection and training in an
online fashion. In this section we detail the components of
our approach.
3.1
Log-Line Tokenization
To work directly from arbitrary log formats, we treat log-lines as sequences of tokens. For this work, we consider two
tokenization granularities: word-level and character-level.
For word tokenization, we assume that tokens in the log-line are delimited by a known character (e.g., space or
comma). After splitting the log-lines on this delimiter, we
define a shared vocabulary of “words” over all log fields,
consisting of the sufficiently-frequent tokens appearing in
the training set. To allow our model to handle previously
unseen tokens, we add an “out of vocabulary” token to our
vocabulary, <oov>. (For instance, not every IP address will
be represented in a training set; likewise, new PCs and users
are continually being added to large networks.) To ensure
that <oov> has non-zero probability, we replace sufficiently
infrequent tokens in the training data with <oov>. During
evaluation, tokens not seen before are labeled <oov>. In order to accommodate shifting word distributions in an online
environment, a fixed size vocabulary could be periodically
updated using a sliding window of word frequency statistics. For simplicity, we assume we have a fixed training set
from which we produce a fixed vocabulary.
To avoid the challenges of managing a word-level vocabulary, we also develop language models using a character-level tokenization. In this case, our primitive vocabulary,
the alphabet of printable ASCII characters, circumvents the
open vocabulary issue by its ability to represent any log entry irrespective of the network, logging source, or log field.
With character-level tokenization, we keep the delimiter token in the sequence, to provide our models with cues to transitions between log-line fields.
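As an illustration (a sketch with an invented frequency threshold; the comma delimiter is an assumption about the log format), the two tokenizations and the <oov> handling can be written as:

```python
from collections import Counter

DELIM = ","          # assumed field delimiter for the authentication log-lines

def word_tokenize(line):
    return line.strip().split(DELIM)

def char_tokenize(line):
    # Character-level tokens; the delimiter is kept so the model sees field transitions.
    return list(line.strip())

def build_word_vocab(train_lines, min_count=10):
    counts = Counter(tok for line in train_lines for tok in word_tokenize(line))
    vocab = {"<oov>": 0, "<sos>": 1, "<eos>": 2}
    for tok, c in counts.items():
        if c >= min_count:                     # infrequent tokens collapse to <oov>
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(line, vocab):
    toks = ["<sos>"] + word_tokenize(line) + ["<eos>"]
    return [vocab.get(t, vocab["<oov>"]) for t in toks]

train = ["1,C625@DOM1,U147@DOM1,C625,C625,Negotiate,Batch,LogOn,Success"] * 20
vocab = build_word_vocab(train, min_count=5)
print(encode(train[0], vocab))
print(char_tokenize(train[0])[:12])
```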
3.2
Recurrent Neural Network Language Models
To produce log-line-level anomaly scores, we use recurrent
neural networks in two ways: 1) as a language model over
individual log-lines, and 2) to model the state of a user over
time. We first present two recurrent models that focus only
on (1), and then a tiered model that accomplishes both (1)
and (2). Both were implemented1 for our experiments using
TensorFlow (Abadi et al. 2015).
Event Model (EM). First we consider a simple RNN
model that operates on the token (e.g., word) sequences
of individual log-lines (events). Specifically, we consider
a Long Short-Term Memory (LSTM) (Hochreiter and
Schmidhuber 1997) network whose inputs are token embeddings and from whose output we predict distributions over
the next token.
For a log-line with K tokens, each drawn from a shared
vocabulary of size C, let X(1:K) = x(1) , x(2) , . . . , x(K) denote a sequence of one-hot representations of the tokens
(each x(t) ∈ RC ).
In this model, the hidden representation at token t, h(t) ,
from which we make our predictions, is a function of
x(1) , x(2) , . . . , x(t) according to the usual LSTM equations:
h(t) = o(t) ◦ tanh(c(t))    (1)
c(t) = f(t) ◦ c(t−1) + i(t) ◦ g(t)    (2)
g(t) = tanh( x(t) W(g,x) + h(t−1) W(g,h) + b(g) )    (3)
f(t) = σ( x(t) W(f,x) + h(t−1) W(f,h) + b(f) )    (4)
i(t) = σ( x(t) W(i,x) + h(t−1) W(i,h) + b(i) )    (5)
o(t) = σ( x(t) W(o,x) + h(t−1) W(o,h) + b(o) ),    (6)
where the initial hidden and cell states, c(0) and h(0) , are
set to zero vectors, and ◦ and σ denote element-wise multiplication and logistic sigmoid, respectively. Vector g(t) is
a hidden representation based on the current input and previous hidden state, while vectors f(t) , i(t) , and o(t) , are the
standard LSTM gates. The matrices (W) and bias vectors
(b) are the model parameters. We use each h(t−1) to produce a probability distribution p(t) over the token at time t,
as follows:
p(t) = softmax( h(t−1) W(p) + b(p) )    (7)
1
Code will soon be available at https://github.com/pnnl/safekit
Figure 1: Event Models. The set of black-bordered nodes and connections illustrates the EM model, while the set of all nodes and connections illustrates the BEM model.
We use cross-entropy loss,
(1/K) Σ_{t=1}^{K} H( x(t), p(t) ),    (8)
for two important purposes: first, as per-log-line anomaly
score and second, as the training objective to update model
weights. We train this model using stochastic mini-batch
(non-truncated) back-propagation through time.
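To make the scoring concrete, the following NumPy sketch runs Eqs. (1)–(8) forward over one tokenized log-line and returns its cross-entropy anomaly score. Weight shapes and initialization are illustrative and gate biases are omitted for brevity; this is not the TensorFlow implementation used for the experiments.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def event_anomaly_score(token_ids, params, C, H):
    """Cross-entropy of a log-line (Eq. 8) under the event model (Eqs. 1-7), gate biases omitted."""
    Wg, Wf, Wi, Wo, Wp, bp = params            # each gate matrix stacks input and recurrent weights
    h, c = np.zeros(H), np.zeros(H)
    losses = []
    for t in range(1, len(token_ids)):
        x = np.zeros(C); x[token_ids[t - 1]] = 1.0   # one-hot of the previous token
        z = np.concatenate([x, h])
        g = np.tanh(z @ Wg)                    # Eq. (3)
        f = 1 / (1 + np.exp(-z @ Wf))          # Eq. (4)
        i = 1 / (1 + np.exp(-z @ Wi))          # Eq. (5)
        o = 1 / (1 + np.exp(-z @ Wo))          # Eq. (6)
        c = f * c + i * g                      # Eq. (2)
        h = o * np.tanh(c)                     # Eq. (1)
        p = softmax(h @ Wp + bp)               # Eq. (7): distribution over the token at step t
        losses.append(-np.log(p[token_ids[t]] + 1e-12))
    return float(np.mean(losses))              # Eq. (8): per-log-line anomaly score

rng = np.random.default_rng(0)
C, H = 50, 32                                  # toy vocabulary and hidden sizes
params = (rng.normal(0, 0.1, (C + H, H)), rng.normal(0, 0.1, (C + H, H)),
          rng.normal(0, 0.1, (C + H, H)), rng.normal(0, 0.1, (C + H, H)),
          rng.normal(0, 0.1, (H, C)), np.zeros(C))
print(event_anomaly_score([1, 7, 3, 9, 2], params, C, H))
```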
Bidirectional Event Model (BEM). Following the language model formulation suggested in (Schuster and Paliwal
1997), we alternatively model the structure of log lines with
a bidirectional LSTM. We define a new set of hidden vectors hb(K+1) , hb(K) , . . . , hb(1) by running the LSTM equations
backwards in time (starting with initial zero cell and hidden
states at time K + 1 set to zero). The weights W and biases
b for the backward LSTM are denoted with superscript b.
The probability distribution p(t) over the token at time t
is then:
p(t) = softmax( h(t−1) W(p) + hb(t+1) Wb(p) + b(p) )    (9)
Tiered Event Models (T-EM, T-BEM). To incorporate
inter-log-line context, we propose a two-tiered recurrent
neural network. The lower-tier can be either event model
(EM or BEM), but with the additional input of a context vector
(generated by the upper-tier) concatenated to the token embedding at each time step. The input to the upper-tier model
is the hidden states of the lower-tier model. This upper tier
models the dynamics of user behavior over time, producing the context vectors provided to the lower-tier RNN. This
model is illustrated in Fig. 2.
In this model, x(u,j) denotes user u’s jth log line, which
consists of a sequence of tokens as described in the previous
subsections. The upper-tier models a sequence of user log
lines, x(u,1) , x(u,2) , . . . , x(u,Tu ) , using an LSTM. For each
user u and each log line j in the user’s log line sequence, a
lower-tier LSTM is applied to the tokens of x(u,j) . The input
to the upper-tier model at log-line j is the concatenation of:
1) the final lower-tier hidden state(s) and 2) the average of
the lower-tier hidden states. In the case of a lower-tier EM,
Figure 2: Tiered Event Model (T-EM)
(1) refers to the hidden state at time K; for the BEM, (1)
is the concatenation of the forward hidden state at time K
and the backward hidden state at time 1. For (2), we average over hidden states primarily to provide many short-cut
connections in the LSTM, which aids trainability. The output of the upper-tier LSTM at log-line j is a hidden state
ĥ(u,j) . This hidden vector serves to provide context for the
lower-tier model at the next time step: specifically, ĥ(u,j−1)
is concatenated to each of the inputs of the lower-tier model
operating on the jth log-line. Note that the upper-tier model
serves only to propagate context information across individual log-lines; no loss is computed directly on the values produced by the upper-tier model.
The upper- and lower-tier models are trained jointly to
minimize the cross-entropy loss of the lower-tier model. We
unroll the two-tier model for a fixed number of log-lines,
fully unrolling each of the lower-tier models within that window. The lower-tier model’s cross-entropy loss is also used
to detect anomalous behavior, as is described further in Section 4.2.
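The data flow of the tiered model can be summarized as follows. The two LSTMs are replaced by stand-in functions so that only the shapes and the context concatenation are illustrated; all sizes are invented, and this is a sketch rather than the trained architecture.

```python
import numpy as np

H_LOW, H_UP, E = 32, 64, 20    # illustrative lower/upper hidden sizes and token embedding size
rng = np.random.default_rng(0)

def lower_tier(token_embs, context):
    # Stand-in for the lower-tier LSTM: each token input is [embedding ; context vector].
    inputs = np.hstack([token_embs, np.tile(context, (len(token_embs), 1))])
    W = rng.normal(0, 0.1, (inputs.shape[1], H_LOW))
    return np.tanh(inputs @ W)                 # (K, H_LOW); a real model recurs over tokens

def upper_tier_step(summary, state):
    # Stand-in for one step of the upper-tier LSTM over a user's log-line sequence.
    W = rng.normal(0, 0.1, (summary.size + state.size, H_UP))
    return np.tanh(np.concatenate([summary, state]) @ W)

context = np.zeros(H_UP)                       # per-user context carried across log-lines
state = np.zeros(H_UP)
for log_line in [rng.normal(size=(12, E)), rng.normal(size=(9, E))]:   # two toy log-lines
    hidden = lower_tier(log_line, context)
    summary = np.concatenate([hidden[-1], hidden.mean(axis=0)])        # final state + mean of states
    state = upper_tier_step(summary, state)
    context = state                            # fed back to the lower tier at the next log-line
    print(hidden.shape, summary.shape, context.shape)
```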
Minibatching becomes more challenging for the tiered
model, as the number of log-lines per day can vary dramatically between users. This poses two problems: first, it introduces the possibility that the most active users may have a
disproportionate impact on model weights; second, it means
that toward the end of the day, there may not be enough users
to fill the minibatch. To counteract the first problem, we fix
the number of log-lines per user per day that the model will
train on. The remaining log-lines are not used in any gradient updates. We leave compensating for the inefficiency that
results from the second to future work.
3.3
Baselines
Anomaly detection in streaming network logs often relies
upon computing statistics over windows of time and applying anomaly detection techniques to those vectors. Below
we describe the aggregate features and two anomaly detection techniques that are typical of prior work.
Aggregate Features We first define the set of per-user-day features, which summarize users’ activities in the day.
To aggregate the features that have a small number of distinct values (e.g. success/failure, logon orientation) we count
the number of occurrences for each distinct value for the
user-day. For fields that have a larger number of distinct values (pcs, users, domains), we count the number of common
and uncommon events that occurred, rather than the number
of occurrences of each distinct value (this approach avoids
high dimensional sparse features). Furthermore, we define
two categories of common/uncommon: relative to the individual entity/user, and relative to all users. A value is defined as uncommon for the user if it accounts for fewer than 5% of the
values observed in that field (up to that point in time), and
common otherwise. A value is defined as uncommon for all
users if it occurs fewer times than the average value for the
field, and common otherwise.
For the LANL dataset, the prior featurization strategy
yields a 108-dimensional aggregate feature vector per user-day. These feature vectors then serve as the input to the baseline models described next.
Models We consider two baseline models. The first uses
Principal Components Analysis (pca) to learn a low dimensional representation of the aggregate features; the anomaly
score is proportional to the reconstruction error after mapping the compressed representation back into the original dimension (Shyu et al. 2003). The second is an isolation forest
(iso) based approach (Liu, Ting, and Zhou 2008) as implemented in scikit-learn’s outlier detection tools (Pedregosa et
al. 2011). This was noted as the best performing anomaly
detection algorithm in the recent DARPA insider threat detection program, (Gavai et al. 2015).
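Given a matrix of the aggregate features above, both baselines can be sketched with scikit-learn; the feature matrix here is random stand-in data and the hyperparameters are illustrative, not those used for the reported numbers.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 108))                 # stand-in for 500 user-days x 108 features

# PCA baseline: anomaly score proportional to reconstruction error in the original space.
pca = PCA(n_components=10).fit(X)
recon = pca.inverse_transform(pca.transform(X))
pca_scores = np.sum((X - recon) ** 2, axis=1)

# Isolation forest baseline: score_samples is higher for normal points, so flip the sign.
iso = IsolationForest(n_estimators=100, random_state=0).fit(X)
iso_scores = -iso.score_samples(X)

# Rank user-days for analyst review, most anomalous first.
print(np.argsort(-pca_scores)[:5], np.argsort(-iso_scores)[:5])
```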
4
Experiments
In this section we describe experiments to evaluate the effectiveness of the proposed event modeling algorithms.
4.1
Data
The Los Alamos National Laboratory (LANL) Cyber Security Dataset (Kent 2016) consists of event logs from LANL’s
internal computer network collected over a period of 58 consecutive days. The data set contains over one billion log-lines from authentication, process, network flow, and DNS
logging sources. Identifying fields (e.g., users, computers,
and processes) have been anonymized.
The recorded network activities included both normal operational network activity as well as a series of red team activities that compromised account credentials over the first
30 days of data. Information about known red team attack
events is used only for evaluation; our approach is strictly
unsupervised.
For the experiments presented in this paper, we rely only
on the authentication event logs, whose fields and statistics
are summarized in Figure 3a. We filter these events to only
those log-lines linked to an actual user, removing computer-computer interaction events. Events on weekends and holidays contain drastically different frequencies and distributions of activities. In a real deployment a separate model would be trained for use on those days, but because no malicious events were present in that data, it was withheld.
Table 3b has statistics of our data split; the first 12 days
serve as the development set, while the remaining 18 days
are the independent test set.
4.2 Assessment Granularity
Our model learns normal behavior and assigns relatively high loss to events that are unexpected. A principal advantage of our approach is its ability to score the anomaly of individual events.
(a)
Field        | Example    | # unique labels
time         | 1          | 5011198
source user  | C625@DOM1  | 80553
dest. user   | U147@DOM1  | 98563
source pc    | C625       | 16230
dest. pc     | C625       | 15895
auth. type   | Negotiate  | 29
logon type   | Batch      | 10
auth. orient | LogOn      | 7
success      | Success    | 2

(b)
     | Days  | # Events | # Attacks | # User-days
Dev  | 1-12  | 133M     | 316       | 57
Test | 13-58 | 918M     | 385       | 79

Figure 3: Dataset statistics: (a) authentication log fields and statistics and (b) dataset splits.
4.3 Metrics
We consider two performance metrics. First, we assess results using the standard area under the receiver operator
characteristic curve (AUC) which characterizes the trade-off
in model detection performance between true positives and
false positives, effectively sweeping through all possible analyst budgets. False positives are detections that are not truly
red team events, while true positives are detections that are.
To quantify the proportion of the data the analyst must
sift through to diagnose malicious behavior on the network,
we use the Average Percentile (AP) metric. Specifically, for
each red team event or user-day, we note the percentile of its
anomaly amongst all anomaly scores for the day. We then
average these percentiles for all of the malicious events or user-days.
Model | Tokenization | AUC   | AP
pca   | --           | 0.754 | 73.9
iso   | --           | 0.763 | 75.0
EM    | Word         | 0.802 | 79.3
BEM   | Word         | 0.876 | 87.0
T-EM  | Word         | 0.782 | 77.5
T-BEM | Word         | 0.864 | 85.7
EM    | Char         | 0.750 | 70.9
BEM   | Char         | 0.843 | 82.9
T-EM  | Char         | 0.772 | 76.2
T-BEM | Char         | 0.837 | 82.9

Table 1: User-day granularity test set AUC and AP. Language model anomaly scores calculated with average user-day normalization (diff).
This allows us to flag anomalies at the event level or
aggregate anomalies over any larger timescale.
For this work, we consider two timescales. First, we assess based on individual events; a list of events would be presented to the analyst, sorted descending by anomaly score.
Second, to facilitate comparison with traditional aggregation
methods, we aggregate anomaly scores over all of a user’s
events for the day (specifically, taking the max), producing
a single anomaly score per-user, per-day. In this scenario,
a list of user-days would be provided to the analyst, sorted
descending by anomaly score. We refer to this approach as
max, because the anomaly scores provided to the analyst are
produced by taking the maximum score over the event scores
in the window for that user (where event-level scoring is just
taking the max over a singleton set of one event).
In order to counter systematic offsets in users’ anomaly
scores for a day we also consider a simple normalization
strategy, which we refer to as diff, by which every raw
score is first normalized by subtracting the user’s average
event-level anomaly score for the day.
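Concretely, the max and diff scoring modes can be sketched as follows, assuming a pandas DataFrame of event-level anomaly scores with hypothetical 'user', 'day', and 'score' columns.

```python
import pandas as pd

def user_day_scores(df: pd.DataFrame, mode: str = "diff") -> pd.Series:
    """Aggregate event-level scores into one score per (user, day).

    mode="max":  take the maximum raw event score in the window.
    mode="diff": first subtract the user's average event score for the day,
                 then take the maximum of the normalized scores.
    """
    g = df.groupby(["user", "day"])["score"]
    if mode == "max":
        return g.max()
    normalized = df["score"] - g.transform("mean")
    return normalized.groupby([df["user"], df["day"]]).max()
```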
Tokenization | Model | LOG diff | LOG max | DAY diff | DAY max
Word         | EM    | 0.964    | 0.932   | 0.802    | 0.794
Word         | BEM   | 0.974    | 0.895   | 0.876    | 0.811
Word         | T-EM  | 0.959    | 0.948   | 0.782    | 0.803
Word         | T-BEM | 0.959    | 0.902   | 0.864    | 0.838
Char         | EM    | 0.940    | 0.935   | 0.751    | 0.754
Char         | BEM   | 0.973    | 0.979   | 0.843    | 0.846
Char         | T-EM  | 0.859    | 0.927   | 0.772    | 0.809
Char         | T-BEM | 0.945    | 0.969   | 0.837    | 0.854

Table 2: Comparison of AUC for day-level and log-line-level analysis with and without user-day normalization. Figures 4 and 5 provide a visualization of these results.
Note that if all true malicious events or user-days
are flagged as the most anomalous on the respective days,
then AP ≈ 100, while if all malicious events or user-days
are ranked as the least anomalous on their respective days,
AP ≈ 0. For both AUC and AP, a higher score is better.
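A sketch of the AP computation is shown below, assuming per-event (or per-user-day) anomaly scores with hypothetical 'day', 'score', and 'red' columns; it is illustrative rather than the evaluation code used in the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import percentileofscore

def average_percentile(df: pd.DataFrame) -> float:
    """df has columns: 'day', 'score', 'red' (True for red-team events/user-days)."""
    percentiles = []
    for _, day in df.groupby("day"):
        scores = day["score"].to_numpy()
        for s in day.loc[day["red"], "score"]:
            percentiles.append(percentileofscore(scores, s))
    return float(np.mean(percentiles)) if percentiles else float("nan")
```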
Our model hyperparameters were manually tuned to maximize AP for day-level diff scores on the development set.
No separate training set is needed as our approach is unsupervised and trained online.
4.4 Results and Analysis
We begin by exploring the user-day granularity performance. Table 1 summarizes model detection performance
at this granularity on the test set for the AUC and AP metrics using the diff method to produce day level scores
from the language models. A few trends are evident from
these results. First, the aggregate feature baselines have near-equivalent performance by both metrics, with the isolation
forest approach having a slight edge. We hypothesize the
feature representation, which is common to these methods,
could be a bottleneck in performance. This highlights the
“blind spot” issue feature engineering introduces. Second,
despite having only the context of a single log-line at a
time, as opposed to features aggregated over an entire day,
the event model (EM) performs comparably to the baseline
models when a forward-pass LSTM network is used with character tokenization, and outperforms the baselines with word tokenization.
Figure 4: Word model comparison of AUC at day-level and
log-line-level granularities.
Figure 5: Character model comparison of AUC at day-level
and log-line-level granularities.
The most pronounced performance gain results from using bidirectional models. Finally, word-level tokenization performs better than character-level tokenization; however, even the bidirectional character models perform appreciably better than the baselines.
It is clear from these results that the tiered models perform
comparably to, but not better than, the event-level models.
This suggests that the event level model is able to characterize normal user behavior from information stored in the
model weights of the network, which are trained each day to
model user activity. Given the context of the past day’s activity stored in the model weights, the categorical variables
represented by the fields in an individual log line may eliminate the need for explicit event context modeling. We leave
tracking the state of individual computers, rather than users,
to future work, but hypothesize that it may make the tiered
approach more effective.
Next, we broaden our analysis of language modeling approaches, comparing performance across all language models, tokenization strategies, anomaly granularity, and normalization techniques. Figure 4 plots AUC for all language
model types using word tokenization, contrasting max and
diff normalization modes. Figure 5 compares the same
variations for character tokenization. Table 2 presents these
results in tabular form. With few exceptions, log-line-level
granularity vastly outperforms day-level; this is true for both
the character-level and word-level tokenization strategies,
with an average gain of 0.1 AUC. The most interesting outcome of these comparisons is that word tokenization performance gains are heavily reliant on the diff normalization,
whereas for character tokenization the diff normalization
has a minor detrimental effect for some models. This suggests that the character-level model could be used to provide
a more immediate response time, not having to wait until the
day is done to obtain the day statistics used in diff mode.
The two tokenization strategies may in fact be complementary, as the versatility and response-time gains of character tokenization come at the expense of the easy interpretability of word tokenization: the word tokenization allows
anomaly scores to be decomposed into individual log-line
fields, enabling analysts to pinpoint features of the event that
most contributed to it being flagged. Since we tuned hyperparameters using diff mode, the character-level model has
potential to do better with additional tuning.
Next, Figures 6 and 7 visualize the average percentiles of
red team detections for the subset of the test set with the
most red-team activity. Anomaly scores for both word and
character tokenizations are computed without average user-day offset normalization. Red team log-line-level scores are plotted as purple x's, with the x coordinate being the time (in seconds) at which the event occurred and the y coordinate
the anomaly score for that event. Percentile ranges are colored to provide context for the red-team anomaly scores
against the backdrop of other network activity. The spread
of non-normalized anomaly scores is much greater for the
word-level tokenizations (Fig. 7) than character-level (Fig.
6), which could explain the different sensitivity of word level
tokenization to normalization. Also notice that there is an
expected bump in percentiles for windows of frequent red-team activity. Curiously, at the end of day 14 there are massive bumps for the 99th percentile, which suggests unplanned
and un-annotated anomalous events on the LANL network
for those hours. Notice that for the character tokenization almost all non-normalized red team anomaly scores are above
the 95th percentile, with a large proportion above the 99th
percentile.
Finally, Figure 8 plots the ROC curves for the best aggregate baseline (iso), the best user-day granularity language model (word BEM), and the best event-level granularity model (character BEM). It illustrates the qualitatively different curves obtained with the baselines, the user-day granularity, and the event-level granularity.
Since the proportion of red-team to normal events is vanishingly low in the data-set (< 0.001%), the false-positive
rate is effectively the proportion of data flagged to achieve a
particular recall. From this observation, Figure 8 shows the
character event model can achieve 100% recall from only
12% of the data whereas the other models considered only
achieve 100% recall when nearly all of the data has been handed to the analyst.
Figure 6: Character-level red-team log-line anomaly scores
in relation to percentiles over time.
Figure 8: ROC curves for the best performing baseline (iso, aggregate features, 0.76 AUC), the word language model evaluated at day granularity (W BEM, 0.88 AUC), and the character language model evaluated at log-line granularity (C BEM, 0.98 AUC).
Further, the character event model can
achieve 80% recall by flagging only 3% of the data whereas
the word day language model needs 14% of the data and the
aggregate isolation forest model needs 55% of the data to
achieve the same result.
5 Conclusion
This work builds upon advances in language modeling to
address computer security log analysis, proposing an unsupervised, online anomaly detection approach. We eliminate
the usual effort-intensive feature engineering stage, making
our approach fast to deploy and agnostic to the system configuration and monitoring tools. It further confers the key
advantage of event-level detection which allows for a near
immediate alert response following anomalous activity.
In experiments using the Los Alamos National Laboratory Cyber Security Dataset, bidirectional language models significantly outperformed standard methods at day-level detection.
Figure 7: Word-level red-team log-line anomaly scores in
relation to percentiles over time.
The best log-line-level detection performance
was achieved with a bidirectional character-based language
model, obtaining a 0.98 area under the ROC curve, showing
that for the constrained language domain of network logs,
character based language modeling can achieve comparable
accuracy to word based modeling for event level detection.
We have therefore demonstrated a simple and effective approach to modeling dynamic networks with open vocabulary
logs (e.g. with new users, PCs, or IP addresses).
We propose to extend this work in several ways. First,
potential modeling advantages of tiered architectures merit
further investigation. The use of tiered architectures to track
PCs instead of network users, or from a richer set of logging
sources other than simply authentication logs may take better advantage of their modeling power. Next, we anticipate
interpretability can become lost with such detailed granularity provided by log-line-level detection from a character-based model; therefore, future work will explore alternate
methods of providing context to an analyst. Finally, we are
interested in exploring the robustness of this approach to
adversarial tampering. Similarly performing models could
have different levels of resilience that would lead to selection of one over another.
Acknowledgments The research described in this paper is
part of the Analysis in Motion Initiative at Pacific Northwest
National Laboratory. It was conducted under the Laboratory
Directed Research and Development Program at PNNL, a
multi-program national laboratory operated by Battelle for
the U.S. Department of Energy. The authors would also like
to thank the Nvidia corporation for their donations of Titan
X and Titan Xp GPUs used in this research.
References
[Abadi et al. 2015] Abadi, M.; Agarwal, A.; Barham, P.;
Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G. S.; Davis, A.;
Dean, J.; Devin, M.; Ghemawat, S.; Goodfellow, I.; Harp,
A.; Irving, G.; Isard, M.; Jia, Y.; Jozefowicz, R.; Kaiser, L.;
Kudlur, M.; Levenberg, J.; Mané, D.; Monga, R.; Moore, S.;
Murray, D.; Olah, C.; Schuster, M.; Shlens, J.; Steiner, B.;
Sutskever, I.; Talwar, K.; Tucker, P.; Vanhoucke, V.; Vasudevan, V.; Viégas, F.; Vinyals, O.; Warden, P.; Wattenberg, M.;
Wicke, M.; Yu, Y.; and Zheng, X. 2015. TensorFlow: Largescale machine learning on heterogeneous systems. Software
available from tensorflow.org.
[Alrawashdeh and Purdy 2016] Alrawashdeh, K., and Purdy,
C. 2016. Toward an online anomaly intrusion detection
system based on deep learning. In Machine Learning and
Applications (ICMLA), 2016 15th IEEE International Conference on, 195–200. IEEE.
[Bhattacharyya and Kalita 2013] Bhattacharyya, D. K., and
Kalita, J. K. 2013. Network anomaly detection: A machine
learning perspective. CRC Press.
[Bivens et al. 2002] Bivens, A.; Palagiri, C.; Smith, R.; Szymanski, B.; Embrechts, M.; et al. 2002. Networkbased intrusion detection using neural networks. Intelligent Engineering Systems through Artificial Neural Networks 12(1):579–584.
[Buczak and Guven 2016] Buczak, A. L., and Guven, E.
2016. A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials 18(2):1153–1176.
[Chung et al. 2015] Chung, J.; Gulcehre, C.; Cho, K.; and
Bengio, Y. 2015. Gated feedback recurrent neural networks.
In International Conference on Machine Learning, 2067–
2075.
[Debar, Becker, and Siboni 1992] Debar, H.; Becker, M.;
and Siboni, D. 1992. A neural network component for an
intrusion detection system. In Proc. IEEE Symposium on
Research in Security and Privacy, 240–250.
[Dua and Du 2016] Dua, S., and Du, X. 2016. Data mining
and machine learning in cybersecurity. CRC press.
[Gavai et al. 2015] Gavai, G.; Sricharan, K.; Gunning, D.;
Hanley, J.; Singhal, M.; and Rolleston, R. 2015. Supervised
and unsupervised methods to detect insider threat from enterprise social and online activity data. Journal of Wireless
Mobile Networks, Ubiquitous Computing, and Dependable
Applications 6(4):47–63.
[Gopalan, Hofman, and Blei 2013] Gopalan, P.; Hofman,
J. M.; and Blei, D. M. 2013. Scalable recommendation with
Poisson factorization. arXiv preprint arXiv:1311.1704.
[Hochreiter and Schmidhuber 1997] Hochreiter, S., and
Schmidhuber, J. 1997. Long short-term memory. Neural
computation 9(8):1735–1780.
[Hwang and Sung 2016] Hwang, K., and Sung, W. 2016.
Character-level language modeling with hierarchical recurrent neural networks. arXiv preprint arXiv:1609.03777.
[Kent 2016] Kent, A. D. 2016. Cyber security data sources
for dynamic network research. Dynamic Networks and
Cyber-Security 1:37.
[Koutnik et al. 2014] Koutnik, J.; Greff, K.; Gomez, F.; and
Schmidhuber, J. 2014. A clockwork RNN. arXiv preprint
arXiv:1402.3511.
[Kumar, Kumar, and Sachdeva 2010] Kumar, G.; Kumar, K.;
and Sachdeva, M. 2010. The use of artificial intelligence
based techniques for intrusion detection: a review. Artificial
Intelligence Review 34(4):369–387.
[Ling et al. 2015a] Ling, W.; Luı́s, T.; Marujo, L.; Astudillo,
R. F.; Amir, S.; Dyer, C.; Black, A. W.; and Trancoso, I.
2015a. Finding function in form: Compositional character models for open vocabulary word representation. arXiv
preprint arXiv:1508.02096.
[Ling et al. 2015b] Ling, W.; Trancoso, I.; Dyer, C.; and
Black, A. W. 2015b. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
[Liu, Ting, and Zhou 2008] Liu, F. T.; Ting, K. M.; and
Zhou, Z.-H. 2008. Isolation forest. In Proc. ICDM.
[Novakov et al. 2013] Novakov, S.; Lung, C.-H.; Lambadaris, I.; and Seddigh, N. 2013. Studies in applying
PCA and wavelet algorithms for network traffic anomaly
detection. In High Performance Switching and Routing
(HPSR), 2013 IEEE 14th International Conference on, 185–
190. IEEE.
[Pascanu et al. 2015] Pascanu, R.; Stokes, J. W.; Sanossian,
H.; Marinescu, M.; and Thomas, A. 2015. Malware classification with recurrent networks. In Acoustics, Speech and
Signal Processing (ICASSP), 2015 IEEE International Conference on, 1916–1920. IEEE.
[Pedregosa et al. 2011] Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.;
Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research 12:2825–2830.
[Ringberg et al. 2007] Ringberg, H.; Soule, A.; Rexford, J.;
and Diot, C. 2007. Sensitivity of PCA for traffic anomaly
detection. In SIGMETRICS.
[Rubin-Delanchy, Lawson, and Heard 2016] RubinDelanchy, P.; Lawson, D. J.; and Heard, N. A. 2016.
Anomaly detection for cyber security applications. Dynamic Networks and Cyber-Security 1:137.
[Schuster and Paliwal 1997] Schuster, M., and Paliwal, K. K.
1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681.
[Shyu et al. 2003] Shyu, M.-L.; Chen, S.-C.; Sarinnapakorn,
K.; and Chang, L. 2003. A novel anomaly detection scheme
based on principal component classifier. In Proc. ICDM.
[Sommer and Paxson 2010] Sommer, R., and Paxson, V.
2010. Outside the closed world: On using machine learning for network intrusion detection. In Proc. Symposium on
Security and Privacy.
[Tuor et al. 2017] Tuor, A.; Kaplan, S.; Hutchinson, B.;
Nichols, N.; and Robinson, S. 2017. Deep learning for unsupervised insider threat detection in structured cybersecurity data streams. In Artificial Intelligence for Cybersecurity
Workshop at AAAI.
[Turcotte et al. 2016] Turcotte, M.; Moore, J.; Heard, N.; and
McPhall, A. 2016. Poisson factorization for peer-based
anomaly detection. In Intelligence and Security Informatics (ISI), 2016 IEEE Conference on, 208–210. IEEE.
[Turcotte, Heard, and Kent 2016] Turcotte, M. J.; Heard,
N. A.; and Kent, A. D. 2016. Modelling user behavior in
a network using computer event logs. Dynamic Networks
and Cyber-Security 1:67.
[Veeramachaneni et al. 2016] Veeramachaneni, K.; Arnaldo,
I.; Korrapati, V.; Bassias, C.; and Li, K. 2016. AI 2 : Training
a big data machine to defend. In Proc. HPSC and IDS.
[Zuech, Khoshgoftaar, and Wald 2015] Zuech, R.; Khoshgoftaar, T. M.; and Wald, R. 2015. Intrusion detection
and big heterogeneous data: a survey. Journal of Big Data
2(1):3.
IEEE SIGNAL PROCESSING LETTERS
Look Wider to Match Image Patches with
Convolutional Neural Networks
arXiv:1709.06248v1 [cs.CV] 19 Sep 2017
Haesol Park, and Kyoung Mu Lee
Abstract—When a human matches two images, the viewer has
a natural tendency to view the wide area around the target
pixel to obtain clues of right correspondence. However, designing
a matching cost function that works on a large window in
the same way is difficult. The cost function is typically not
intelligent enough to discard the information irrelevant to the
target pixel, resulting in undesirable artifacts. In this paper, we
propose a novel convolutional neural network (CNN) module to
learn a stereo matching cost with a large-sized window. Unlike
conventional pooling layers with strides, the proposed per-pixel
pyramid-pooling layer can cover a large area without a loss
of resolution and detail. Therefore, the learned matching cost
function can successfully utilize the information from a large area
without introducing the fattening effect. The proposed method is
robust despite the presence of weak textures, depth discontinuities, and illumination and exposure differences. The proposed method
achieves near-peak performance on the Middlebury benchmark.
Index Terms—stereo matching, pooling, CNN
I. INTRODUCTION
Most stereo matching methods first compute the matching
cost of each pixel with a certain disparity, before optimizing
the whole cost volume either globally or locally by using specific prior knowledge [1]. For decades, many researchers have
focused on the second step, designing a good prior function
and optimizing it [2], [3], [4], [5], [6]. Few studies have been
conducted on designing or selecting a better matching cost
function.
One of the most widely used matching cost functions is a
pixel-wise matching cost function, such as the one used in [7].
Along with sophisticated prior models, it sometimes produces
good results, especially in preserving the detailed structures
near the disparity discontinuities. However, the function fails
when the image contains weakly-textured areas or repetitive
textures. In such cases, a window-based matching cost, such
as CENSUS or SAD [8], produces a more reliable and distinctive measurement. The critical shortcoming of window-based
matching cost functions is their unreliability around disparity
discontinuities. Figure 1 visually illustrates the characteristics
of different matching cost measures.
One method to handle this trade-off is to make the window-based matching cost adaptive to its input patterns [10], [11], [12]. The key
idea is making the shape of the matching template adaptive so
that it can discard the information from the pixels that are irrelevant to the target pixel. However, knowing the background
pixels before the actual matching is difficult, making it a ‘chicken-and-egg’ problem.
H. Park and K. M. Lee are with Automation and Systems Research Institute,
Seoul National University, Seoul 151-744, Korea
Fig. 1. The top image shows the reference image with two points of interest,
x1 and x2 . The pixel positions are marked as blue dots, whereas the red
and green boxes represent 37 × 37 and 11 × 11 windows centered on them,
respectively. At the bottom, the matching costs for each pixel are visualized
as a normalized function of disparity for different matching cost functions.
The positions of true disparities are marked as red vertical lines. The pixelwise cost shows the lowest values at the true disparity, but it also gives zero
costs for other disparities. The SAD and CENSUS matching cost functions [9]
become less ambiguous as the matching window becomes larger. However,
these functions are affected by pixels irrelevant to the target pixel (x2 ). Even
the matching cost learned by using the baseline convolutional neural network
(CNN) architecture fails when the surface has a nearly flat texture (x1 ). On
the other hand, the proposed CNN architecture works well both on weakly
textured regions and disparity discontinuities.
Therefore, the use of a CNN [13],
[14] is appropriate, as it automatically learns the proper shape
of the templates for each input pattern.
The existing methods, however, are based on conventional
CNN architectures resembling the AlexNet [15] or VGG [16]
network, which are optimized for image classification task
and not for image matching. The architectures comprise several convolution layers, each followed by a rectified linear
unit (ReLU) [15], and pooling layers with strides. One of the
limitations of using these architectures for matching is the
difficulty of enlarging the size of the patches that are to be
compared. The effective size of the patch is directly related to
the spatial extent of the receptive field of CNN, which can be
increased by (1) including a few strided pooling/convolution
layers, (2) using larger convolution kernels at each layer, or
(3) increasing the number of layers. However, use of strided
pooling/convolution layers makes the results downsampled,
losing fine details. Although the resolution can be recovered
by applying fractional-strided convolution [17], reconstructing
small or thin structures is still difficult if once they are lost
after downsampling. Increasing the size of the kernels is also
problematic, as the number of feature maps required to represent the larger patterns increases significantly. Furthermore, a
previous study [18] reported that the repetitive usage of small
convolutions does not always result in a large receptive field.
This paper contributes to the literature by proposing a novel
CNN module to learn a better matching cost function. The
module is an innovative pooling scheme that enables a CNN
to view a larger area without losing the fine details and without
increasing the computational complexity during test times.
The experiments show that the use of the proposed module
improves the performance of the baseline network, showing
competitive results on the Middlebury [1], [19] benchmark.
II. RELATED WORKS
Given the introduction of high-resolution stereo datasets
with the ground-truth disparity maps [20], [19], [21], many
attempts have been made to learn a matching cost function
using machine learning algorithms [13], [14], [22]. The most
impressive results are obtained by using CNN [13], [14].
The architecture proposed in [13] takes a small 11 × 11
window and processes it without the use of pooling. The
computed cost volume is noisy due to the limited size of
the window. Thus, it is post-processed by using the crossbased cost aggregation (CBCA) [23], semi-global matching
(SGM) [3], and additional refinement procedures. On the other
hand, the method in [14] uses multiple pooling layers and
spatial-pyramid-pooling (SPP) [24] to process larger patches.
However, the results show a fattening effect owing to the loss
of information introduced by pooling.
The main contribution of this paper is in proposing a
novel pooling scheme that can handle information from a
large receptive field without losing the fine details. Recently,
several attempts have been made to accomplish the same
goal in the context of semantic segmentation [25], [26], [27].
These methods combine the feature maps from the highlevel layers with those from the lower layers, with the aim
of correctly aligning the object-level information along the
pixel-level details. While this approach can successfully align
the boundaries of the big objects, its inherent limitation is
its inability to recover small objects in the final output once
they are lost during the abstraction due to multiple uses of
pooling. In the same context, the FlowNet [28] architecture
can upsample the coarse-level flow to the original scale by
using lower-level feature maps. However, it fails to recover the
extreme flow elements that are hidden due to the low resolution
of high-level feature maps.
The architecture most closely related to the current work
has been proposed in [24]. Unlike the other approaches, the SPP network excludes pooling layers between convolutional layers.
Fig. 2. The 4P module with pooling size vector s = [5, 3, 1] is visualized.
This figure shows its action for one channel of the feature maps for brevity;
it does the same job for all channels.
Instead, it first computes highly-nonlinear feature maps
by cascading convolutional layers several times and then
generates high-level and mid-level information by pooling
them at different scales. By keeping the original feature maps
along with feature maps pooled at multiple scales, the SPP
network can combine the features from multiple levels without
losing fine details. Although the previously mentioned stereo
method in [14] uses SPP, it also employs conventional pooling
layers between convolutional layers, thus losing the detailed
information.
III. ARCHITECTURE OF THE NEURAL NETWORK
The proposed architecture takes two input patches and
produces the corresponding matching cost. In the following
subsections, the newly proposed module is first introduced.
Then the detailed architecture of the entire network is presented.
A. Per-pixel Pyramid Pooling (4P)
The use of pooling layers in CNN has been considered
desirable because of its accuracy and efficiency in image
classification tasks. While the use of max-pooling layers has
been reported to provide an additional invariance in spatial
transformation, the most important gain comes from the
downsampling of feature maps. By performing pooling with a
stride that is larger than one, the output feature maps after
the pooling are scaled down. The final scale of the CNN
output is decreased exponentially in terms of the number of
pooling layers. Given that no parameters related to a pooling
operation exist, this method is an effective way to widen the
receptive field area of a CNN without increasing the number
of parameters. The drawback of strided pooling is that the
network loses fine details in the original feature maps as the
pooling is applied. Thus, a trade-off exists in seeing a larger
area and preserving the small details.
Inspired by the idea discussed in [24], we propose a novel
pooling scheme to overcome this trade-off. Instead of using a
small pooling window with a stride, a large pooling window
is used to achieve the desired size of the receptive field. The
use of one large pooling window can lead to the loss of
finer details. Thus, multiple poolings with varying window
sizes are performed, and the outputs are concatenated to create new feature maps.
Fig. 3. The network structures are visualized for the baseline network, ‘MCCNN-acrt’ [13], and the proposed network. The parenthesized numbers at
each layer represent the number of feature maps after the corresponding
operations. Note that this figure is drawn in terms of the fully convolutional
network.
The resulting feature maps contain
the information from coarse-to-fine scales. The multi-scale
pooling operation is performed for every pixel without strides.
We call this whole procedure, “per-pixel pyramid pooling”
(4P), which is formally defined as follows:
P_{4P}(F, s) = [P(F, s_1), · · · , P(F, s_M)],    (1)
where s is a vector having M number of elements, and
P (F, si ) is the pooling operation with size si and stride one.
The structure of this module is illustrated in Figure 2.
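For concreteness, a minimal PyTorch-style sketch of the 4P operation of Eq. (1) is given below. It is not the authors' implementation: it assumes max pooling for P(F, s_i) and zero padding of (s_i - 1)/2 so that stride-one pooling preserves the spatial resolution, and it uses the size vector s = [27, 9, 3, 1] adopted in Section III-B.

```python
import torch
import torch.nn.functional as F

def per_pixel_pyramid_pool(feat, sizes=(27, 9, 3, 1)):
    """Per-pixel pyramid pooling (4P): pool the feature map with several window
    sizes at stride one and concatenate the results along the channel axis.
    `feat` has shape (N, C, H, W); the output has shape (N, C * len(sizes), H, W)."""
    pooled = []
    for s in sizes:
        if s == 1:
            pooled.append(feat)  # a 1x1 pooling window is the identity
        else:
            # Odd window sizes with padding (s - 1) // 2 keep the spatial size.
            pooled.append(F.max_pool2d(feat, kernel_size=s, stride=1, padding=(s - 1) // 2))
    return torch.cat(pooled, dim=1)
```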
B. Proposed model
To validate the effect of the proposed module, we trained
and tested CNNs with and without the 4P module. The
baseline architecture is selected as the ‘MC-CNN-acrt’ [13].
The 4P module in the proposed architecture is constructed by
using the size vector s = [27, 9, 3, 1]. The structures of two
CNNs are visualized in Figure 3.
IV. IMPLEMENTATION DETAILS
For a fair comparison, we followed the details in [13]
to train the proposed architecture with a few exceptions
mentioned below. First, the size of the training patch became
37×37. Furthermore, we only fine-tuned the parameters of the
last three 1 × 1 convolution layers of the proposed architecture
in Figure 3. The parameters of the earlier layers are borrowed
from the pre-trained ‘MC-CNN-acrt’ [13] network. In our
experiments, this resulted in a better performance than the
end-to-end training of the network with random initializations.
Moreover, training a few convolution layers with pre-trained
features is easier, making it converge faster.
TABLE I
The quantitative results on the 'training dense' set of the Middlebury benchmark [1] are shown. The error represents the percentage of bad pixels with a disparity threshold 2.0, and the same weighting scheme is applied as in [1] when computing the average.

Methods                                                  | avg. error
WTA: MC-CNN-acrt [13]                                    | 22.91
WTA: proposed                                            | 11.75
after post-processing: MC-CNN-acrt [13]                  | 10.26
after post-processing: proposed (w/ parameters in [13])  | 9.72
after post-processing: proposed (w/ parameter tuning)    | 8.45
We ran a total of four epochs of training, where the last two epochs were run with the learning rate decreased from 0.003 to 0.0003.
We also used the same post-processing pipeline as in [13]
during the test phase. The post-processing pipeline includes the
use of the CBCA [23] and SGM [3], and the disparity maps are
refined to have continuous values and undergo median filtering
and bilateral filtering.
V. EXPERIMENTS
To verify the effect of the proposed 4P module, we have
compared the results of the baseline and proposed network.
The performance is measured using the ‘training dense’ set
of the Middlebury benchmark [1]. The quantitative results are
briefly summarized in Table I using the average errors. All
experiments are performed by using the Intel core i7 4790K
CPU and a single Nvidia Geforce GTX Titan X GPU.
The proposed method outperforms the baseline architecture regardless of the use of post-processing. The benefit of
using the 4P module is clear when the disparity maps are
obtained by using the pixel-wise winner-takes-all (WTA)
rule without any post-processing. Given that the images in the
dataset contain many weakly-textured areas, the small-sized
11×11 window cannot distinguish the true matches from false
ones without the aid of post-processing. On the other hand,
the proposed architecture effectively sees the larger window,
37 × 37, by inserting the 4P module before the final decision
layer.
It is less straightforward to understand why the proposed
architecture still outperforms the baseline even after post-processing. In that sense, it is worth mentioning that the best parameter setting for post-processing of the proposed method differs considerably from that of the baseline.1 The most notable change from the original parameter setting is that we use far fewer CBCA [23] iterations, which means that multiple applications of CBCA [23] become redundant in the proposed architecture. From this fact, we can interpret the role of the 4P module as adaptive local feature aggregation. Compared to a hand-designed algorithm such as CBCA [23], the influence of neighboring pixels on a given pixel is automatically learned and can be jointly trained with the cost function itself.
1 Following the conventions in [13], the best parameter setting is as follows: cbca_num_iterations_1 = 0, cbca_num_iterations_2
= 1, sgm_P1 = 1.3, sgm_P2 = 17.0, sgm_Q1 = 3.6, sgm_Q2 =
36.0, and sgm_V = 1.4.
Fig. 4. The results for PlaytableP and Vintage are visualized (columns, left to right: true disparity and left image, proposed, MC-CNN-acrt). For each datum, the upper row shows the disparity map and the bottom row shows the corresponding error maps. While the ‘MC-CNN-acrt’ [13] shows errors around the weakly-textured areas, such as the surfaces of the chair and the table in PlaytableP or the white wall in Vintage, the proposed method shows more reliable results.
Furthermore, the information exchange among pixels is done
in feature space which contains richer contextual information
than the final cost volume space.
Note that the improvement over the baseline clearly results
neither from the use of extra layers nor from the use of more
parameters, as the authors of [13] already have shown that the
additional use of fully-connected (FC) layers is less significant.
Using two additional FC layers leads to an improvement of
approximately 1.90%, whereas using the 4P module results in
a 21.42% improvement in terms of average error.
The main contribution of the proposed method lies in learning a less ambiguous matching cost function by inspecting a
larger area. Figure 4 shows that the proposed network actually
works better around the weakly-textured area than the ‘MC-CNN-acrt’ [13]. The quantitative and qualitative results of
each dataset, including the ones in the ‘test dense’ set, are
available at the Middlebury benchmark [1] website.
VI. CONCLUSIONS
Viewing a large area to estimate the dense pixel correspondence is necessary to fully utilize the texture information
to achieve less ambiguous and more accurate matching. A
conventional matching cost function fails because neighboring
pixels on the same surface as the target pixel are unknown. In
this paper, a novel CNN module is proposed to make the CNN
structure handle a large image patch without losing the small
details, which enables it to learn an intelligent matching cost
function for large-sized windows. The learned cost function
can discriminate the false matches for weakly-textured areas or
repeating textures, and can also preserve the disparity discontinuities well. The learned cost function achieves competitive
performance on the Middlebury benchmark.
REFERENCES
[1] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense
two-frame stereo correspondence algorithms,” IJCV, vol. 47, no. 1-3,
pp. 7–42, 2002.
[2] V. Kolmogorov and R. Zabih, “Computing visual correspondence with
occlusions using graph cuts,” in ICCV, vol. 2. IEEE, 2001, pp. 508–515.
[3] H. Hirschmüller, “Stereo processing by semiglobal matching and mutual
information,” PAMI, vol. 30, no. 2, pp. 328–341, 2008.
[4] O. Woodford, P. Torr, I. Reid, and A. Fitzgibbon, “Global stereo
reconstruction under second-order smoothness priors,” PAMI, vol. 31,
no. 12, pp. 2115–2128, 2009.
[5] C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast
cost-volume filtering for visual correspondence and beyond,” in CVPR.
IEEE, 2011, pp. 3017–3024.
[6] Q. Yang, “A non-local cost aggregation method for stereo matching,” in
CVPR. IEEE, 2012, pp. 1402–1409.
[7] S. Birchfield and C. Tomasi, “Depth discontinuities by pixel-to-pixel
stereo,” International Journal of Computer Vision, vol. 35, no. 3, pp.
269–293, 1999.
[8] H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs
on images with radiometric differences,” IEEE transactions on pattern
analysis and machine intelligence, vol. 31, no. 9, pp. 1582–1599, 2009.
[9] H. Hirschmüller and D. Scharstein, “Evaluation of cost functions for
stereo matching,” in CVPR. IEEE, 2007, pp. 1–8.
[10] K. Wang, “Adaptive stereo matching algorithm based on edge detection,”
in ICIP, vol. 2. IEEE, 2004, pp. 1345–1348.
[11] K.-J. Yoon and I. S. Kweon, “Adaptive support-weight approach for
correspondence search,” PAMI, vol. 28, no. 4, pp. 650–656, 2006.
[12] F. Tombari, S. Mattoccia, L. D. Stefano, and E. Addimanda, “Classification and evaluation of cost aggregation methods for stereo correspondence,” in CVPR. IEEE, 2008, pp. 1–8.
[13] J. Žbontar and Y. LeCun, “Stereo matching by training a convolutional
neural network to compare image patches,” The Journal of Machine
Learning Research, vol. 17, no. 1, pp. 2287–2318, 2016.
[14] S. Zagoruyko and N. Komodakis, “Learning to compare image patches
via convolutional neural networks,” in CVPR, June 2015, pp. 4353–4361.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification
with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
[16] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[17] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation
learning with deep convolutional generative adversarial networks,” arXiv
preprint arXiv:1511.06434, 2015.
[18] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object
detectors emerge in deep scene cnns,” arXiv preprint arXiv:1412.6856,
2014.
[19] D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nešić,
X. Wang, and P. Westling, “High-resolution stereo datasets with
subpixel-accurate ground truth,” in Pattern Recognition. Springer, 2014,
pp. 31–42.
[20] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous
driving? the kitti vision benchmark suite,” in CVPR, 2012.
[21] M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,”
in CVPR, 2015.
[22] L. Ladickỳ, C. Häne, and M. Pollefeys, “Learning the matching function,” arXiv preprint arXiv:1502.00652, 2015.
[23] K. Zhang, J. Lu, and G. Lafruit, “Cross-based local stereo matching
using orthogonal integral images,” Circuits and Systems for Video
Technology, vol. 19, no. 7, pp. 1073–1079, 2009.
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep
convolutional networks for visual recognition,” in ECCV. Springer,
2014, pp. 346–361.
[25] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks
for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
[26] B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik, “Hypercolumns
for object segmentation and fine-grained localization,” in CVPR, June
2015, pp. 447–456.
[27] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for
semantic segmentation,” arXiv preprint arXiv:1505.04366, 2015.
[28] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov,
P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical
flow with convolutional networks,” arXiv preprint arXiv:1504.06852,
2015.
On the asymptotic structure of Brownian motions with
a small lead-lag effect
arXiv:1601.03614v2 [math.ST] 8 Apr 2018
Yuta Koike∗†‡
April 10, 2018
Abstract
This paper considers two Brownian motions in a situation where one is correlated to the other with a slight
delay. We study the problem of estimating the time lag parameter between these Brownian motions from their
high-frequency observations, which are possibly subject to measurement errors. The measurement errors are
assumed to be i.i.d., centered Gaussian and independent of the latent processes. We investigate the asymptotic
structure of the likelihood ratio process for this model when the lag parameter is asymptotically infinitesimal.
We show that the structure of the limit experiment depends on the level of the measurement errors: If the measurement errors locally dominate the latent Brownian motions, the model enjoys the LAN property. Otherwise,
the limit experiment does not result in typical ones appearing in the literature. We also discuss the efficient
estimation of the lag parameter to highlight the statistical implications.
Keywords and phrases: Asymptotic efficiency; Endogenous noise; Lead-lag effect; Local asymptotic normality;
Microstructure noise.
1
Introduction
Let Bt = (Bt1 , Bt2 ) (t ∈ R) be a bivariate two-sided Brownian motion such that B0 = 0, E[(B11 )2 ] =
E[(B12 )2 ] = 1 and E[B11 B12 ] = ρ for some ρ ∈ (−1, 0) ∪ (0, 1). Also, let ǫi = (ǫ1i , ǫ2i ) (i = 1, 2, . . . ) be a sequence
of i.i.d. bivariate standard normal variables independent of B. For each n ∈ N, we denote by Pn,ϑ the law of the
vector Zn = (X1 , . . . , Xn , Y1 , . . . , Yn )⊤ generated by the following model:
$$
\begin{cases}
X_i = B^1_{i/n} + \sqrt{v_n}\,\epsilon^1_i, \qquad Y_i = B^2_{i/n-\vartheta} + \sqrt{v_n}\,\epsilon^2_i & \text{if } \vartheta \geq 0,\\[1ex]
X_i = B^1_{i/n-\vartheta} + \sqrt{v_n}\,\epsilon^1_i, \qquad Y_i = B^2_{i/n} + \sqrt{v_n}\,\epsilon^2_i & \text{if } \vartheta < 0,
\end{cases}
\qquad i = 1, \dots, n, \tag{1.1}
$$
where vn is a non-negative number and ϑ ∈ R denotes the unknown time-lag parameter in which we are interested. In particular, the sign of ϑ is unknown. The aim of this paper is to study the asymptotic structure of the sequence
of experiments (R2n , B 2n , (Pn,ϑ )ϑ∈R ), n = 1, 2, . . . , as n → ∞ when the time lag parameter ϑ is asymptotically
infinitesimal, i.e. ϑ tends to 0 as n → ∞ (here and below B m denotes the Borel σ-field of Rm for m ∈ N). More
precisely, we study the limit experiment of (Pn,rnu )u∈R as n → ∞ for the proper convergence rate rn .
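For intuition, model (1.1) is straightforward to simulate. The sketch below (not part of the paper) draws one sample of Z_n by evaluating a correlated two-sided Brownian motion on the grid of required time points; the function name and interface are hypothetical.

```python
import numpy as np

def simulate_lead_lag(n, rho, v_n, theta, rng=None):
    """Draw one sample (X_1..X_n, Y_1..Y_n) from model (1.1)."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, n + 1)
    # Times at which B^1 and B^2 have to be evaluated.
    t1 = i / n if theta >= 0 else i / n - theta
    t2 = i / n - theta if theta >= 0 else i / n
    grid = np.unique(np.concatenate(([0.0], t1, t2)))      # includes time 0
    dt = np.diff(grid)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    incr = rng.multivariate_normal([0.0, 0.0], cov, size=len(dt)) * np.sqrt(dt)[:, None]
    B = np.vstack([np.zeros(2), np.cumsum(incr, axis=0)])  # B evaluated on grid
    B -= B[np.searchsorted(grid, 0.0)]                     # anchor B(0) = 0
    B1 = dict(zip(grid, B[:, 0]))
    B2 = dict(zip(grid, B[:, 1]))
    eps = rng.standard_normal((n, 2))
    X = np.array([B1[t] for t in t1]) + np.sqrt(v_n) * eps[:, 0]
    Y = np.array([B2[t] for t in t2]) + np.sqrt(v_n) * eps[:, 1]
    return X, Y
```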
∗
Department of Business Administration, Graduate School of Social Sciences, Tokyo Metropolitan University, Marunouchi Eiraku Bldg.
18F, 1-4-1 Marunouchi, Chiyoda-ku, Tokyo 100-0005 Japan
†
The Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan
‡
CREST, Japan Science and Technology Agency
1
If vn ≡ 0, model (1.1) is a special case of the Hoffmann-Rosenbaum-Yoshida (HRY) model introduced in
Hoffmann et al. (2013) to describe lead-lag effects in high-frequency financial data. A similar model has also
been studied in Robert and Rosenbaum (2010) with an asymptotic regime different from the current setting. Here,
the lead-lag effect refers to a situation where one time series is correlated to another time series at a later time,
which has especially drawn attention in analysis of economic time series data for a long time, and associated
econometric methods have been developed by many authors; see Section 1 of Hoffmann et al. (2013), Section 3 of
Robert and Rosenbaum (2010) and references therein. The practicality of the HRY model in empirical work has
recently been established by several authors such as Alsayed and McGroarty (2014), Huth and Abergel (2014) and
Bollen et al. (2017) for financial data and Iacus et al. (2015) for social media data. These empirical studies show
that time lag parameters are typically comparable to the observation frequencies in their scales. This motivates us
to study the HRY model when ϑ is small. In such a situation, one would especially be interested in how small lag
parameters can be identified in principle. To the author’s knowledge, however, there is few theoretical study for the
HRY model and, in particular, nothing has been known about the optimality of statistical inferences for the HRY
model. The purpose of this paper is trying to fill in this gap.
In this paper, as well as (a special case of) the HRY model, we also consider a situation where the model contains
measurement errors. This is motivated by the recent studies for the volatility estimation from ultra high frequency
financial data, which is typically modeled as a discretely observed semimartingale with market microstructure
noise. We refer to Chapter 7 of Aı̈t-Sahalia and Jacod (2014) for a brief description of this subject. In particular,
the asymptotic structure and the asymptotic efficiency bound have been established in the work of Gloter and Jacod
(2001a,b) (see also Cai et al. (2010)) for a statistical model of estimating the scale parameter σ > 0 from the
discrete observations
σWi/n +
√
vn δi ,
i = 1, . . . , n,
(1.2)
where W = (Wt )t∈[0,1] is a one-dimensional standard Wiener process and (δi )ni=1 is a sequence of centered
i.i.d. standard normal variables independent of W . They proved the LAN property for the above model and constructed asymptotically efficient estimators for σ (they indeed considered a more general setting). Extensions of
their LAN result to a multivariate setting have also been studied by several authors. The correlation estimation in a
bivariate setting is studied in Bibinger (2011), while a more general setting containing non-synchronous sampling
case is studied in Ogihara (2014). On the other hand, Reiß (2011) has studied the asymptotic structure of model
(1.2) when σ is a function of time rather than a constant and established the asymptotic equivalence between such a
model and a Gaussian white noise model. The result has been extended to the bivariate case by Bibinger and Reiß
(2014) and a multivariate setting containing non-synchronous case by Bibinger et al. (2014). Another type of extension, replacing the Wiener process W by a different continuous-time process, has also been studied. For example,
Sabel and Schmidt-Hieber (2014) consider the efficient estimation of σ in a situation where W is a more general
Gaussian process, especially a fractional Brownian motion.
The main contribution of this paper is (i) to determine the proper convergence rate rn , and (ii) to derive a
stochastic expansion for the likelihood ratio process for (Pn,rnu )u∈R . Analogously to Gloter and Jacod (2001a),
the proper convergence rate rn depends on the behavior of the sequence nvn . This is intuitively natural because
Var[Bi/n −B(i−1)/n ] = n−1 and thus the behavior of nvn determines how strongly the measurement errors (locally)
3
dominate the nature of the observed returns. In particular, we find that r_n = n^{-3/2} if v_n ≡ 0, or more generally if nv_n is bounded.¹ The rate n^{-3/2} is much faster than the usual parametric rate n^{-1/2} and even faster than the rate n^{-1}.
Since the time resolution of our model is n−1 , our result suggests that we could estimate lag parameters smaller
than the time resolution of observation data. This implication is at least true for our restrictive situation, as shown in
Section 3. Since the convergence rate of the estimator for the lag parameter ϑ proposed in Hoffmann et al. (2013)
cannot be faster than n−1 (see Proposition 2 of Hoffmann et al. (2013) and the discussion after this proposition),
our result shows that their estimator is suboptimal in the setting considered in this paper (although their estimator
works in a more general setting).
Given the proper convergence rate, we have the following stochastic expansion for the likelihood ratio process:
There are random variables Tn and Sn defined on (R2n , B 2n ) and non-negative numbers Iγ and Jγ such that
dPn,rnun
u2n
p
log
→0
under Pn,0 as n → ∞
(1.3)
− un Tn + |un |Sn −
(Iγ + Jγ ) −
dPn,0
2
for any bounded sequence un of real numbers and
d
(Tn , Sn ) −
→ N (0, Iγ ) ⊗ N (0, Jγ )
under Pn,0
as n → ∞.
(1.4)
Therefore, by a contiguity argument we deduce that the experiments (R2n , B 2n , (Pn,rn u )u∈R ) converge weakly to
the experiment (R2 , B 2 , (Qu )u∈R ) in the Le Cam sense, where Qu = N (uIγ , Iγ ) ⊗ N (|u|Jγ , Jγ ) (see Corol-
lary 2.3). The numbers Iγ and Jγ are determined by the asymptotic behavior of nvn and precisely defined by
(2.1)–(2.2). In particular, Iγ is always positive, while Jγ is positive if nvn is bounded and Jγ = 0 otherwise.
The case Jγ = 0 corresponds to the situation where the measurement errors locally dominate the signal, and in
this case our model enjoys the LAN property which commonly appears in regular experiments. This result is of
interest because model (1.1) exhibits irregularity in the sense that its likelihood function is not smooth in ϑ, and
the limit experiment of such a model typically deviates from the LAN structure as illustrated in Chapters V–VII of
Ibragimov and Has’minskii (1981). Our result means that the measurement errors have a kind of regularizing effect
on the asymptotic structure of model (1.1). On the other hand, if Jγ > 0, which corresponds to the cases where the
signal dominates or is balanced with the measurement errors, in addition to an observation from a usual Gaussian
shift experiment N (u, Iγ−1 ), the limit experiment contains an extra observation from the experiment N (|u|, Jγ−1 ).
Although this experiment looks simple, to the author’s knowledge it does not result in well-studied cases (such
as in Ibragimov and Has’minskii (1981)), so the definition of asymptotically efficient estimators in this case is not
obvious. To obtain the asymptotic efficiency bound for estimating the lag parameter in this case, in Section 3
we apply Ibragimov and Has’minskii (1981)’s theory to our problem, which is a common approach to establish
asymptotic efficiency bounds for experiments generated by diffusion type processes (see Kutoyants (2004) for details). As a result, we find that Bayesian estimators are asymptotically efficient, while the maximum likelihood
estimator is not always asymptotically efficient. This is a common phenomenon in irregular models; see Chapters V–VII of Ibragimov and Has’minskii (1981), Küchler and Kutoyants (2000), Chapter 3 of Kutoyants (2004),
Rubin and Song (1995) and Chapter 9 of van der Vaart (1998) for example.
This paper is organized as follows. Section 2 presents the main result of the paper. In Section 3 we discuss the
efficient estimation of the lag parameter in our setting. Section 4 is devoted to the proof of an auxiliary technical
result.
1
Indeed, an intuition for this fact has already appeared in Hoffmann et al. (2013) (see Remark 2.2).
3
General notation
E_m denotes the m × m identity matrix. For a matrix A, we denote by ‖A‖_sp and ‖A‖_F its spectral norm and Frobenius norm, respectively. That is, ‖A‖_sp = sup{‖Ax‖ : ‖x‖ ≤ 1} and ‖A‖_F^2 = tr(A^⊤ A). Also, we denote by A_ij the (i, j)-th entry of A.
2
Main result
We start with completing the definitions of the quantities rn , Iγ and Jγ appearing in the Introduction. First,
following Gloter and Jacod (2001a) we assume that the sequence nvn converges in [0, ∞] and set
γ := lim nvn .
n→∞
We also assume lim supn vn < ∞ as in Gloter and Jacod (2001a). Then we set
( p
n/vn if γ = ∞,
Nn =
n
otherwise.
Nn can be considered as an “effective” sample size in the sense that the proper convergence rate for estimating σ
−1
from model (1.1) is given by Nn 2 , which is seen as the regular parametric rate if we regard Nn as the sample size.
−3
Using this effective sample size Nn , we define our proper convergence rate as rn = Nn 2 . The constants Iγ and
Jγ appearing in (1.3)–(1.4) are defined by
$$
I_\gamma =
\begin{cases}
\dfrac{\rho^2}{2(1-\rho^2)} & \text{if } \gamma = 0,\\[2ex]
\dfrac{\rho\bigl\{\sqrt{(1+\rho)(1+\rho+4\gamma)}-\sqrt{(1-\rho)(1-\rho+4\gamma)}-2\rho\bigr\}}{8\gamma^2} & \text{if } 0<\gamma<\infty,\\[2ex]
\dfrac{\rho^2}{2\bigl(\sqrt{1+\rho}+\sqrt{1-\rho}\bigr)} & \text{if } \gamma = \infty
\end{cases}
\tag{2.1}
$$
and
$$
J_\gamma =
\begin{cases}
\dfrac{3}{4}\left\{\dfrac{1}{(1+\rho)^2}+\dfrac{1}{(1-\rho)^2}\right\} & \text{if } \gamma = 0,\\[2ex]
\dfrac{1}{16\gamma^2}\left\{4-3\left[\Bigl(\dfrac{1+\rho}{1+\rho+4\gamma}\Bigr)^{1/2}+\Bigl(\dfrac{1-\rho}{1-\rho+4\gamma}\Bigr)^{1/2}\right]+\Bigl(\dfrac{1+\rho}{1+\rho+4\gamma}\Bigr)^{3/2}+\Bigl(\dfrac{1-\rho}{1-\rho+4\gamma}\Bigr)^{3/2}\right\} & \text{if } 0<\gamma<\infty,\\[2ex]
0 & \text{if } \gamma = \infty
\end{cases}
$$
if γ = ∞.
Remark 2.1. Iγ is always positive for any γ ∈ [0, ∞]. This is evident when γ = 0 or γ = ∞. When 0 < γ < ∞,
this is proven as follows. First suppose that 0 < ρ < 1. Then we have
p
(1 + ρ)(1 + ρ + 4γ) − { (1 − ρ)(1 − ρ + 4γ) + 2ρ}2
p
= 4ρ + 8γρ − 4ρ (1 − ρ)2 + 4γ(1 − ρ) − 4ρ2
p
= 4ρ (1 − ρ) + 2γ − (1 − ρ)2 + 4γ(1 − ρ)
p
p
= 4ρ
(1 − ρ)2 + 4γ(1 − ρ) + 4γ 2 − (1 − ρ)2 + 4γ(1 − ρ) > 0.
Hence we have Iγ > 0. On the other hand, if −1 < ρ < 0, applying the above inequality with replacing ρ by −ρ,
p
p
we obtain (1 − ρ)(1 − ρ + 4γ) > (1 + ρ)(1 + ρ + 4γ) − 2ρ. Hence we have Iγ > 0.
The following statement is our main result.
4
Theorem 2.1. There are two sequences Tn and Sn of random variables satisfying (1.3)–(1.4) for any bounded
sequence un of real numbers.
We can explicitly give the variables Tn and Sn in Theorem 2.1 by (2.8) below. Theorem 2.1 has some immediate
consequences. The first one is the direct consequence of the definition of the LAN property.
Corollary 2.1. If γ = ∞, (Pn,ϑ )ϑ∈R has the LAN property at ϑ = 0 with rate rn and asymptotic Fisher information
Iγ .
The second one follows from Le Cam’s first lemma (see e.g. Lemma 6.4 of van der Vaart (1998)).
Corollary 2.2. Pn,ϑn and Pn,0 are mutually contiguous if the sequence ϑn of real numbers satisfies ϑn = O(rn ).
The third one is derived from Corollary 2.2 and Theorem 61.6 of Strasser (1985) (we refer to Drost et al.
(2009), Le Cam (1986), Chapter 10 of Strasser (1985) and Chapters 8–9 of van der Vaart (1998) for the definition
and applications of the weak convergence of experiments).
Corollary 2.3. The sequence (R2n , B 2n , (Pn,rn h )h∈R ) of experiments converges weakly to the experiment (R2 , B 2 ,
(Qu )u∈R ) as n → ∞, where Qu = N (uIγ , Iγ ) ⊗ N (|u|Jγ , Jγ ).
Now we turn to the proof of Theorem 2.1. Although (Pn,ϑ )ϑ∈R consists of Gaussian distributions, the problem is not simple because the covariance matrix Cn (ϑ) of Pn,ϑ is a complicated function of the lag parameter
ϑ. In particular, Cn (ϑ) and Cn (ϑ′ ) are not simultaneously diagonalizable in general (even asymptotically) if
ϑ 6= ϑ′ . This could be troublesome because in analysis of Gaussian experiments the (asymptotically) simulta-
neous diagonalizability of the covariance matrices of the statistical model for different parameters typically plays
an important role (cf. Section 3 of Davies (1973), Lemma 8.1 of Gloter and Jacod (2001a) and Lemma C.4 of
Sabel and Schmidt-Hieber (2014)). For this reason we first transfer from the model Pn,ϑ to a more tractable
model defined as follows: For each n ∈ N, set Θn = {ϑ ∈ R : vn − nϑ2 + |ϑ| ≥ 0} = {ϑ ∈ R :
√
e n,ϑ the law of the vector Z
en =
|ϑ| ≤ (1 + 1 + 4nvn )/(2n)}. Then, for each ϑ ∈ Θn we denote by P
e1 , . . . , X
en , Ye1 , . . . , Yen )⊤ defined by
(X
where
(
√
2 − B2
2
2
e
ǫ2i,n = −nϑ(Bi/n
if ϑ ≥ 0,
(i−1)/n ) + vn − nϑ + ϑǫi
p
1
1
1
2
2
= −n|ϑ|(Bi/n − B(i−1)/n ) + vn − nϑ2 + |ϑ|ǫi ,
e
ǫi,n = ǫi if ϑ < 0
e
ǫ1i,n = ǫ1i ,
e
ǫ1i,n
2
Yei = Bi/n
+e
ǫ2i ,
ei = B 1 + e
ǫ1i ,
X
i/n
(2.3)
(2.4)
en,ϑ .
en (ϑ) the covariance matrix of P
for i = 1, . . . , n. We denote by C
e n,ϑ . To be precise, the Hellinger distance
In the following we will show that Pn,ϑ is well-approximated by P
en,ϑ tends to 0 as n → ∞, provided that ϑ tends to 0 sufficiently fast. Here, the Hellinger
between Pn,ϑ and P
distance H(P, Q) between two probability measures P and Q on a measurable space (X , A) is defined by
s
s
!2 1/2
Z
dP
dQ
dµ ,
−
H(P, Q) =
dµ
dµ
X
where µ is a σ-finite measure dominating both P and Q (µ = P + Q for example). It can easily be checked that
H(P, Q) does not depend on the choice of µ. See Appendix A.1 of Reiß (2011), Section 2 of Strasser (1985) and
Section 2.4 of Tsybakov (2009) for more information about the Hellinger distance.
5
e n,ϑ ) expectation with respect to Pn,ϑ (resp. P
en,ϑ ).
Throughout the paper, we denote by En,ϑ (resp. E
e n,ϑ .
Proposition 2.1. (a) If |ϑ| ≤ 1/n, then Pn,ϑ = P
en,ϑ ) ≤ 4v −2 (4 + 3ρ2 )n2 |ϑ|3 for any n ∈ N and any ϑ ∈ Θn .
(b) If vn > 0, H 2 (Pn,ϑ , P
n
4
−
e n,ϑ ) →
(c) If a sequence ϑn of positive numbers satisfies ϑn = o(n−1 ∨Nn 3 ) as n → ∞, then sup|ϑ|≤ϑn H(Pn,ϑ , P
0.
Proof. Claim (c) immediately follows from (a) and (b), so we focus on (a) and (b). By symmetry we may assume
e n,ϑ [xi xj ] for
ϑ ≥ 0. Let x1 , . . . , xn , y1 , . . . , yn be the canonical variables on R2n . Then we have En,ϑ [xi xj ] = E
all i, j. Moreover, a simple computation shows that
(
( i∧j
if i ∧ j ≥ nϑ,
n − ϑ) + vn 1{i=j}
En,ϑ [yi yj ] =
i∨j
(ϑ − n )+ + vn 1{i=j} otherwise
and
and
e n,ϑ [yi yj ] =
E
En,ϑ [xi yj ] =
(
i∧j
i∧j
− ϑ1{i≥j} − ϑ1{i≤j} + (vn + ϑ)1{i=j} =
− ϑ + vn 1{i=j}
n
n
if i < j − nϑ,
ρi/n
(
e n,ϑ [xi yj ] =
E
ρ(j/n − ϑ)+ otherwise,
ρi/n
if i < j,
ρ(j/n − ϑ) otherwise.
en (ϑ) if ϑ ≤ 1/n. This yields claim (a) because both Pn,ϑ and P
en,ϑ are centered
Therefore, we have Cn (ϑ) = C
Gaussian. On the other hand, from the above identities we also have
en (ϑ)k2F
kCn (ϑ) − C
n
X
i∨j
i∧j
ϑ−
=
−ϑ
−
n
n
+
2
i,j=1
i∧j<nϑ
+2
n
X
i=1
=
n
X
i,j=1
i∧j<nϑ
+2
X
ρ
i∨j
n
−
j:i<j≤i+nϑ
ϑ−
n X
X
+
ρ
i
n
j
−ϑ
n
i∧j
−ϑ
n
2
+
+
−
i
n
2
+
j:j≤i
2
X
ρ
X
ρ
j−i
−ϑ
n
2
j:i<j≤nϑ
j:i∨nϑ<j≤i+nϑ
2
2
nϑ
2nϑ
2
+ 6n · ρ · nϑ ·
= (8 + 6ρ2 )n2 ϑ3 .
≤ 2n · nϑ ·
n
n
i=1
1
j
−ϑ
n
+
+
X
−
j:j≤i∧nϑ
ρ
j
−ϑ
n
2
j
−ϑ
n
2
−1
Now if vn > 0, Cn (ϑ) is positive semidefinite and satisfies kCn (ϑ)− 2 ksp ≤ vn 2 by the monotonicity theorem
for eigenvalues (Corollary 4.3.3 of Horn and Johnson (1985)) because Cn (ϑ) − vn E2n is positive semidefinite.
Therefore, by Eqs.(A.4) and (A.6) from Reiß (2011) we obtain
en (ϑ))Cn (ϑ)− 12 k2 ≤ 4v −2 (4 + 3ρ2 )n2 ϑ3 ,
en,ϑ ) ≤ 2kCn (ϑ)− 21 (Cn (ϑ) − C
H 2 (Pn,ϑ , P
F
n
hence claim (b) holds true.
6
In the following we will frequently use the fact that the Hellinger distance dominates the total variation distance:
V (P, Q) ≤ H(P, Q),
(2.5)
where V (P, Q) = supA∈A |P (A) − Q(A)|. See e.g. Lemma 2.3 of Tsybakov (2009) for the proof. The following
properties of the total variation distance are immediate consequences of the definition and important for our purpose. For each n ∈ N, let Pn and Qn be two sequences of probability measures on a measurable space (Xn , An ),
and let ζn be a random variable on (Xn , An ) taking its value in a metric space D. Then, for any a ∈ D and any
probability measure µ on D, the following statements hold true:
p
p
V (Pn , Qn ) → 0, ζn −
→ a under Pn
⇒ ξn −
→ a under Qn ,
d
d
V (Pn , Qn ) → 0, ζn −
→ µ under Pn ⇒ ξn −
→ µ under Qn .
)
(2.6)
e n,ϑ as a tractable form. For this purpose we introduce some
en (ϑ) of P
Next we express the covariance matrix C
notation. The n × n matrix ∇n denotes the backward difference operator, i.e.
1
−1 1
∇n =
.
.. ..
.
.
−1 1
b n := (∇n ⊕ ∇n )Z
e n = (X
e1 , X
e2 − X
e1 , . . . , X
en − X
en−1 , Ye1 , Ye2 − Ye1 , . . . , Yen − Yen−1 )⊤ and denote by
We set Z
b n , i.e. Vn (ϑ) = (∇n ⊕ ∇n )C
en (ϑ)(∇n ⊕ ∇n )⊤ . Vn (ϑ) can explicitly be expressed
Vn (ϑ) the covariance matrix of Z
sign(ϑ)
¯n
as Vn (ϑ) = Ḡn − ρϑ∇
"
Gn
Ḡn = ρ
n En
, where
#
ρ
E
n
n
,
Gn
¯+
∇
n =
with Gn = n1 En + vn Fn , Fn = ∇n ∇⊤
n and
"
Rn =
0
∇⊤
n
#
···
0
···
..
.
0
..
.
0
0
∇n ρ−1 Rn
1
0
0
..
.
0
..
.
0 ···
¯−
∇
n =−
,
"
ρ−1 Rn ∇n
∇⊤
n
0
#
.
¯ sign(ϑ)
It is more convenient to rewrite the expression ∇
as follows. Let Sn and Tn be the symmetric and skewn
⊤
⊤
symmetric parts of 2∇⊤
n , respectively. That is, Sn = ∇n + ∇n and Tn = ∇n − ∇n . Then we set
#
#
"
"
Sn
Tn
1 ρ−1 Rn
1 −ρ−1 Rn
S̄n =
,
T̄n =
.
2
2
Sn
ρ−1 Rn
−Tn
ρ−1 Rn
¯ ± = T̄n ± S̄n , so we obtain Vn (ϑ) = Ḡn − ρ(ϑT̄n + |ϑ|S̄n ). This is a simple function of ϑ,
We can easily check ∇
n
so Vn (ϑ) is more tractable than Cn (ϑ): Although Vn (ϑ)’s are not simultaneously diagonalizable for different ϑ’s,
it is sufficient to consider a relationship between the matrices Ḡn , T̄n and S̄n . In fact, it turns out that the following
result is sufficient for our purpose.
7
−1
−1
Proposition 2.2. For any α, β ∈ R, we have kḠn 2 (αT̄n + β S̄n )Ḡn 2 ksp = O(Nn ) as n → ∞ and
−1
−1
lim ρ2 rn2 kḠn 2 (αT̄n + β S̄n )Ḡn 2 k2F = 2(α2 Iγ + β 2 Jγ ).
n→∞
(2.7)
The proof of Proposition 2.2 consists of elementary but complicated calculations, so we postpone it to the Appendix (Section 4). We remark that the proof requires a calculation essentially different from that of the Fisher information for the scale parameter estimation from observations of the form (1.2) such as in Gloter and Jacod (2001a)
and Sabel and Schmidt-Hieber (2014) (see Remark 4.1). Also, note that Proposition 2.2 yields the invertibility of
1
−1
1
−1
Vn (ϑn ) for sufficiently large n if ϑn = o(Nn−1 ) because Vn (ϑn ) = Ḡn2 (E2n − ρḠn 2 (ϑn T̄n + |ϑn |S̄n )Ḡn 2 )Ḡn2 .
Proof of Theorem 2.1. Define the function zbn : R2n → R2n by setting zbn (ζ) = (∇n ⊕ ∇n )ζ for ζ ∈ R2n . Then
we set
o
ρ n
−1
−1
Tn = − rn zbn⊤ Ḡ−1
T̄
Ḡ
z
b
−
tr(
Ḡ
T̄
)
,
n n n
n
n
n
2
o
ρ n
−1
−1
S̄
Sn = − rn zbn⊤ Ḡ−1
Ḡ
z
b
−
tr(
Ḡ
S̄
)
.
n n n
n
n
n
2
(2.8)
By virtue of Proposition 2.1 and (2.5)–(2.6), it suffices to prove the following statements:
dPn,rnun
u2n
p
en,0 ,
log
→ 0 under P
− un Tn + |un |Sn −
(Iγ + Jγ ) −
dPn,0
2
(2.9)
d
e n,0 .
(Tn , Sn ) −
→ N (0, Iγ ) ⊗ N (0, Jγ ) under P
(2.10)
(2.10) follows from Proposition 2.2 and Proposition 2 of Dalalyan and Yoshida (2011). On the other hand, setting
−1
−1
An = ρrn Ḡn 2 (un T̄n + |un |S̄n )Ḡn 2 , we have kAn k2F − 2u2n (Iγ + Jγ ) → 0 by Proposition 2.2. Therefore, by
Proposition 2.1, (2.5) and Proposition 2 from Chapter 4 of Le Cam (1986) we obtain (2.9) once we show that
en,rnun
dP
kAn k2F
p
e n,0 .
ξn := log
→ 0 under P
−
(2.11)
− un Tn + |un |Sn −
en,0
4
dP
The strategy of the proof of (2.11) is the same as that of Theorem 4.2 from Davies (1973). First, by Eq.(4.3) of
Davies (1973), for sufficiently large n we have
2
e n,0 [ξn ] = − 1 log det Vn (rn un ) − log det Vn (0) + tr(Vn (0)(Vn (rn un )−1 − Vn (0)−1 )) + kAn kF
E
2
4
1
1
2
−1
2
=−
log det(E2n − An ) + tr(An ) + tr(An ) + tr((E2n − An ) − E2n − An − An ) .
2
2
P
p
Note that it holds that (E2n − An )−1 = ∞
p=0 An for sufficiently large n because kAn ksp → 0 as n → ∞ by
Proposition 2.2. Combining this fact with inequality (v) from Appendix II of Davies (1973), we obtain
1
1
1
1
2
e
+
|En,0 [ξn ]| ≤ kAn ksp kAn kF
2
3 (1 − kAn ksp )3 1 − kAn ksp
e n,0 [ξn ] → 0.
for sufficiently large n. Hence Proposition 2.2 yields E
Next, noting that ξn can be rewritten as
1
en,0 [b
en,0 [ξn ],
ξn = − zbn⊤ Bn zbn − E
zn⊤ Bn zbn ] + E
2
−1
−1
where Bn = Vn (rn un )−1 − Vn (0)−1 − Ḡn 2 An Ḡn 2 , we obtain from Eq.(4.4) of Davies (1973)
1
1
2 VarePn,0 [ξn ] = Vn (0) 2 (Vn (rn un )−1 − Vn (0)−1 )Vn (0) 2 − An
8
2
F
= (E2n − An )−1 − E2n − An
2
F
.
Therefore, using the identity (E2n − An )−1 =
P∞
p
p=0 An
again we obtain 2 VarPen,0 [ξn ] ≤ kAn k2sp kAn k2F (1 −
kAn ksp )−2 for sufficiently large n. Hence Proposition 2.2 again yields VarPen,0 [ξn ] → 0, and we obtain (2.11).
We finish this section with some remarks.
3
Remark 2.2. It is worth mentioning that we can infer from Hoffmann et al. (2013) why the rate n− 2 is the proper
convergence rate of our model in the case of vn ≡ 0 as follows. Let us set
n
U (ϑ) =
n−1
X
(Xi+1 − Xi )(Yj+1 − Yj )1{[ i , i+1 )∩[ j −ϑ, j+1 −ϑ)6=∅}
n
i,j=1
n
n
n
for ϑ ∈ R. The principle used in Hoffmann et al. (2013) is that U n (ϑ) is close to the true correlation ρ if and
only if ϑ is close to the true time-lag parameter. Since the accuracy of estimating the correlation parameter is of
√
√
order 1/ n, we naturally consider the quantity | n(U n (ϑ) − ρ)| to measure the distance between U n (ϑ) and
√
ρ: | n(U n (ϑ) − ρ)| would take a large value if ϑ is not sufficiently close to the true time-lag parameter. In fact,
√
Proposition 1 from Hoffmann et al. (2013) implies that | n(U n (ϑ)−ρ)| does not diverge if and only if the distance
3
between ϑ and the true time-lag parameter is at most of order n− 2 . This information allows us to estimate the true
3
time-lag parameter with the accuracy of order n− 2 .
Remark 2.3. From an econometric point of view, Proposition 2.1 is of independent interest because the model
given by (2.3)–(2.4) has an economic interpretation different from model (1.1). This model contains measurement
errors correlated to the latent returns Bi/n − B(i−1)/n . The integrated volatility estimation in the presence of this
type of measurement error has been studied by Kalnina and Linton (2008) for example. In the market microstruc-
ture theory, such a correlation is often explained as an effect of asymmetric information (e.g. Glosten (1987)). Interestingly, some economic arguments suggest that such an information asymmetry would cause a lead-lag effect; see
Chan (1993) and Chordia et al. (2011) for instance. It would also be worth emphasizing that de Jong and Schotman
(2010) connect this type of model with the investigation of price discovery, for price discovery processes are closely
related to lead-lag effects, as seen in de Jong et al. (1998) and Hasbrouck (1995).
Remark 2.4. Our proof of the main result heavily depends on the Gaussianity of the model, and especially we
require the Gaussianity of the measurement errors. It is obvious that we need some restriction on the distribution
of the measurement errors to derive a specific limit experiment. In fact, if vn ≡ 1 and ǫi ’s take their values only
in integers, then we can completely recover the signal for sufficiently large n. Apart from such a trivial example,
the recent study of Bibinger et al. (2016) has shown that another (non-trivial) specification for the distribution of
the measurement errors δi ’s in (1.2) can improve the convergence rate for estimating the scale parameter σ. In the
light of the connection between the convergence rates for models (1.1)–(1.2), we naturally conjecture that a similar
specification for the measurement errors would affect the convergence rate for our model. This issue is beyond the
scope of this paper and left to future research.
3
Efficient estimation of the lag parameter
As an application of the results from the previous section, we construct efficient estimators for the lag parameter
ϑ in the models (Pn,ϑ )ϑ∈R at ϑ = 0. Here we consider a slightly extended setup as follows: letting ηn be a sequence
of positive numbers tending to 0 and C be a bounded open interval in R, we construct efficient estimators for the
parameter c in the models (Pn,cηn )c∈C at every c ∈ C. To make use of the results from the previous section, we
9
impose the following condition on ηn :
−4
ηn = o n−1 ∨ Nn 3
rn−1 ηn → ∞
and
as n → ∞.
(3.1)
Under (3.1) there is a positive integer n0 such that cηn ∈ Θn and Vn (cηn ) is invertible for any c ∈ C and n ≥ n0
due to Proposition 2.2. Throughout this section, we always assume that n is larger than such an n0 .
Remark 3.1. In a practical point of view, the dependence of the time-lag parameter ϑ on the sampling frequency
n is just a theoretical device to control the relative size of ϑ compared with n (which corresponds to ηn ) in the
asymptotic theory and it is only important whether the asymptotic order condition (corresponding to (3.1) in our
case) is acceptable as an approximation. Namely, our asymptotic theory concerns whether the time-lag parameter
ϑ is comparable to n−ι for some ι > 0 given a fixed sampling frequency n (the possible values of ι change in
accordance with the noise level vn ) and it does not require that the time-lag parameter varies in proportion to the
sampling frequency. This type of asymptotic theory is standard in econometrics: For example, when one considers
the volatility estimation of a financial asset with taking account of rounding, one usually lets the rounding level
shrink as the sampling frequency increases; see Rosenbaum (2009), Li and Mykland (2015), Li et al. (2015) and
Sato and Kunitomo (2015) for example.
We start with generalizing Proposition 2.2 by a matrix perturbation argument.
Lemma 3.1. For any α, β ∈ R we have
1
1
sup kVn (cηn )− 2 (αT̄n + β S̄n )Vn (cηn )− 2 ksp = O(Nn ),
c∈C
1
1
sup ρ2 rn2 kVn (cηn )− 2 (αT̄n + β S̄n )Vn (cηn )− 2 k2F − 2(α2 Iγ + β 2 Jγ ) → 0
c∈C
as n → ∞.
1
1
Proof. Setting Hn (c) = Vn (cηn )− 2 Ḡn2 , we have
−1
1
1
−1
Vn (cηn )− 2 (αT̄n + β S̄n )Vn (cηn )− 2 = Hn (c)Ḡn 2 (αT̄n + β S̄n )Ḡn 2 Hn (c)⊤
for any c ∈ C. Therefore, Ostrowski’s theorem (Theorem 4.5.9 of Horn and Johnson (1985)) implies that
1
−1
1
−1
kVn (cηn )− 2 (αT̄n + β S̄n )Vn (cηn )− 2 ksp ≤ kHn (c)Hn (c)⊤ ksp kḠn 2 (αT̄n + β S̄n )Ḡn 2 ksp
and
1
−1
1
−1
|kVn (cηn )− 2 (αT̄n + β S̄n )Vn (cηn )− 2 k2F − kḠn 2 (αT̄n + β S̄n )Ḡn 2 k2F |
−1
−1
≤ k(Hn (c)Hn (c)⊤ )2 − E2n ksp kḠn 2 (αT̄n + β S̄n )Ḡn 2 k2F
−1
−1
≤ kHn (c)Hn (c)⊤ − E2n ksp (kHn (c)Hn (c)⊤ ksp + 1)kḠn 2 (αT̄n + β S̄n )Ḡn 2 k2F .
Hence, Proposition 2.2 implies that the proof is completed once we show that supc∈C kHn (c)Hn (c)⊤ ksp = O(1)
and supc∈C kHn (c)Hn (c)⊤ − E2n ksp = o(1) as n → ∞. Since Hn (c)Hn (c)⊤ and Hn (c)⊤ Hn (c) share the
−1
same eigenvalues (Theorem 1.3.20 of Horn and Johnson (1985)) and Hn (c)⊤ Hn (c) = (E2n − ρηn Gn 2 (|c|S̄n +
−1
cT̄n )Gn 2 )−1 , the desired results follow from Proposition 2.2, (3.1) and the Neumann series representation of
Hn (c)⊤ Hn (c).
10
Using the above result, we can prove a uniform version of Theorem 2.1.
Proposition 3.1. Let Tn and Sn be defined by (2.8). Then
dPn,cηn+rn un
u2n
p
→0
(Iγ + Jγ ) −
− un Tn + |un |Sn −
log
dPn,cηn
2
d
under Pn,cηn as n → ∞ uniformly in c ∈ C for any bounded sequence un of real numbers. Moreover, (Tn , Sn ) −
→
N (0, Iγ ) ⊗ N (0, Jγ ) under Pn,cηn as n → ∞ uniformly in c ∈ C.
Proof. We can prove the first claim in a similar manner to the proof of Theorem 2.1 using Lemma 3.1 instead
d
of Proposition 2.2. To prove the second claim, it suffices to show that (Tn , Sn ) −
→ N (0, Iγ ) ⊗ N (0, Jγ ) under
Pn,cnηn as n → ∞ for any sequence cn of numbers in C, which follows from Lemma 3.1 and inequality (13) from
Dalalyan and Yoshida (2011).
Proposition 3.1 implies that the experiments (Pn,cηn )c∈C enjoy the LAN property if γ = ∞ and do not oth-
erwise. When the LAN property holds true, there is a well-established theory to define the asymptotic efficiency
of estimators (cf. Section II-11 of Ibragimov and Has’minskii (1981)): A sequence cn of estimators in the experiments (Pn,cηn )c∈C is said to be asymptotically efficient at c ∈ C if the variables rn−1 ηn (cn − c) converge in
law to N (0, Iγ−1 ) under Pn,cηn as n → ∞ (see Definition II-11.1 of Ibragimov and Has’minskii (1981)). Under
the LAN property, this definition of the asymptotic efficiency is supported by several theorems such as the convolution theorem (e.g. Theorem II-9.1 of Ibragimov and Has’minskii (1981)) and the local asymptotic minimax
theorem (e.g. Theorem II-12.1 of Ibragimov and Has’minskii (1981)). Moreover, it is well-known that both maximum likelihood and Bayesian estimators are asymptotically efficient under very general settings (cf. Chapter III
of Ibragimov and Has’minskii (1981)). On the other hand, if the LAN property fails, it is generally not obvious
how to define the asymptotic efficiency of estimators. Here, we adopt the approach from Küchler and Kutoyants
(2000) to define the asymptotic efficiency, which is based on Theorem I-9.1 of Ibragimov and Has’minskii (1981)
that derives an asymptotically minimax lower bound from the asymptotic properties of the Bayesian estimators. As
a consequence, the Bayesian estimators are turned out to be asymptotically efficient.
Now we explain the strategy to obtain asymptotically efficient estimators in our setting. As in the previous
en,ϑ rather than the original model Pn,ϑ . For this reason
section, we would like to work with the tractable model P
we consider the quasi-likelihood function based on the former as follows:
en,cηn
dP
1 ⊤
1
−1
p
exp − zbn Vn (cηn ) zbn ,
=
Ln (c) :=
dx
2
(2π)n det Vn (cηn )
c ∈ C.
Then we consider the quasi maximum likelihood and Bayesian estimators based on Ln (c) as our estimators and give
en,cηn )c∈C using the general scheme of Ibragimov and Has’minskii
their asymptotic behavior in the experiments (P
(1981) (see Proposition 3.2). Next we consider the case γ = ∞ where the LAN property holds true and thus
en,cηn )c∈C can be transferred to that in (Pn,cηn )c∈C by Proposition 2.1(c). Finally, we
convergence in law in (P
en,cη )c∈C = (Pn,cη )c∈C for sufficiently large n due to (3.1) and
consider the case γ < ∞ where we have (P
n
n
Proposition 2.1(a), hence we can apply the Ibragimov-Has’minskii method to define and obtain asymptotically
efficient estimators.
The quasi maximum likelihood estimator (QMLE) ĉn is defined as a solution of the equation
Ln (ĉn ) = sup Ln (c).
c∈C
11
Note that the above equation always has at least one solution belonging to the closure of C because c 7→ Ln (c)
is continuous. Moreover, we can choose ĉn so that it is measurable by the measurable selection theorem (see
e.g. Theorem 6.7.22 of Pfanzagl (1994)). Also, the quasi Bayesian estimator (QBE) c̃n for a prior density q : C →
(0, ∞) with respect to the quadratic loss is defined by
Z
Z
Ln (c)q(c)dc,
c̃n = cLn (c)q(c)dc
C
C
where the prior density q is assumed to be continuous and satisfy 0 < inf c∈C q(c) ≤ supc∈C q(c) < ∞. The
corresponding QMLE and QBE in the experiments (Pn,ϑ )ϑ∈R are given by ϑ̂n = ĉn ηn and ϑ̃n = c̃n ηn , respectively.
Remark 3.2. Since the quantity ηn seems the exact order of the true time-lag parameter, one may consider that
in a practical setting it is difficult to know ηn beforehand and thus it is difficult to use the estimators ϑ̂n and ϑ̃n .
However, when we construct the estimator ϑ̂n , ηn can be considered as the maximum order of the true time-lag
en,ϑ /dx and Cn = {cηn : c ∈ C}. Then the estimator ϑ̂n can be
parameter as follows. Let us set L′n (ϑ) = dP
considered as a solution of the equation
L′n (ϑ̂n ) = sup Ln (ϑ).
ϑ∈Cn
Therefore, in a practical situation (supc∈C c)ηn (resp. (inf c∈C c)ηn ) can be interpreted as an upper bound (resp. a
lower bound) of possible time-lag parameters ϑ. It is often not so difficult to find such bounds in a practical setting
(and they are typically “small” as pointed out in the Introduction). For example, we can find them by computing
the cross-correlations via Hoffmann et al. (2013)’s method as in Huth and Abergel (2014). The same remark can
be applied to the estimator ϑ̃n because it can be rewritten as
Z
Z
′
ϑLn (ϑ)qn (ϑ)dϑ
ϑ̃n =
Cn
Cn
L′n (ϑ)qn (ϑ)dϑ,
where qn (ϑ) = ηn−1 q(ϑ/ηn ) for ϑ ∈ Cn (so qn is a prior density on Cn ).
To describe the limit distribution of these estimators, we introduce the likelihood ratio process for the limit
experiment
u2
Z(u) = exp uζ1 + |u|ζ2 − (Iγ + Jγ ) ,
2
u ∈ R,
where ζ1 and ζ2 are two mutually independent variables such that ζ1 ∼ N (0, Iγ ) and ζ2 ∼ N (0, Jγ ). Then we set
(ζ1 + ζ2 )/(Iγ + Jγ ) if ζ1 ≥ (−ζ2 ) ∨ 0,
û = argmaxu∈R Z(u) =
(ζ1 − ζ2 )/(Iγ + Jγ ) if ζ1 < ζ2 ∧ 0,
0
otherwise
and
R∞
ũ = R−∞
∞
uZ(u)du
−∞ Z(u)du
.
en,cηn )c∈C using the
We first give the asymptotic behavior of the estimators ĉn and c̃n in the experiments (P
general scheme of Ibragimov and Has’minskii (1981). Note that in this situation ĉn and c̃n are true maximum
likelihood and Bayesian estimators, respectively.
12
Proposition 3.2. For any compact subset K of C, uniformly in c ∈ K it holds that rn−1 ηn (ĉn − c) converges in law
en,cη and E
e n,cη [|r −1 ηn (ĉn − c)|p ] → E[|û|p ] for any p > 0 as n → ∞. Also, uniformly in c ∈ K
to û under P
n
n
n
−1
e n,cηn and E
e n,cηn [|r −1 ηn (c̃n − c)|p ] → E[|ũ|p ] for any
it holds that r ηn (c̃n − c) converges in law to ũ under P
n
n
p > 0 as n → ∞.
en,cη +r u /dP
en,cη
Proof. For every c ∈ C, we set Un (c) = {u ∈ R : c + rn ηn−1 u ∈ C} and define Zn,c (u) = dP
n
n
n
for each u ∈ Un (c). According to Theorems I-10.1 and I-10.2 from Ibragimov and Has’minskii (1981), it suffices
to prove the following statements:
p
p
e n,cη [| Zn,c (u) − Zn,c (v)|2 ] < ∞,
(a) lim supn→∞ supc∈K supu,v∈Un (c) |u − v|−2 E
n
p
2
e n,cη [ Zn,c (u)] < ∞,
(b) there is a constant κ > 0 such that lim sup
sup
sup
eκu E
n→∞
c∈K
u∈Un (c)
n
(c) the marginal distributions of Zn,c converge in law to the marginal distributions of Z uniformly in c ∈ K as
n → ∞.
(c) is an immediate consequence of Proposition 3.1. On the other hand, by Eq.(A.4) from Reiß (2011) we
obtain
e n,cη
E
n
"
q
Zn,c (u) −
q
2
Zn,c (v)
#
en,cη +r u , P
en,cη +r v )
= H 2 (P
n
n
n
n
1
1
1
1
≤ 4ρ2 rn2 (u − v)2 kVn (cηn + rn u)− 2 T̄n Vn (cηn + rn u)− 2 k2F + kVn (cηn + rn u)− 2 S̄n Vn (cηn + rn u)− 2 k2F ,
hence Lemma 3.1 yields claim (a).
Now we consider (b). By Corollary 3.2a.1 from Mathai and Provost (1992) we have
q
1
1
1
−1
e n,cη
(E2n − An,c (u)) − E2n ,
log E
Zn,c (u) = − log det[E2n − An,c (u)] − log det E2n +
n
4
2
2
1
1
where An,c (u) = E2n − Vn (cηn )− 2 Vn (cηn + rn u)Vn (cηn )− 2 . Then we consider the following decomposition:
q
e n,cη
Z
(u)
log E
n,c
n
1
1
2
=−
log det[E2n − An,c (u)] + tr[An,c (u)] + kAn,c (u)kF
4
2
1
1
1
1
−
log det E2n + Bn,c (u) − tr (Bn,c (u)) + kBn,c (u)k2F
2
2
2
8
1
1
1
2
2
2
+
tr[An,c (u)] + kAn,c (u)kF − tr (Bn,c (u)) −
kAn,c (u)kF − kBn,c (u)kF
4
8
2
=: In,c (u) + IIn,c (u) + IIIn,c (u) + IVn,c (u),
where Bn,c (u) = (E2n − An,c (u))−1 − E2n . Let us set
1
1
1
1
S̃n,c = ρVn (cηn )− 2 S̄n Vn (cηn )− 2 .
T̃n,c = ρVn (cηn )− 2 T̄n Vn (cηn )− 2 ,
Then we have An,c (u) = cηn T̃n,c + |cηn |S̃n,c − (cηn + rn u)T̃n,c − |cηn + rn u|S̃n,c , hence it holds that
sup sup kAn,c (u)ksp ≤ sup |c| sup 2ηn (kT̃n,c ksp + kS̃n,c ksp ).
c∈C u∈Un (c)
c∈C
c∈C
13
Here, we use the fact that c+rn ηn−1 u ∈ C because of u ∈ Un (c). In particular, we have supc∈C supu∈Un (c) kAn,c (u)ksp <
P
k
1 for sufficiently large n by (3.1) and Lemma 3.1. For such an n we have Bn,c (u) = ∞
k=1 An,c (u) and thus
kBn,c (u)ksp ≤
kAn,c (u)ksp
,
1 − kAn,c (u)ksp
kBn,c (u)kF ≤
kAn,c (u)kF
,
1 − kAn,c (u)ksp
k−1 for k ≥ 1 to obtain the latter estimate.
where we use the inequality kAn,c (u)k kF ≤ kAn,c (u)kF kAn,c (u)ksp
Therefore, for sufficiently large n we have for any c ∈ C and any u ∈ Un (c)
|In,c (u)| ≤
1 kAn,c (u)ksp kAn,c (u)k2F
,
12
1 − kAn,c (u)k3sp
|IIn,c (u)| ≤
1 kBn,c (u)ksp kBn,c (u)k2F
6
8 − kBn,c (u)k3sp
by Appendix II-(v) from Davies (1973) and
|IIIn,c (u)| ≤
∞
X
k=3
tr(An,c (u)k ) ≤ kAn,c (u)k2F
kAn,c (u)ksp
1 − kAn,c (u)ksp
k−2 for k ≥ 3, as well as
by the inequality tr(An,c (u)k ) ≤ kAn,c (u)k2F kAn,c (u)ksp
kAn,c (u)k2F
1
.
1−
IVn,c (u) ≤ −
8
2(1 − kAn,c (u)ksp )2
Consequently, there is a constant κ0 > 0 such that for sufficiently large n it holds that
q
e
Zn,c (u) ≤ −κ0 kAn,c (u)k2F
log En,cηn
for any c ∈ C and any u ∈ Un (c). Now we consider giving an upper bound for −kAn,c (u)k2F . We have
−kAn,c (u)k2F = −u2 rn2 kT̃n,c k2F − 2rn u(|cηn + rn u| − |cηn |) tr(T̃n,c S̃n,c ) − (|cηn + rn u| − |cηn |)2 kS̃n,c k2F
≤ −2u2 Iγ + u2 rn2 kT̃n,c k2F − 2Iγ + 2 rn2 tr(T̃n,c S̃n,c ) .
Therefore, noting Iγ > 0, by Lemma 3.1 for sufficiently large n we have −kAn,c (u)k2F ≤ −Iγ u2 for any c ∈ C
and any u ∈ Un (c). Consequently, we obtain (b) by setting κ = κ0 Iγ .
If γ < ∞, (3.1) is equivalent to the condition that ηn = o(n−1 ) as n → ∞. Therefore, Proposition 2.1(a)
yields the following result:
e n,cη ’s are replaced by Pn,cη ’s.
Corollary 3.1. If γ < ∞, the statement of Proposition 3.2 still holds true while P
n
n
Now we return to the efficient estimation of the parameter c in the model (Pn,cηn )c∈C . First we consider the
case γ = ∞. In this case we know that (Pn,cηn )c∈C enjoy the LAN property at every c ∈ C by Proposition 3.1, so
the definition of the asymptotic efficiency of an estimator sequence is well-established as explained in the above.
Theorem 3.1. If γ = ∞, both ĉn and c̃n are asymptotically efficient at every c ∈ C in the experiments (Pn,cηn )c∈C .
That is, both rn−1 ηn (ĉn −c) and rn−1 ηn (c̃n −c) converge in law to N (0, Iγ−1 ) under Pn,cηn for any c ∈ C as n → ∞.
In particular, both ϑ̂n and ϑ̃n are asymptotically efficient at ϑ = 0 in the experiments (Pn,ϑ )ϑ∈R .
Next we turn to the case γ < ∞. In this case the experiments (Pn,cηn )c∈C no longer enjoy the LAN property, so
the definition of the asymptotic efficiency is not obvious. As explained in the above, here we follow the approach
of Küchler and Kutoyants (2000) to define the asymptotic efficiency for our experiments. We obtain the following
result by virtue of Corollary 3.1 and Theorem I-9.1 of Ibragimov and Has’minskii (1981):
14
Theorem 3.2. If γ < ∞, we have
lim lim inf
sup (rn−1 ηn )2 En,cηn [(c∗n − c)2 ] ≥ E[ũ2 ]
δ→0 n→∞ |c−c0 |<δ
for any c0 ∈ C and any estimator sequence c∗n in the experiments (Pn,cηn )c∈C . In particular, we also have
lim lim inf sup rn−2 En,ϑ [(ϑ∗n
δ→0 n→∞ |ϑ|<δηn
− ϑ)2 ] ≥ E[ũ2 ]
for any estimator sequence ϑ∗n in the experiments (Pn,ϑ )ϑ∈R .
Thanks to Theorem 3.2, an estimator sequence c∗n is said to be asymptotically efficient at c0 ∈ C in the experi-
ments (Pn,cηn )c∈C if it holds that
lim lim inf
sup (rn−1 ηn )2 En,cηn [(c∗n − c)2 ] = E[ũ2 ].
δ→0 n→∞ |c−c0 |<δ
Similarly, an estimator sequence ϑ∗n is said to be asymptotically efficient at ϑ = 0 in the experiments (Pn,ϑ )ϑ∈R if
it holds that
lim lim inf sup rn−2 En,ϑ [(ϑ∗n
δ→0 n→∞ |ϑ|<δηn
− ϑ)2 ] = E[ũ2 ]
for any sequence ηn of positive numbers satisfying (3.1). The following result is an immediate consequence of this
definition.
Theorem 3.3. If γ < ∞, the sequence c̃n is asymptotically efficient at every c ∈ C in the experiments (Pn,cηn )c∈C .
In particular, the sequence ϑ̃n is asymptotically efficient at ϑ = 0 in the experiments (Pn,ϑ )ϑ∈R .
In contrast, there is no guarantee of the asymptotic efficiency of the (Q)MLE ĉn if γ < ∞. In fact, c̃n may
perform much better than ĉn if γ = 0, as shown in the following proposition.
Proposition 3.3. It holds that
s !
!
p
Iγ Jγ
Jγ
1
E û
,
1 − arctan
+
π
Iγ
π(Iγ + Jγ )
Z ∞Z ∞
2
1
xΨ(x) − yΨ(y) 2
E ũ =
ψR (x, y)dxdy,
Iγ + Jγ −∞ −∞
Ψ(x) + Ψ(y)
where Ψ(x) =
R∞
eux−u
2
1
=
Iγ + Jγ
2 /2
(3.2)
(3.3)
du and ψR (x, y) denotes the bivariate normal density with standard normal marginals
and correlation R = (Jγ − Iγ )/(Jγ + Iγ ). In particular, lim|ρ|→1 E ũ2 /E û2 = 0 if γ = 0.
0
Proof. Let us denote by φa the normal density with mean 0 and variance a. Then, a simple calculation yields
Z ∞
Z ∞
2
2
2
φIγ (x)φJγ (z − x)dx.
z dz
E û =
(Iγ + Jγ )2 0
0
By formulae (3.322.2) and (6.292) from Gradshteyn and Ryzhik (2007) we have
Z
∞
2
z dz
0
Z
0
∞
Iγ + Jγ
Iγ + Jγ
−
arctan
φIγ (x)φJγ (z − x)dx =
2
2π
hence we obtain (3.2).
15
s
Jγ
Iγ
!
p
Iγ Jγ
,
+
2π
Next, by a change of variable we obtain
Z
∞
1
Z(u)du = p
Iγ + Jγ
−∞
(
ζ + ζ2
p1
Iγ + Jγ
Ψ
!
+Ψ
ζ − ζ1
p2
Iγ + Jγ
!)
.
Moreover, formulae (3.462.5) and (3.322.2) from Gradshteyn and Ryzhik (2007) imply that
!
!)
(
Z ∞
ζ2 − ζ1
1
ζ1 + ζ2
ζ2 − ζ1
ζ1 + ζ2
p
Ψ p
−p
Ψ p
.
uZ(u)du =
Iγ + Jγ
Iγ + Jγ
Iγ + Jγ
Iγ + Jγ
Iγ + Jγ
−∞
p
p
Since the distribution of the vector ((ζ1 +ζ2 )/ Iγ + Jγ , (ζ2 −ζ1 )/ Iγ + Jγ ) has the density ψR (x, y), we obtain
(3.3).
Finally, we prove the latter statement. Define the functions f and g on (−1, 1) by
! √
r
Z ∞Z ∞
1
xΨ(x) − yΨ(y) 2
1+r
1 − r2
f (r) = 1 − arctan
+
ψr (x, y)dxdy.
,
g(r) =
π
1−r
2π
Ψ(x) + Ψ(y)
−∞ −∞
Then we have E[û2 ] = f (R)/(Iγ + Jγ ) and E[ũ2 ] = g(R)/(Iγ + Jγ ). Since R → 1 as |ρ| → 1 if γ = 0 and
limr→1 f (r) = 12 , it suffices to prove limr→1 g(r) = 0. Because we have
g(r) =
Z
∞
−∞
Z
∞
−∞
!2
√
√
xΨ(x) − (rx + 1 − r 2 y)Ψ(rx + 1 − r 2 y)
√
φ1 (x)φ1 (y)dxdy,
Ψ(x) + Ψ(rx + 1 − r 2 y)
the dominated convergence theorem yields limr→1 g(r) = 0, which completes the proof.
4
Appendix: Proof of Proposition 2.2
Before starting the proof, we introduce some notation. We set
π
1
ξi := ξi,n :=
i−
,
2n + 1
2
i = 1, 2, . . . .
Then we define the n × n matrix Un = (uij )1≤i,j≤n by
2
2π
1
1
2
√
uij :=
i−
j−
=√
cos
cos [ξi (2j − 1)] ,
2n
+
1
2
2
2n + 1
2n + 1
which is often referred to as the Discrete Cosine Transform (DCT) of type-VIII (see Sabel and Schmidt-Hieber
(2014) and references therein). Note that Un⊤ = Un and Un is real orthogonal. It is known that Un diagonalizes Fn
as follows:
where λi := 2 [1 − cos (2ξi )] .
Un Fn Un = diag(λ1 , . . . , λn ),
(4.1)
See Lemma 1 of Kunitomo and Sato (2013) or Lemma C.2 of Sabel and Schmidt-Hieber (2014) for the proof.
For each a > 0 we define the functions fa and ga on R by
fa (x) =
a
+ 2vn (1 − cos(x)),
n
ga (x) =
sin(x)
fa (x)
(x ∈ R).
We also set Gn (a) = na En + vn Fn . From (4.1) we have
Un Gn (a)Un = Λn (a) := diag(fa (2ξ1 ), . . . , fa (2ξn )).
16
(4.2)
Remark 4.1. It turns out that the off-diagonal components of Un Tn Un play a dominant role to calculate the limit of
−1
−1
kḠn 2 T̄n Ḡn 2 kF . This is essentially different from the case of calculating the Fisher information for the scale pa-
rameter estimation from observations of the form (1.2), where the similarity transformations of Toeplitz matrices by
Un are sufficiently approximated by diagonal matrices as manifested by Lemma C.4 of Sabel and Schmidt-Hieber
(2014). For this reason we need rather specific calculations as seen in Lemmas 4.4 and 4.7.
For a square matrix A, spr(A) denotes the spectral radius of A. We will frequently use the identity kAksp =
spr(A) holding if A is a normal matrix.
Now we start the main body of the proof. We will frequently use the following well-known inequality for the
sine function,
π
.
0≤x≤
2
2
x ≤ sin(x) ≤ 1
π
Lemma 4.1. For any a > 0, we have
sup ga (x) = p
0≤x≤π
( na )2
1
,
+ 4 na vn
(4.3)
sup |ga′ (x)| ≤
0≤x≤π
Proof. The claim immediately follows from the identity ga′ (x) =
3n
.
a
a
+2vn ) cos(x)−2vn
(n
fa (x)2
=
a
(1+cos(x))
n
fa (x)2
−
1
fa (x) .
Lemma 4.2. Let ψ : [0, π] → R be continuous. Also, let mn be a sequence of positive integers such that mn ≤ n
and mn /n → c ∈ (0, 1] as n → ∞. Then, for any p > 0 we have
lim
1
n→∞
np+1
Z πc
mn
X
ψ(2ξi )
1
= p
ψ(x)dx
fa (2ξi )p
a π 0
i=1
provided that γ = 0.
Proof. By the fundamental theorem of calculus, we have
ψ(2ξi )
ψ(2ξi )
≤ pkψk∞
−
p
fa (2ξi )
(a/n)p
Z
vn λi
0
1
2pkψk∞ vn
dx ≤
.
p+1
(a/n + x)
(a/n)p+1
Hence the desired result follows from the standard Riemann sum approximation.
Lemma 4.3. Let mn be a sequence of positive integers such that mn ≤ n and mn /n → c ∈ (0, 1] as n → ∞.
Then
mn
X
1
1
lim
=
n→∞ nNn
f (2ξi )
i=1 a
π
√
2
a(a+4γ)
arctan
1
√
2 a
q
1+
4γ
a
tan
πc
2
if γ < ∞,
1
c − sin2π2πc
2ab
q
√
b(b+4γ)
4γ
πc
1 + b tan 2
mn
2πγ 2 (b−a) arctan
X
2
√
q
lim r
ga (2ξi )gb (2ξi ) =
a(a+4γ)
n→∞ n
4γ
πc
−
− 2πγ 2 (b−a) arctan
1 + a tan 2
i=1
1
√ √
2( a+ b)
for any a, b > 0 such that a 6= b.
17
(4.4)
if γ = ∞,
if γ = 0,
c
4γ 2
(4.5)
if 0 < γ < ∞,
if γ = ∞.
Proof. First, using the lower and upper Darboux sums of the integral
2n + 1
2π
Z
2ξmn
2ξ1
m
R 2ξmn
2ξ1
1
fa (x) dx,
n
X
1
1
2n + 1
1
1
dx +
≤
+
≤
fa (x)
fa (2ξmn )
fa (2ξi )
fa (2ξ1 )
2π
i=1
we obtain
Z
2ξmn
1
dx.
fa (x)
2ξ1
Now, formula (2.553.3) in Gradshteyn and Ryzhik (2007) yields
Z
y
0
r
1
2n
dx = p
arctan
fa (x)
a(a + 4nvn )
!
y
4nvn
,
tan
1+
a
2
hence we obtain (4.4). Next, a simple calculation yields
n
ga (x)gb (x) =
b−a
sin2 x sin2 x
−
fa (x)
fb (x)
.
Therefore, if γ = 0, Lemma 4.2 implies that
lim r 2
n→∞ n
mn
X
i=1
1
ga (2ξi )gb (2ξi ) =
πab
Z
πc
0
1
sin xdx =
2ab
2
sin 2πc
c−
2π
.
On the other hand, if γ 6= 0, for sufficiently large n we have 1 − cos(x) = (fa (x) − a/n)/(2vn ), hence
sin2 x = fa (x)/vn − a/(nvn ) − fa (x)2 /(4vn2 ) + fa (x)a/(2nvn2 ) − a2 /(2nvn )2 .
Therefore, we obtain
sin2 x sin2 x
−
=
fa(x)
fb (x)
b
b2
+
nvn (2nvn )2
1
−
fb (x)
a
a2
+
nvn (2nvn )2
1
b−a
−
.
fa (x) 4nvn2
Hence the desired result follows from (4.4).
Lemma 4.4. Let mn be a sequence of positive integers such that mn ≤ n and mn → ∞ as n → ∞. Then
lim
n→∞
4
2n + 1
2 X
mn
sin4 (n · 2ξ1 i)
= 2.
sin2 (2ξ1 i)
i=1
Proof. Since 4nξ1 = π − 2ξ1 , we have
o
1
1n
1 − (−1)l cos (2ξ1 l) =
sin (n · 2ξ1 l) = {1 − cos (4nξ1 l)} =
2
2
2
(
sin2 (ξ1 l)
if l is even,
cos2 (ξ1 l) if l is odd.
Therefore, using the formula sin(2ξ1 i) = 2 sin(ξ1 i) cos(ξ1 i), we can decompose the target quantity as
2 X
m
m
mn
n
n
X
X
4
4
sin4 (n · 2ξ1 i)
2
2
cot
(ξ
i)
=: A1,n + A2,n .
tan
(ξ
i)
+
=
1
1
2n + 1
(2n + 1)2
sin2 (2ξ1 i)
i=1
i=1
i=1
i: even
i: odd
First we prove limn A1,n = 0. Using the monotonicity of the tangent function and assumption mn ≤ n, we
obtain
A1,n
8
≤
(2n + 1)π
Z
0
18
π n+1
2 2n+1
tan2 (x)dx
Since formula (2.526.22) in Gradshteyn and Ryzhik (2007) yields limn
that limn A1,n = 0.
n+1
R π2 2n+1
0
tan2 (x)dx = 1−π/4, we conclude
Next we prove limn A2,n = 2. Our proof relies on the following inequality for the tangent function:
π π x
π
x ≤ tan
x ≤
(0 ≤ x < 1).
2
2
2 1 − x2
(4.6)
The lower estimate of (4.6) is well-known, and the upper estimate is known as the Becker-Stark inequality (Eq.(2)
of Becker and Stark (1978)). Now, using (4.6), we obtain
⌊ mn2+1 ⌋
⌊ mn2+1 ⌋
2
(2i − 1)2
16 X
16 X
1
1
−
+
.
≤ A2,n ≤ 2
2
2
2
4
π
(2i − 1)
(2n + 1)
(2n + 1)
π
(2i − 1)2
i=1
i=1
Therefore, using formula
P∞
i=1 (2i
− 1)−2 =
π2
8 ,
we conclude that limn A2,n = 2.
1
1
1
1
Lemma 4.5. For any a, b > 0, it holds that kGn (a)− 2 Rn Gn (b)− 2 ksp = O(Nn ) and kGn (a)− 2 Rn Gn (b)− 2 k2F =
O(Nn2 ) as n → ∞.
1
1
1
1
Proof. First, by definition we have kGn (a)− 2 Rn Gn (b)− 2 ksp = spr(Gn (b)− 2 Gn (a)− 2 Rn ), hence (4.2) and Theorem 5.6.9 of Horn and Johnson (1985) yield
− 12
kGn (a)
− 12
Rn Gn (b)
ksp
n
X
n
4 X
p
≤
≤ max
1≤i≤n
2n + 1
fa (2ξk )fb (2ξk )
k=1
k=1
uik uk1
1
1
1
+
fa (2ξk ) fb (2ξk )
.
1
Therefore, Lemma 4.3 implies that kGn (a)− 2 Rn Gn (b)− 2 ksp = O(Nn ). Moreover, since it holds that
1
1
kGn (a)− 2 Rn Gn (b)− 2 k2F = tr(Gn (a)−1 Rn Gn (b)−1 Rn )
n
X
u1k uk1 u1l ul1
=
≤
fa (2ξk )fb (2ξl )
n
1
4 X
2n + 1
fa (2ξk )
k,l=1
k=1
!
n
4 X 1
2n + 1
fb (2ξl )
l=1
!
,
Lemma 4.3 again yields the desired result.
Lemma 4.6. For any a > 0, we have
1
1
1
kGn (a)− 2 Sn Gn (a)− 2 k2sp = O(Nn ),
1
rn2 kGn (a)− 2 Sn Gn (a)− 2 k2F → Jγ0 (a)
(4.7)
as n → ∞, where for any a > 0 we set
6
if γ = 0,
a2
1/2
3/2
a
a
1
+ a+4γ
if 0 < γ < ∞,
Jγ0 (a) =
2γ 2 2 − 3 a+4γ
0
if γ = ∞.
Proof. Since Sn − Rn = Fn , by Lemma 4.5 it suffices to prove (4.7) in the case where Sn is replaced by Fn .
(4.1)–(4.2) imply that
1
1
λi
≤ min{vn−1 , 4n/a}
1≤i≤n fa (2ξi )
kGn (a)− 2 Fn Gn (a)− 2 ksp = max
19
(4.8)
and
1
1
rn2 kGn (a)− 2 Fn Gn (a)− 2 k2F = rn2
n
X
i=1
λ2i
.
fa (2ξi )2
(4.9)
The first equation in (4.7) immediately follows from (4.8). In order to prove the second equation in (4.7), we will
prove the right side of (4.9) converges to Jγ0 (a) as n → ∞.
First, if γ = 0, the desired result follows from Lemma 4.2.
Next, if 0 < γ < ∞, noting that fa (2ξi ) = a/n + vn λi , we have
a 2
1
1
1
a
λ2i
+
= 2 1−2
,
fa (2ξi )2
vn
n fa (2ξi )
n fa (2ξi )2
hence by Lemma 4.3 we obtain the desired equation once we prove
n
1 X
1
1
a
lim
= p
.
1+
n→∞ n3
f (2ξi )2
a + 4γ
2a a2 + 4aγ
i=1 a
The monotonicity of the cosine function yields
2n + 1
2π
Z
π
0
n
X
1
1
1
2n + 1
dx ≤
≤
+
2
2
2
fa (x)
fa (2ξi )
fa (2ξ1 )
2π
i=1
Z
π
0
1
dx.
fa (x)2
Since formula (3.661.4) in Gradshteyn and Ryzhik (2007) implies that
Z π
π
1
a/n
p
dx =
,
1+
2
a/n + 4vn
2(a/n) (a/n)2 + 4avn /n
0 fa (x)
we obtain the desired result.
Finally, if γ = ∞, using the inequality fa (2ξi ) ≥ vn λi we obtain
rn2
n
X
i=1
λ2i
1
n
,
≤ 3 2 =√
2
fa (2ξi )
N n vn
nvn
hence we deduce the desired result.
Lemma 4.7. If a and b are positive numbers such that a 6= b, we have
2
ab
√
√
1
1
b(b+4γ)− a(a+4γ)
2
−2
−2 2
lim rn kGn (a) Tn Gn (b) kF =
−
γ 2 (b−a)
n→∞
√ 2√
a+ b
if γ = 0,
1
γ2
if 0 < γ < ∞,
if γ = ∞.
Proof. Since u0i = u1i , we have
(Un Tn Un )ij + (Un Rn Un )ij = (Un (Tn + Rn )Un )ij =
n
X
(ui,k−1 − ui,k+1 )ukj .
k=1
x−y
Using the trigonometric identities cos(x) − cos(y) = −2 sin( x+y
2 ) sin( 2 ) and 2 sin(x) cos(y) = sin(x + y) −
sin(x − y), we obtain
n
(Un Tn Un )ij + (Un Rn Un )ij =
X
4
sin(2ξi )
{sin ((2k − 1) (ξi + ξj )) + sin ((2k − 1) (ξi − ξj ))} .
2n + 1
k=1
20
Then, using summation formula (1.342.3) of Gradshteyn and Ryzhik (2007), we have
2
sin (n(ξi + ξj )) sin2 (n(ξi − ξj ))
4
ij
ij
(Un Tn Un ) + (Un Rn Un ) =
sin(2ξi )
+
1{i6=j} .
2n + 1
sin(ξi + ξj )
sin(ξi − ξj )
(4.10)
Now, since (Un Tn Un )⊤ = −Un Tn Un and (Un Rn Un )⊤ = Un Rn Un , by (4.2), (4.10) and the unitary invariance of
the Frobenius norm, we obtain
1
1
1
1
kGn (a)− 2 Tn Gn (b)− 2 k2F − kGn (a)− 2 Rn Gn (b)− 2 k2F
4
2 X
n
4
sin (n(ξi + ξj )) sin4 (n(ξi − ξj ))
sin4 (n(ξi − ξj ))
=−
ga (2ξi )gb (2ξj )
−
1{i<j} −
1{i>j}
2n + 1
sin2 (ξi + ξj )
sin2 (ξi − ξj )
sin2 (ξi − ξj )
i,j=1
=: B1,n + B2,n + B3,n .
First we consider B1,n . Using inequalities (4.3) and (x + y)2 ≥ 4xy (x, y ∈ R), we have
X
n
1
|B1,n | ≤ 4 max ga (2ξi )
max gb (2ξi )
1
1≤i≤n
1≤i≤n
(i − 2 + j − 12 )2
i,j=1
!2
X
n
1
≤ 4 max ga (2ξi )
max gb (2ξi )
,
1≤i≤n
1≤i≤n
2i − 1
i=1
and thus Lemma 4.1 yields B1,n = O((Nn
log n)2 )
=
o(rn−2 ).
Next we consider B2,n . First we prove B2,n = B′2,n + o(rn−2 ), where
2 X
n
sin4 (n(ξi − ξj ))
4
′
ga (2ξi )gb (2ξi )
1{i<j} .
B2,n =
2n + 1
sin2 (ξi − ξj )
i,j=1
Lemma 4.1 and (4.3) yield
B2,n − B′2,n ≤
≤
≤
4
2n + 1
2
4
(2n + 1)2
n
sin4 (n(ξi − ξj ))
6n X
ga (2ξi )|ξi − ξj |
1{i<j}
b
sin2 (ξi − ξj )
6π 2 n
b
4 6πn
2n + 1 b
n
X
i,j=1
n
X
ga (2ξi )
i,j=1
ga (2ξi )
n
X
1
j=1
i=1
1
1
|ξi − ξj | {i<j}
j
.
If γ = ∞, by the property of ga we have
n
2π
2π X
ga (2ξi ) ≤
2 max ga (2ξi ) +
2n + 1
2n + 1 1≤i≤n
i=1
≤
hence Lemma 4.1 yields
Pn
Lemma 4.3 because ga (x)
i=1 ga (2ξi )
≤ fa (x)−1 .
4
2n + 1
2ξn
ga (x)dx
2ξ1
1
fa (2ξn )
2π
2 max ga (2ξi ) +
log
,
2n + 1 1≤i≤n
2vn
fa (2ξ1 )
= O(Nn2 log n). This also holds true in the case that γ < ∞ due to
Consequently, B2,n − B′2,n = O((Nn log n)2 ) = o(rn−2 ). Now, since
ξi − ξj = 2ξ1 (i − j), for any c ∈ (0, 1) we have
Z
2 X
n
n
X
X sin4 (n · 2jξ1 )
sin4 (n · 2jξ1 )
4
′
g
(2ξ
)g
(2ξ
)
ga (2ξi )gb (2ξi )
≤
B
≤
.
a
i
i
b
2,n
2n + 1
sin2 (2jξ1 )
sin2 (2jξ1 )
j=1
i=1
i=1
j=1
2 ⌊nc⌋
X
n−⌊nc⌋
21
Therefore, Lemma 4.4 implies that
2 lim inf rn2
n→∞
⌊nc⌋
X
i=1
ga (2ξi )gb (2ξi ) ≤ lim inf rn2 B′2,n ≤ lim sup rn2 B′2,n ≤ 2 lim sup rn2
n→∞
n→∞
n→∞
n
X
ga (2ξi )gb (2ξi ).
i=1
Then, letting c ↑ 1, by Lemma 4.3 we obtain limn→∞ rn2 B2,n = limn→∞ rn2 B′2,n = 2 limn→∞ rn2
Pn
i=1 ga (2ξi )gb (2ξi ).
By symmetry we have limn→∞ rn2 B3,n = limn→∞ rn2 B2,n , hence we complete the proof due to (4.5) and
Lemma 4.5.
Proof of Proposition 2.2. Set
1
Ūn = √
2
"
Un −Un
Un
Un
#
.
Then we have
−1
−1
Ḡn 2 S̄n Ḡn 2
1
= Ūn
2
"
Ln Un (Sn + ρ−1 Rn )Un Ln
0
0
−Ln Un (Sn − ρ−1 Rn )Un Ln
−1
−1
Ḡn 2 T̄n Ḡn 2
1
= Ūn
2
"
0
Ln Un (Tn + ρ−1 Rn )Un Ln
#
Ūn⊤
and
−Ln Un (Tn −
ρ−1 R
n )Un Ln
0
#
Ūn⊤ ,
1
1
where Ln = Λn (1 + ρ)− 2 and Ln = Λn (1 − ρ)− 2 . Hence we obtain
−1
−1
4kḠn 2 (αT̄n + β S̄n )Ḡn 2 k2F
1
1
= 2α2 kGn (1 + ρ)− 2 (Tn + ρ−1 Rn )Gn (1 − ρ)− 2 k2F
1
1
1
1
+ β 2 (kGn (1 + ρ)− 2 (Sn + ρ−1 Rn )Gn (1 + ρ)− 2 k2F + kGn (1 − ρ)− 2 (Sn − ρ−1 Rn )Gn (1 − ρ)− 2 k2F ).
−1
−1
1
Therefore, Lemmas 4.5–4.7 yield (2.7). On the other hand, since 2kḠn 2 S̄n Ḡn 2 ksp ≤ kGn (1 + ρ)− 2 (Sn +
1
1
−1
−1
1
ρ−1 Rn )Gn (1+ρ)− 2 ksp +kGn (1−ρ)− 2 (Sn −ρ−1 Rn )Gn (1−ρ)− 2 ksp , Lemmas 4.5–4.6 also yield kḠn 2 S̄n Ḡn 2 ksp =
O(Nn ).
−1
−1
−1
−1
Hence the proof is completed once we prove kḠn 2 T̄n Ḡn 2 ksp = O(Nn ). Note that Ḡn 2 Vn (ϑ)Ḡn 2 is positive
−1
−1
−1
−1
−1
−1
semidefinite and ρϑḠn 2 T̄n Ḡn 2 + Ḡn 2 Vn (ϑ)Ḡn 2 = E2n − ρ|ϑ|Ḡn 2 S̄n Ḡn 2 for any ϑ ∈ Θn . Note also that
−1
−1
both T̄n and S̄n are symmetric. Therefore, if λ is an eigenvalue of Ḡn 2 T̄n Ḡn 2 , by the monotonicity theorem
−1
−1
for eigenvalues (Corollary 4.3.3 of Horn and Johnson (1985)) we have ρϑλ ≤ kE2n − ρ|ϑ|Ḡn 2 S̄n Ḡn 2 ksp for
−1
−1
any ϑ ∈ Θn . Since we can take ϑ = ±Nn−1 , this inequality implies that |ρ|Nn−1 kḠn 2 T̄n Ḡn 2 ksp ≤ kE2n −
−1
−1
−1
−1
ρNn−1 Ḡn 2 S̄n Ḡn 2 ksp ≤ 1 + |ρ|Nn−1 kḠn 2 S̄n Ḡn 2 ksp , which yields the desired result.
Acknowledgements
The author is grateful to two anonymous referees for their careful reading and insightful comments which
have significantly improved a former version of this paper. The author also thanks the participants at ASC2013
Asymptotic Statistics and Computations, Statistics for Stochastic Processes and Analysis of High Frequency Data,
Statistique Asymptotique des Processus Stochastiques X, and Statistics for Stochastic Processes and Analysis of
High Frequency Data IV for valuable comments. This work was supported by CREST, JST.
22
References
Aı̈t-Sahalia, Y. and Jacod, J. (2014). High-frequency financial econometrics. Princeton University Press.
Alsayed, H. and McGroarty, F. (2014). Ultra-high-frequency algorithmic arbitrage across international index futures. J.
Forecast. 33, 391–408.
Becker, M. and Stark, E. L. (1978). On a hierarchy of quolynomial inequalities for tan x. Univerzitet u Beogradu. Publikacije
Elektrotehničkog Fakulteta. Serija Matematika i Fizika 620, 133–138.
Bibinger, M. (2011). Efficient covariance estimation for asynchronous noisy high-frequency data. Scand. J. Stat. 38, 23–45.
Bibinger, M., Hautsch, N., Malec, P. and Reiß, M. (2014). Estimating the quadratic covariation matrix from noisy observations:
local method of moments and efficiency. Ann. Statist. 42, 80–114.
Bibinger, M., Jirak, M. and Reiß, M. (2016). Volatility estimation under one-sided errors with applications to limit order
books. Ann. Appl. Probab. 26, 2754–2790.
Bibinger, M. and Reiß, M. (2014). Spectral estimation of covolatility from noisy observations using local weights. Scand. J.
Stat. 41, 23–50.
Bollen, N. P., O’Neill, M. J. and Whaley, R. E. (2017). Tail wags dog: Intraday price discovery in VIX markets. Journal of
Futures Markets 37, 431–451.
Cai, T. T., Munk, A. and Schmidt-Hieber, J. (2010). Sharp minimax estimation of the variance of Brownian motion corrupted
with Gaussian noise. Statist. Sinica 20, 1011–1024.
Chan, K. (1993). Imperfect information and cross-autocorrelation among stock prices. Journal of Finance 48, 1211–1230.
Chordia, T., Sarkar, A. and Subrahmanyam, A. (2011). Liquidity dynamics and cross-autocorrelations. Journal of Financial
and Quantitative Analysis 46, 709–736.
Dalalyan, A. and Yoshida, N. (2011). Second-order asymptotic expansion for a non-synchronous covariation estimator. Ann.
Inst. Henri Poincaré Probab. Stat. 47, 748–789.
Davies, R. B. (1973). Asymptotic inference in stationary Gaussian time-series. Adv. in Appl. Probab. 5, 469–497.
de Jong, F., Mahieu, R. and Schotman, P. (1998). Price discovery in the foreign exchange market: an empirical analysis of the
yen/dmark rate. Journal of International Money and Finance 17, 5–27.
de Jong, F. and Schotman, P. C. (2010). Price discovery in fragmented markets. Journal of Financial Econometrics 8, 1–28.
Drost, F. C., van den Akker, R. and Werker, B. J. (2009). The asymptotic structure of nearly unstable non-negative integervalued AR(1) models. Bernoulli 15, 297–324.
Glosten, L. R. (1987). Components of the bid-ask spread and the statistical properties of transaction prices. Journal of Finance
42, 1293–1307.
Gloter, A. and Jacod, J. (2001a). Diffusions with measurement errors. I. Local asymptotic normality. ESAIM Probab. Stat. 5,
225–242.
Gloter, A. and Jacod, J. (2001b). Diffusions with measurement errors. II. Optimal estimators. ESAIM Probab. Stat. 5, 243–260.
Gradshteyn, I. and Ryzhik, I. (2007). Table of integrals, series, and products. Elsevier Inc., seventh edn.
Hasbrouck, J. (1995). One security, many markets: Determining the contributions to price discovery. Journal of Finance 50,
1175–1199.
Hoffmann, M., Rosenbaum, M. and Yoshida, N. (2013). Estimation of the lead-lag parameter from non-synchronous data.
Bernoulli 19, 426–461.
Horn, R. A. and Johnson, C. R. (1985). Matrix analysis. Cambridge University Press.
23
Huth, N. and Abergel, F. (2014). High frequency lead/lag relationships — empirical facts. Journal of Empirical Finance 26,
41–58.
Iacus, S. M., Porro, G., Salini, S. and Siletti, E. (2015). Social networks, happiness and health: from sentiment analysis to a
multidimensional indicator of subjective well-being. Working paper, available at arXiv: http://arxiv.org/abs/1512.01569.
Ibragimov, I. and Has’minskii, R. (1981). Statistical estimation: Asymptotic theory. Springer.
Kalnina, I. and Linton, O. (2008). Estimating quadratic variation consistently in the presence of endogenous and diurnal
measurement error. J. Econometrics 147, 47–59.
Küchler, U. and Kutoyants, Y. A. (2000). Delay estimation for some stationary diffusion-type processes. Scand. J. Stat. 27,
405–414.
Kunitomo, N. and Sato, S. (2013). Separating Information Maximum Likelihood estimation of the integrated volatility and
covariance with micro-market noise. The North American Journal of Economics and Finance 26, 282–309.
Kutoyants, Y. A. (2004). Statistical inference for ergodie diffusion processes. Springer.
Le Cam, L. (1986). Asymptotic methods in statistical decision theory. Springer.
Li, Y. and Mykland, P. A. (2015). Rounding errors and volatility estimation. Journal of Financial Econometrics 13, 478–504.
Li, Y., Zhang, Z. and Li, Y. (2015). A unified approach to volatility estimation in the presence of both rounding and random
market microstructure noise. Working paper. Available at SSRN: http://ssrn.com/abstract=2707177.
Mathai, A. M. and Provost, S. B. (1992). Quadratic forms in random variables: Theory and applications. Marcel Dekker,
Inc.
Ogihara, T. (2014). Parametric inference for nonsynchronously observed diffusion processes in the presence of market microstructure noise. Working paper, available at arXiv: http://arxiv.org/abs/1412.8173.
Pfanzagl, J. (1994). Parametric statistical theory. Walter de Gruyter & Co.
Reiß, M. (2011). Asymptotic equivalence for inference on the volatility from noisy observations. Ann. Statist. 39, 772–802.
Robert, C. Y. and Rosenbaum, M. (2010). On the limiting spectral distribution of the covariance matrices of time-lagged
processes. J. Multivariate Anal. 101, 2434–2451.
Rosenbaum, M. (2009). Integrated volatility and round-off error. Bernoulli 15, 687–720.
Rubin, H. and Song, K.-S. (1995). Exact computation of the asymptotic efficiency of maximum likelihood estimators of a
discontinuous signal in a Gaussian white noise. Ann. Statist. 23, 732–739.
Sabel, T. and Schmidt-Hieber, J. (2014). Asymptotically efficient estimation of a scale parameter in Gaussian time series and
closed-form expressions for the Fisher information. Bernoulli 20, 747–774.
Sato, S. and Kunitomo, N. (2015). A robust estimation of integrated volatility under round-off errors, micro-market price
adjustments and noises. CIRJE Discussion Papers CIRJE-F-964, The University of Tokyo.
Strasser, H. (1985). Mathematical theory of statistics. Walter de Gruyter & Co.
Tsybakov, A. B. (2009). Introduction to nonparametric estimation. Springer.
van der Vaart, A. W. (1998). Asymptotic statistics. Cambridge University Press.
24
| 10 |
Robust Estimation via Robust Gradient Estimation
arXiv:1802.06485v1 [stat.ML] 19 Feb 2018
Adarsh Prasad‡
Arun Sai Suggala‡
Sivaraman Balakrishnan†
Pradeep Ravikumar‡
Machine Learning Department‡
Department of Statistics†
Carnegie Mellon University,
Pittsburgh, PA 15213.
Abstract
We provide a new computationally-efficient class of estimators for risk minimization.
We show that these estimators are robust for general statistical models: in the classical Huber ǫ-contamination model and in heavy-tailed settings. Our workhorse is a novel
robust variant of gradient descent, and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem.
We provide specific consequences of our theory for linear regression, logistic regression
and for estimation of the canonical parameters in an exponential family. These results
provide some of the first computationally tractable and provably robust estimators for
these canonical statistical models. Finally, we study the empirical performance of our
proposed methods on synthetic and real datasets, and find that our methods convincingly
outperform a variety of baselines.
1
Introduction
Robust estimation has a rich history in statistics with seminal contributions due to Box [4],
Tukey [42], Huber [24], Hampel [21] and several others. In the classical analysis of statistical
estimators, statistical guarantees are derived under strong model assumptions and in most
cases these guarantees hold only in the absence of arbitrary outliers, and other deviations from
the model assumptions. Strong model assumptions are rarely met in practice, and this has led
to the development of robust inferential procedures and various associated statistical concepts
such as the influence function, the breakdown point and the Huber ǫ-contamination model to
assess the robustness of estimators. Despite this progress however, the statistical methods with
the strongest robustness guarantees, for instance those based on non-convex M -estimators [24],
ℓ1 tournaments [13, 15, 45] and notions of depth [10, 20, 38], are computationally intractable.
In this paper, we present a class of estimators that are computationally tractable and have
strong robustness guarantees.
The estimators we propose are obtained by robustifying classical algorithms for risk minimization and are applicable to a wide-range of parametric statistical models for which parameter estimation can be cast within this framework. In contrast to classical work, for instance
on M-estimation, we do not attempt to replace the risk minimization objective with a robust
counterpart but instead focus on making canonical gradient-based optimization for the usual
risk minimization objective robust. We find that this shift in perspective enables a unified
treatment of different statistical models, leads to computationally tractable estimators and
leads to estimators with strong robustness guarantees.
In the risk minimization framework, the target parameter θ ∗ is defined as the solution to
an optimization problem:
(1)
θ ∗ = argmin R(θ) ≡ argmin Ez∼P L(θ; z) ,
θ∈Θ
θ∈Θ
1
where L is an appropriate loss-function, R is the population risk and Θ is the set of feasible
parameters. The goal of empirical risk minimization procedures is then to compute an approximate minimizer to the above program when given access to samples Dn = {z1 , . . . , zn }.
In this classical setting, a standard assumption that is imposed on Dn is that the data has no
outliers, and has no arbitrary deviations from model assumptions; i.e., it is typically assumed
that each of the zi ’s are independent and identically distributed according to the distribution
P . Many analyses of risk minimization further assume that P follows a sub-gaussian distribution, or has otherwise well-controlled tails in order to appropriately control the deviation
between the population risk and its empirical counterpart.
While our general results can be specialized to obtain results for a variety of models and
notions of robustness, we focus on developing estimators which are robust to two canonical
classes of deviations from the model assumptions:
1. Robustness to arbitrary outliers: In this setting, we focus on Huber’s ǫ-contamination
model, where rather than observe samples directly from P in (1) we instead observe samples drawn from Pǫ which for an arbitrary distribution Q is defined as:
Pǫ = (1 − ǫ)P + ǫQ.
The distribution Q allows for arbitrary outliers, which may correspond to gross corruptions or more subtle deviations from the assumed model. This model can be equivalently
viewed as model mis-specfication in the Total Variation (TV) metric.
2. Robustness to heavy-tails: In this setting, we are interested in developing estimators
under weak moment assumptions. We assume that the distribution P from which we
obtain samples only has finite low-order moments (see Section 5.4 for a precise characterization). Such heavy tailed distributions arise frequently in the analysis of financial data
and large-scale biological datasets (see for instance examples in [18, 47]). In contrast to
classical analyses of empirical risk minimization [43], in this setting the empirical risk
is not uniformly close to the population risk, and methods that directly minimize the
empirical risk perform poorly (see Section 4).
The goal of our work is to develop estimators which are computationally tractable and robust
in these models. Below, we provide an outline of our results and contributions.
1. Our first contribution is to introduce a new class of robust estimators for risk minimization (1). These estimators are based on robustly estimating gradients of the population
risk, and are computationally tractable by design. Building on prior work for robust
mean estimation in the Huber model [29], and in the heavy-tailed model [37], we design
robust gradient estimators for the population risk in (1). Our main insight is that in
this general risk minimization setting, the gradient of the population risk is simply a
multivariate mean vector, and we can leverage prior work on mean estimation to design robust gradient estimators. Through this we are able to significantly generalize the
applicability of mean estimation methods to general parametric models.
2. Our estimators are practical and our second contribution is to conduct extensive numerical experiments on real and simulated data with our proposed estimators. We provide
guidelines for tuning parameter selection and we compare the proposed estimators with
several competitive baselines [3, 19, 24]. Across different settings and according to various metrics we find that our estimators consistently perform well.
2
3. Finally, we provide rigorous robustness guarantees for the estimators we propose for
a variety of canonical statistical models including for linear regression, for logistic regression and for estimation of the canonical parameters in an exponential family. Our
contributions in this direction are two-fold: building on prior work [2] we provide a general result on the stability of gradient descent for risk minimization, showing that in
favorable cases gradient descent can be quite tolerant to inaccurate gradient estimates.
Subsequently, in concrete settings we provide a careful analysis of the quality of gradient
estimation afforded by our proposed gradient estimators and combine these results to
obtain guarantees on our final estimates.
Broadly, as we discuss in the sequel, our work suggests that estimators which are based on
robust gradient estimation offer a variety of practical, conceptual, statistical and computational
advantages for robust estimation.
1.1
Related Work
There is extensive work broadly in the area of robust statistics (see for instance [21] and
references therein), and we focus this section on some lines of work that are most related
to this paper. Classical work has already developed several estimators which are known to
be optimally robust for a variety of inferential tasks, including hypothesis testing [26], mean
estimation [38], general parametric estimation [11, 15, 45], and non-parametric estimation [13].
However, a major drawback of this classical line of work has been that most of the estimators
with strong robustness guarantees are computationally intractable [42], while the remaining
ones are heuristics which are not optimal [22].
Recently, there has been a flurry of research in theoretical computer science [9, 14, 16, 29,
32] designing provably robust estimators which are computationally tractable while achieving
near-optimal contamination dependence for special classes of problems. Some of the proposed
algorithms are not practical, as they rely on the ellipsoid algorithm or require solving semidefinite programs [9, 14, 16, 32], which can be slow for modern problem sizes. We build on
the work of Lai et al. [29], who study practical robust mean and covariance estimators for
distributions with appropriately controlled moments. A complementary line of recent research
[10, 20] has focused on providing minimax upper and lower bounds on the performance of
estimators under ǫ-contamination model, without the constraint of computational tractability.
In the ǫ-contaminated model, Q can be arbitrary, but there has been a lot of work in settings
where the contamination distribution is restricted in various ways. For example, recent work in
high-dimensional statistics (for instance [7, 12, 33, 34, 46]) has studied problems like principal
component analysis and linear regression under the assumption that the corruptions are evenly
spread throughout the dataset.
Another line of research has focused on designing robust estimators under the heavy tailed
distribution setting. These approaches relax the sub-gaussian or sub-exponential distributional
assumptions that are typically imposed on the target distribution P and allow it to be a heavy
tailed distribution. Most of the approaches in this category use robust mean estimators [8, 31]
that exhibit sub-gaussian type concentration around the true mean for distributions satisfying
mild moment assumptions. The median-of-means estimator [31] and Catoni’s mean estimator
[8] are two popular examples of such robust mean estimators.
Hsu and Sabato [23] use the median-of-means estimator to develop an alternative to ERM
under heavy tails. Although this estimator has strong theoretical guarantees and is computationally tractable, as noted by the authors in [23] it performs poorly in practice. In recent
work Brownlees et al. [5] replace empirical mean in the empirical risk minimization framework
(ERM) with Catoni’s mean estimator and perform risk minimization. The authors provide risk
bounds similar to the bounds one can achieve under sub-gaussian distributional assumptions.
However, their estimator is not easily computable and the authors do not provide a practical
algorithm to compute the estimator. Other recent works by Lerasle and Oliveira [31], Lugosi
and Mendelson [35] use similar ideas to derive estimators that perform well theoretically, in
heavy-tailed situations. However, these approaches involve optimization of complex objectives
for which no computationally tractable algorithms exist. We emphasize that in contrast to
our work, these works focus on robustly estimating the population risk which does not directly
lead to a computable estimator. We instead consider robustly estimating the gradient of the
population risk. When complemented with the gradient descent algorithm this leads naturally
to a computable estimator.
1.2
Outline
We conclude this section with a brief outline of the remainder of the paper. In Section 2, we
provide some background on risk minimization and the Huber and heavy-tailed noise models.
In Section 3, we introduce our class of estimators and provide concrete algorithms for the
ǫ-contaminated and heavy-tailed setting. In Section 4 we study the empirical performance of
our estimator on a variety of tasks and datasets. We complement our empirical results with
theoretical guarantees in Sections 5, 6 and 7. We defer technical details to the Appendix.
Finally, we conclude in Section 8 with a discussion of some open problems.
2
Background and Problem Setup
In this section we provide the necessary background on risk minimization, gradient descent
and introduce two notions of robustness that we consider in this work.
2.1
Risk Minimization and Parametric Estimation
In the setting of risk minimization, we assume that we have access to a differentiable loss function L : Θ × Z → R, where Θ is a convex subset of R^p. Let R(θ) = E_{z∼P}[L(θ; z)] be the population loss (risk), and let θ* be the minimizer of the population risk R(θ) over the set Θ:
$$\theta^* = \operatorname*{argmin}_{\theta \in \Theta} R(\theta). \tag{2}$$
The goal of risk minimization is to minimize the population risk R(θ), given only n samples
Dn = {zi }ni=1 . Whereas in parameter estimation we are interested in estimating the unknown
parameter θ ∗ from samples Dn .
In this work we assume that the population risk is convex to ensure tractable minimization.
Moreover, in order to ensure identifiability of the parameter θ ∗ , we impose two standard
regularity conditions [6] on the population risk. These properties are defined in terms of the
error of the first-order Taylor approximation of the population risk, i.e., defining τ(θ1, θ2) :=
R(θ1) − R(θ2) − ⟨∇R(θ2), θ1 − θ2⟩, we assume that
$$\frac{\tau_\ell}{2}\,\|\theta_1 - \theta_2\|_2^2 \;\le\; \tau(\theta_1, \theta_2) \;\le\; \frac{\tau_u}{2}\,\|\theta_1 - \theta_2\|_2^2, \tag{3}$$
where the parameters τℓ , τu > 0 denote the strong-convexity and smoothness parameters
respectively.
2.2
Gradient Descent and Empirical Risk Minimization
A starting point for the techniques we develop in this paper is the classical projected gradient
descent method for empirical risk minimization. Given data Dn = {zi }ni=1 , empirical risk
minimization (ERM) estimates the unknown parameter θ ∗ as the minimizer of the empirical
risk, i.e.
$$\hat{\theta}_n = \operatorname*{argmin}_{\theta \in \Theta} R_n(\theta) := \frac{1}{n}\sum_{i=1}^{n} L(\theta; z_i).$$
A popular method for solving this optimization problem is projected gradient descent, which generates a sequence of iterates {θ^t}_{t=0}^∞ by refining an initial parameter θ^0 ∈ Θ via the update
$$\theta^{t+1} = \mathcal{P}_{\Theta}\big(\theta^t - \eta \nabla R_n(\theta^t)\big),$$
where η > 0 is the step size and P_Θ is the projection operator onto Θ. Despite its simplicity, the
gradient descent method is not robust for general convex losses. Furthermore, the empirical
risk minimizer is a poor estimator of θ ∗ in the presence of outliers in the data: since ERM
depends on the sample mean, outliers in the data can affect the sample mean and lead ERM
to sub-optimal estimates. This observation has led to a large body of research that focuses
on developing robust M-estimators which have favorable statistical properties, but are often
computationally intractable.
In this work we take a different approach. Our work relies on an important observation
that the gradient of the population risk, E_{z∼P}[∇L(θ; z)], is simply a mean vector: one that
can be estimated robustly by leveraging recent advances in robust mean estimation [29, 37].
This leads to a general method for risk minimization based on robust gradient estimation (see
Algorithm 1).
2.3
Robust Estimation
One of the goals of this work is to develop general statistical estimation methods that are
robust in one of the following two models: Huber’s ǫ-contamination model or the heavy-tailed
model. We now briefly review these two notions of robustness.
1. Huber’s ǫ-contamination model:
Huber [25, 26] proposed the ǫ-contamination
model where we observe samples that are obtained from a mixture of the form
$$P_\epsilon = (1-\epsilon)P + \epsilon Q, \tag{4}$$
where P is the true distribution, ǫ is the expected fraction of outliers and Q is an
arbitrary outlier distribution. Given i.i.d. observations drawn from P_ǫ, our objective is
to estimate θ*, the minimizer of the population risk R(θ) = E_{z∼P}[L(θ; z)], robust to
the contamination from Q.
2. Heavy-tailed model: In the heavy-tailed model it is assumed that the data follows
a heavy-tailed distribution (i.e., P is heavy-tailed). While heavy-tailed distributions
have various possible characterizations, in this paper we consider a characterization via
gradients.
For a fixed θ ∈ Θ we let P_θ^g denote the multivariate distribution of the loss gradient
∇L(θ; z). We refer to a heavy-tailed distribution as one for which
P_θ^g has finite second moments for every θ ∈ Θ. As we illustrate in Section 7, in various
concrete examples this translates to relatively weak low-order moment assumptions on
the data distribution P .
Given n i.i.d observations from P , our objective is to estimate the minimizer of the
population risk. From a conceptual standpoint, the classical analysis of risk-minimization
which relies on uniform concentration of the empirical risk around the true risk, fails in
the heavy-tailed setting necessitating new estimators and analyses [5, 23, 35, 36].
3
Gradient Estimation
Gradient descent and its variants are at the heart of modern optimization and are well-studied
in the literature. Suppose we have access to the true distribution Pθ∗ . Then to minimize the
population risk R(θ), we can use projected gradient descent, where starting at some initial θ 0
and for an appropriately chosen step-size η, we update our estimate according to:
$$\theta^{t+1} \leftarrow \mathcal{P}_{\Theta}\big(\theta^t - \eta \nabla R(\theta^t)\big). \tag{5}$$
However, we only have access to n samples Dn = {zi }ni=1 . The key technical challenges are
then to estimate the gradient of R(θ) from samples Dn , and to ensure that an appropriate
modification of gradient descent is stable to the resulting estimation error.
To address the first challenge we observe that the gradient of the population
risk at any
point θ is the mean of a multivariate distribution, i.e. ∇R(θ) = Ez∼P ∇L(θ; z) . Accordingly,
the problem of gradient estimation can be reduced to a multivariate mean estimation problem,
where our goal is to robustly estimate the true mean ∇R(θ) from n samples {∇L(θ; zi )}ni=1 .
For a given sample-size n and confidence parameter δ ∈ (0, 1) we define a gradient estimator:
Definition 1. A function g(θ; Dn , δ) is a gradient estimator, if for functions α and β, with
probability at least 1 − δ, at any fixed θ ∈ Θ, the estimator satisfies the following inequality:
$$\|g(\theta; \mathcal{D}_n, \delta) - \nabla R(\theta)\|_2 \le \alpha(n, \delta)\,\|\theta - \theta^*\|_2 + \beta(n, \delta). \tag{6}$$
In subsequent sections, we will develop conditions under which we can obtain gradient estimators with strong control on the functions α(n, δ) and β(n, δ) in the Huber and heavy-tailed
models. Furthermore, by investigating the stability of gradient descent we will develop sufficient conditions on these functions such that gradient descent with an inaccurate gradient
estimator still returns an accurate estimate.
To minimize R(θ), we replace ∇R(θ) in equation (5) with the gradient estimator g(θ; Dn , δ)
and perform projected gradient descent. In order to avoid complex statistical dependency
issues that can arise in the analysis of gradient descent, for our theoretical results we consider
a sample-splitting variant of the algorithm where each iteration is performed on a fresh batch
of samples (see Algorithm 1). We further assume that the number of gradient iterations T is
specified a priori, and accordingly we define
$$\tilde{n} = \left\lfloor \frac{n}{T} \right\rfloor \quad \text{and} \quad \tilde{\delta} = \frac{\delta}{T}. \tag{7}$$
We discuss methods for selecting T , and the impact of sample-splitting in later sections. As
confirmed in our experiments (see Section 4), sample-splitting should be viewed as a device
introduced for theoretical convenience which can likely be eliminated via more complex uniform
arguments (see for instance the work [2]).
Algorithm 1 Projected Gradient Descent
function PGD({z_1, . . . , z_n}, Step Size η, Number of Iterations T, δ)
    Split the samples into T subsets {Z_t}_{t=1}^T of size ñ.
    for t = 0 to T − 1 do
        θ^{t+1} = argmin_{θ∈Θ} ‖θ − (θ^t − η g(θ^t; Z_t, δ̃))‖_2^2
    end for
end function
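For concreteness, a minimal Python/NumPy sketch of Algorithm 1 is given below. The `grad_estimator` argument stands in for any estimator satisfying Definition 1 (for example, Algorithms 2 or 3 below); the default projection assumes Θ = R^p, and all names here are our own illustrative choices rather than part of the pseudocode above.

```python
import numpy as np

def projected_gd(samples, grad_estimator, loss_grad, theta0, eta, T, delta,
                 project=lambda theta: theta):
    """Sketch of Algorithm 1: projected gradient descent with sample splitting.

    samples        : sequence of data points z_1, ..., z_n
    grad_estimator : callable (grads, delta_tilde) -> robust mean of the rows of grads
    loss_grad      : callable (theta, z) -> gradient of L(theta; z)
    project        : projection onto the feasible set Theta (identity by default)
    """
    n = len(samples)
    n_tilde, delta_tilde = n // T, delta / T            # as in equation (7)
    theta = np.asarray(theta0, dtype=float)
    for t in range(T):
        batch = samples[t * n_tilde:(t + 1) * n_tilde]  # fresh batch Z_t
        grads = np.stack([loss_grad(theta, z) for z in batch])
        g_hat = grad_estimator(grads, delta_tilde)      # robust gradient estimate
        theta = project(theta - eta * g_hat)            # gradient step + projection
    return theta
```

For the squared loss used later in Section 4.1.1, `loss_grad` would be `lambda theta, z: (z[0] @ theta - z[1]) * z[0]` for a pair z = (x, y).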
Next, we consider the two notions of robustness described in Section 2, and derive specific
gradient estimators for each of the models using the framework described above. Although
the major focus of this work is on Huber contamination and heavy-tailed models, our class of
estimators are more general and are not restricted to these two notions of robustness.
3.1
Gradient Estimation in Huber’s ǫ-contamination model
There has been a flurry of recent interest [9, 14, 16, 29, 32] in designing mean estimators
which, under the Huber contamination model, can robustly estimate the mean of a random
vector. While some of these results are focused on the case where the uncorrupted distribution
is Gaussian, or isotropic, we are more interested in robust mean oracles for more general
distributions. Lai et al. [29] proposed a robust mean estimator for general distributions,
satisfying weak moment assumptions, and we leverage the existence of such an estimator to
design a Huber gradient estimator g(θ; Dn , δ) which works in the Huber contamination model
(see Algorithm 2).
Now, we briefly describe the main idea behind Algorithm 2 and the mean estimator of Lai
et al. [29]. The algorithm builds upon the fact that in one-dimension, it is relatively easy to
estimate the gradient robustly. In higher-dimension, the crucial insight of Lai et al. [29] is that
the effect of the contamination Q on the mean of uncontaminated distribution P is effectively
one-dimensional provided we can accurately estimate the direction along which the mean is
shifted. In our context, if we can compute the gradient shift direction, i.e. the direction
of the difference between the sample (corrupted) mean gradient and the true (population)
gradient, then the true gradient can be estimated by using a robust 1D-mean algorithm along
the gradient-shift direction and a non-robust sample-gradient in the orthogonal direction since
the contamination has no effect on the gradient in this orthogonal direction. In order to
identify the gradient shift direction, we use a recursive Singular Value Decomposition (SVD)
based algorithm. In each stage of the recursion, we first remove gross-outliers via a truncation
algorithm (described in more detail in the Appendix) and subsequently identify two subspaces
using an SVD – a clean subspace where the contamination has a small effect on the mean
and another subspace where the contamination has a potentially larger effect. We use a
simple sample-mean estimator in the clean subspace and recurse our computation on the
other subspace. Building on the work of Lai et al. [29], in Lemma 1 and Appendix J we
provide a careful analysis of this gradient estimator.
Algorithm 2 Huber Gradient Estimator
function HuberGradientEstimator(Sample Gradients S = {∇L(θ; z_i)}_{i=1}^n, Corruption Level ǫ, Dimension p, δ)
    S̃ = HuberOutlierGradientTruncation(S, ǫ, p, δ).
    if p = 1 then
        return mean(S̃)
    else
        Let Σ_S̃ be the covariance matrix of S̃.
        Let V be the span of the top p/2 principal components of Σ_S̃ and W be its complement.
        Set S_1 := P_V(S̃), where P_V is the projection operation onto V.
        Let μ̂_V := HuberGradientEstimator(S_1, ǫ, p/2, δ).
        Let μ̂_W := mean(P_W S̃).
        Let μ̂ ∈ R^p be such that P_V(μ̂) = μ̂_V and P_W(μ̂) = μ̂_W.
        return μ̂.
    end if
end function
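The recursion in Algorithm 2 is compact enough to sketch directly. The snippet below is a simplified, unoptimized rendering: in particular, the outlier-truncation step (dropping the ǫ-fraction of points farthest from the coordinate-wise median) is our own crude stand-in for the HuberOutlierGradientTruncation subroutine of Lai et al. [29], so it should be read as an illustration of the recursive SVD structure rather than a faithful implementation.

```python
import numpy as np

def huber_mean(S, eps, delta=0.1):
    """Simplified sketch of Algorithm 2: recursive SVD-based robust mean of the rows of S."""
    S = np.atleast_2d(np.asarray(S, dtype=float))
    n, p = S.shape
    # crude truncation step: drop the eps-fraction of points farthest from the
    # coordinate-wise median (a stand-in for HuberOutlierGradientTruncation)
    if 0.0 < eps < 1.0 and n > 2:
        dist = np.linalg.norm(S - np.median(S, axis=0), axis=1)
        S = S[dist <= np.quantile(dist, 1.0 - eps)]
    if p == 1:
        return S.mean(axis=0)
    # split R^p into the span V of the top p//2 principal components (where the
    # contamination can hide) and its orthogonal complement W (the "clean" part)
    cov = np.cov(S, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    V, W = eigvecs[:, order[: p // 2]], eigvecs[:, order[p // 2:]]
    mu_V = huber_mean(S @ V, eps, delta)   # recurse on the projected samples
    mu_W = (S @ W).mean(axis=0)            # plain sample mean on the complement
    return V @ mu_V + W @ mu_W             # reassemble the estimate in R^p
```

Plugging `lambda G, d: huber_mean(G, eps)` into the sketch of Algorithm 1 above yields the Huber-model variant of our estimator.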
3.2
Gradient Estimation in the Heavy-Tailed model
To design gradient estimators for the heavy-tailed model, we leverage recent work on designing
robust mean estimators in this setting. These robust mean estimators build on the classical
work of Alon et al. [1], Nemirovski and Yudin [39] and Jerrum et al. [27] on the so-called
median-of-means estimator. For the problem of one-dimensional mean estimation, Catoni
[8], Lerasle and Oliveira [31] propose robust mean estimators that achieve exponential concentration around the true mean for any distribution with bounded second moment. In this work
we require mean estimators for multivariate distributions. Several recent works ([23, 36, 37])
extend the median-of-means estimator to general metric spaces. In this paper we use the
geometric median-of-means estimator (Gmom), which was originally proposed and analyzed
by Minsker [37], to design the gradient estimator g(θ; Dn , δ).
The basic idea behind the Gmom estimator is to first split the samples into non-overlapping
subsamples and estimate the sample mean of each of the subsamples. Then the Gmom estimator is given by the median-of-means of the subsamples. Formally, let {x_1, . . . , x_n} ⊂ R be n i.i.d.
random variables sampled from a distribution P. Then the Gmom estimator for estimating
the mean of P can be described as follows. Partition the n samples into b blocks B_1, . . . , B_b, each of size ⌊n/b⌋. Let {μ̂_1, . . . , μ̂_b} be the sample means of the blocks, where μ̂_i = (1/|B_i|) Σ_{x_j ∈ B_i} x_j.
Then the Gmom estimator is given by median{μ̂_1, . . . , μ̂_b}. In high dimensions, where different
notions of the median have been considered, Minsker [37] uses the geometric median:
$$\hat{\mu} = \operatorname*{argmin}_{\mu} \sum_{i=1}^{b} \|\mu - \hat{\mu}_i\|_2.$$
Algorithm 3 presents the gradient estimator g(θ; Dn , δ) obtained using Gmom as the mean
estimator.
Algorithm 3 Heavy Tailed Gradient Estimator
function HeavyTailedGradientEstimator(Sample Gradients S = {∇L(θ; z_i)}_{i=1}^n, δ)
    Define the number of buckets b = 1 + ⌊3.5 log(1/δ)⌋.
    Partition S into b blocks B_1, . . . , B_b, each of size ⌊n/b⌋.
    for i = 1, . . . , b do
        μ̂_i = (1/|B_i|) Σ_{s ∈ B_i} s.
    end for
    Let μ̂ = argmin_μ Σ_{i=1}^b ‖μ − μ̂_i‖_2.
    return μ̂.
end function
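A direct Python rendering of Algorithm 3 is given below; the geometric median is computed with Weiszfeld's algorithm, which is our choice of solver (the algorithm itself only specifies the argmin).

```python
import numpy as np

def gmom_mean(S, delta=0.1, max_iter=100, tol=1e-7):
    """Sketch of Algorithm 3: geometric median-of-means of the rows of S."""
    S = np.atleast_2d(np.asarray(S, dtype=float))
    n = len(S)
    b = min(n, 1 + int(np.floor(3.5 * np.log(1.0 / delta))))   # number of buckets
    mus = np.stack([blk.mean(axis=0) for blk in np.array_split(S, b)])  # block means
    mu = mus.mean(axis=0)                                      # initial guess
    for _ in range(max_iter):                                  # Weiszfeld iterations
        d = np.linalg.norm(mus - mu, axis=1)
        if np.any(d < tol):                                    # hit a block mean exactly
            return mus[np.argmin(d)]
        w = 1.0 / d
        mu_new = (w[:, None] * mus).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```

Used as the `grad_estimator` in the sketch of Algorithm 1, this gives the GMOM estimator studied in Section 4.2.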
4
Experiments
In this section we demonstrate our proposed methods for the Huber contamination and heavy-tailed models, on a variety of simulated and real data examples.
4.1
Huber Contamination
We first consider the Huber contamination model and demonstrate the practical utility of
gradient-descent based robust estimator described in Algorithms 1 and 2.
4.1.1
Synthetic Experiments: Linear Regression
In linear regression we observe paired samples {(x1 , y1 ), . . . (xn , yn )}, where each (xi , yi ) ∈
Rp × R. We assume that the (x, y) pairs sampled from the true distribution P are linked via
a linear model:
y = hx, θ ∗ i + w,
(8)
where w is drawn from a zero-mean normal distribution with variance σ 2 (w ∼ N (0, σ 2 )). We
use the squared loss as our loss function
$$L(\theta; (x, y)) = \frac{1}{2}\big(y - \langle x, \theta\rangle\big)^2.$$
Note that the true parameter θ ∗ is the minimizer of the resulting population risk R(θ). We
now describe the experiment setup, the data model and present the results.
Setup We fix the contamination level ǫ = 0.1 and σ 2 = 0.1. Next, we generate (1 − ǫ)n
clean covariates from x ∼ N (0, Ip ), the corresponding clean responses using y = hx, θ ∗ i + w
where θ ∗ = [1, . . . , 1]T and w ∼ N (0, σ 2 ). We simulate an outlier distribution by drawing
the covariates from N (0, p2 Ip ), and setting the responses to 0. The total number of samples
is set to n = 10p²/ǫ, i.e., the sample size increases with the dimension. This scaling is used to
ensure that the statistical (minimax) error, in the absence of any contamination, is roughly
0.001. An optimally robust method should have error close to 0.1 (roughly equal to corruption
level), which ours does (see Figure 1).
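The data-generating process above can be summarized in a few lines of Python; the sample-size formula n = 10p²/ǫ reflects our reading of the setup, and small values of p should be used when running this sketch since n grows quickly.

```python
import numpy as np

def contaminated_regression_data(p, eps=0.1, sigma2=0.1, seed=0):
    """Epsilon-contaminated linear regression data as described in the setup above."""
    rng = np.random.default_rng(seed)
    n = int(10 * p ** 2 / eps)                  # sample size grows with the dimension
    theta_star = np.ones(p)
    n_clean = int(round((1 - eps) * n))
    X_clean = rng.normal(size=(n_clean, p))     # clean covariates ~ N(0, I_p)
    y_clean = X_clean @ theta_star + rng.normal(scale=np.sqrt(sigma2), size=n_clean)
    X_out = rng.normal(scale=p, size=(n - n_clean, p))   # outlier covariates ~ N(0, p^2 I_p)
    y_out = np.zeros(n - n_clean)               # outlier responses set to 0
    X = np.vstack([X_clean, X_out])
    y = np.concatenate([y_clean, y_out])
    perm = rng.permutation(n)
    return X[perm], y[perm], theta_star
```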
[Figure 1: Robust Linear Regression. (a) Parameter error ‖θ̂ − θ*‖_2 vs. p for ǫ = 0.1, comparing OLS, RobustGD, TORRENT, Huber, Plugin and RANSAC; (b) parameter error vs. ǫ; (c) log(‖θ^t − θ*‖_2) vs. iteration t for different ǫ.]
Metric We measure the parameter error in ℓ2 -norm. We also study the convergence properties of our proposed method, for different contamination levels ǫ. We use code provided by
Lai et al. [29] to implement our gradient estimator.
Baselines We use OLS, TORRENT [3], the Huber loss, RANSAC and the plugin estimator as our baselines. TORRENT is an iterative hard-thresholding based alternating minimization algorithm: in one step it computes an active set of examples by keeping only the (1 − ǫ)n samples with the smallest absolute residuals r = y − ⟨x, θ^t⟩, and in the other step it updates the current estimate by solving OLS on the active set. Bhatia et al. [3] showed the superiority of TORRENT over other convex-penalty based outlier techniques; hence, we do not compare against those methods. The plugin estimator is implemented using Algorithm 2 to estimate both the mean vector (1/n) Σ_{i=1}^n y_i x_i and the covariance matrix (1/n) Σ_{i=1}^n x_i x_iᵀ.
Results We summarize our main findings here.
1. All estimators except our proposed algorithm perform poorly (Figure 1(a)). Note that the
TORRENT algorithm has strong guarantees when only the response y is corrupted but
performs poorly in the Huber contamination model where both x and y may be contaminated. The error for the robust plugin estimator increases with dimension. We investigate
this theoretically in Section 6.1, where we find that the error of the plugin estimator grows
with the norm of θ*. In our experiment we choose ‖θ*‖_2 = √p, and thus Figure 1(a)
corroborates Corollary 3 in Section 6.1.
2. In Figure 1(b) we find that the parameter error kθb − θ ∗ k2 increases linearly with the contamination rate ǫ and we study this further in Section 6.1.
3. Finally, Figure 1(c) shows that the convergence rate decreases with increasing contamination ǫ and after ǫ is high enough, the algorithm remains stuck at θ0 , corroborating Lemma 8
(in the Appendix).
Next, we study the performance of our proposed method in the context of classification.
4.1.2
Synthetic Experiments: Logistic Regression
In logistic regression we observe paired samples, {(x1 , y1 ), . . . , (xn , yn )} where each (xi , yi ) ∈
Rp × R. We assume that the (x, y) pairs sampled from the true distribution P are linked via
a linear model:
$$y = \begin{cases} 1 & \text{with probability } \dfrac{1}{1+\exp(-\langle x, \theta^*\rangle)}, \\ 0 & \text{otherwise.} \end{cases} \tag{9}$$
In this case, we use the negative conditional log-likelihood as our loss function, i.e.
L(θ; (x, y)) = −yhx, θi + log(1 + exp(hx, θi)).
Setup We simulate a linearly separable classification problem, where the clean covariates are
sampled from N(0, I_p), and the corresponding clean responses are computed as y = sign(⟨x, θ*⟩), where θ* = [1/√p, . . . , 1/√p]ᵀ. We simulate the outlier distribution by adding asymmetric noise, i.e., we flip the labels of one class and increase the variance of the corresponding covariates by multiplying them by p². The total number of samples is set to 10p/ǫ.
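A short sketch of this contamination scheme (our reading of the setup: an ǫ-fraction of the samples in one class has its labels flipped and its covariates inflated):

```python
import numpy as np

def contaminated_classification_data(p, eps=0.1, seed=0):
    """Linearly separable data with asymmetric label-flipping contamination."""
    rng = np.random.default_rng(seed)
    n = int(10 * p / eps)
    theta_star = np.ones(p) / np.sqrt(p)
    X = rng.normal(size=(n, p))
    y = (X @ theta_star > 0).astype(float)      # y = sign(<x, theta*>) coded in {0, 1}
    flip = np.where(y == 1)[0][: int(eps * n)]  # contaminate an eps-fraction of one class
    y[flip] = 0.0                               # flip the labels ...
    X[flip] *= p ** 2                           # ... and blow up the corresponding covariates
    return X, y, theta_star
```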
Metric We measure the 0-1 classification error on a separate (clean) test set. We study how
the 0-1 error changes with p and ǫ, as well as the convergence properties (parameter error) of our
proposed method for different contamination levels ǫ.
Baselines We use the logistic regression MLE and the linear Support Vector Machine (SVM)
as our baselines.
Results Figures 2(b) and 2(c) show qualitatively similar results to the linear regression
setting, i.e. that the error of our proposed estimator degrades gracefully (and grows linearly)
with the contamination level ǫ and that the gradient descent iterates converge linearly. In
Figure 2(a) we observe that both the SVM and logistic regression MLE perform poorly. The
logistic regression MLE completely flips the labels and has a 0-1 error close to 1, whereas the
linear SVM outputs a random hyperplane classifier that flips the label for roughly half of the
dataset.
[Figure 2: Robust Logistic Regression. (a) 0-1 error vs. p at ǫ = 0.1, comparing RobustGD, logistic regression and SVM; (b) 0-1 error vs. ǫ; (c) log(‖θ^t − θ*‖_2) vs. iteration t for different ǫ.]
4.1.3
Robust Face Reconstruction
Setup In this experiment, we show the efficacy of our algorithm by attempting to reconstruct
face images that have been corrupted with heavy occlusion, where the occluding pixels play
the role of the outliers. We use data from the Cropped Yale Dataset [30]. The dataset
contains 38 subjects, and each image has 192×168 pixels. Following the methodology of Wang
et al. [44], we choose 8 face images per subject, taken under mild illumination conditions, and
compute an eigenface set with 20 eigenfaces. Then, given a new corrupted face image of a
subject, the goal is to get the best reconstruction/approximation of the true face. To remove
the scaling effects, we normalized all images to [0, 1] range. One image per person was used to
test reconstruction. Occlusions were simulated by randomly placing 10 blocks of size 30 × 30.
We repeated this 10 times for each test image. Note that in this example, we use a linear
regression model as the uncontaminated statistical model, which is almost certainly not an exact match for the unknown ground truth distribution.

Table 1: Fitting to original image error.

  Method           Mean RMSE
  Best Possible    0.05
  Proposed         0.09
  TORRENT          0.175
  OLS              0.21
  SCRRR            0.13
Despite this model misspecification, as our results show, robust mean based gradient algorithms do well.
Metric We use Root Mean Square Error (RMSE) between the original and reconstructed
image to evaluate the performance of the algorithms. We also compute the best possible
reconstruction of the original face image by using the 20 eigenfaces.
Methods We use TORRENT and OLS as baselines. Wang et al. [44] implemented popular robust estimators such as RANSAC and the Huber loss and showed their poor performance. Wang
et al. [44] then proposed an alternative robust regression algorithm called Self Scaled Regularized
Robust Regression (SCRRR), and showed its equivalence to an ℓ1-penalized method. We also
compare against the best possible RMSE obtained by reconstructing the un-occluded image
using the eigenfaces.
Results Table 1 shows that the mean RMSE is best for our proposed gradient descent
based method and that the recovered images are in most cases closer to the un-occluded
original image (Figure 3). Figure 3(c) shows a case where none of the methods succeed in
reconstruction.
(a) Successful Reconstruction
(b) Successful Reconstruction
(c) Failed Reconstruction
Figure 3: Robust Face recovery results: Top; in order from L to R: original image, occluded
image, best possible recovery with given basis. Bottom; in order from L to R: Reconstructions
using our proposed algorithm, TORRENT and ordinary least squares (OLS).
4.2
Heavy-tailed Estimation
We now consider the heavy-tailed model and present experimental results on synthetic and
real-world datasets comparing the gradient descent based robust estimator described in Algorithms 1 and 3 (which we call GMOM) with ERM and several other recent proposals. In these
experiments we focus on the problem of linear regression which is described in Section 4.1 and
work with heavy-tailed noise distributions.
4.2.1
Synthetic Experiments: Simple Linear Regression
Setup. The covariate x ∈ Rp is sampled from a zero-mean isotropic gaussian distribution.
We set each entry of θ* to 1/√p. The noise w is sampled from a Pareto distribution with mean
zero, variance σ 2 and tail parameter β. The tail parameter β determines the moments of the
Pareto random variable. More specifically, the moment of order k exists only if k < β; hence, the
smaller the β, the more heavy-tailed the distribution. In this setup, we keep the dimension p
fixed to 128 and vary n, σ and β. We always maintain the sample-size n to be at least 2p.
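The paper does not spell out the exact parametrization of the mean-zero Pareto noise; one simple construction (our own) is to center and rescale a standard Pareto (Lomax) draw, which requires β > 2 for the variance to exist:

```python
import numpy as np

def pareto_noise(n, sigma2=1.0, beta=3.0, seed=0):
    """Mean-zero noise with variance sigma2 whose k-th moment exists only for k < beta."""
    rng = np.random.default_rng(seed)
    x = rng.pareto(beta, size=n)                     # Lomax(beta) samples on [0, inf)
    mean = 1.0 / (beta - 1.0)                        # Lomax mean (beta > 1)
    var = beta / ((beta - 1.0) ** 2 * (beta - 2.0))  # Lomax variance (beta > 2)
    return (x - mean) * np.sqrt(sigma2 / var)        # center and match the target variance
```

The regression responses are then generated as `y = X @ theta_star + pareto_noise(n, sigma2, beta)`.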
Methods. We use ERM as our baseline and compare it with GMOM. Since we are always
in the low-dimensional (n ≥ p) setting, the solution to ERM has a closed form expression and
is simply the OLS solution. We also study ERM-GD, which performs a gradient descent on
ERM and is equivalent to using empirical mean as the gradient oracle in our framework. We
also compare against the robust estimation techniques of Hsu and Sabato [23] and Duchi and
Namkoong [17]. In our experiments, all the iterative techniques are run until convergence.
Hyper Parameter Selection. The GMOM estimator depends on the confidence parameter
δ ∈ (0, 1), which needs to be tuned. In our experiments, we noticed that the performance of
GMOM varies very little when δ is selected from a reasonable range that is not too close to 0
(see Figure 5 and the discussion below). So we set δ = 0.1 in all our simulations.
Metrics. In our experiments, we vary σ and β, the parameters of Pareto distribution, which
can change the minimal risk R(θ ∗ ). So to compare various approaches across parameter values,
we use a scaled version of the excess risk, which we define as R(θ̂)/R(θ*) − 1.0, where θ̂ is
the estimator.
To compare the performance of two estimators θb1 , θb2 , we define the notion of relative
efficiency:
$$\mathrm{RelEff}(\hat\theta_1, \hat\theta_2) = \frac{\big(R(\hat\theta_2) - R(\theta^*)\big) - \big(R(\hat\theta_1) - R(\theta^*)\big)}{R(\hat\theta_1) - R(\theta^*)}.$$
Roughly, this corresponds to the percentage improvement in the excess risk obtained using θ̂_1 over θ̂_2. Whenever RelEff(θ̂_1, θ̂_2) > 0, θ̂_1 has a lower risk, and the higher the value, the greater
the fractional improvement.
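Both evaluation metrics translate directly into code; here `R` is any callable returning the population risk (for the linear model above this is ½(θ − θ*)ᵀΣ(θ − θ*) plus the irreducible term ½σ², so that R(θ*) > 0):

```python
def scaled_excess_risk(R, theta_hat, theta_star):
    """Scaled excess risk R(theta_hat) / R(theta_star) - 1.0 reported in our plots."""
    return R(theta_hat) / R(theta_star) - 1.0

def rel_eff(R, theta_1, theta_2, theta_star):
    """Relative efficiency RelEff(theta_1, theta_2) as defined above."""
    excess_1 = R(theta_1) - R(theta_star)
    excess_2 = R(theta_2) - R(theta_star)
    return (excess_2 - excess_1) / excess_1
```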
Results. To reduce the variance in the plots presented here and in the next section, we
averaged results over 25 repetitions. Figure 4 shows the benefits of using GMOM over ERM.
In Figure 4(a) we plot the excess risk of ERM, ERM-GD and GMOM against the number
of Iterations. We see that upon convergence GMOM has a much lower population risk than
ERM. As expected, ERM-GD converges to ERM. However, the population risk of ERM-GD
in the first few iterations is much lower than the risk of ERM, suggesting early stopping.
Next, in Figure 4(b) we plot the scaled excess risk for ERM and GMOM as n/p increases. We
see that GMOM is always better than ERM, even when the number of samples is 12 times
the dimension p. In Figure 4(c), we plot the relative efficiency of GMOM and ERM against
σ. This shows that the percentage improvement in the excess risk by GMOM decreases as
the noise level σ decreases. This behavior is expected because in the noiseless setting both
methods would have a similar behavior. We do a similar study to see the relative efficiency
against the heavy-tailedness of the noise distribution. As noted before, as β is increased more
moments exist for the underlying distribution. Figure 4(d) shows that as the noise distribution
becomes more heavy-tailed, there is more benefit in using GMOM over ERM.
[Figure 4: Linear Regression: Performance comparison of GMOM and ERM. (a) Population risk vs. iterations for ERM, ERM (GD) and GMOM (GD); (b) excess risk / true risk vs. n/p; (c) relative efficiency RelEff(GMOM, ERM) vs. σ; (d) relative efficiency vs. β + 1.]
Dependence on Confidence Level. Figure 5(a) shows the performance of the GMOM estimator for various values of δ. It can be seen that the choice of δ has very little effect on the
performance of the estimator. However, we notice that for small values of δ the performance
of the GMOM degrades. In practice, one can use either cross validation or a validation set for
choosing δ.
5
Theoretical Preliminaries
In this section we develop some theoretical preliminaries. We begin with a description of
some canonical examples of risk minimization in Section 5.1. Next we develop a general
[Figure 5: Linear Regression: Dependence on Confidence Level δ. Panel (a): population risk vs. iterations for ERM and GMOM with δ ∈ {0.01, 0.05, 0.1, 0.2}.]
theory on convergence of projected gradient descent in Section 5.2. We analyze the gradient
estimators defined in Algorithms 2 and 3 in Sections 5.3 and 5.4 respectively. Finally in
Sections 6,7 we present consequences of our general theory for the canonical examples, under
Huber contamination and heavy-tailed models.
For some of our examples, we will assume certain mild moment conditions. Concretely, for
a random vector x ∈ Rp , let µ = E[x] and Σ be the covariance matrix. Then x has bounded
2kth moments if there exists a constant C2k such that for every unit vector v we have that
$$\mathbb{E}\big[\langle x - \mu, v\rangle^{2k}\big] \le C_{2k}\,\Big(\mathbb{E}\big[\langle x - \mu, v\rangle^{2}\big]\Big)^{k}. \tag{10}$$
5.1
Illustrative Examples
The framework of risk minimization is a central paradigm of statistical estimation and is widely
applicable. In this section, we provide illustrative examples that fall under this framework.
5.1.1
Linear Regression
Here we observe paired samples {(x1 , y1 ), . . . (xn , yn )}, where each (xi , yi ) ∈ Rp ×R. We assume
that the (x, y) pairs sampled from the true distribution P are linked via a linear model:
y = hx, θ ∗ i + w,
(11)
where w is drawn from a zero-mean distribution such as normal distribution with variance σ 2
(N (0, σ 2 )) or a more heavy-tailed distribution such as student-t or Pareto distribution. We
suppose that under P the covariates x ∈ Rp , have mean 0, and covariance Σ.
For this setting we use the squared loss as our loss function, which induces the following
population risk:
$$L(\theta; (x, y)) = \frac{1}{2}\big(y - \langle x, \theta\rangle\big)^2, \quad \text{and} \quad R(\theta) = \frac{1}{2}(\theta - \theta^*)^{T}\Sigma(\theta - \theta^*).$$
Note that the true parameter θ* is the minimizer of the population risk R(θ). The strong-convexity and smoothness assumptions from (3) in this setting require that τℓ ≤ λ_min(Σ) ≤ λ_max(Σ) ≤ τu.
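As a quick numerical sanity check of these expressions, the per-sample gradient of the squared loss is ∇L(θ; (x, y)) = (⟨x, θ⟩ − y)x, whose population mean is ∇R(θ) = Σ(θ − θ*); the Monte Carlo sketch below (our own illustration) verifies this:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, sigma = 5, 200_000, 0.3
theta_star, theta = rng.normal(size=p), rng.normal(size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p                                   # an arbitrary covariance matrix
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
y = X @ theta_star + sigma * rng.normal(size=n)
grad_hat = ((X @ theta - y)[:, None] * X).mean(axis=0)        # empirical mean of the gradients
print(np.abs(grad_hat - Sigma @ (theta - theta_star)).max())  # small Monte Carlo error
```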
5.1.2
Generalized Linear Models
Here we observe paired samples {(x1 , y1 ), . . . (xn , yn )}, where each (xi , yi ) ∈ Rp × Y. We
suppose that the (x, y) pairs sampled from the true distribution P are linked via a linear model
such that when conditioned on the covariates x, the response variable has the distribution:
$$P(y \mid x) \propto \exp\left(\frac{y\langle x, \theta^*\rangle - \Phi(\langle x, \theta^*\rangle)}{c(\sigma)}\right). \tag{12}$$
Here c(σ) is a fixed and known scale parameter and Φ : R 7→ R is the link function. We focus
on the random design setting where the covariates x ∈ Rp , have mean 0, and covariance Σ.
We use the negative conditional log-likelihood as our loss function, i.e.
$$L(\theta; (x, y)) = -y\langle x, \theta\rangle + \Phi(\langle x, \theta\rangle). \tag{13}$$
Once again, the true parameter θ ∗ is the minimizer of the resulting population risk R(θ). It is
easy to see that Linear Regression with Gaussian Noise lies in the family of generalized linear
models. We now instantiate GLMs for logistic regression.
Logistic Regression In this case the (x, y) pairs are linked as:
$$y = \begin{cases} 1 & \text{with probability } \dfrac{1}{1+\exp(-\langle x, \theta^*\rangle)}, \\ 0 & \text{otherwise.} \end{cases} \tag{14}$$
This corresponds to setting Φ(t) = log(1 + exp(t)) and c(σ) = 1 in (12). The Hessian of the population risk is given by
$$\nabla^2 R(\theta) = \mathbb{E}\left[\frac{\exp\langle x, \theta\rangle}{(1 + \exp\langle x, \theta\rangle)^2}\, x x^{T}\right].$$
Note that as θ diverges, the minimum eigenvalue of the hessian approaches 0 and the loss is
no longer strongly convex. To prevent this, in this case we take the parameter space Θ to be
bounded.
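For the logistic link the per-sample gradient fed to our gradient estimators has the simple closed form ∇L(θ; (x, y)) = (σ(⟨x, θ⟩) − y)x with σ(t) = 1/(1 + e^{−t}); a minimal sketch:

```python
import numpy as np

def logistic_loss_grad(theta, x, y):
    """Gradient of L(theta; (x, y)) = -y<x, theta> + log(1 + exp(<x, theta>))."""
    sigma = 1.0 / (1.0 + np.exp(-float(np.dot(x, theta))))   # Phi'(<x, theta>)
    return (sigma - y) * np.asarray(x, dtype=float)
```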
5.1.3
Exponential Families and Canonical Parameters
Finally we consider the case where the true distribution P is in exponential family with
canonical parameters θ ∗ ∈ Rp , and a vector of sufficient statistics obtained from the map
φ : Z 7→ Rp . Note that while the linear and logistic regression models are indeed in an
exponential family, our interest in those cases was not in the canonical parameters.
In more detail, we can write the true distribution P in this case as
P (z) = h(z) exp (hφ(z), θ ∗ i − A(θ ∗ )) ,
where h(z) is an arbitrary nuisance function. The negative log-likelihood gives us the following
loss function:
$$L(\theta; z) = -\langle \phi(z), \theta\rangle + A(\theta). \tag{15}$$
The strong-convexity and smoothness assumptions require that there are constants τℓ , τu such
that τℓ ≤ ∇2 A(θ) ≤ τu , for θ ∈ Θ.
5.2
Stability of Gradient Descent
In this section we develop a general theory for the convergence of the projected gradient
descent described in Algorithm 1. Note that our gradient estimators could be biased and are
not guaranteed to be consistent estimators of the true gradient ∇R(θ). This is especially true
in the Huber contamination model where it is impossible to obtain consistent estimators of the
gradient of the risk because of the non-vanishing bias caused by the contaminated samples.
Hence, we turn our attention to understanding the behavior of projected gradient descent with
a biased, inexact, gradient estimator of the form in (6). Before we present our main result, we
define the notion of stability of a gradient estimator, which plays a key role in the convergence
of gradient descent.
Definition 2 (Stability). A gradient estimator is stable for a given risk function R : Θ → R if for some φ ∈ [0, τℓ),
$$\alpha(\tilde{n}, \tilde{\delta}) < \tau_\ell - \varphi.$$
We denote by κ the following contraction parameter:
$$\kappa := \sqrt{1 - \frac{2\eta\tau_\ell\tau_u}{\tau_\ell + \tau_u}} + \eta\,\alpha(\tilde{n}, \tilde{\delta}),$$
and note that κ < 1. With these definitions in place we state our main result on the stability
of gradient descent:
Theorem 1. Suppose that the gradient estimator satisfies the condition in (6) and is stable for the risk function R : Θ → R. Then Algorithm 1, initialized at θ^0 with step-size η ≤ 2/(τℓ + τu), returns iterates {θ̂^t}_{t=1}^T such that, with probability at least 1 − δ, for the contraction parameter κ above we have
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t \|\theta^0 - \theta^*\|_2 + \frac{1}{1-\kappa}\,\beta(\tilde{n}, \tilde{\delta}). \tag{16}$$
We defer a proof of this result to the Appendix. Theorem 1 provides a general result for risk
minimization and parameter estimation. In any concrete instantiation for a given gradient
estimator, risk pair, we first study the distribution of the gradient of the risk to estimate the
e β(e
e and then apply Theorem 1.
error suffered by the gradient estimator (α(e
n, δ),
n, δ))
For the bound (16), the first term is decreasing in T , while the second term is increasing
in T . This suggests that for a given n and δ, we need to run just enough iterations for the
first term to be bounded by the second. Hence, we can fix the number of iterations T ∗ as the
smallest positive integer such that:
$$T \ge \log_{1/\kappa}\left(\frac{(1-\kappa)\,\|\theta^0 - \theta^*\|_2}{\beta(\tilde{n}, \tilde{\delta})}\right).$$
Since we obtain linear convergence, i.e. κ < 1, typically a logarithmic number of iterations
suffice to obtain an accurate estimate.
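In code, this stopping rule amounts to a one-line computation, assuming κ and β(ñ, δ̃) are known or have been estimated (an assumption on our part; in the experiments T is simply treated as a tuning parameter):

```python
import math

def num_iterations(kappa, beta, init_dist):
    """Smallest T with kappa^T * init_dist <= beta / (1 - kappa), per the criterion above."""
    if init_dist <= beta / (1.0 - kappa):
        return 1                                   # already within the noise floor
    return math.ceil(math.log((1.0 - kappa) * init_dist / beta) / math.log(1.0 / kappa))
```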
5.3
General Analysis of Algorithm 2
We now analyze the gradient estimator described in Algorithm 2 for Huber contamination
model and study the error suffered by it. As stated before, Algorithm 2 uses the robust mean
estimator of Lai et al. [29]. Hence, while our proof strategy mimics that of Lai et al. [29], we
present a different result which is obtained by a more careful non-asymptotic analysis of the
algorithm.
We define
$$\gamma(n, p, \delta, \epsilon) := \left(\frac{p\log p\,\log(n/(p\delta))}{n}\right)^{3/8} + \left(\frac{\epsilon\, p^2 \log p\,\log\!\big(\tfrac{p\log p}{\delta}\big)}{n}\right)^{1/4}, \tag{17}$$
and with this definition in place we have the following result:
Lemma 1. Let P be the true probability distribution of z and let Pθ be the true distribution
of the gradients ∇L(θ; z) on Rp with mean µθ = ∇R(θ), covariance Σθ , and bounded fourth
moments. There exists a positive constant C1 > 0, such that given n samples from the distribution in (4), the Huber Gradient Estimator described in Algorithm 2 when instantiated with
the contamination level ǫ, with probability at least 1 − δ, returns an estimate μ̂ of μ_θ such that
$$\|\hat\mu - \mu_\theta\|_2 \le C_1\big(\epsilon^{1/2} + \gamma(n, p, \delta, \epsilon)\big)\sqrt{\|\Sigma_\theta\|_2 \log p}.$$
We note in particular that if n → ∞ (with other parameters held fixed) then γ(n, p, δ, ǫ) → 0, and the error of our gradient estimator satisfies
$$\|\hat\mu - \mu_\theta\|_2 \le C\sqrt{\|\Sigma_\theta\|_2\,\epsilon \log p},$$
and has only a weak dependence on the dimension p.
5.4
General Analysis of Algorithm 3
In this section we analyze the gradient estimator for heavy-tailed setting, described in Algorithm 3. The following result shows that the gradient estimate has exponential concentration around the true gradient, under the mild assumption that the gradient distribution has
bounded second moment. Its proof follows from the analysis of geometric median-of-means
estimator of Minsker [37]. We use tr (Σθ ) to denote the trace of the matrix Σθ .
Lemma 2. Let P be the probability distribution of z and Pθ be the distribution of the gradients
∇L(θ; z) on R^p with mean μ_θ = ∇R(θ) and covariance Σ_θ. Then the heavy-tailed gradient estimator described in Algorithm 3 returns an estimate μ̂ that satisfies the following exponential concentration inequality, with probability at least 1 − δ:
$$\|\hat\mu - \mu_\theta\|_2 \le 11\sqrt{\frac{\operatorname{tr}(\Sigma_\theta)\,\log(1.4/\delta)}{n}}.$$

6
Consequences for Estimation under ǫ-Contaminated Model
We now turn our attention to the examples introduced earlier, and present specific applications of Theorem 1, for parametric estimation under Huber contamination model. As shown
in Lemma 1, we need the added assumption that the true gradient distribution has bounded
fourth moments, which suggests the need for additional assumptions. We make our assumptions explicit and defer the technical details to the Appendix.
6.1
Linear Regression
We assume that the covariates x ∈ Rp have bounded 8th -moments and the noise w has bounded
4th moments.
Theorem 2 (Robust Linear Regression). Consider the statistical model in equation (11), and suppose that the number of samples n is large enough such that
$$\gamma(\tilde{n}, p, \tilde{\delta}) < \frac{C_1\,\tau_\ell}{\|\Sigma\|_2\sqrt{\log p}},$$
and that the contamination level satisfies
$$\epsilon < \left(\frac{C_2\,\tau_\ell}{\|\Sigma\|_2\sqrt{\log p}} - \gamma(\tilde{n}, p, \tilde{\delta})\right)^{2},$$
for some constants C_1 and C_2. Then, there are universal constants C_3, C_4, such that if Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 2 as gradient estimator, then it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_3\,\sigma\sqrt{\|\Sigma\|_2 \log p}}{1-\kappa}\left(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\right), \tag{18}$$
for some contraction parameter κ < 1.
In the asymptotic setting when the number of samples n → ∞ (and other parameters are held
fixed), we see that for the Huber Gradient Estimator, the corresponding maximum allowed
contamination level is ǫ < C_1 τℓ²/(τu² log p). This says that the more well-conditioned the covariance
matrix Σ, the higher the contamination level we can tolerate.
Plugin Estimation For linear regression, the true parameter can be written in closed form
as θ ∗ = E[xxT ]−1 E[xy]. A non-iterative way to estimate θ ∗ is to separately estimate E[xxT ]
and E[xy] using robust covariance and mean oracles respectively. Under the assumption that
x ∼ N (0, Ip ), one can reduce the problem to robustly estimating E[xy]. Under this setting,
we now present a result using Lai et al. [29] as the mean estimator for estimation of E[xy].
Recall, the definition of γ in (17). We have the following result:
Corollary 3. Consider the model in equation (11) with the covariates drawn from N(0, I_p) and w ∼ N(0, 1). Then there are universal constants C_1, C_2 such that if ǫ < C_1, then [29] returns an estimate θ̂ of E[xy] such that with probability at least 1 − δ
$$\|\hat\theta - \theta^*\|_2 \le C_2\sqrt{(1 + 2\|\theta^*\|_2^2)\log p}\,\left(\epsilon^{1/2} + \gamma(n, p, \delta, \epsilon)\right). \tag{19}$$
Comparing bounds (18) and (19), we see that the error of the plugin estimator depends on
kθ ∗ k2 , which would make the estimator vacuous if kθ ∗ k2 scales with the dimension p. On
the other hand, the asymptotic rate of our robust gradient estimator is independent of the
kθ ∗ k2 . This disadvantage of plugin estimation is inescapable, due to known minimax results
for robust mean estimation [10] that show that the dependence on kθ ∗ k2 is unavoidable for
any oracle which estimates the mean of xy in the ǫ-contaminated setting. Next, we apply our
estimator to generalized linear models.
6.2
Generalized Linear Models
Here we assume that the covariates have bounded 8th moments. Additionally, we assume
smoothness of Φ′ (·) around θ ∗ . To be more precise, we assume that there exist universal
constants L_{Φ,2k}, B_{Φ,2k} such that
$$\mathbb{E}_x\Big[\big(\Phi'(\langle x, \theta\rangle) - \Phi'(\langle x, \theta^*\rangle)\big)^{2k}\Big] \le L_{\Phi,2k}\,\|\theta^* - \theta\|_2^{2k} + B_{\Phi,2k}, \quad \text{for } k = 1, 2, 4.
We also assume that $\mathbb{E}_x\big[|\Phi^{(t)}(\langle x, \theta^*\rangle)|^{k}\big] \le M_{\Phi,t,k}$, where Φ^{(t)}(·) is the t-th derivative of Φ(·).
Theorem 4 (Robust Generalized Linear Models). Consider the statistical model in equation (12), and suppose that the number of samples n is large enough such that
$$\gamma(\tilde{n}, p, \tilde{\delta}) < \frac{C_1\,\tau_\ell}{\sqrt{\log p}\;\|\Sigma\|_2^{1/2}\big[L_{\Phi,4}^{1/4} + L_{\Phi,2}^{1/2}\big]},$$
and that the contamination level satisfies
$$\epsilon < \left(\frac{C_2\,\tau_\ell}{\sqrt{\log p}\;\|\Sigma\|_2^{1/2}\big[L_{\Phi,4}^{1/4} + L_{\Phi,2}^{1/2}\big]} - \gamma(\tilde{n}, p, \tilde{\delta})\right)^{2},$$
for some constants C_1 and C_2. Then, there are universal constants C_3, C_4, such that if Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 2 as gradient estimator, then it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_3\sqrt{\log p}\;\|\Sigma\|_2^{1/2}\big[B_{\Phi,4}^{1/4} + B_{\Phi,2}^{1/2} + c(\sigma)^{1/2} M_{\Phi,2,2}^{1/2} + c(\sigma)^{1/4} M_{\Phi,4,1}^{1/4}\big]}{1-\kappa}\left(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\right), \tag{20}$$
for some contraction parameter κ < 1.
Note that for the case of linear regression with gaussian noise, it is relatively straightforward
to see that L_{Φ,2k} = C_{2k}‖Σ‖_2^k, B_{Φ,2k} = 0, M_{Φ,2,k} = 1 for all k ∈ N, and M_{Φ,t,k} = 0 for all
t ≥ 3 and k ∈ N, under the assumption of bounded 8th moments of the covariates; this essentially
leads to an equivalence between Theorem 2 and Theorem 4 for this setting. In the following
section, we instantiate the above Theorem for logistic regression and compare and contrast
our results to other existing methods.
6.2.1
Logistic Regression
By observing that Φ(t) (·) is bounded for logistic regression for all t ≥ 1, we can see that
LΦ,2k = 0, and that there exists a universal constant C > 0 such that BΦ,2k < C and
MΦ,t,k < C ∀(t ≥ 1, k ∈ N ).
Corollary 5 (Robust Logistic Regression). Consider the model in equation (14). Then there are universal constants C_1, C_2, such that if ǫ < C_1, then Algorithm 1 initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 2 as gradient estimator returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_2\sqrt{\|\Sigma\|_2\log p}}{1-\kappa}\left(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\right), \tag{21}$$
for some contraction parameter κ < 1.
Under the restrictive assumption that x ∼ N (0, Ip ), Du et al. [16] exploited Stein’s trick to
derive a plugin estimator for logistic regression. However, similar to the linear regression, the
error of the plugin estimator scales with kθ ∗ k2 , which is avoided in our robust gradient descent
algorithm. We also note that our algorithm extends to general covariate distributions.
6.3
Exponential Family
Here we assume that the random vector φ(z), z ∼ P has bounded 4th moments.
Theorem 6 (Robust Exponential Family). Consider the model in equation (15). Then there are universal constants C_1, C_2, such that if ǫ < C_1, then Algorithm 1 initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 2 as gradient oracle returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_2\,\tau_u\sqrt{\log p}}{1-\kappa}\left(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\right), \tag{22}$$
for some contraction parameter κ < 1.
Plugin Estimation Since the true parameter θ* is the minimizer of the negative log-likelihood, we know that E[∇L(θ*)] = 0, which implies that ∇A(θ*) = E_{θ*}[φ(Z)]. This
shows that the true parameter θ ∗ can be obtained by inverting the ∇A operator, whenever
possible. In the robust estimation framework, we can use a robust mean of the sufficient
statistics to estimate Eθ∗ [φ(Z)]. We instantiate this estimator using the mean estimator of
[29] to estimate Eθ∗ [φ(Z)]:
Corollary 7. Consider the model in equation (15). Then there are universal constants C_1, C_2 such that if ǫ < C_1, then [29] returns an estimate μ̂ of E[φ(z)] such that with probability at least 1 − δ
$$\big\|\mathcal{P}_\Theta\big[(\nabla A)^{-1}(\hat\mu)\big] - \theta^*\big\|_2 \le C_2\,\frac{\tau_u\sqrt{\log p}}{\tau_\ell}\left(\epsilon^{1/2} + \gamma(n, p, \delta, \epsilon)\right), \tag{23}$$
where PΘ [θ] = argminy∈Θ ky − θk22 is the projection operator onto the feasible set Θ.
6.4
Discussion and Limitations
In the asymptotic setting of n → ∞, Algorithm 1 with Algorithm 2 as gradient estimator
converges to a point θ̂ such that ‖θ̂ − θ*‖_2 = O(√(ǫ log p)). Hence, our error scales only
logarithmically with the dimension p. This dependency on the dimension p is a facet of
using the estimator from Lai et al. [29] for gradient estimation. Using better oracles will only
improve our performance. Next, we would like to point to the difference in the maximum
allowed contamination ǫ∗ between the three models. For logistic regression and exponential
family, ǫ* < C_1, while for linear regression, ǫ* < C_1 τℓ²/(τu² log p). These differences are in large part
due to differing variances of the gradients, which naturally depend on the underlying risk
function. This scaling of the variance of gradients for linear regression also provides insights
into the limitations of Algorithm 1 for gradient estimators. In the Appendix, we provide an
upper bound for the contamination level ǫ based on the initialization point θ 0 , above which,
Algorithm 1 would not work for any gradient estimator.
7
Consequences for Heavy-Tailed Estimation
In this section we present specific applications of Theorem 1 for parametric estimation, under
heavy tailed setting. The proofs of the results can be found in the Appendix.
7.1
Linear Regression
We first consider the linear regression model described in Equation (11). We assume that the
covariates x ∈ R^p have bounded 4th moments and the noise w has bounded 2nd moments. This
assumption is needed to bound the error in the gradient estimator (see Lemma 2).
Theorem 8 (Heavy Tailed Linear Regression). Consider the statistical model in equation (11). There are universal constants C_1, C_2 > 0 such that if
$$\tilde{n} > \frac{C_1\,\tau_u^2}{\tau_\ell^2}\, p \log(1/\tilde{\delta}),$$
and if Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 3 as gradient estimator, then it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_2\,\sigma\sqrt{\|\Sigma\|_2}}{1-\kappa}\sqrt{\frac{p\log(1/\tilde{\delta})}{\tilde{n}}}, \tag{24}$$
for some contraction parameter κ < 1.
7.2
Generalized Linear Models
In this section we consider generalized linear models described in Equation (12), where the
covariate x is allowed to have a heavy tailed distribution. Here we assume that the covariates
have bounded 4th moments. Additionally, we assume smoothness of Φ′(·) around θ*. Specifically, we assume that there exist universal constants L_{Φ,2k}, B_{Φ,2k} such that
$$\mathbb{E}_x\Big[\big(\Phi'(\langle x, \theta\rangle) - \Phi'(\langle x, \theta^*\rangle)\big)^{2k}\Big] \le L_{\Phi,2k}\,\|\theta^* - \theta\|_2^{2k} + B_{\Phi,2k}, \quad \text{for } k = 1, 2.$$
We also assume that $\mathbb{E}_x\big[|\Phi^{(t)}(\langle x, \theta^*\rangle)|^{k}\big] \le M_{\Phi,t,k}$ for t ∈ {1, 2, 4}, where Φ^{(t)}(·) is the t-th derivative of Φ(·).
Theorem 9 (Heavy Tailed Generalized Linear Models). Consider the statistical model in equation (12). There are universal constants C_1, C_2 > 0 such that if
$$\tilde{n} > \frac{C_1\,\|\Sigma\|_2\big[\sqrt{L_{\Phi,4}} + L_{\Phi,2}\big]}{\tau_\ell^2}\, p\log(1/\tilde{\delta}),$$
and if Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 3 as gradient estimator, it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_2\,\|\Sigma\|_2^{1/2}\big[B_{\Phi,4}^{1/4} + B_{\Phi,2}^{1/2} + c(\sigma)^{1/2} M_{\Phi,2,2}^{1/2} + c(\sigma)^{1/4} M_{\Phi,4,1}^{1/4}\big]}{1-\kappa}\sqrt{\frac{p\log(1/\tilde{\delta})}{\tilde{n}}}, \tag{25}$$
for some contraction parameter κ < 1.
We now instantiate the above Theorem for logistic regression model.
Corollary 10 (Heavy Tailed Logistic Regression). Consider the model in equation (14). There are universal constants C_1, C_2 > 0 such that if
$$\tilde{n} > \frac{C_1^2\,\|\Sigma\|_2}{\tau_\ell^2}\, p\log(1/\tilde{\delta}),$$
and if Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 3 as gradient estimator, it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C_2\sqrt{\|\Sigma\|_2}}{1-\kappa}\sqrt{\frac{p\log(1/\tilde{\delta})}{\tilde{n}}}, \tag{26}$$
for some contraction parameter κ < 1.
7.3
Exponential Family
We now instantiate Theorem 1 for parameter estimation in heavy-tailed exponential family
distributions. Here we assume that the random vector φ(z), z ∼ P has bounded 2nd moments,
and we obtain the following result:
Theorem 11 (Heavy Tailed Exponential Family). Consider the model in equation (15). If Algorithm 1 is initialized at θ^0 with stepsize η ≤ 2/(τu + τℓ) and Algorithm 3 as gradient estimator, it returns iterates {θ̂^t}_{t=1}^T such that with probability at least 1 − δ
$$\|\hat\theta^t - \theta^*\|_2 \le \kappa^t\|\theta^0 - \theta^*\|_2 + \frac{C}{1-\kappa}\sqrt{\frac{\|\nabla^2 A(\theta^*)\|_2\; p\log(1/\tilde{\delta})}{\tilde{n}}}, \tag{27}$$
for some contraction parameter κ < 1 and universal constant C.
8
Discussion
In this paper we introduced a broad class of estimators and showed that these estimators
can have strong robustness guarantees in Huber’s ǫ-contamination model and for heavy-tailed
distributions. These estimators leverage the robustness of gradient descent, together with the
observation that for risk minimization in most statistical models the gradient of the risk takes
the form of a simple multivariate mean which can be robustly estimated by using recent work
on robust mean estimation. These estimators based on robust gradient descent work well in
practice and in many cases outperform other robust (and non-robust) estimators.
There are several avenues for future work, including developing a better understanding of
robust mean estimation. Any improvement for robust mean estimation would immediately
translate to improved guarantees for the estimators we propose for general parametric models.
Finally, it would also be of interest to understand the extent to which we could replace gradient
descent with other optimization methods such as accelerated gradient descent or Newton’s
method. We note, however, that although these methods may have faster rates of convergence in
the classical risk minimization setting, in our setup their stability (to using inexact gradients)
is far more crucial and warrants further investigation.
9
Acknowledgements
The research of SB was supported in part by the grant NSF-DMS-1713003. We thank Larry
Wasserman for helpful comments on the paper.
References
[1] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of
Computing, STOC ’96, pages 20–29, New York, NY, USA, 1996. ACM.
[2] Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the em
algorithm: From population to sample-based analysis. The Annals of Statistics, 45(1):77–120,
2017.
[3] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. In
Advances in Neural Information Processing Systems, pages 721–729, 2015.
[4] G. E. P. Box. Non-normality and tests on variances. Biometrika, 40(3-4):318–335, 1953.
[5] Christian Brownlees, Emilien Joly, and Gábor Lugosi. Empirical risk minimization for heavy-tailed losses. The Annals of Statistics, 43(6):2507–2536, 2015.
[6] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends
in Machine Learning, 8(3-4):231–357, 2015.
[7] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis.
Journal of the ACM (JACM), 58(3):11, 2011.
[8] Olivier Catoni. Challenging the empirical mean and empirical variance: a deviation study. In
Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, volume 48, pages 1148–1185.
Institut Henri Poincaré, 2012.
[9] Moses Charikar, Jacob Steinhardt, and Gregory Valiant. Learning from untrusted data. In STOC,
2017.
[10] Mengjie Chen, Chao Gao, and Zhao Ren. Robust covariance matrix estimation via matrix depth.
arXiv preprint arXiv:1506.00691, 2015.
[11] Mengjie Chen, Chao Gao, Zhao Ren, et al. A general decision theory for Huber's epsilon-contamination model. Electronic Journal of Statistics, 10(2):3752–3774, 2016.
[12] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial corruption. In Proceedings of the 30th International Conference on Machine Learning,
ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 774–782, 2013.
[13] L. Devroye and L. Györfi. Nonparametric density estimation: the L1 view. Wiley series in
probability and mathematical statistics. Wiley, 1985.
[14] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high dimensions without the computational intractability. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pages 655–664.
IEEE, 2016.
[15] David L Donoho and Richard C Liu. The "automatic" robustness of minimum distance functionals.
The Annals of Statistics, pages 552–586, 1988.
[16] Simon S Du, Sivaraman Balakrishnan, and Aarti Singh. Computationally efficient robust estimation of sparse functionals. Conference on Learning Theory, 2017.
[17] John Duchi and Hongseok Namkoong. Variance-based regularization with convex objectives.
arXiv preprint arXiv:1610.02581, 2016.
[18] Jianqing Fan, Weichen Wang, and Ziwei Zhu. A shrinkage principle for heavy-tailed data: High-dimensional robust low-rank matrix recovery, 2016.
[19] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model
fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):
381–395, 1981.
[20] Chao Gao. Robust regression via multivariate regression depth. 2017.
[21] Frank R Hampel, Elvezio M Ronchetti, Peter J Rousseeuw, and Werner A Stahel. Robust statistics:
the approach based on influence functions, volume 114. John Wiley & Sons, 2011.
[22] Cecil Hastings Jr, Frederick Mosteller, John W Tukey, and Charles P Winsor. Low moments for
small samples: a comparative study of order statistics. The Annals of Mathematical Statistics,
pages 413–426, 1947.
[23] Daniel Hsu and Sivan Sabato. Loss minimization and parameter estimation with heavy tails.
Journal of Machine Learning Research, 17(18):1–40, 2016.
[24] P. J. Huber. Robust Statistics. John Wiley & Sons, 1981.
[25] Peter J Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics,
35(1):73–101, 1964.
[26] Peter J Huber. A robust version of the probability ratio test. The Annals of Mathematical
Statistics, 36(6):1753–1758, 1965.
[27] Mark R. Jerrum, Leslie G. Valiant, and Vijay V. Vazirani. Random generation of combinatorial
structures from a uniform distribution. Theoretical Computer Science, 43:169 – 188, 1986.
[28] S Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Applications of strong convexity–strong
smoothness duality to learning with matrices. CoRR, abs/0910.0610, 2009.
[29] Kevin A Lai, Anup B Rao, and Santosh Vempala. Agnostic estimation of mean and covariance.
In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pages
665–674. IEEE, 2016.
[30] Kuang-Chih Lee, Jeffrey Ho, and David J Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on pattern analysis and machine intelligence, 27
(5):684–698, 2005.
[31] Matthieu Lerasle and Roberto I Oliveira. Robust empirical mean estimators. arXiv preprint
arXiv:1112.3914, 2011.
[32] Jerry Li. Robust sparse estimation tasks in high dimensions. Conference on Learning Theory,
2017.
[33] Po-Ling Loh. Statistical consistency and asymptotic normality for high-dimensional robust M-estimators. Ann. Statist., 45(2):866–896, 04 2017.
[34] Po-Ling Loh and Martin J Wainwright. High-dimensional regression with noisy and missing data:
Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems,
pages 2726–2734, 2011.
[35] Gabor Lugosi and Shahar Mendelson. Risk minimization by median-of-means tournaments. arXiv
preprint arXiv:1608.00757, 2016.
[36] Gábor Lugosi and Shahar Mendelson. Sub-gaussian estimators of the mean of a random vector.
The Annals of Statistics, 2017.
[37] Stanislav Minsker. Geometric median and robust estimation in banach spaces. Bernoulli, 21(4):
2308–2335, 2015.
[38] Ivan Mizera. On depth and deep points: a calculus. Annals of Statistics, pages 1681–1736, 2002.
[39] A.S. Nemirovski and D.B. Yudin. Problem Complexity and Method Efficiency in Optimization. A
Wiley-Interscience publication. Wiley, 1983.
[40] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer
Science & Business Media, 2013.
[41] Joel A Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434, 2012.
[42] John W Tukey. Mathematics and the picturing of data. In Proceedings of the international
congress of mathematicians, volume 2, pages 523–531, 1975.
[43] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[44] Yin Wang, Caglayan Dicle, Mario Sznaier, and Octavia Camps. Self scaled regularized robust
regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 3261–3269, 2015.
[45] Yannis G. Yatracos. Rates of convergence of minimum distance estimators and Kolmogorov's entropy. Ann. Statist., 13(2):768–774, 06 1985.
[46] Xinyang Yi, Dohyung Park, Yudong Chen, and Constantine Caramanis. Fast algorithms for robust
PCA via gradient descent. In Advances in Neural Information Processing Systems 29: Annual
Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona,
Spain, pages 4152–4160, 2016.
[47] Wen-Xin Zhou, Koushiki Bose, Jianqing Fan, and Han Liu. A new perspective on robust M-estimation: Finite sample theory and applications to dependence-adjusted multiple testing. 2017.
A    Proof of Theorem 1
In this section, we present the proof of our main result on projected gradient descent with an
inexact gradient estimator. To ease the notation we will often omit {Dn , δ} from g(θ; Dn , δ).
Proof. At any iteration step t ∈ {1, 2, . . . , T}, by assumption we have that with probability at least 1 − δ/T,
\[
\|g(\theta^t; D_n, \delta/T) - \nabla R(\theta^t)\|_2 \le \alpha(n/T, \delta/T)\,\|\theta^t - \theta^*\|_2 + \beta(n/T, \delta/T). \tag{28}
\]
Taking a union bound, (28) holds over all iteration steps t ∈ {1, . . . , T} with probability at least 1 − δ. For the remainder of the analysis, we assume this event to be true.
Notation. Let g(θ k ) = ∇R(θ k ) + ek be the noisy gradient. Let α = α(n/T, δ/T ) and
β = β(n/T, δ/T ) for brevity.
We have the following lemma from Bubeck [6].

Lemma 3 (Lemma 3.11 of [6]). Let f be M-smooth and m-strongly convex. Then for all x, y \in \mathbb{R}^p we have
\[
\langle \nabla f(x) - \nabla f(y),\, x - y \rangle \;\ge\; \frac{mM}{m+M}\,\|x - y\|_2^2 \;+\; \frac{1}{m+M}\,\|\nabla f(x) - \nabla f(y)\|_2^2.
\]
By assumption we have that \|\nabla R(\theta^k) - g(\theta^k)\|_2 = \|e_k\|_2 \le \alpha\|\theta^k - \theta^*\|_2 + \beta. Our update rule is \theta^{k+1} = P_\Theta\big(\theta^k - \eta g(\theta^k)\big). Then we have that
\begin{align}
\|\theta^{k+1} - \theta^*\|_2^2 &= \|P_\Theta[\theta^k - \eta g(\theta^k)] - \theta^*\|_2^2 = \|P_\Theta[\theta^k - \eta g(\theta^k)] - P_\Theta[\theta^* - \eta \nabla R(\theta^*)]\|_2^2 \nonumber\\
&\le \|\theta^k - \eta g(\theta^k) - (\theta^* - \eta \nabla R(\theta^*))\|_2^2 \tag{29}\\
&= \|\theta^k - \theta^* - \eta(\nabla R(\theta^k) - \nabla R(\theta^*)) - \eta e_k\|_2^2 \nonumber\\
&\le \|\theta^k - \theta^* - \eta(\nabla R(\theta^k) - \nabla R(\theta^*))\|_2^2 + \eta^2\|e_k\|_2^2 + 2\eta\|e_k\|_2\,\|\theta^k - \theta^* - \eta(\nabla R(\theta^k) - \nabla R(\theta^*))\|_2, \tag{30}
\end{align}
where Equation (29) follows from the contraction property of projections. Now, we can expand \|\theta^k - \theta^* - \eta(\nabla R(\theta^k) - \nabla R(\theta^*))\|_2 as
\begin{align}
\|\theta^k - \theta^* - \eta(\nabla R(\theta^k) - \nabla R(\theta^*))\|_2^2 &= \|\theta^k - \theta^*\|_2^2 + \eta^2\|\nabla R(\theta^k) - \nabla R(\theta^*)\|_2^2 - 2\eta\big\langle \nabla R(\theta^k) - \nabla R(\theta^*),\, \theta^k - \theta^* \big\rangle \nonumber\\
&\le \|\theta^k - \theta^*\|_2^2 + \eta^2\|\nabla R(\theta^k) - \nabla R(\theta^*)\|_2^2 \nonumber\\
&\quad - 2\eta\left(\frac{\tau_\ell\tau_u}{\tau_\ell + \tau_u}\|\theta^k - \theta^*\|_2^2 + \frac{1}{\tau_\ell + \tau_u}\|\nabla R(\theta^k) - \nabla R(\theta^*)\|_2^2\right) \tag{31}\\
&= \|\theta^k - \theta^*\|_2^2\left(1 - \frac{2\eta\tau_\ell\tau_u}{\tau_\ell + \tau_u}\right) + \eta\,\|\nabla R(\theta^k) - \nabla R(\theta^*)\|_2^2\left(\eta - \frac{2}{\tau_u + \tau_\ell}\right) \tag{32}\\
&\le \|\theta^k - \theta^*\|_2^2\left(1 - \frac{2\eta\tau_\ell\tau_u}{\tau_\ell + \tau_u}\right), \tag{33}
\end{align}
where the second step follows from Lemma 3 and the last step follows from the step size \eta \le 2/(\tau_\ell + \tau_u).
Now, combining Equations (30) and (33), and using our assumption that \|e_k\|_2 \le \alpha\|\theta^k - \theta^*\|_2 + \beta, we get:
\begin{align*}
\|\theta^{k+1} - \theta^*\|_2^2 &\le \Big(\|\theta^k - \theta^*\|_2\sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} + \eta\|e_k\|_2\Big)^2\\
\|\theta^{k+1} - \theta^*\|_2 &\le \Big[\sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} + \eta\alpha\Big]\,\|\theta^k - \theta^*\|_2 + \eta\beta.
\end{align*}
Let \kappa = \sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} + \eta\alpha. By assumption, we choose \epsilon < \epsilon^* such that \alpha < \tau_\ell, so
\begin{align}
\kappa &= \sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} + \eta\alpha \tag{34}\\
&< \sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} + \eta\tau_\ell. \tag{35}
\end{align}
Since 0 \le \eta \le 2/(\tau_\ell + \tau_u), we get that \sqrt{1 - 2\eta\tau_\ell\tau_u/(\tau_\ell + \tau_u)} \le \sqrt{1 - \eta^2\tau_\ell\tau_u}. We have:
\begin{align}
\kappa &< \sqrt{1 - \eta^2\tau_\ell\tau_u} + \eta\tau_\ell \tag{36}\\
&< \sqrt{1 - \eta^2\tau_\ell^2} + \eta\tau_\ell \qquad (\because\ \tau_\ell \le \tau_u) \tag{37}\\
&< 1. \tag{38}
\end{align}
Therefore, we have that
\[
\|\theta^{k+1} - \theta^*\|_2 \le \kappa\,\|\theta^k - \theta^*\|_2 + \eta\beta
\]
for some \kappa < 1. Solving the recursion, we get:
\[
\|\theta^k - \theta^*\|_2 \le \kappa^k\,\|\theta^0 - \theta^*\|_2 + \frac{1}{1-\kappa}\,\eta\beta.
\]
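To make the recursion above concrete, the following is a minimal sketch of projected gradient descent driven by a plug-in (possibly robust) gradient estimator. The estimator `grad_oracle`, the ball-shaped constraint set, the step size, and the least-squares risk used in the toy example are illustrative assumptions, not the paper's Algorithm 1.

```python
import numpy as np

def projected_gradient_descent(grad_oracle, theta0, eta, T, radius=None):
    """Iterate theta_{k+1} = P_Theta(theta_k - eta * g(theta_k)).

    grad_oracle: callable returning an (inexact) estimate of the risk gradient.
    radius: if given, Theta is the Euclidean ball of this radius, so the
            projection is a simple rescaling; otherwise Theta = R^p.
    """
    theta = theta0.copy()
    for _ in range(T):
        g = grad_oracle(theta)
        theta = theta - eta * g
        if radius is not None:
            norm = np.linalg.norm(theta)
            if norm > radius:
                theta = theta * (radius / norm)  # project onto the ball
    return theta

# Toy usage: least-squares risk, with the sample mean of per-example gradients
# standing in for a robust gradient estimator.
rng = np.random.default_rng(0)
n, p = 500, 10
theta_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ theta_star + 0.1 * rng.normal(size=n)

def grad_oracle(theta):
    # Per-sample gradients of 0.5*(y - x^T theta)^2 are x*(x^T theta - y);
    # a robust mean estimator would replace np.mean in the contaminated setting.
    return ((X @ theta - y)[:, None] * X).mean(axis=0)

theta_hat = projected_gradient_descent(grad_oracle, np.zeros(p), eta=0.1, T=200, radius=10.0)
print(np.linalg.norm(theta_hat - theta_star))
```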
B    Proof of Theorem 4
To prove our result on Robust Generalized Linear Models, we first study the distribution of
gradients of the corresponding risk function.
Lemma 4. Consider the model in Equation (12). Then there exist universal constants C_1, C_2 > 0 such that
\[
\|\mathrm{Cov}(\nabla L(\theta))\|_2 \le C_1\|\Delta\|_2^2\|\Sigma\|_2\Big(\sqrt{C_4}\sqrt{L_{\Phi,4}} + L_{\Phi,2}\Big) + C_2\|\Sigma\|_2\Big(B_{\Phi,2} + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big),
\]
and the gradients have bounded fourth moments:
\[
\mathbb{E}\Big[\big((\nabla L(\theta) - \mathbb{E}[\nabla L(\theta)])^T v\big)^4\Big] \le C_2\big(\mathrm{Var}[\nabla L(\theta)^T v]\big)^2.
\]
Proof. The gradient ∇L(θ) and its expectation can be written as:
\begin{align*}
\nabla L(\theta) &= -y\,x + u(\langle x, \theta\rangle)\,x\\
\mathbb{E}[\nabla L(\theta)] &= \mathbb{E}\big[x\big(u(x^T\theta) - u(x^T\theta^*)\big)\big],
\end{align*}
where u(t) = \Phi'(t). Hence
\begin{align*}
\|\mathbb{E}[\nabla L(\theta)]\|_2 &= \sup_{y\in\mathcal{S}^{p-1}} y^T\,\mathbb{E}[\nabla L(\theta)]\\
&\le \sup_{y\in\mathcal{S}^{p-1}} \mathbb{E}\big[(y^Tx)\big(u(x^T\theta) - u(x^T\theta^*)\big)\big]\\
&\le \sup_{y\in\mathcal{S}^{p-1}} \sqrt{\mathbb{E}[(y^Tx)^2]}\,\sqrt{\mathbb{E}\big[(u(x^T\theta) - u(x^T\theta^*))^2\big]}\\
&\le C_1\|\Sigma\|_2^{1/2}\sqrt{L_{\Phi,2}\|\Delta\|_2^2 + B_{\Phi,2}},
\end{align*}
where the last line follows from our assumption of smoothness.
Now, to bound the maximum eigenvalue of Cov(∇L(θ)):
\begin{align*}
\|\mathrm{Cov}(\nabla L(\theta))\|_2 &= \sup_{z\in\mathcal{S}^{p-1}} z^T\big(\mathbb{E}[\nabla L(\theta)\nabla L(\theta)^T] - \mathbb{E}[\nabla L(\theta)]\mathbb{E}[\nabla L(\theta)]^T\big)z\\
&\le \sup_{z\in\mathcal{S}^{p-1}} z^T\,\mathbb{E}[\nabla L(\theta)\nabla L(\theta)^T]\,z + \sup_{z\in\mathcal{S}^{p-1}} z^T\,\mathbb{E}[\nabla L(\theta)]\mathbb{E}[\nabla L(\theta)]^T\,z\\
&\le \sup_{z\in\mathcal{S}^{p-1}} z^T\,\mathbb{E}\big[xx^T\big(u(x^T\theta) - y\big)^2\big]\,z + \|\mathbb{E}[\nabla L(\theta)]\|_2^2\\
&\le \sup_{z\in\mathcal{S}^{p-1}} \mathbb{E}\big[(z^Tx)^2\big(u(x^T\theta) - y\big)^2\big] + \|\mathbb{E}[\nabla L(\theta)]\|_2^2\\
&\le \sup_{z\in\mathcal{S}^{p-1}} \sqrt{\mathbb{E}[(z^Tx)^4]}\,\sqrt{\mathbb{E}[(u(x^T\theta) - y)^4]} + \|\mathbb{E}[\nabla L(\theta)]\|_2^2.
\end{align*}
To bound \mathbb{E}\big[(u(x^T\theta) - y)^4\big], we make use of the C_r inequality.

C_r inequality. If X and Y are random variables such that \mathbb{E}|X|^r < \infty and \mathbb{E}|Y|^r < \infty, where r \ge 1, then
\[
\mathbb{E}|X + Y|^r \le 2^{r-1}\big(\mathbb{E}|X|^r + \mathbb{E}|Y|^r\big).
\]
Using the C_r inequality, we have that
\begin{align*}
\mathbb{E}\big[(u(x^T\theta) - y)^4\big] &\le 8\Big(\mathbb{E}\big[(u(x^T\theta) - u(x^T\theta^*))^4\big] + \mathbb{E}\big[(u(x^T\theta^*) - y)^4\big]\Big)\\
&\le C\Big(L_{\Phi,4}\|\Delta\|_2^4 + B_{\Phi,4} + c(\sigma)^3 M_{\Phi,4,1} + 3c(\sigma)^2 M_{\Phi,2,2}\Big),
\end{align*}
where the last line follows from our assumption that P_{\theta^*}(y|x) is in the exponential family; hence the cumulants are higher-order derivatives of the log-normalization function.
Putting these bounds together,
\begin{align*}
\|\mathrm{Cov}(\nabla L(\theta))\|_2 &\le C\sqrt{C_4}\,\|\Sigma\|_2\Big(\sqrt{L_{\Phi,4}}\,\|\Delta\|_2^2 + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big) + \|\mathbb{E}[\nabla L(\theta)]\|_2^2\\
&\le C\sqrt{C_4}\,\|\Sigma\|_2\Big(\sqrt{L_{\Phi,4}}\,\|\Delta\|_2^2 + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big) + C_1^2\|\Sigma\|_2\big(L_{\Phi,2}\|\Delta\|_2^2 + B_{\Phi,2}\big)\\
&\le C\|\Delta\|_2^2\|\Sigma\|_2\Big(\sqrt{C_4}\sqrt{L_{\Phi,4}} + L_{\Phi,2}\Big) + C_6\|\Sigma\|_2\Big(B_{\Phi,2} + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big).
\end{align*}
Bounded Fourth Moment. To show that the fourth moment of the gradient distribution is bounded, we have
\[
\mathbb{E}\Big[\big((\nabla L(\theta) - \mathbb{E}[\nabla L(\theta)])^T v\big)^4\Big] \le 8\Big(\underbrace{\mathbb{E}\big[|\nabla L(\theta)^T v|^4\big]}_{A} + \underbrace{\mathbb{E}\big[|\mathbb{E}[\nabla L(\theta)]^T v|^4\big]}_{B}\Big).
\]

Control of A.
\begin{align*}
\mathbb{E}\big[|\nabla L(\theta)^T v|^4\big] &= \mathbb{E}\big[(x^Tv)^4(u(x^T\theta) - y)^4\big]\\
&\le \sqrt{\mathbb{E}[(x^Tv)^8]}\,\sqrt{\mathbb{E}[(u(x^T\theta) - y)^8]}\\
&\le \sqrt{C_8}\,\|\Sigma\|_2^2\,\sqrt{\mathbb{E}[(u(x^T\theta) - u(x^T\theta^*))^8] + \mathbb{E}[(u(x^T\theta^*) - y)^8]}\\
&\le \sqrt{C_8}\,\|\Sigma\|_2^2\,\sqrt{L_{\Phi,8}\|\Delta\|_2^{8} + B_{\Phi,8} + \sum_{t,k=2}^{8} g_{t,k}M_{\Phi,t,k}}\\
&\le \sqrt{C}\,\|\Sigma\|_2^2\left(\sqrt{L_{\Phi,8}}\,\|\Delta\|_2^{4} + \sqrt{B_{\Phi,8}} + \sqrt{\sum_{t,k=2}^{8} g_{t,k}M_{\Phi,t,k}}\right),
\end{align*}
where the last step follows from the fact that the 8th central moment can be written as a polynomial involving the lower cumulants, which in turn are the derivatives of the log-normalization function.
Control of B.
\[
\mathbb{E}\big[|\mathbb{E}[\nabla L(\theta)]^T v|^4\big] \le \|\mathbb{E}[\nabla L(\theta)]\|_2^4 \le C_1\|\Sigma\|_2^2\big(L_{\Phi,2}\|\Delta\|_2^2 + B_{\Phi,2}\big)^2.
\]
By assumption, L_{\Phi,k}, B_{\Phi,k}, M_{\Phi,t,k} are all bounded for k, t \le 8, which implies that there exist constants c_1, c_2 > 0 such that
\[
\mathbb{E}\Big[\big((\nabla L(\theta) - \mathbb{E}[\nabla L(\theta)])^T v\big)^4\Big] \le c_1\|\Sigma\|_2^2\|\Delta\|_2^4 + c_2. \tag{39}
\]
Previously, we saw that \|\mathrm{Cov}(\nabla L(\theta))\|_2 \le c_3\|\Sigma\|_2\|\Delta\|_2^2 + c_4 for some universal constants c_3, c_4 > 0; hence the gradient \nabla L(\theta) has bounded fourth moments.
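For intuition, the sketch below computes the per-sample GLM gradients ∇L(θ) = (u(xᵀθ) − y)x analyzed above, instantiated for logistic regression where u = Φ′ is the sigmoid. The data-generating choices are illustrative, and the plain empirical mean at the end is only a placeholder for whatever robust mean oracle is used.

```python
import numpy as np

def glm_sample_gradients(X, y, theta):
    """Per-sample gradients (u(x^T theta) - y) * x for a logistic GLM,
    where u(t) = Phi'(t) = 1 / (1 + exp(-t))."""
    u = 1.0 / (1.0 + np.exp(-X @ theta))   # link derivative u = Phi'
    return (u - y)[:, None] * X            # shape (n, p)

# Toy usage: the empirical mean below would be replaced by a robust mean
# estimator (e.g. a Huber-style gradient estimator) in the contaminated setting.
rng = np.random.default_rng(1)
n, p = 1000, 5
theta_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_star)))
G = glm_sample_gradients(X, y, np.zeros(p))
print(G.mean(axis=0))  # estimate of E[grad L(theta)] at theta = 0
```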
Having studied the distribution of the gradients, we use Lemma 1 to characterize the stability of the Huber gradient estimator. Using Lemma 1, we know that at any point θ, the Huber gradient estimator g(θ, δ/T) satisfies, with probability 1 − δ/T,
\[
\|g(\theta, \delta/T) - \nabla R(\theta)\|_2 \le C_2\Big(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\Big)\|\mathrm{Cov}(\nabla L(\theta))\|_2^{1/2}\sqrt{\log p}.
\]
Substituting the upper bound on \|\mathrm{Cov}(\nabla L(\theta))\|_2 from Lemma 4, we get that there are universal constants C_1, C_2 such that with probability at least 1 − δ/T,
\begin{align}
\|g(\theta) - \nabla R(\theta)\|_2 &\le \underbrace{C_1\Big(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\Big)\sqrt{\log p}\,\|\Sigma\|_2^{1/2}\Big[L_{\Phi,4}^{1/4} + L_{\Phi,2}^{1/2}\Big]}_{\alpha(\tilde{n},\tilde{\delta})}\,\|\Delta\|_2 \tag{40}\\
&\quad + \underbrace{C_2\Big(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\Big)\sqrt{\log p}\,\|\Sigma\|_2^{1/2}\Big[B_{\Phi,4}^{1/4} + B_{\Phi,2}^{1/2} + c(\sigma)^{1/2}M_{\Phi,2,2}^{1/4} + c(\sigma)^{3/2}M_{\Phi,4,1}^{1/4}\Big]}_{\beta(\tilde{n},\tilde{\delta})}. \tag{41}
\end{align}
e < τℓ . Using Equation (40), we
To ensure stability of gradient descent, we need that α(e
n, δ)
get that gradient descent is stable as long as the number of samples n is large enough such
C1 τℓ
e <
that γ(e
n, p, δ)
, and the contamination level is such that
1
1
1
√
4 +L 2 ]
log pkΣk22 [LΦ,4
Φ,2
ǫ<
C2 τℓ
1
1
1
√
4 +L 2 ]
log pkΣk22 [LΦ,4
Φ,2
!2
e
− γ(e
n, p, δ)
for some constants C1 and C2 . Plugging the corre-
e into Theorem 1, we get back the result of Theorem 4.
sponding ǫ and β(e
n, δ)
C    Proof of Corollary 3
We begin by studying the distribution of the random variable xy = xxT θ ∗ + x.w.
Lemma 5. Consider the model in Equation (11), with x ∼ N(0, I_p) and w ∼ N(0, 1). Then there exist universal constants C_1, C_2 such that
\begin{align*}
\mathbb{E}[xy] &= \theta^*\\
\|\mathrm{Cov}(xy)\|_2 &= 1 + 2\|\theta^*\|_2^2,
\end{align*}
and xy has bounded fourth moments: \mathbb{E}\big[\big((xy - \mathbb{E}[xy])^T v\big)^4\big] \le C_2\big(\mathrm{Var}[(xy)^Tv]\big)^2.
Proof. Mean.
\begin{align}
xy &= xx^T\theta^* + x\,w \tag{42}\\
\mathbb{E}[xy] &= \mathbb{E}[xx^T\theta^* + x\,w] \tag{43}\\
\mathbb{E}[xy] &= \theta^*. \tag{44}
\end{align}
Covariance.
\begin{align}
\mathrm{Cov}(xy) &= \mathbb{E}\big[\big((xx^T - I)\theta^* + x\,w\big)\big((xx^T - I)\theta^* + x\,w\big)^T\big] \tag{45}\\
\mathrm{Cov}(xy) &= \mathbb{E}\big[(xx^T - I)\theta^*\theta^{*T}(xx^T - I)\big] + I_p. \tag{46}
\end{align}
Now, Z = (xx^T - I)\theta^* can be written as
\[
Z = (xx^T - I)\theta^* =
\begin{pmatrix}
x_1^2 - 1 & x_1x_2 & \cdots & x_1x_p\\
x_1x_2 & x_2^2 - 1 & \cdots & x_2x_p\\
\vdots & \vdots & \ddots & \vdots\\
x_1x_p & x_2x_p & \cdots & x_p^2 - 1
\end{pmatrix}
\begin{pmatrix}\theta_1^*\\ \theta_2^*\\ \vdots\\ \theta_p^*\end{pmatrix}
=
\begin{pmatrix}
\theta_1^*(x_1^2 - 1) + x_1x_2\theta_2^* + \cdots + x_1x_p\theta_p^*\\
x_1x_2\theta_1^* + (x_2^2 - 1)\theta_2^* + \cdots + x_2x_p\theta_p^*\\
\vdots\\
x_1x_p\theta_1^* + x_2x_p\theta_2^* + \cdots + (x_p^2 - 1)\theta_p^*
\end{pmatrix}.
\]
Then,
\[
\mathbb{E}[ZZ^T] =
\begin{pmatrix}
2\theta_1^{*2} + \theta_2^{*2} + \cdots + \theta_p^{*2} & \theta_1^*\theta_2^* & \cdots & \theta_1^*\theta_p^*\\
\theta_1^*\theta_2^* & \theta_1^{*2} + 2\theta_2^{*2} + \cdots + \theta_p^{*2} & \cdots & \theta_2^*\theta_p^*\\
\vdots & \vdots & \ddots & \vdots\\
\theta_1^*\theta_p^* & \theta_2^*\theta_p^* & \cdots & \theta_1^{*2} + \theta_2^{*2} + \cdots + 2\theta_p^{*2}
\end{pmatrix}.
\]
Hence the covariance matrix can be written as
\[
\mathrm{Cov}(xy) = I_p\big(1 + \|\theta^*\|_2^2\big) + \theta^*\theta^{*T}.
\]
Therefore \|\mathrm{Cov}(xy)\|_2 = 1 + 2\|\theta^*\|_2^2.
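The identities above are easy to sanity-check numerically. The following sketch (with arbitrary choices of p, n, and θ*) estimates E[xy] and Cov(xy) by Monte Carlo and compares the top eigenvalue against 1 + 2‖θ*‖₂²; it is a verification aid, not part of the estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 200000
theta_star = rng.normal(size=p)

x = rng.normal(size=(n, p))
w = rng.normal(size=n)
y = x @ theta_star + w
xy = x * y[:, None]                       # the random vector x*y, shape (n, p)

print(np.abs(xy.mean(axis=0) - theta_star).max())                  # approx 0
cov = np.cov(xy, rowvar=False)
print(np.linalg.eigvalsh(cov).max(), 1 + 2 * theta_star @ theta_star)  # approx equal
```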
Bounded Fourth Moment. We start from the LHS:
\begin{align}
\mathbb{E}\Big[\big((xy - \mathbb{E}[xy])^T v\big)^4\Big] &= \mathbb{E}\Big[\big(((xx^T - I)\theta^* + wx)^T v\big)^4\Big] \tag{47}\\
&= \mathbb{E}\Big[\big((\theta^{*T}x)(x^Tv) - \theta^{*T}v + w\,v^Tx\big)^4\Big] \tag{48}\\
&\le 8\Big(8\big(\underbrace{\mathbb{E}\big[\big((\theta^{*T}x)(x^Tv)\big)^4\big]}_{A} + \underbrace{\mathbb{E}\big[\big(\theta^{*T}v\big)^4\big]}_{B}\big) \tag{49}\\
&\qquad + \underbrace{\mathbb{E}\big[\big(w(x^Tv)\big)^4\big]}_{C}\Big). \tag{50}
\end{align}
The last line follows from two applications of the following inequality:

C_r inequality. If X and Y are random variables such that \mathbb{E}|X|^r < \infty and \mathbb{E}|Y|^r < \infty, where r \ge 1, then
\[
\mathbb{E}|X + Y|^r \le 2^{r-1}\big(\mathbb{E}|X|^r + \mathbb{E}|Y|^r\big).
\]
Now we control each term:
• Control of A. Using Cauchy-Schwarz and normality of 1D projections of a normal distribution,
\begin{align}
A &\le \sqrt{\mathbb{E}[|\theta^{*T}x|^8]}\,\sqrt{\mathbb{E}[|x^Tv|^8]} \tag{51}\\
&\lesssim \|\theta^*\|_2^4. \tag{52}
\end{align}
• Control of B: B \le \|\theta^*\|_2^4.
• Control of C: C = O(1), using independence of w and normality of 1D projections of a normal distribution.

Therefore \mathbb{E}\big[\big((xy - \mathbb{E}[xy])^Tv\big)^4\big] \lesssim c + \|\theta^*\|_2^4.
For the RHS:
\[
\big(\mathrm{Var}((xy)^Tv)\big)^2 = \big(v^T\mathrm{Cov}(xy)\,v\big)^2 \le \|\mathrm{Cov}(xy)\|_2^2.
\]
We saw that \|\mathrm{Cov}(xy)\|_2 \lesssim c + \|\theta^*\|_2^2, so both the LHS and RHS scale with \|\theta^*\|_2^4. Hence, xy has bounded fourth moments.

Having established that xy has bounded fourth moments, we can use [29] as a mean estimation oracle. Using Theorem 1.3 of [29], we know that the oracle of [29] outputs an estimate \hat{\theta} of \mathbb{E}[xy] such that, with probability at least 1 − 1/p^{C_1},
\[
\|\hat{\theta} - \theta^*\|_2 \le C_2\sqrt{\|\mathrm{Cov}(xy)\|_2\log p}\,\Big(\epsilon^{1/2} + \gamma(n, p, \delta, \epsilon)\Big).
\]
Using Lemma 5 to substitute \|\mathrm{Cov}(xy)\|_2 \le 1 + 2\|\theta^*\|_2^2, we recover the statement of Corollary 3.
D    Proof of Theorem 6
To prove our result on Robust Exponential Family, we first study the distribution of gradients
of the corresponding risk function.
Lemma 6. Consider the model in Equation (15). Then there exists a universal constant C_1 such that
\begin{align}
\mathbb{E}[\nabla L(\theta)] &= \nabla A(\theta) - \nabla A(\theta^*) \tag{53}\\
\|\mathrm{Cov}[\nabla L(\theta)]\|_2 &= \|\nabla^2 A(\theta^*)\|_2 \tag{54}\\
\mathbb{E}\Big[\big((\nabla L(\theta) - \mathbb{E}[\nabla L(\theta)])^Tv\big)^4\Big] &\le C_1\big(\mathrm{Var}[\nabla L(\theta)^Tv]\big)^2. \tag{55}
\end{align}
Proof. By Fisher consistency of the negative log-likelihood, we know that
\begin{align}
\mathbb{E}_{\theta^*}[\nabla L(\theta^*)] &= 0 \tag{56}\\
\implies \nabla A(\theta^*) - \mathbb{E}_{\theta^*}[\phi(z)] &= 0 \tag{57}\\
\implies \nabla A(\theta^*) &= \mathbb{E}_{\theta^*}[\phi(z)]. \tag{58}
\end{align}
For the mean,
\begin{align}
\nabla L(\theta) &= \nabla A(\theta) - \phi(z) \tag{59}\\
\mathbb{E}[\nabla L(\theta)] &= \nabla A(\theta) - \mathbb{E}_{\theta^*}[\phi(z)] \tag{60}\\
\mathbb{E}[\nabla L(\theta)] &= \nabla A(\theta) - \nabla A(\theta^*). \tag{61}
\end{align}
Now, for the covariance:
\begin{align*}
\mathrm{Cov}_{\theta^*}[\nabla L(\theta)] &= \mathbb{E}_{\theta^*}\Big[\big(\nabla L(\theta) - \mathbb{E}_{\theta^*}[\nabla L(\theta)]\big)\big(\nabla L(\theta) - \mathbb{E}_{\theta^*}[\nabla L(\theta)]\big)^T\Big]\\
&= \mathbb{E}_{\theta^*}\Big[\big(\nabla A(\theta^*) - \phi(z)\big)\big(\nabla A(\theta^*) - \phi(z)\big)^T\Big]\\
&= \mathrm{Cov}_{\theta^*}\big(\nabla L(\theta^*)\big) = \nabla^2 A(\theta^*).
\end{align*}
The bounded fourth moments follow from our assumption that the sufficient statistics have bounded fourth moments.
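For intuition, the sketch below instantiates the identities of Lemma 6 for a one-parameter Poisson exponential family, where A(θ) = e^θ and φ(z) = z, so ∇L(θ) = e^θ − z. The specific family and parameter values are an illustrative assumption, not part of the general result.

```python
import numpy as np

# Canonical one-parameter Poisson exponential family:
#   p_theta(z) ∝ exp(theta*z - A(theta)),  A(theta) = exp(theta),  phi(z) = z.
rng = np.random.default_rng(0)
theta_star, theta = 0.3, 1.0
z = rng.poisson(lam=np.exp(theta_star), size=200000)

grads = np.exp(theta) - z                 # gradient of L: grad A(theta) - phi(z)
print(grads.mean(), np.exp(theta) - np.exp(theta_star))   # E[grad L] = grad A(theta) - grad A(theta*)
print(grads.var(), np.exp(theta_star))                    # Var[grad L] = A''(theta*) = e^{theta*}
```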
Having studied the distribution of the gradients, we use Lemma 1 to characterize the stability of the Huber gradient estimator. Using Lemma 1, we know that at any point θ, the Huber gradient estimator g(θ, δ/T) satisfies, with probability 1 − δ/T,
\[
\|g(\theta, \delta/T) - \nabla R(\theta)\|_2 \le C_2\Big(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\Big)\|\mathrm{Cov}(\nabla L(\theta))\|_2^{1/2}\sqrt{\log p}.
\]
Substituting the upper bound on \|\mathrm{Cov}(\nabla L(\theta))\|_2 from Lemma 6, we get that there are universal constants C_1, C_2 such that
\[
\|g(\theta) - \nabla R(\theta)\|_2 \le \underbrace{C_1\Big(\epsilon^{1/2} + \gamma(\tilde{n}, p, \tilde{\delta})\Big)\sqrt{\log p}\,\sqrt{\tau_u}}_{\beta(\tilde{n},\tilde{\delta})}.
\]
In this case we have that \alpha(\tilde{n}, \tilde{\delta}) = 0 < \tau_\ell by assumption. Therefore we just need \epsilon < C_1 for some universal constant C_1. Plugging the corresponding \epsilon and \beta(\tilde{n}, \tilde{\delta}) into Theorem 1, we get back the result of Theorem 6.
E    Proof of Corollary 7
Using the contraction property of projections, we know that
\[
\|P_\Theta\big[(\nabla A)^{-1}\hat{\mu}\big] - \theta^*\|_2 = \|P_\Theta\big[(\nabla A)^{-1}\hat{\mu}\big] - P_\Theta[\theta^*]\|_2 \le \|(\nabla A)^{-1}\hat{\mu} - \theta^*\|_2.
\]
By Fisher consistency of the negative log-likelihood, we know that \nabla A(\theta^*) = \mathbb{E}_{\theta^*}[\phi(z)]. The true parameter \theta^* can be obtained by inverting the \nabla A operator whenever possible:
\begin{align}
\|(\nabla A)^{-1}\hat{\mu} - \theta^*\|_2 &= \|(\nabla A)^{-1}\hat{\mu} - (\nabla A)^{-1}\mathbb{E}_{\theta^*}[\phi(z)]\|_2 \tag{62}\\
&= \|\nabla A^*(\hat{\mu}) - \nabla A^*\big(\mathbb{E}_{\theta^*}[\phi(z)]\big)\|_2, \tag{63}
\end{align}
where A^* is the convex conjugate of A. We can use the following result to control the Lipschitz smoothness of \nabla A^*.
Theorem 12 (Strong/Smooth Duality). Assume f(·) is closed and convex. Then f(·) is smooth with parameter M if and only if its convex conjugate f^*(·) is strongly convex with parameter m = 1/M.

A proof of the above theorem can be found in [28]. Hence, we have that:
\[
\|P_\Theta\big[(\nabla A)^{-1}\hat{\mu}\big] - \theta^*\|_2 \le \frac{1}{\tau_\ell}\,\|\hat{\mu} - \mathbb{E}_{\theta^*}[\phi(z)]\|_2. \tag{64}
\]
By assumption, we have that the fourth moments of the sufficient statistics are bounded. We also know that \mathrm{Cov}(\phi(z)) = \nabla^2 A(\theta^*), which implies that we can use [29] as our oracle. Using Lemma 1, we get that there exist universal constants C_1, C_2 such that with probability at least 1 − 1/p^{C_1},
\[
\|\hat{\mu} - \mathbb{E}_{\theta^*}[\phi(z)]\|_2 \le C_2\sqrt{\tau_u\log p}\,\Big(\epsilon^{1/2} + \gamma(n, p, \delta, \epsilon)\Big).
\]
Combining the above with Equation (64) recovers the result of Corollary 7.
F    Proof of Theorem 8
Before we present the proof of Theorem 8, we first study the distribution of gradients of the
loss function. This will help us bound the error in the gradient estimator.
Lemma 7. Consider the model in Equation (11). Suppose the covariates x ∈ R^p have bounded 4th moments and the noise w has bounded 2nd moments. Then there exist universal constants C_1, C_2 such that
\begin{align*}
\mathbb{E}[\nabla L(\theta)] &= \Sigma\Delta\\
\|\mathrm{Cov}(\nabla L(\theta))\|_2 &\le \sigma^2\|\Sigma\|_2 + C_1\|\Delta\|_2^2\|\Sigma\|_2^2,
\end{align*}
where \Delta = \theta - \theta^* and \mathbb{E}[xx^T] = \Sigma.
Proof. We start by deriving the results for E[∇L(θ)]:
\begin{align}
L(\theta) &= \frac{1}{2}(y - x^T\theta)^2 = \frac{1}{2}(x^T\Delta - w)^2 \tag{65}\\
\nabla L(\theta) &= xx^T\Delta - x\,w \tag{66}\\
\mathbb{E}[\nabla L(\theta)] &= \Sigma\Delta. \tag{67}
\end{align}
Next, we bound the operator norm of the covariance of the gradients ∇L(θ) at any point θ.
Covariance.
\begin{align}
\mathrm{Cov}(\nabla L(\theta)) &= \mathbb{E}\big[\big((xx^T - \Sigma)\Delta - xw\big)\big((xx^T - \Sigma)\Delta - xw\big)^T\big] \tag{68}\\
\mathrm{Cov}(\nabla L(\theta)) &= \mathbb{E}\big[(xx^T - \Sigma)\Delta\Delta^T(xx^T - \Sigma)\big] + \sigma^2\Sigma. \tag{69}
\end{align}
Now, we want to bound \|\mathrm{Cov}(\nabla L(\theta))\|_2 = \lambda_{\max}(\mathrm{Cov}(\nabla L(\theta))):
\begin{align}
\lambda_{\max}(\mathrm{Cov}(\nabla L(\theta))) &\le \sigma^2\lambda_{\max}(\Sigma) + \lambda_{\max}\Big(\mathbb{E}\big[(xx^T - \Sigma)\Delta\Delta^T(xx^T - \Sigma)\big]\Big) \tag{70}\\
&\le \sigma^2\lambda_{\max}(\Sigma) + \sup_{y\in\mathcal{S}^{p-1}} y^T\,\mathbb{E}\big[(xx^T - \Sigma)\Delta\Delta^T(xx^T - \Sigma)\big]\,y \tag{71}\\
&\le \sigma^2\lambda_{\max}(\Sigma) + \|\Delta\|_2^2\sup_{y,z\in\mathcal{S}^{p-1}} \mathbb{E}\big[(y^T(xx^T - \Sigma)z)^2\big] \tag{72--73}\\
&\le \sigma^2\lambda_{\max}(\Sigma) + \|\Delta\|_2^2\sup_{y,z\in\mathcal{S}^{p-1}} \mathbb{E}\big[2(y^Tx)^2(x^Tz)^2 + 2(y^T\Sigma z)^2\big] \tag{74}\\
&\le \sigma^2\lambda_{\max}(\Sigma) + 2\|\Delta\|_2^2\Big(\sup_{y,z\in\mathcal{S}^{p-1}} \mathbb{E}\big[(y^Tx)^2(x^Tz)^2\big] + \|\Sigma\|_2^2\Big) \tag{75}\\
&\le \sigma^2\lambda_{\max}(\Sigma) + 2\|\Delta\|_2^2\Big(\sup_{y,z\in\mathcal{S}^{p-1}} \sqrt{\mathbb{E}[(y^Tx)^4]}\,\sqrt{\mathbb{E}[(z^Tx)^4]} + \|\Sigma\|_2^2\Big) \tag{76--77}\\
&\le \sigma^2\|\Sigma\|_2 + 2\|\Delta\|_2^2\big(\|\Sigma\|_2^2 + C_4\|\Sigma\|_2^2\big), \tag{78}
\end{align}
where the second-to-last step follows from Cauchy-Schwarz and the last step follows from our assumption of bounded 4th moments (see Equation (10)).
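The two identities proved in Lemma 7 can be checked by simulation. The sketch below draws least-squares gradients at an arbitrary query point, compares their empirical mean to Σ∆, and compares the empirical covariance operator norm to a (loose) instance of the bound; the particular Σ, σ, and the Gaussian design (for which C₄ = 3) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 200000, 4, 0.5
Sigma = np.diag(np.linspace(1.0, 2.0, p))        # so ||Sigma||_2 = 2.0
theta_star = rng.normal(size=p)
theta = theta_star + rng.normal(size=p)          # arbitrary query point
delta = theta - theta_star

x = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sigma).T
w = sigma * rng.normal(size=n)
y = x @ theta_star + w
grads = (x @ theta - y)[:, None] * x             # per-sample xx^T delta - x w

print(np.abs(grads.mean(axis=0) - Sigma @ delta).max())       # approx 0
cov_op = np.linalg.eigvalsh(np.cov(grads, rowvar=False)).max()
bound = sigma**2 * 2.0 + 2 * (delta @ delta) * (2.0**2 + 3 * 2.0**2)  # C4 = 3 for Gaussians
print(cov_op, bound)   # empirical operator norm vs. the (loose) upper bound
```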
We now proceed to the proof of Theorem 8. From Lemma 2, we know that at any point θ, the gradient estimator described in Algorithm 3, g(θ; D_{\tilde{n}}, \tilde{\delta}), satisfies the following with probability at least 1 − δ:
\[
\|g(\theta; D_{\tilde{n}}, \tilde{\delta}) - \nabla R(\theta)\|_2 \le C\sqrt{\frac{\mathrm{tr}(\mathrm{Cov}(\nabla L(\theta)))\log(1/\tilde{\delta})}{\tilde{n}}}.
\]
We substitute the upper bound for \|\mathrm{Cov}(\nabla L(\theta))\|_2 from Lemma 7 into the above equation:
\begin{align*}
\|g(\theta; D_{\tilde{n}}, \tilde{\delta}) - \nabla R(\theta)\|_2 &\le C\sqrt{\frac{\mathrm{tr}(\mathrm{Cov}(\nabla L(\theta)))\log(1/\tilde{\delta})}{\tilde{n}}}\\
&\le C\sqrt{\frac{p\big(\sigma^2\|\Sigma\|_2 + 2\|\Delta\|_2^2(\|\Sigma\|_2^2 + C_4\|\Sigma\|_2^2)\big)\log(1/\tilde{\delta})}{\tilde{n}}}\\
&\le \underbrace{C_1\sqrt{\frac{\|\Sigma\|_2^2\,p\log(1/\tilde{\delta})}{\tilde{n}}}}_{\alpha(\tilde{n},\tilde{\delta})}\,\|\theta - \theta^*\|_2 + \underbrace{C_2\,\sigma\sqrt{\frac{\|\Sigma\|_2\,p\log(1/\tilde{\delta})}{\tilde{n}}}}_{\beta(\tilde{n},\tilde{\delta})}.
\end{align*}
To complete the proof of this theorem, we use the results from Theorem 1. Note that the gradient estimator satisfies the stability condition if \alpha(\tilde{n}, \tilde{\delta}) < \tau_\ell. This holds when
\[
\tilde{n} > \frac{C_1^2\tau_u^2}{\tau_\ell^2}\,p\log(1/\tilde{\delta}).
\]
Now suppose \tilde{n} satisfies the above condition; then plugging \beta(\tilde{n}, \tilde{\delta}) into Theorem 1 gives us the required result.
G    Proof of Theorem 9
To prove the theorem we use the result from Lemma 4, where we derived the following bound on the covariance of ∇L(θ):
\[
\|\mathrm{Cov}(\nabla L(\theta))\|_2 \le C_1\|\Delta\|_2^2\|\Sigma\|_2\Big(\sqrt{C_4}\sqrt{L_{\Phi,4}} + L_{\Phi,2}\Big) + C_2\|\Sigma\|_2\Big(B_{\Phi,2} + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big).
\]
From Lemma 2, we know that at any point θ, the gradient estimator described in Algorithm 3, g(θ; D_{\tilde{n}}, \tilde{\delta}), satisfies the following with probability at least 1 − δ:
\[
\|g(\theta; D_{\tilde{n}}, \tilde{\delta}) - \nabla R(\theta)\|_2 \le C\sqrt{\frac{\mathrm{tr}(\mathrm{Cov}(\nabla L(\theta)))\log(1/\tilde{\delta})}{\tilde{n}}}.
\]
Substituting the upper bound for \|\mathrm{Cov}(\nabla L(\theta))\|_2 into the above equation, we get
\begin{align*}
\|g(\theta; D_{\tilde{n}}, \tilde{\delta}) - \nabla R(\theta)\|_2 &\le C\sqrt{\frac{\mathrm{tr}(\mathrm{Cov}(\nabla L(\theta)))\log(1/\tilde{\delta})}{\tilde{n}}}\\
&\le \underbrace{C_1\sqrt{\frac{\|\Sigma\|_2\big(\sqrt{C_4}\sqrt{L_{\Phi,4}} + L_{\Phi,2}\big)\,p\log(1/\tilde{\delta})}{\tilde{n}}}}_{\alpha(\tilde{n},\tilde{\delta})}\,\|\theta - \theta^*\|_2\\
&\quad + \underbrace{C_2\sqrt{\frac{\|\Sigma\|_2\Big(B_{\Phi,2} + \sqrt{B_{\Phi,4}} + c(\sigma)\sqrt{3M_{\Phi,2,2}} + c(\sigma)^3\sqrt{M_{\Phi,4,1}}\Big)\,p\log(1/\tilde{\delta})}{\tilde{n}}}}_{\beta(\tilde{n},\tilde{\delta})}.
\end{align*}
We now use the results from Theorem 1. The gradient estimator satisfies the stability condition if \alpha(\tilde{n}, \tilde{\delta}) < \tau_\ell. This holds when
\[
\tilde{n} > \frac{C_1^2\|\Sigma\|_2\big(\sqrt{C_4}\sqrt{L_{\Phi,4}} + L_{\Phi,2}\big)}{\tau_\ell^2}\,p\log(1/\tilde{\delta}).
\]
Now suppose \tilde{n} satisfies the above condition; then plugging \beta(\tilde{n}, \tilde{\delta}) into Theorem 1 gives us the required result.
H    Proof of Theorem 11
The proof proceeds along similar lines as the proof of Theorem 9. To prove the theorem we utilize the result of Lemma 6, where we showed that \|\mathrm{Cov}[\nabla L(\theta)]\|_2 = \|\nabla^2 A(\theta^*)\|_2. Combining this result with Lemma 2, we get that with probability at least 1 − δ,
\begin{align*}
\|g(\theta; D_{\tilde{n}}, \tilde{\delta}) - \nabla R(\theta)\|_2 &\le C\sqrt{\frac{\mathrm{tr}(\mathrm{Cov}(\nabla L(\theta)))\log(1/\tilde{\delta})}{\tilde{n}}}\\
&\le \underbrace{C\sqrt{\frac{\|\nabla^2 A(\theta^*)\|_2\,p\log(1/\tilde{\delta})}{\tilde{n}}}}_{\beta(\tilde{n},\tilde{\delta})}.
\end{align*}
Since \alpha(\tilde{n}, \tilde{\delta}) = 0, the stability condition is always satisfied as long as \tau_\ell > 0. Substituting \beta(\tilde{n}, \tilde{\delta}) into Theorem 1 gives us the required result.
I    Upper bound on Contamination Level
We provide a complementary result, which gives an upper bound for the contamination level ǫ
based on the initialization point θ 0 , above which, Algorithm 1 would not work. The key idea
is that the error incurred by any mean estimation oracle is lower bounded by the variance of
the distribution, and that if the zero vector lies within that error ball, then any mean oracle
can be forced to output 0 as the mean. For Algorithm 1, this implies that, in estimating the
mean of the gradient, if the error is high, then one can force the mean to be 0 which forces
the algorithm to converge. For the remainder of the section we consider the case of linear
regression with x ∼ N (0, Ip ) in the asymptotic regime of n → ∞.
Lemma 8. Consider the model in Equation (11) with x ∼ N(0, I_p) and w ∼ N(0, 1). Then there exists a universal constant C_1 such that if
\[
\epsilon > C_1\,\frac{\|\theta^0 - \theta^*\|_2}{\sqrt{1 + 2\|\theta^0 - \theta^*\|_2^2}},
\]
then for every gradient oracle there exists a contamination distribution Q such that Algorithm 1 will converge to θ^0, even when the number of samples n → ∞.
Proof. Using Lemma 5, we know that for any point θ,
\begin{align*}
\nabla L(\theta) &= xx^T\Delta - x\,w\\
\mathbb{E}_{\theta^*}[\nabla L(\theta)] &= (\theta - \theta^*) = \Delta\\
\|\mathrm{Cov}(\nabla L(\theta))\|_2 &= 1 + 2\|\Delta\|_2^2,
\end{align*}
where \Delta = \theta - \theta^*.
Let P_{\nabla L(\theta)} represent the distribution of ∇L(θ). Similarly, let P_{\epsilon,\nabla L(\theta),Q} represent the corresponding ε-contaminated distribution. Then, using Theorem 2.1 of [10], we know that the minimax rate for estimating the mean of the distribution of gradients is given by:
\[
\inf_{\hat{\mu}}\ \sup_{\theta\in\mathbb{R}^p,\,Q}\ \mathbb{P}_{\epsilon,\nabla L(\theta),Q}\Big(\|\hat{\mu} - \mathbb{E}_{\theta^*}[\nabla L(\theta)]\|_2^2 \ge C\epsilon^2(1 + 2\|\Delta\|_2^2)\Big) \ge c.
\]
The above statement says that at any point θ, any mean oracle Ψ will always incur an error of \Omega\big(\sqrt{C\epsilon^2(1 + 2\|\Delta\|_2^2)}\big) in estimating the gradient \mathbb{E}_{\theta^*}[\nabla L(\theta)]:
\[
\|\Psi(\theta) - \mathbb{E}_{\theta^*}[\nabla L(\theta)]\|_2 \ge C\epsilon\sqrt{1 + 2\|\Delta\|_2^2} \quad \forall\,\Psi.
\]
For any oracle Ψ, there exists some adversarial contamination Q such that whenever \|\mathbb{E}_{\theta^*}[\nabla L(\theta)]\|_2 < C\epsilon\sqrt{1 + 2\|\Delta\|_2^2}, then \|\Psi(\theta)\|_2 = 0.
Suppose that the contamination level ε is such that
\[
\epsilon > \frac{1}{C}\,\frac{\|\mathbb{E}_{\theta^*}[\nabla L(\theta^0)]\|_2}{\sqrt{1 + 2\|\theta^0 - \theta^*\|_2^2}};
\]
then for every oracle there exists a corresponding Q such that Algorithm 1 will remain stuck at θ^0. Plugging in \mathbb{E}_{\theta^*}[\nabla L(\theta^0)] = \theta^0 - \theta^*, we recover the statement of the lemma.
Chen et al. [10] provide a general minimax lower bound of Ω(ε) for ε-contamination models in this setting. In contrast, using Algorithm 1 with [29] as the oracle, we can only get O(\sqrt{\epsilon\log p})-close to the true parameter even when the contamination is small, which implies that our procedure is not minimax optimal. Our approach is nonetheless the only practical algorithm for robust estimation of general statistical models.
J    Details and Analysis of Algorithm 2
In this section we present a refined, non-asymptotic analysis of the algorithm from [29]. We begin by introducing some preliminaries. We subsequently analyze the algorithm in 1-dimension
and finally turn our attention to the general algorithm.
J.1    Preliminaries
Unless otherwise stated, we assume throughout that the random variable X has bounded fourth moments, i.e. for every unit vector v,
\[
\mathbb{E}\big[\langle X - \mu, v\rangle^4\big] \le C_4\big(\mathbb{E}\big[\langle X - \mu, v\rangle^2\big]\big)^2.
\]
We summarize some useful results from [29], which bound the deviation of the conditional mean/covariance from the true mean/covariance.

Lemma 9 (Lemma 3.11 of [29]). Let X be a univariate random variable with bounded fourth moments, and let A be any event with probability P(A) = 1 − γ ≥ 1/2. Then,
\[
|\mathbb{E}(X|A) - \mathbb{E}(X)| \le \sigma\sqrt[4]{8C_4\gamma^3}.
\]
Lemma 10 (Lemma 3.12 of [29]). Let X be a univariate random variable with \mathbb{E}[X] = \mu, \mathbb{E}\big[(X - \mu)^2\big] = \sigma^2, and \mathbb{E}\big((X - \mu)^4\big) \le C_4\sigma^4. Let A be any event with probability P(A) = 1 − γ ≥ 1/2. Then,
\[
\big(1 - \sqrt{C_4\gamma}\big)\sigma^2 \le \mathbb{E}\big((X - \mu)^2\,|\,A\big) \le (1 + 2\gamma)\sigma^2.
\]
Corollary 13 (Corollary 3.13 of [29]). Let A be any event with probability P(A) = 1 − γ ≥ 1/2, and let X be a random variable with bounded fourth moments. We denote \Sigma_{|A} = \mathbb{E}(XX^T|A) - (\mathbb{E}(X|A))(\mathbb{E}(X|A))^T to be the conditional covariance matrix. We have that
\[
\Big(1 - \sqrt{C_4\gamma} - \sqrt{8C_4\gamma^3}\Big)\Sigma \preceq \Sigma_{|A} \preceq (1 + 2\gamma)\Sigma.
\]
For random variables with bounded fourth moments we can use Chebyshev's inequality to obtain tail bounds.

Lemma 11 (Lemma 3.14 of [29]). Let X have bounded fourth moments. Then for every unit vector v we have that
\[
P\Big(\big|\langle X, v\rangle - \mathbb{E}[\langle X, v\rangle]\big| \ge t\sqrt{\mathbb{E}\big[\langle X - \mu, v\rangle^2\big]}\Big) \le \frac{C_4}{t^4}.
\]
Our proofs also use the matrix Bernstein inequality for rectangular matrices. As a preliminary, we consider a finite sequence {Z_k} of independent random matrices of size d_1 × d_2. We assume that each random matrix satisfies \mathbb{E}(Z_k) = 0 and \|Z_k\|_{op} \le R almost surely. We define
\[
\sigma^2 := \max\Big\{\Big\|\sum_k \mathbb{E}(Z_kZ_k^T)\Big\|_{op},\ \Big\|\sum_k \mathbb{E}(Z_k^TZ_k)\Big\|_{op}\Big\}.
\]
With these preliminaries in place we use the following result from [41].

Lemma 12. For all t ≥ 0,
\[
\mathbb{P}\left(\Big\|\sum_k Z_k\Big\|_{op} \ge t\right) \le (d_1 + d_2)\exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).
\]
Equivalently, with probability at least 1 − δ,
\[
\Big\|\sum_k Z_k\Big\|_{op} \le \sqrt{2\sigma^2\log\frac{d_1 + d_2}{\delta}} + \frac{2R}{3}\log\frac{d_1 + d_2}{\delta}.
\]
We let \mathcal{I} denote the set of all intervals in \mathbb{R}. The following is a standard uniform convergence result.

Lemma 13. Suppose X_1, \ldots, X_n \sim P. Then with probability at least 1 − δ,
\[
\sup_{I\in\mathcal{I}}\left|P(I) - \frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(X_i\in I)\right| \le 2\sqrt{\frac{4\log(en) + 2\log(2/\delta)}{n}}.
\]
Algorithm 4 Huber Outlier Gradients Truncation

function HuberOutlierGradientTruncation(Sample Gradients S, Corruption Level ǫ, Dimension p, δ)
    if p = 1 then
        Let [a, b] be the smallest interval containing a (1 − ǫ − C_5·sqrt((1/|S|)·log(|S|/δ)))(1 − ǫ) fraction of the points.
        S̃ ← S ∩ [a, b]
        return S̃
    else
        Let [S]_i be the samples restricted to the i-th coordinate, [S]_i = {⟨x, e_i⟩ : x ∈ S}.
        for i = 1 to p do
            a[i] = HuberGradientEstimator([S]_i, ǫ, 1, δ/p)
        end for
        Let B(r, a) be the ball of smallest radius centered at a containing a (1 − ǫ − C·sqrt((p/|S|)·log(|S|/(pδ))))(1 − ǫ) fraction of the points in S.
        S̃ ← S ∩ B(r, a)
        return S̃
    end if
end function
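A minimal sketch of the 1-D truncation step (the p = 1 branch) follows: sort the points, slide a window containing the prescribed fraction, and keep the shortest such interval. The keep-fraction is passed in explicitly, since the constants C₅ etc. above are not specified here; the toy data at the end are an arbitrary assumption.

```python
import numpy as np

def shortest_interval_truncation(samples, keep_frac):
    """Return the samples inside the shortest interval containing a
    keep_frac fraction of the points (the p = 1 truncation step)."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = max(1, int(np.ceil(keep_frac * n)))          # number of points to keep
    widths = x[k - 1:] - x[: n - k + 1]              # widths of all k-point windows
    i = int(np.argmin(widths))                       # index of the shortest window
    a, b = x[i], x[i + k - 1]
    return samples[(samples >= a) & (samples <= b)]

# Toy usage: 10% of points are gross outliers; the truncated mean is stable.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=900)
outliers = rng.normal(50.0, 1.0, size=100)
samples = np.concatenate([clean, outliers])
kept = shortest_interval_truncation(samples, keep_frac=0.85)
print(samples.mean(), kept.mean())                   # ~5.0 vs ~0.0
```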
We now turn our attention to an analysis of Algorithm 2 for the 1-dimensional case.
J.2    The case when p = 1

Firstly, we analyze Algorithm 2 when p = 1.
Lemma 14. Suppose that P_{θ^*} is a distribution on \mathbb{R}^1 with mean µ, variance σ^2, and bounded fourth moments. There exist positive universal constants C_1, C_2, C_8 > 0 such that, given n samples from the distribution in (4), the algorithm returns, with probability at least 1 − δ, an estimate \hat{\mu} such that
\[
|\hat{\mu} - \mu| \le C_1C_4^{1/4}\,\sigma\left(\epsilon + \sqrt{\frac{\log(3/\delta)}{2n}} + t\right)^{3/4} + C_2\,\sigma\left(\epsilon + \sqrt{\frac{\log(3/\delta)}{2n}} + t\right)^{1/2}\sqrt{\frac{\log(3/\delta)}{n}},
\]
where t = C_8\sqrt{\frac{1}{n}\log\frac{n}{\delta}}, which can be further simplified to
\[
|\hat{\mu} - \mu| \le C_1C_4^{1/4}\,\sigma\left(\epsilon + C_8\sqrt{\frac{1}{n}\log\frac{n}{\delta}}\right)^{3/4} + C_2\,\sigma\left(\epsilon + C_8\sqrt{\frac{1}{n}\log\frac{n}{\delta}}\right)^{1/2}\sqrt{\frac{\log(1/\delta)}{n}}.
\]
Proof. By an application of Hoeffding's inequality, we obtain that with probability at least 1 − δ/3 the fraction of corrupted samples (i.e. samples from the distribution Q) is less than \epsilon + \sqrt{\log(3/\delta)/(2n)}. We condition on this event through the remainder of this proof. We let η denote the fraction of corrupted samples. Further, we let S_P be the samples from the true distribution, and let n_P be the cardinality of this set, i.e. n_P := |S_P|.

Let I_{1−η} be the interval around µ containing 1 − η mass of P_{θ^*}. Then, using Lemma 11, we have that
\[
\mathrm{length}(I_{1-\eta}) \le \frac{C_4^{1/4}\sigma}{\eta^{1/4}}.
\]
Using Lemma 13, we obtain that with probability at least 1 − δ/3 the fraction of samples from the distribution P that fall in the interval I_{1−η} is at least 1 − η − t, where t is upper bounded as
\[
t \le 2\sqrt{\frac{4\log(en) + 2\log(6/\delta)}{n}}.
\]
Now we let S̃ be the set of points in the smallest interval containing a (1 − η − t)(1 − η) fraction of all the points.

• Using VC theory, we know that for every interval I ⊂ R, there exists some universal constant C_3 such that
\[
\mathbb{P}\Big(\big|P(x\in I\,|\,x\sim D) - P(x\in I\,|\,x\in_u S_D)\big| > t/2\Big) \le n_D^2\exp(-n_D t^2/8). \tag{79}
\]
This can be re-written as: with probability at least 1 − δ/3, there exists a universal constant C_0 such that
\[
\sup_{I}\big|P(x\in I\,|\,x\sim D) - P(x\in I\,|\,x\in_u S_D)\big| \le C_0\sqrt{\frac{1}{n_D}\log\frac{n_D}{\delta}} \le \underbrace{C_5\sqrt{\frac{1}{n}\log\frac{n}{\delta}}}_{t}.
\]
• Using Equation (79), we know that a (1 − η − t) fraction of S_D lies in I_{1−η}. Let S̃ be the set of points in the smallest interval containing a (1 − η − t)(1 − η) fraction of the points.
• We know that the length of the minimum interval containing a (1 − η − t)(1 − η) fraction of the points of S is less than the length of the smallest interval containing a (1 − η − t) fraction of the points of S_D, which in turn is less than the length of I_{1−η}.
• Now, I_{1−η} and the minimum interval containing a (1 − η − t) fraction of the points of S_D need to overlap. This is because n is large enough that t < 1/2 − η; hence, the extreme points of such an interval can be at most 2·length(I_{1−η}) away.
• Hence, the distance of all chosen noise points from µ will be within the order of length(I_{1−η}).
• Moreover, the interval of minimum length with a (1 − η − t)(1 − η) fraction of S will contain at least a 1 − 3η − t fraction of S_D.
• Hence, we can bound the error of mean(S̃) by controlling the sources of error.
– All chosen noise points are within length(I_{1−η}) of µ, and there is at most an η fraction of them; hence the maximum error they contribute is η·length(I_{1−η}).
– Next, the mean of the chosen good points will converge to the mean of the conditional distribution, i.e. points sampled from D but conditioned to lie in the minimum length interval. The variance of these random variables is upper bounded using Lemma 10.
– To control the distance between the mean E(X) and the conditional mean E(X|A), where A is the event that a sample x is in the chosen interval: we know that P(A) ≥ 1 − 3η − t, hence, using Lemma 3.11 of [29], we get that there exists a constant C_{13} such that
\[
|\mathbb{E}[X] - \mathbb{E}[X|A]| \le C_{13}C_4^{1/4}\sigma(\eta + t)^{3/4}.
\]
• Hence, with probability at least 1 − δ/3, the mean of S̃ will be within
\[
\eta\cdot\mathrm{length}(I_{1-\eta}) + C_{13}C_4^{1/4}\sigma(\eta + t)^{3/4} + C_6\,\sigma(1 + 2\eta)^{1/2}\sqrt{\frac{\log(3/\delta)}{n}}
\]
of µ.
• Taking a union bound over all conditioning statements, and upper bounding η with \epsilon + \sqrt{\log(3/\delta)/(2n)}, we recover the statement of the lemma.
J.3    The case when p > 1

To prove the case for p > 1, we use a series of lemmas. Lemma 15 proves that the outlier filtering constrains the points in a ball around the true mean. Lemma 17 controls the error in the mean and covariance of the true distribution after outlier filtering (D̃). Lemma 18 controls the error for the mean of S̃ when projected onto the bottom span of the covariance matrix Σ_{S̃}.
Lemma 15. Suppose that P_{θ^*} is a distribution on R^p with mean µ, covariance Σ, and bounded fourth moments. There exist positive universal constants C_1, C_2, C_8 > 0 such that, given n samples from the distribution in Equation (4), we can find a vector a ∈ R^p such that with probability at least 1 − δ,
\[
\|a - \mu\|_2 \le C_1C_4^{1/4}\sqrt{\mathrm{tr}(\Sigma)}\left(\epsilon + C_8\sqrt{\frac{1}{n}\log\frac{np}{\delta}}\right)^{3/4} + C_2\sqrt{\mathrm{tr}(\Sigma)}\left(\epsilon + C_8\sqrt{\frac{1}{n}\log\frac{np}{\delta}}\right)^{1/2}\sqrt{\frac{\log(p/\delta)}{n}}.
\]

Proof. Pick p orthogonal directions v_1, v_2, \ldots, v_p, use the one-dimensional method along each direction, and apply a union bound to recover the result.
Next, we prove the case when p > 1. Firstly, we prove that after the outlier step,
Lemma 16. After the outlier removal step, there exists universal constants C11 > 0 such that
with probability at least 1 − δ, every remaining point x satisfies,
kx − µk2 ≤ r1∗ + 2r2∗
42
√
q
1p
p
3
1
and r2∗ = C1 C44 pkΣk2 (η + t) 4 + C2 pkΣk2 (η + t) 2 log(1/δ)
and
n
q
q
log(1/δ)
is the fraction of samples corrupted.
t = C8 n1 log np
δ . Here η ≤ ǫ +
2n
1
where
r1∗
= C10
C44
pkΣk2
1
η4
Proof.
• Let S̃ be the set of points chosen after the outlier filtering. Let S̃_D be the set of good points chosen after the outlier filtering. Let S̃_N be the set of bad points chosen after the outlier filtering.
• Using VC theory, we know that for every closed ball B(µ, r) = {x : ‖x − µ‖_2 ≤ r}, there exists a constant C_9 such that with probability at least 1 − δ,
\[
\sup_{B}\big|P(x\in B\,|\,x\sim D) - P(x\in B\,|\,x\in_u S_D)\big| \le \underbrace{C_9\sqrt{\frac{p}{n}\log\frac{n}{p\delta}}}_{t_2}.
\]
• Let B^* = B(\mu, r_1^*) for r_1^* = C_{10}\frac{C_4^{1/4}}{\eta^{1/4}}\sqrt{p\|\Sigma\|_2}. Then, we claim that
\[
P(x\in B^*\,|\,x\sim D) \ge 1 - \eta.
\]
  – To see this, suppose we have some x ∼ D. Let z = x − µ, and let z_i = z^Tv_i for some orthogonal directions v_1, v_2, \ldots, v_p. Let Z^2 = \sum_i z_i^2 = \|z\|_2^2.
  – Then
\[
P\left(Z^2 \ge \frac{C_4^{1/2}p\|\Sigma\|_2}{\eta^{1/2}}\right) = P\left(Z^4 \ge \frac{C_4p^2\|\Sigma\|_2^2}{\eta}\right) \le \frac{\eta\,\mathbb{E}(Z^4)}{C_4p^2\|\Sigma\|_2^2}.
\]
  – Now, \mathbb{E}(Z^4) \le p^2\max_i\mathbb{E}(z_i^4) \le C_4p^2\|\Sigma\|_2^2. Plugging this into the above, we have that P(x\in B^*\,|\,x\sim D) \ge 1 - \eta.
• Hence, we have that P(x\in B^*\,|\,x\in_u S_D) \ge 1 - \eta - t_2.
• Using Lemma 15, we have that at least a (1 − η − t_2) fraction of the good points are within r_1^* + r_2^* of a. Hence, the ball of minimum radius containing a (1 − η − t_2)(1 − η) fraction of the points has radius at most r_1^* + r_2^*, which, when combined with the triangle inequality, recovers the statement of the lemma.
As before, let S̃ be the set of points after outlier filtering. Let \mu_{\tilde{S}} = \mathrm{mean}(\tilde{S}), \mu_{\tilde{S}_D} = \mathrm{mean}(\tilde{S}_D), and \mu_{\tilde{S}_N} = \mathrm{mean}(\tilde{S}_N).
Lemma 17. Let S̃_D be the set of clean points remaining after the outlier filtering. Then, with probability at least 1 − δ, we have that
\[
\|\mu_{\tilde{S}_D} - \mu\|_2 \le C_1C_4^{1/4}(\eta + t_2)^{3/4}\sqrt{\|\Sigma\|_2}\left(1 + \sqrt{\frac{\log(p/\delta)}{n}}\right) + \sqrt{\|\Sigma\|_2\big(1 + 2(\eta + t_2)\big)}\sqrt{\frac{\log(p/\delta)}{n}} + C_{15}\,\frac{(r_1^* + 2r_2^*)\log(p/\delta)}{n}
\]
and
\[
\|\Sigma_{\tilde{S}_D}\|_2 \le \beta(n, \delta)\,\|\Sigma\|_2,
\]
where
\[
\beta(n, \delta) = 1 + 2C(\eta + t_2) + \left(1 + \frac{\sqrt{C_4}\,p}{\sqrt{\eta}} + C_4p(\eta + t)^{3/2} + (\eta + t_2)^{3/2}\right)\left(\sqrt{\frac{\log(p/\delta)}{n}} + \frac{\log(p/\delta)}{n}\right).
\]
Proof. We first prove the bound on the mean shift:
\[
\|\mu_{\tilde{S}_D} - \mu\|_2 \le \underbrace{\|\mu_{\tilde{S}_D} - \mu_{\tilde{D}}\|_2}_{A} + \underbrace{\|\mu_{\tilde{D}} - \mu\|_2}_{B}.
\]
• Control of B. We use Lemma 9 on X = x^T\frac{\mu_{\tilde{D}} - \mu}{\|\mu_{\tilde{D}} - \mu\|_2} for x ∼ D, with A taken to be the event that x is not removed by the outlier filtering:
\[
\|\mu_{\tilde{D}} - \mu\|_2 \le C_1C_4^{1/4}(\eta + t_2)^{3/4}\sqrt{\|\Sigma\|_2}. \tag{80}
\]
• Control of A. Using Lemma 10, we have that \|\Sigma_{\tilde{D}}\|_2 \le (1 + 2(\eta + t_2))\|\Sigma\|_2. Now, using Bernstein's inequality (Lemma 12) with R = C(r_1^* + 2r_2^* + B), we get that with probability at least 1 − δ,
\[
\|\mu_{\tilde{S}_D} - \mu_{\tilde{D}}\|_2 \le C_{14}\sqrt{\|\Sigma\|_2\big(1 + 2(\eta + t_2)\big)}\sqrt{\frac{\log(p/\delta)}{n}} + C_{15}\,\frac{(r_1^* + 2r_2^* + B)\log(p/\delta)}{n}. \tag{81}
\]
Next, we prove the bound for the covariance matrix:
\[
\|\Sigma_{\tilde{S}_D}\|_2 \le \|\Sigma_{\tilde{S}_D} - \Sigma_{\tilde{D}}\|_2 + \underbrace{\|\Sigma_{\tilde{D}} - \Sigma\|_2}_{\le 2C(\eta + t_2)\|\Sigma\|_2\ \text{(by Corollary 13)}} + \|\Sigma\|_2. \tag{82}
\]
To control \|\Sigma_{\tilde{S}_D} - \Sigma_{\tilde{D}}\|_2, we use Bernstein's inequality with Z_k = \frac{(x_k - \mu_{\tilde{D}})(x_k - \mu_{\tilde{D}})^T - \Sigma_{\tilde{D}}}{n}. From Lemma 16, we know that the points are constrained in a ball. Plugging this into Lemma 12,
\[
\|\Sigma_{\tilde{S}_D} - \Sigma_{\tilde{D}}\|_2 \le C\big(\|\Sigma\|_2 + R^2\big)\left(\sqrt{\frac{\log(p/\delta)}{n}} + \frac{\log(p/\delta)}{n}\right),
\]
where R^2 = C\big((r_1^*)^2 + (r_2^*)^2 + B^2\big). Plugging in the values, we get that
\[
\|\Sigma_{\tilde{S}_D} - \Sigma_{\tilde{D}}\|_2 \le C\|\Sigma\|_2\left(1 + \frac{\sqrt{C_4}\,p}{\sqrt{\eta}} + C_4p(\eta + t)^{3/2} + (\eta + t_2)^{3/2}\right)\left(\sqrt{\frac{\log(p/\delta)}{n}} + \frac{\log(p/\delta)}{n}\right).
\]
Finally, we have that
\[
\|\Sigma_{\tilde{S}_D}\|_2 \le \|\Sigma\|_2\underbrace{\left(1 + 2C(\eta + t_2) + \left(1 + \frac{\sqrt{C_4}\,p}{\sqrt{\eta}} + C_4p(\eta + t)^{3/2} + (\eta + t_2)^{3/2}\right)\left(\sqrt{\frac{\log(p/\delta)}{n}} + \frac{\log(p/\delta)}{n}\right)\right)}_{\beta(n,\delta)}.
\]
Lemma 18. Let W be the span of the bottom p/2 principal components of the covariance matrix after filtering, \Sigma_{\tilde{S}}. Then there exists a universal constant C > 0 such that with probability at least 1 − δ, we have that
\[
\|\eta P_W\delta_\mu\|_2^2 \le C\eta\big(\beta(n, \delta) + \gamma(n, \delta)\sqrt{C_4}\big)\|\Sigma\|_2,
\]
where \delta_\mu = \mu_{\tilde{S}_N} - \mu_{\tilde{S}_D}, P_W is the projection matrix onto the bottom p/2-span of \Sigma_{\tilde{S}}, \beta(n, \delta) is as defined in Lemma 17, and \gamma(n, \delta) = \eta^{1/2} + (\eta + t)^{5/2} + \eta(\eta + t)\frac{\log(1/\delta)}{n}.
Proof. We have
\[
\Sigma_{\tilde{S}} = \underbrace{(1 - \eta)\Sigma_{\tilde{S}_D}}_{E} + \underbrace{\eta\Sigma_{\tilde{S}_N} + (\eta - \eta^2)\delta_\mu\delta_\mu^T}_{F}. \tag{83}
\]
By Weyl's inequality we have that
\[
\lambda_{p/2}(\Sigma_{\tilde{S}}) \le \lambda_1(E) + \lambda_{p/2}(F). \tag{84}
\]
• Control of \lambda_{p/2}(F).
\[
\lambda_{p/2}(F) \le \frac{\mathrm{tr}(F)}{p/2} \le C_{15}\,\eta\,\frac{(r_1^*)^2 + (r_2^*)^2 + B^2}{p/2} \le C_{16}\sqrt{C_4}\,\|\Sigma\|_2\underbrace{\left(\eta^{1/2} + (\eta + t)^{5/2} + \eta(\eta + t)\frac{\log(1/\delta)}{n}\right)}_{\gamma(n,\delta)},
\]
where t = C_8\sqrt{\frac{1}{n}\log\frac{np}{\delta}}.
• Control of \lambda_1(E).
\[
\lambda_1(E) \le (1 - \eta)\beta\|\Sigma\|_2.
\]
Hence, we have that
\[
\lambda_{p/2}(\Sigma_{\tilde{S}}) \le (1 - \eta)\beta\|\Sigma\|_2 + C_{16}\gamma\sqrt{C_4}\,\|\Sigma\|_2.
\]
Using that W is the space spanned by the bottom p/2 eigenvectors of \Sigma_{\tilde{S}} and P_W is the corresponding projection operator, we have that
\[
P_W^T\,\Sigma_{\tilde{S}}\,P_W \preceq \Big((1 - \eta)\beta + C_{16}\gamma\sqrt{C_4}\Big)\|\Sigma\|_2\,I_p.
\]
Following some algebraic manipulation as in [29], we get that
\[
\|\eta P_W\delta_\mu\|_2^2 \le \eta\big(\beta(n, \delta) + \gamma\sqrt{C_4}\big)\|\Sigma\|_2.
\]
Having established all required results, we are now ready to prove Lemma 1. We restate the result for the sake of completeness.

Theorem 14. Suppose that P_{θ^*} is a distribution on R^p with mean µ, covariance Σ, and bounded fourth moments. There exists a positive universal constant C > 0 such that, given n samples from the distribution in Equation (4), the algorithm returns, with probability at least 1 − δ, an estimate \hat{\mu} such that
\[
\|\hat{\mu} - \mu\|_2 \le C\|\Sigma\|_2^{1/2}\,(1 + \sqrt{\log p})\left(\sqrt{\eta} + C_4^{1/4}(\eta + t_2)^{3/4} + \sqrt{\eta p}\,C_4^{1/2}\sqrt{\frac{\log p\,\log(p\log(p/\delta))}{n}}\right),
\]
where \eta = \epsilon + \sqrt{\frac{\log(p)\log(\log p/\delta)}{2n}} and t_2 = \sqrt{\frac{p\log(p)\log(n/(p\delta))}{n}}.
Proof. We divide the n samples into ⌊log(p)⌋ different sets. We choose the first set and keep it as our active set of samples. We run our outlier filtering on this set, and let the remaining samples after the outlier filtering be S̃_D. By orthogonality of the subspaces spanned by eigenvectors, coupled with the triangle inequality and contraction of projection operators, we have that
\begin{align*}
\|\hat{\mu} - \mu\|_2^2 &\le 2\|P_W(\hat{\mu} - \mu_{\tilde{S}_D})\|_2^2 + 2\|P_W(\mu_{\tilde{S}_D} - \mu)\|_2^2 + \|\hat{\mu}_V - P_V\mu\|_2^2\\
\|\hat{\mu} - \mu\|_2^2 &\le 2\|P_W(\hat{\mu} - \mu_{\tilde{S}_D})\|_2^2 + 2\|\mu_{\tilde{S}_D} - \mu\|_2^2 + \|\hat{\mu}_V - P_V\mu\|_2^2,
\end{align*}
where V is the span of the top p/2 principal components of \Sigma_{\tilde{S}} and \hat{\mu}_V is the mean vector returned by running the algorithm on the reduced dimension \dim(V) = p/2. From Lemma 18, both β(n, δ) and γ(n, δ) are monotonically increasing in the dimension; moreover, the upper bound in Lemma 17 is also monotonically increasing in the dimension p. Hence, the error at each step of the algorithm can be upper bounded by the error incurred when running in dimension p, with n/log(p) samples and failure probability δ/log p. Hence, the overall error for the recursive algorithm can be upper bounded as
\[
\|\hat{\mu} - \mu\|_2^2 \le \Big(2\|P_W(\hat{\mu} - \mu_{\tilde{S}_D})\|_2^2 + 2\|\mu_{\tilde{S}_D} - \mu\|_2^2\Big)(1 + \log p).
\]
Combining Lemma 17 and Lemma 18, which are instantiated for n/log p samples and probability δ/log p, we get
\[
\|\hat{\mu} - \mu\|_2 \le C\|\Sigma\|_2^{1/2}\sqrt{\log p}\left(\sqrt{\eta} + C_4^{1/4}(\eta + t_2)^{3/4} + \sqrt{\eta p}\,C_4^{1/2}\sqrt{\frac{\log p\,\log(p\log(p/\delta))}{n}}\right).
\]
(Quasi)Periodicity Quantification in Video Data, Using
Topology
Christopher J. Tralie† and Jose A. Perea*

January 23, 2018
Abstract
This work introduces a novel framework for quantifying the presence and strength of recurrent
dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in
a way which does not require segmentation, training, object tracking or 1-dimensional surrogate
signals. Our methodology operates directly on video data. The approach combines ideas from
nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem
of determining the circularity or toroidality of an associated geometric space. Through extensive
testing, we show the robustness of our scores with respect to several noise models/levels; we show
that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the
presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.
1
Introduction
Periodicity characterizes many natural motions including animal locomotion (walking/wing flapping/slithering), spinning wheels, oscillating pendulums, etc. Quasiperiodicity, thought of as the
superposition of non-commensurate frequencies, occurs naturally during transitions from ordinary to
chaotic dynamics [11]. The goal of this work is to automate the analysis of videos capturing periodic
and quasiperiodic motion. In order to identify both classes of motion in a unified framework, we generalize 1-dimensional (1D) sliding window embeddings [38] to reconstruct periodic and quasiperiodic
attractors from videos123 . We analyze the resulting attractors using persistent homology, a technique
which combines geometry and topology (Section 2.2), and we return scores in the range [0, 1] that
indicate the degree of periodicity or quasiperiodicity in the corresponding video. We show that our
periodicity measure compares favorable to others in the literature when ranking videos (Section 4.2).
Furthermore, to our knowledge, there is no other method able to quantify the existence of quasiperiodicity directly from video data.
Our approach is fundamentally different from most others which quantify periodicity in video.
For instance, it is common to derive 1D signals from the video and apply Fourier or autocorrelation
to measure periodicity. By contrast, our technique operates on raw pixels, avoiding common video
† Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA. e-mail: [email protected]
* Department of Mathematics and Department of Computational Mathematics, Science & Engineering, Michigan State University, East Lansing, MI, USA. e-mail: [email protected]
1 Some of the analysis and results appeared as part of the Ph.D. thesis of the first author [?].
2 Code to replicate results: https://github.com/ctralie/SlidingWindowVideoTDA
3 Supplementary material and videos: https://www.ctralie.com/Research/SlidingWindowVideoQuasi
preprocessing and tracking entirely. Using geometry over Fourier/autocorrelation also has advantages
for our applications. In fact, as a simple synthetic example shows (Figure 3), the Fourier Transform
of quasiperiodic signals is often very close to the Fourier transform of periodic signals. By contrast,
the sliding window embeddings we design yield starkly different geometric structures in the periodic
and quasiperiodic cases. We exploit this to devise a quasiperiodicity measurement, which we use to
indicate the degree of “biphonation” in videos of vibrating vocal folds (Section 4.3), which is useful
in automatically diagnosing speech pathologies.
In the context of applied topology, our quasiperiodicity score is one of the first applications of
persistent H2 to high dimensional data, which is largely possible due to recent advancements in the
computational feasibility of persistent homology [3].
1.1    Prior Work on Recurrence in Videos

1.1.1    1D Surrogate Signals
One common strategy for detecting periodicity in video is to derive a 1D function to act as a surrogate
for its dynamics, and then to use either frequency domain (Fourier transform) or time domain (autocorrelation, peak finding) techniques. One of the earliest works in this genre finds level set surfaces
in a spatiotemporal “XYT” volume of video (all frames stacked on top of each other), and then uses
curvature scale space on curves that live on these “spatiotemporal surfaces” as the 1D function [1]. [34]
use Fourier Transforms on pixels which exhibit motion, and define a measure of periodicity based on
the energy around the Fourier peak and its harmonics. [10] extract contours and find eigenshapes from
the contours to classify and parameterize motion within a period. Frequency estimation is done by
using Fourier analysis and peak detection on top of other 1D statistics derived from the contours, such
as area and center of mass. Finally, [47] derive a 1D surrogate function based on mutual information
between the first and subsequent frames, and then look for peaks in the similarity function with the
help of a watershed method.
1.1.2    Self-Similarity Matrices
Another class of techniques relies on self-similarity matrices (SSMs) between frames, where similarity
can be defined in a variety of ways. [37] track a set of points on a foreground object and compare them
with an affine invariant similarity. Another widely recognized technique for periodicity quantification
[6], derives periodicity measures based on self-similarity matrices of L1 pixel differences. This technique
has inspired a diverse array of applications, including analyzing the cycles of expanding/contracting
jellyfish [33], analyzing bat wings [2], and analyzing videos of autistic spectrum children performing
characteristic repetitive motions such as “hand flapping” [22]. We compare to this technique in
Section 4.2.
1.1.3    Miscellaneous Techniques for Periodic Video Quantification
There are also a number of works that don’t fall into the two categories above. Some works focus
solely on walking humans, since that is one of the most common types of periodic motion in videos of
interest to people. [29] look at the “braiding patterns” that occur in XYT slices of videos of walking
people. [17] perform blob tracking on the foreground of a walking person, and use the ratio of the
second and first eigenvalues of PCA on that blob.
For more general periodic videos, [43] make a codebook of visual words and look for repetitions
within the resulting string. [23] take a deep learning approach to counting the number of periods that
occur in a video segment. They use a 3D convolutional neural network on spatially downsampled,
non-sequential regions of interest, which are uniformly spaced in time, to estimate the length of the
cycle. Finally, perhaps the most philosophically similar work to ours is the work of [41], who use
cohomology to find maps of MOCAP data to the circle for parameterizing periodic motions, though
this work does not provide a way to quantify periodicity.
1.1.4    Our Work
We show that geometry provides a natural way to quantify recurrence (i.e. periodicity and quasiperiodicity) in video, by measuring the shape of delay embeddings. In particular, we propose several
optimizations (section 3) which make this approach feasible. The resulting measure of quasiperiodicity, for which quantitative approaches are lacking, is used in section 4.3 to detect anomalies in
high-speed videos of vibrating vocal folds. Finally, in contrast to both frequency and time domain
techniques, our method does not rely on the period length being an integer multiple of the sampling
rate.
2    Background

2.1    Delay Embeddings And Their Geometry
Recurrence in video data can be captured via the geometry of delay embeddings; we describe this
next.
2.1.1    Video Delay Embeddings
We will regard a video as a sequence of grayscale4 image frames indexed by the positive real numbers.
That is, given positive integers W (width) and H (height), a video with W × H pixels is a function
X : R+ −→ RW ×H
In particular, a sequence of images X1 , X2 , . . . ∈ RW ×H sampled at discrete times t1 < t2 < · · · yields
one such function via interpolation. For an integer d ≥ 0, known as the dimension, a real number
τ > 0, known as the delay, and a video X : R+ −→ RW ×H , we define the sliding window (also
referred to as time delay) embedding of X – with parameters d and τ – at time t ∈ R+ as the vector
\[
SW_{d,\tau}X(t) = \begin{bmatrix} X(t) \\ X(t+\tau) \\ \vdots \\ X(t+d\tau) \end{bmatrix} \in \mathbb{R}^{W\times H\times(d+1)} \tag{1}
\]
The subset of RW ×H×(d+1) resulting from varying t will be referred to as the sliding window
embedding of X. We remark that since the pixel measurement locations are fixed, the sliding window
embedding is an “Eulerian” view into the dynamics of the video. Note that delay embeddings are
generally applied to 1D time series, which can be viewed as 1-pixel videos (W = H = 1) in our
framework. Hence equation (1) is essentially the concatenation of the delay embeddings of each
individual pixel in the video into one large vector. One of the main points we leverage in this paper
is the fact that the geometry of the sliding window embedding carries fundamental information about
the original video. We explore this next.
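A minimal sketch of Equation (1) for a discretely sampled video is given below: each frame is flattened, and the delayed frames X(t + kτ) are obtained by linear interpolation between samples. The synthetic one-pixel "video" and the parameter choices are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def sliding_window_video(frames, d, tau, ts):
    """Sliding window embedding SW_{d,tau} X(t) of a video.

    frames: array of shape (T, W*H) holding flattened grayscale frames,
            interpreted as samples X(0), X(1), ..., X(T-1).
    Returns an array of shape (len(ts), W*H*(d+1)); a time t is valid
    as long as t + d*tau <= T - 1.
    """
    T = frames.shape[0]
    def frame_at(t):                       # linear interpolation between frames
        i0 = int(np.floor(t))
        i1 = min(i0 + 1, T - 1)
        a = t - i0
        return (1 - a) * frames[i0] + a * frames[i1]
    return np.array([np.concatenate([frame_at(t + k * tau) for k in range(d + 1)])
                     for t in ts])

# Toy usage: a 1-pixel "video" of a periodic signal.
t_grid = np.arange(0, 200)
video = np.cos(2 * np.pi * t_grid / 20.0)[:, None]      # shape (200, 1)
ts = np.arange(0, 150, 1.0)
X = sliding_window_video(video, d=19, tau=1.0, ts=ts)
print(X.shape)                                           # (150, 20)
```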
2.1.2    Geometry of 1-Pixel Video Delay Embeddings
As a motivating example, consider the harmonic (i.e. periodic) signal
\[
f_h(t) = \cos\left(\frac{\pi}{5}t\right) + \cos\left(\frac{\pi}{15}t\right) \tag{2}
\]
and the quasiperiodic signal
\[
f_q(t) = \cos\left(\frac{\pi}{5}t\right) + \cos\left(\frac{1}{5}t\right). \tag{3}
\]
4 For color videos we can treat each channel independently, yielding a vector in RW ×H×3 . In practice, there isn’t
much of a difference between color and grayscale embeddings in our framework for the videos we consider.
We refer to f_h as harmonic because its constitutive frequencies, 1/10 and 1/30, are commensurate; that is, they are linearly dependent over the rational numbers Q ⊂ R. By way of contrast, the underlying frequencies of the signal f_q, 1/10 and 1/(10π), are linearly independent over Q and hence non-commensurate. We use the term quasiperiodicity, as in the non-linear dynamics literature [19], to denote the superposition of periodic processes whose frequencies are non-commensurate. This differs from other definitions in the literature (e.g. [?, 43]) which regard quasiperiodic as any deviation from perfect repetition.
A geometric argument from [31] (see Equation (7) below and the discussion that follows) shows that given a periodic function f : [0, 2π] −→ R with exactly N harmonics, if d ≥ 2N and 0 < τ < 2π/d, then the sliding window embedding SW_{d,τ}f is a topological circle (i.e. a closed curve without self-intersections) which wraps around an N-dimensional torus
\[
\mathbb{T}^N = \underbrace{S^1\times\cdots\times S^1}_{N\ \text{times}}, \qquad S^1 = \{z\in\mathbb{C} : |z| = 1\}.
\]
As an illustration, we show in Figure 1 a plot of fh and of its sliding window embedding SWd,τ fh ,
via a PCA (Principal Component Analysis) 3-dimensional projection.
Figure 1: Sliding window embedding of the harmonic signal fh . Colors in the signal correspond
to colors of the points in the PCA plot. The sliding window embedding traces a topological circle
wrapped around a 2-dimensional torus.
However, if g : R −→ R is quasiperiodic with N distinct non-commensurate frequencies then,
for appropriate d and τ , SWd,τ g is dense in (i.e. fills out) TN [30]. Figure 2 shows a plot of the
quasiperiodic signal fq (t) and a 3-dimensional projection, via PCA, of its sliding window embedding
SWd,τ fq .
Figure 2: Sliding window embedding of the quasiperiodic signal fq . Colors in the signal correspond
to colors of the points in the PCA plot. The sliding window embedding is dense in a 2-dimensional
torus.
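The geometric contrast between f_h and f_q can be reproduced in a few lines: build the sliding window embeddings of both signals and project onto the top principal components, as in Figures 1 and 2 (plotting omitted). The window parameters below are illustrative choices, not the ones used to generate the figures.

```python
import numpy as np

def sliding_window_1d(f, d, tau, ts):
    """Sliding window embedding of a 1-D signal f at the times in ts."""
    return np.array([[f(t + k * tau) for k in range(d + 1)] for t in ts])

fh = lambda t: np.cos(np.pi * t / 5) + np.cos(np.pi * t / 15)   # harmonic, Eq. (2)
fq = lambda t: np.cos(np.pi * t / 5) + np.cos(t / 5)            # quasiperiodic, Eq. (3)

ts = np.arange(0, 500, 0.5)
for name, f in [("harmonic", fh), ("quasiperiodic", fq)]:
    X = sliding_window_1d(f, d=20, tau=1.0, ts=ts)
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    pca3 = U[:, :3] * S[:3]            # 3-D PCA projection, as in Figures 1 and 2
    print(name, X.shape, "top-4 explained variance:",
          np.round((S[:4] ** 2) / (S ** 2).sum(), 3))
    # Plotting pca3 (e.g. with matplotlib) shows a closed curve for f_h and a
    # point cloud filling out a torus for f_q.
```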
The difference in geometry of the delay embeddings is stark compared to the difference between
their power spectral densities, as shown in Figure 3.
Figure 3: The power spectral densities (300 samples) of commensurate and non-commensurate signals
with relative harmonics at ratios 3 and π, respectively. The difference is not nearly as evident as
in the geometry of their sliding window embeddings. Additionally, unless sampling is commensurate
with a frequency, a fixed Fourier basis causes that frequency component to bleed into many frequency
bins in a sinc-like pattern, making precise peak finding difficult.
Moreover, as we will see next, the interpretation of periodicity and quasiperiodicity as circularity
and toroidality of sliding window embeddings remains true for videos with higher resolution (i.e.
max{W, H} > 1). The rest of the paper will show how one can use persistent homology, a tool
from the field of computational topology, to quantify the presence of (quasi)periodicity in a video by
measuring the geometry of its associated sliding window embedding. In short, we propose a periodicity
score for a video X which measures the degree to which the sliding window embedding SWd,τ X spans
a topological circle, and a quasiperiodicity score which quantifies the degree to which SWd,τ X covers
a torus. This approach will be validated extensively: we show that our (quasi)periodicity detection
method is robust under several noise models (motion blur, additive Gaussian white noise, and MPEG
bit corruption); we compare several periodicity quantification algorithms and show that our approach
is the most closely aligned with human subjects; finally, we provide an application to the automatic
classification of dynamic regimes in high-speed laryngeal video-endoscopy.
2.1.3    Geometry of Video Delay Embeddings
Though it may seem daunting compared to the 1D case, the geometry of the delay embedding shares
many similarities for periodic videos, as shown in [39]. Let us argue why sliding window embeddings
from (quasi)periodic videos have the geometry we have described so far. To this end, consider an
example video X that contains a set of N frequencies ω1 , ω2 , ..., ωN . Let the amplitude of the nth
frequency and ith pixel be ain . For simplicity, but without loss of generality, assume that each is a
cosine with zero phase offset. Then the time series at pixel i can be written as
Xi (t) =
N
X
ain cos(ωn t)
(4)
n=1
Grouping all of the coefficients together into a (W × H) × N matrix A, we can write
X(t) =
N
X
An cos(ωn t)
n=1
5
(5)
where An stands for the nth column of A. Constructing a delay embedding as in Equation 1:
An cos(ωn t)
N
X
..
SWd,τ X(t) =
.
n
n=1
A cos(ωn (t + dτ )
(6)
and applying the cosine sum identity, we get
SWd,τ X(t) =
N
X
~un cos(ωn t) − ~vn sin(ωn t)
(7)
n=1
where ~un , ~vn ∈ RW ×H×(d+1) are constant vectors. In other words, the sliding window embedding of
this video is the sum of linearly independent ellipses, which lie in the space of d + 1 frame videos
at resolution W × H. As shown in [31] for the case of commensurate frequencies, when the window
length is just under the length of the period, all of the ~un and ~vn vectors become orthogonal, and
so they can be recovered by doing PCA on SWd,τ X(t). Figure 4 shows the components of the first
8 PCA vectors for a horizontal line of pixels in a video of an oscillating pendulum. Note how the
oscillations are present both temporally and spatially.
X
t
Figure 4: Showing an XT slice of the principal components on SWd,1 for the synthetic video of an
oscillating pendulum; d is chosen just under the period length (∼ 25 frames).
2.1.4
The High Dimensional Geometry of Repeated Pulses
Using Eulerian coordinates has an important impact on the geometry of delay embeddings of natural
videos. As Figure 5 shows, pixels often jump from foreground to background in a pattern similar to
square waves.
These types of abrupt transitions require higher dimensional embeddings to reconstruct the geometry. To see why, first extract one period of a signal with period ℓ at a pixel X_i(t):
\[
f_i(t) = \begin{cases} X_i(t) & 0\le t\le \ell\\ 0 & \text{otherwise.} \end{cases} \tag{8}
\]
Then X_i(t) can be rewritten in terms of the pulse as
\[
X_i(t) = \sum_{m=-\infty}^{\infty} f_i(t - m\ell). \tag{9}
\]
Since X_i(t) repeats itself, regardless of what f_i(t) looks like, periodic summation discretizes the frequency domain [32]:
\[
\mathcal{F}\{X_i(t)\}(k) \propto \sum_{m=-\infty}^{\infty}\mathcal{F}(f_i(t))\left(\frac{m}{\ell}\right)\delta\left(\frac{m}{\ell} - k\right). \tag{10}
\]
Figure 5: An example of an Eulerian pixel witnessing a foreground/background transition in a video of
a woman doing jumping jacks. Red, green, and blue channels are plotted over time. These transitions
induce a per pixel periodic signal with sharp transitions, which leads to high dimensionality in an
appropriate sliding window embedding.
Switching back to the time domain, we can write X_i(t) as
\[
X_i(t) \propto \sum_{m=-\infty}^{\infty}\mathcal{F}(f_i(t))\left(\frac{m}{\ell}\right)e^{i\frac{2\pi m}{\ell}t}. \tag{11}
\]
In other words, each pixel is the sum of some constant offset plus a (possibly infinite) set of harmonics at integer multiples of 1/ℓ. For instance, applying Equation (11) to a square wave of period ℓ centered at the origin is a roundabout way of deriving the Fourier series
\[
\sin\left(\frac{2\pi}{\ell}t\right) + \frac{1}{3}\sin\left(\frac{6\pi}{\ell}t\right) + \frac{1}{5}\sin\left(\frac{10\pi}{\ell}t\right) + \ldots \tag{12}
\]
by sampling the sinc function sin(πℓf)/(πf) at intervals of m/(2ℓ) (every odd m coincides with π/2 + kπ, giving a contribution proportional to 1/k, and every even harmonic is zero, coinciding with πk). In general, the sharper the transitions are in X_i(t), the longer the tail of F{f_i(t)} will be, and the more high frequency harmonics will exist in the embedding, calling for a higher delay dimension to fully capture the geometry, since every harmonic lives on a linearly independent ellipse. Similar observations about harmonics have been made in images for collections of patches around sharp edges ([48], Figure 2).
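The harmonic content of a repeated pulse can be checked directly. The sketch below compares the FFT magnitudes of a smooth cosine and a square wave of the same period; the long harmonic tail of the latter is what forces a larger delay dimension. The specific period and sample count are arbitrary assumptions.

```python
import numpy as np

ell, n_cycles = 25, 40                            # period (frames) and number of cycles
t = np.arange(ell * n_cycles)
smooth = np.cos(2 * np.pi * t / ell)
square = np.sign(np.cos(2 * np.pi * t / ell))     # sharp foreground/background transitions

for name, x in [("cosine", smooth), ("square", square)]:
    spec = np.abs(np.fft.rfft(x)) / len(x)
    top = np.argsort(spec)[::-1][:5]              # strongest frequency bins
    print(name, sorted(top.tolist()), np.round(np.sort(spec)[::-1][:5], 3))
# The cosine concentrates at the fundamental bin (n_cycles), while the square wave
# also puts significant energy at the odd harmonics 3/l, 5/l, ..., each of which
# contributes an extra independent ellipse to the sliding window embedding.
```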
2.2    Persistent Homology
Informally, topology is the study of properties of spaces which do not change after stretching without
gluing or tearing. For instance, the number of connected components and the number of (essentially
different) 1-dimensional loops which do not bound a 2-dimensional disk, are both topological properties
of a space. It follows that a circle and a square are topologically equivalent since one can deform one
onto the other, but a circle and a line segment are not because that would require either gluing the
endpoints of the line segment or tearing the circle. Homology [12] is a tool from algebraic topology
designed to measure these types of properties, and persistent homology [50] is an adaptation of these
ideas to discrete collections of points (e.g., sliding window embeddings). We briefly introduce these
concepts next.
2.2.1 Simplicial Complexes
A simplicial complex is a combinatorial object used to represent and discretize a continuous space.
With a discretization available, one can then compute topological properties by algorithmic means.
Formally, a simplicial complex with vertices in a nonempty set V is a collection K of nonempty finite subsets σ ⊂ V so that ∅ ≠ τ ⊂ σ ∈ K always implies τ ∈ K. An element σ ∈ K is called a simplex, and if σ has (n + 1) elements then it is called an n-simplex. The cases n = 0, 1, 2 are special: 0-simplices are called vertices, 1-simplices are called edges and 2-simplices are called faces. Here is an
example to keep in mind: the circle S 1 = {z ∈ C : |z| = 1} is a continuous space but its topology can
be captured by a simplicial complex K with three vertices a, b, c, and three edges {a, b}, {b, c}, {a, c}.
That is, in terms of topological properties, the simplicial complex
K = { {a}, {b}, {c}, {a, b}, {b, c}, {a, c} }
can be regarded as a combinatorial surrogate for S 1 : they both have 1 connected component, one
1-dimensional loop which does not bound a 2-dimensional region, and no other features in higher
dimensions.
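As a minimal illustration of this combinatorial definition (our own sketch, not part of the original text; the helper name is ours), the complex K above can be stored as a set of frozensets in Python and the defining closure-under-faces property checked directly:

```python
from itertools import combinations

# The simplicial complex K modeling the circle S^1: three vertices and
# the three edges between them (each simplex is a frozenset of vertices).
K = {frozenset(s) for s in [("a",), ("b",), ("c",),
                            ("a", "b"), ("b", "c"), ("a", "c")]}

def is_simplicial_complex(K):
    """Check the defining property: every nonempty subset (face) of a
    simplex in K must itself belong to K."""
    for sigma in K:
        for k in range(1, len(sigma)):
            for tau in combinations(sigma, k):
                if frozenset(tau) not in K:
                    return False
    return True

print(is_simplicial_complex(K))   # True: K is a valid simplicial complex
```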
2.2.2 Persistent Homology of Point Clouds
The sliding window embedding of a video X is, in practice, a finite set SWd,τ X = {SWd,τ X(t) : t ∈ T }
determined by a choice of T ⊂ R finite. Moreover, since SWd,τ X ⊂ RW ×H×(d+1) then the restriction
of the ambient Euclidean distance endows SWd,τ X with the structure of a finite metric space. Discrete
metric spaces, also referred to as point clouds, are trivial from a topological point of view: a point
cloud with N points simply has N connected components and no other features (e.g., holes) in higher
dimensions. However, when a point cloud has been sampled from/around a continuous space with
non-trivial topology (e.g., a circle or a torus), one would expect that appropriate simplicial complexes
with vertices on the point cloud should reflect the topology of the underlying continuous space. This
is what we will exploit next.
Given a point cloud (X, d_X) – where X is a finite set and d_X : X × X −→ [0, ∞) is a distance function – the Vietoris-Rips complex (or Rips complex for short) at scale ε ≥ 0 is the collection of non-empty subsets of X with diameter less than or equal to ε:
\[ R_\varepsilon(X) := \big\{ \sigma \subset X : d_X(x_i, x_j) \leq \varepsilon, \ \forall x_i, x_j \in \sigma \big\} \tag{13} \]
That is, R_ε(X) is the simplicial complex with vertex set equal to X, constructed by adding an edge between any two vertices which are at most ε apart, adding all 2-dimensional triangular faces (i.e. 2-simplices) whose bounding edges are present, and more generally, adding all the k-simplices whose
(k − 1)-dimensional bounding facets have been included. We show in Figure 6 the evolution of the
Rips complex on a set of points sampled around the unit circle.
Figure 6: The Rips complex, at three different scales (ε = 0, 0.30, 0.35), on a point cloud with 40 points sampled around S¹ ⊂ R².
The idea behind persistent homology is to track the evolution of topological features of complexes such as R_ε(X), as the scale parameter ε ranges from 0 to some maximum value ε_max ≤ ∞. For instance, in Figure 6 one can see that R_0(X) = X has 40 distinct connected components (one for each point), R_0.30(X) has three connected components and R_0.35(X) has only one connected component; this will continue to be the case for every ε ≥ 0.35. Similarly, there are no closed loops in R_0(X) or R_0.30(X) bounding empty regions, but this changes when ε increases to 0.35. Indeed, R_0.35(X) has three 1-dimensional holes: the central prominent hole, and the two small ones to the left side. Notice, however, that as ε increases beyond 0.35 these holes will be filled by the addition of new simplices; in particular, for ε > 2 one has that R_ε(X) will have only one connected component and no other topological features in higher dimensions.
The family R(X) = {R_ε(X)}_{ε≥0} is known as the Rips filtration of X, and the emergence/disappearance of topological features in each dimension (i.e., connected components, holes, voids, etc.) as ε changes, can be codified in what are referred to as the persistence diagrams of R(X). Specifically, for each dimension n = 0, 1, . . . (0 = connected components, 1 = holes, 2 = voids, etc.) one can record the value of ε for which a particular n-dimensional topological feature of the Rips filtration appears (i.e. its birth time), and when it disappears (i.e. its death time). The birth-death times (b, d) ∈ R² of n-dimensional features for R(X) form a multiset dgm_n(R(X)) — i.e. a set whose elements can come with repetition — known as the n-dimensional persistence diagram of the Rips filtration on X. Since dgm_n(R(X)) is just a collection of points in the region {(x, y) ∈ R² : 0 ≤ x < y}, we will visualize it
as a scatter plot. The persistence of a topological feature with birth-death times (b, d) is the quantity
d − b, i.e. its lifetime. We will also include the diagonal y = x in the scatter plot in order to visually
convey the persistence of each birth-death pair. In this setting, points far from the diagonal (i.e.
with large persistence) represent topological features which are stable across scales and hence deemed
significant, while points near the diagonal (i.e. with small persistence) are often associated with unstable features. We illustrate in Figure 7 the process of going from a point cloud to the 1-dimensional
persistence diagram of its Rips filtration.
[Figure 7 panels: Original Point Cloud; 1D Persistence Diagram (Time of Birth vs. Time of Death); Class 1 Birth (d = 0.774); Class 1 Death (d = 1.84); Class 2 Birth (d = 1.8); Class 2 Death (d = 3.55).]
Figure 7: From a point cloud to the 1-dimensional persistence diagram of its Rips filtration. Connected
edges in the Rips filtration are drawn in blue, the birth/death of a class is indicated in red, and filled
in triangles are shaded green.
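The following Python sketch reproduces, in spirit, the computation behind Figures 6 and 7: it samples noisy points from a circle and computes the persistence diagrams of their Rips filtration. We assume the ripser.py Python bindings here (installable via pip install ripser); the paper itself uses the C++ Ripser package [3], and the point count and noise level are arbitrary choices.

```python
import numpy as np
from ripser import ripser   # Python bindings for Ripser

# Sample 40 noisy points around the unit circle, as in Figure 6
np.random.seed(0)
theta = 2 * np.pi * np.random.rand(40)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.05 * np.random.randn(*X.shape)

# Persistence diagrams of the Rips filtration (H0 and H1)
dgms = ripser(X, maxdim=1)['dgms']
H1 = dgms[1]

# The most persistent 1-dimensional class should stand out,
# reflecting the single loop of the underlying circle
persistence = H1[:, 1] - H1[:, 0]
print("most persistent H1 class (birth, death):", H1[np.argmax(persistence)])
```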
We remark that the computational task of determining all non-equivalent persistent homology
classes of a filtered simplicial complex can, surprisingly, be reduced to computing the homology of a
single simplicial complex [3, 50]. This is in fact a problem in linear algebra that can be solved via
elementary row and column operations on appropriate boundary matrices.
The persistent homology of R(SWd,τ X), and in particular its n-dimensional persistence diagrams
for n = 1, 2, are the objects we will use to quantify periodicity and quasiperiodicity in a video
X. Figures 8 and 9 show the persistence diagrams of the Rips filtrations, on the sliding window
embeddings, for the commensurate and non-commensurate signals from Figures 1 and 2, respectively.
We use fast new code from the “Ripser” software package to make persistent H2 computation feasible
[3].
Figure 8: Sliding window embedding of the harmonic signal fh (left) and the n-dimensional persistence diagrams n = 1, 2 (right) of the associated Rips filtration. The sliding window embedding
SWd,τ fh traces a topological circle wrapped around a 2-dimensional torus. The persistence diagram
in dimension one (H1 ) shows only one birth-death pair with prominent persistence; this is consistent
with a point cloud sampled around a space with the topology of a circle.
Figure 9: Sliding window embedding of the quasiperiodic signal fq (left) and the n-dimensional
persistence diagrams n = 1, 2 (right) of the associated Rips filtration. The sliding window embedding
SWd,τ fq is dense on a 2-dimensional torus. The persistence diagram in dimension one (H1 ) shows two
birth-death pairs with prominent persistence, while the persistence diagram in dimension two (H2 )
shows one prominent birth-death pair; this is consistent with a point cloud sampled around a space
with the topology of a 2-dimensional torus.
3 Implementation Details
3.1 Reducing Memory Requirements with SVD
Suppose we have a video which has been discretely sampled at N different frames at a resolution
of W × H, and we do a delay embedding with dimension d, for some arbitrary τ . Assuming 32 bit
floats per grayscale value, storing the sliding window embedding requires 4W HN (d + 1) bytes. For
a low resolution 200 × 200 video only 10 seconds long at 30fps, using d = 30 already exceeds 1GB of
memory. In what follows we will address the memory requirements and ensuing computational burden of constructing and accessing the sliding window embedding. Indeed, constructing the Rips filtration only requires pairwise distances between delay vectors; this enables a few optimizations.
First of all, for N points in R^{WH}, where N ≪ WH, there exists an N-dimensional linear subspace which contains them. In particular, let A be the (WH) × N matrix with each video frame along a column. Performing a singular value decomposition A = USV^T yields a matrix U whose columns form an orthonormal basis for the aforementioned N-dimensional linear subspace. Hence, by finding the coordinates of the original frame vectors with respect to this orthogonal basis
\[ \hat{A} = U^T A = U^T U S V^T = S V^T \tag{14} \]
and using the columns of SV^T instead of the original pixels, we get a sliding window embedding of lower dimension
\[ SW_{d,\tau}(t) = \begin{bmatrix} U^T X(t) \\ \vdots \\ U^T X(t + d\tau) \end{bmatrix} \tag{15} \]
for which
\[ \| SW_{d,\tau}(t) - SW_{d,\tau}(t') \| = \| SW_{d,\tau}X(t) - SW_{d,\tau}X(t') \| \]
Note that SV^T can be computed by finding the eigenvectors of A^T A; this has a cost of O(W²H² + N) which is dominated by W²H² if WH ≫ N. In our example above, this alone reduces the memory
requirements from 1GB to 10MB. Of course, this procedure is the most effective for short videos where
there are actually many fewer frames than pixels, but this encompasses most of the examples in this
work. In fact, the break-even point for a 200x200 30fps video is 22 minutes. A similar approach was
used in the classical work on Eigenfaces [40] when computing the principal components over a set of
face images.
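The sketch below illustrates this memory optimization under the assumption N ≪ WH and an integer delay τ; it is our own minimal rendering of Equations 14-15, not code from the original work, and the function names are ours. It computes the reduced frame coordinates SV^T from the small N × N matrix A^T A and then stacks delayed copies of those coordinates into sliding window vectors whose pairwise distances match the full-resolution ones.

```python
import numpy as np

def reduce_frames(frames):
    """frames: (W*H, N) array with one video frame per column.
    Returns the N-dimensional coordinates S V^T of Equation 14, computed
    from the eigendecomposition of the small N x N matrix A^T A."""
    A = frames.astype(np.float64)
    G = A.T @ A                          # N x N Gram matrix
    evals, V = np.linalg.eigh(G)         # ascending eigenvalues
    idx = np.argsort(evals)[::-1]
    evals, V = np.maximum(evals[idx], 0), V[:, idx]
    return np.diag(np.sqrt(evals)) @ V.T  # = S V^T, one column per frame

def sliding_window(coords, d, tau=1):
    """Stack d+1 delayed copies of the reduced frame coordinates
    (columns of coords) into sliding window vectors (Equation 15).
    Assumes an integer delay tau for simplicity."""
    N = coords.shape[1]
    M = N - d * tau
    return np.column_stack([coords[:, i:i + d * tau + 1:tau].ravel(order='F')
                            for i in range(M)])

# Pairwise distances between columns of sliding_window(reduce_frames(A), d)
# equal those between the original high-dimensional sliding window vectors.
```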
3.2 Distance Computation via Diagonal Convolutions
A different optimization is possible if τ = 1; that is, if delays are taken exactly on frames and
no interpolation is needed. In this case, the squared Euclidean distance between SWd,1 X(i) and
SWd,1 X(j) is
\[ \| SW_{d,1}X(i) - SW_{d,1}X(j) \|_2^2 = \sum_{m=0}^{d} \| X(i+m) - X(j+m) \|_2^2 \tag{16} \]
Let D_X² be the N × N matrix of all pairwise squared Euclidean distances between frames (possibly computed with the memory optimization in Section 3.1), and let D_Y² be the (N − d) × (N − d) matrix of all pairwise squared distances between delay frames. Then Equation 16 implies that D_Y² can be obtained from D_X² via convolution with a “rect function”, or a vector of 1s of length d + 1, over all diagonals in D_X² (i.e. a moving average). This can be implemented in time O(N²) with cumulative sums. Hence, regardless of how d is chosen, the computation and memory requirements for computing D_Y² depend only on the number of frames in the video. Also, D_Y can simply be computed by taking the entry-wise square root of D_Y², another O(N²) computation. A similar scheme was used in [16] when comparing distances of 3D shape descriptors in videos of 3D meshes.
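A minimal sketch of this diagonal moving-average trick (our own illustration of Equation 16 for τ = 1, not the authors' code; the function name is ours) is given below; it fills each diagonal of D_Y² from the corresponding diagonal of D_X² using cumulative sums, so the cost is O(N²) regardless of d.

```python
import numpy as np

def delay_ssm(DX2, d):
    """Given the N x N matrix DX2 of pairwise *squared* distances between
    frames, return the (N-d) x (N-d) matrix DY2 of squared distances
    between delay vectors with tau = 1 (Equation 16), by summing d+1
    consecutive entries along every diagonal with a cumulative sum."""
    N = DX2.shape[0]
    M = N - d
    DY2 = np.zeros((M, M))
    for k in range(M):  # k-th superdiagonal (and, by symmetry, subdiagonal)
        diag = np.diagonal(DX2, offset=k)
        c = np.concatenate(([0.0], np.cumsum(diag)))
        window_sums = c[d + 1:] - c[:-(d + 1)]   # moving sums of length d+1
        idx = np.arange(len(window_sums))
        DY2[idx, idx + k] = window_sums
        DY2[idx + k, idx] = window_sums
    return DY2

# DY = np.sqrt(DY2) gives the Euclidean self-similarity matrix of the
# sliding window embedding without ever forming the delay vectors.
```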
Figure 10 shows self-similarity matrices on embeddings of the pendulum video with no delay and with a delay approximately matching the period. The moving average along diagonals induced by the delay eliminates the anti-diagonals caused by the video’s mirror symmetry. Even for videos without mirror symmetries, such as a video of a running dog (Figure 11), introducing a delay brings the geometry into focus, as shown in Figure 12.
Figure 10: Self-similarity matrices DY0,1 and DY28,1 for a video of the oscillating pendulum. Bright
colors indicate far distances and dark colors indicate near distances. This example clearly shows
how adding a delay embedding is like performing block averaging along all diagonals of the pairwise
distance matrices, and it gets rid of the mirror symmetry.
Figure 11: An animation of a periodic video of a running dog, which, unlike an oscillating pendulum,
does not have mirror symmetry in the second half of its period.
Figure 12: Self-similarity matrices DY0,1 and DY18,1 for a video of a running dog. Even without the delay embedding (d = 0), the video frames still form a topological loop. However, a delay embedding with d = 18 cleans up the geometry and leads to a rounder loop, as seen in the resulting SSM.
3.3 Normalization
A few normalization steps are needed in order to enable fair comparisons between videos with different
resolutions, or which have a different range in periodic motion either spatially or in intensity. First,
we perform a “point-center and sphere normalize” vector normalization which was shown in [31] to
have nice theoretical properties.
That is,
\[ \widetilde{SW}_{d,\tau}(t) = \frac{SW_{d,\tau}(t) - (SW_{d,\tau}(t)^T \mathbf{1})\mathbf{1}}{\| SW_{d,\tau}(t) - (SW_{d,\tau}(t)^T \mathbf{1})\mathbf{1} \|_2} \tag{17} \]
where 1 is a WH(d + 1) × 1 vector of all ones. In other words, the mean of the components is subtracted from each vector, and each vector is scaled so that it has unit norm (i.e. lives on the unit sphere in R^{WH(d+1)}). Subtracting the mean from each component will eliminate additive linear drift on top of the periodic motion, while scaling addresses resolution / magnitude differences. Note that we can still use the memory optimization in Section 3.1, but we can no longer use the optimizations in Section 3.2 since each window is normalized independently.
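Following the verbal description above (subtract each window's componentwise mean, then scale to unit norm), a minimal sketch of the point-center and sphere normalization might look as follows; the function and argument names are ours, not from the original work.

```python
import numpy as np

def point_center_sphere_normalize(windows):
    """windows: array of shape (D, M), one sliding window vector per column.
    Subtract each vector's componentwise mean and scale it to unit norm,
    as described for Equation 17."""
    centered = windows - windows.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=0, keepdims=True)
    return centered / np.maximum(norms, 1e-12)   # guard against zero vectors
```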
Moreover, in order to mitigate nonlinear drift, we implement a simple pixel-wise convolution by the derivative of a Gaussian for each pixel in the original video before applying the delay embedding:
\[ \hat{X}_i(t) = X_i(t) * \left( -a t\, e^{-t^2/(2\sigma^2)} \right) \tag{18} \]
This is a pixel-wise bandpass filter which could be replaced with any other bandpass filter leveraging application-specific knowledge of expected frequency bounds. This has the added advantage of reducing the number of harmonics, enabling a smaller embedding dimension d.
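A simple rendering of this temporal filter (Equation 18) is sketched below; the kernel support of ±4σ and the default values of a and σ are illustrative choices of ours, not values prescribed by the paper.

```python
import numpy as np

def dog_time_filter(video, sigma=3.0, a=1.0):
    """video: array of shape (num_pixels, num_frames). Convolve each pixel's
    time series with a derivative-of-Gaussian kernel (Equation 18), acting
    as a simple temporal bandpass filter that removes slow drift."""
    t = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = -a * t * np.exp(-t**2 / (2 * sigma**2))
    out = np.empty_like(video, dtype=np.float64)
    for i in range(video.shape[0]):
        out[i] = np.convolve(video[i], kernel, mode='same')
    return out
```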
3.4 Periodicity/Quasiperiodicity Scoring
Once the videos are normalized to the same scale, we can score periodicity and quasiperiodicity based
on the geometry of sliding window embeddings. Let dgmn be the n-dimensional persistence diagram
for the Rips filtration on the sliding window embedding of a video, and define mpi (dgmn ) as the i-th
largest difference d − b for (b, d) ∈ dgmn . In particular
mp1 (dgmn ) = max{d − b : (b, d) ∈ dgmn }
and mpi (dgmn ) ≥ mpi+1 (dgmn ). We propose the following scores:
1. Periodicity Score (PS)
\[ PS = \frac{1}{\sqrt{3}}\, mp_1(\mathrm{dgm}_1) \tag{19} \]
Like [31], we exploit the fact that for the Rips filtration on S¹, the 1-dimensional persistence diagram has only one prominent birth-death pair, with coordinates (0, √3). Since this is the limit shape of a normalized perfectly periodic sliding window video, the periodicity score is between 0 (not periodic) and 1 (perfectly periodic).
2. Quasiperiodicity Score (QPS)
\[ QPS = \sqrt{\frac{mp_2(\mathrm{dgm}_1)\, mp_1(\mathrm{dgm}_2)}{3}} \tag{20} \]
This score is designed with the torus in mind. We score based on the second largest 1D persistence times the largest 2D persistence, since we want a shape that has two core circles and
encloses a void to get a large score. Based on the Künneth theorem of homology, the 2-cycle
(void) should die the moment the smallest 1-cycle dies.
3. Modified Periodicity Score (MPS)
\[ MPS = \frac{1}{\sqrt{3}}\left( mp_1(\mathrm{dgm}_1) - mp_2(\mathrm{dgm}_1) \right) \tag{21} \]
We design a modified periodicity score which should be lower for quasiperiodic videos than what the original periodicity score would yield. A sketch of computing all three scores from persistence diagrams is given after this list.
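A minimal sketch of the three scores, assuming persistence diagrams given as arrays of (birth, death) rows (e.g. as returned by ripser(X, maxdim=2, coeff=3) from the ripser.py package, matching the Z3 coefficients mentioned in the next paragraph), is the following; it is our own illustration of Equations 19-21, not the authors' implementation, and the function names are ours.

```python
import numpy as np

def max_persistences(dgm, k=2):
    """Return the k largest persistences (death - birth) in a diagram,
    padded with zeros if the diagram has fewer than k finite points."""
    if len(dgm) == 0:
        return [0.0] * k
    pers = np.sort(dgm[:, 1] - dgm[:, 0])[::-1]
    pers = pers[np.isfinite(pers)]
    return list(pers[:k]) + [0.0] * max(0, k - len(pers))

def periodicity_scores(dgm1, dgm2):
    """Compute PS, QPS and MPS (Equations 19-21) from the 1- and
    2-dimensional persistence diagrams of the sliding window embedding."""
    mp1_1, mp2_1 = max_persistences(dgm1, 2)
    mp1_2 = max_persistences(dgm2, 1)[0]
    PS = mp1_1 / np.sqrt(3)
    QPS = np.sqrt(mp2_1 * mp1_2 / 3.0)
    MPS = (mp1_1 - mp2_1) / np.sqrt(3)
    return PS, QPS, MPS
```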
Note that we use Z3 field coefficients for all persistent homology computations since, as shown
by [31], this works better for periodic signals with strong harmonics. Before we embark on experiments,
let us explore the choice of two crucial parameters for the sliding window embedding: the delay τ > 0
and the dimension d ∈ N. In practice we determine an equivalent pair of parameters: the dimension
d and the window size dτ .
3.5 Dimension and Window Size
Takens’ embedding theorem is one of the most fundamental results in the theory of dynamical systems
[38]. In short, it contends that (under appropriate hypotheses) there exists an integer D, so that for
all d ≥ D and generic τ > 0 the sliding window embedding SWd,τ X reconstructs the state space of the
underlying dynamics witnessed by the signal X. One common strategy for determining a minimal such
D is the false nearest-neighbors scheme [21]. The idea is to keep track of the k-th nearest neighbors
of each point in the delay embedding, and if they change as d is increased, then the prior estimates
for d were too low. This algorithm was used in recent work on video dynamics [42], for instance.
Even if we can estimate d, however, how does one choose the delay τ? As shown in [31], the sliding window embedding of a periodic signal is roundest (i.e. so that the periodicity score PS is maximized) when the window size, dτ, satisfies the following relation:
\[ d\tau = \frac{\pi k}{L} \cdot \frac{d}{d+1} \tag{22} \]
Here L is the number of periods that the signal has in [0, 2π] and k ∈ N. To verify this experimentally, we show in Figure 13 how the periodicity score PS changes as a function of window size for the pendulum video, and how the choice of window size from Equation 22 maximizes PS. To generate this figure we fixed a sufficiently large d and varied τ. Let us now describe the general approach: given a video we perform a period-length estimation step (see Section 3.6 next), which results in a positive real number ℓ. For a given d ∈ N large enough we let τ > 0 be such that dτ = ℓ · d/(d+1).
Figure 13: Varying the window size, dτ , in a delay embedding of the synthetic pendulum video, which
has a period length around 25 frames. Red dashed lines are drawn at the window lengths that would
be expected to maximize roundness of the embedding for that period length based on theory in [31].
3.6 Fundamental Frequency Estimation
Though Figure 13 suggests robustness to window size as long as the window is more than half of
the period, we may not know what that is in practice. To automate window size choices, we do a
coarse estimate using fundamental frequency estimation techniques on a 1D surrogate signal. To get
a 1D signal, we extract the first coordinate of diffusion maps [4] using 10% nearest neighbors on the
raw video frames (no delay) after taking a smoothed time derivative. Note that a similar diffusion-based method was also used in recent work by [46] to analyze the frequency spectrum of a video
of an oscillating 2 pendulum + spring system in a quasiperiodic state. Once we have the diffusion
time series, we then apply the normalized autocorrelation method of [25] to estimate the fundamental
frequency. In particular, given a discrete signal x of length N , define the autocorrelation as
\[ r_t(\tau) = \sum_{j=t}^{t+N-1-\tau} x_j\, x_{j+\tau} \tag{23} \]
However, as observed by [7], a more robust function for detecting periodicities is the squared
difference function
\[ d_t(\tau) = \sum_{j=t}^{t+N-1-\tau} (x_j - x_{j+\tau})^2 \tag{24} \]
which can be rewritten as dt (τ ) = mt (τ ) − 2rt (τ ) where
\[ m_t(\tau) = \sum_{j=t}^{t+N-1-\tau} (x_j^2 + x_{j+\tau}^2) \tag{25} \]
Finally, [25] suggest normalizing this function to the range [−1, 1] to control for window size and
to have an interpretation akin to a Pearson correlation coefficient:
\[ n_t(\tau) = 1 - \frac{m_t(\tau) - 2 r_t(\tau)}{m_t(\tau)} = \frac{2 r_t(\tau)}{m_t(\tau)} \tag{26} \]
The fundamental frequency is then the inverse period of the largest peak in nt which is to the
right of a zero crossing. The zero crossing condition helps prevent an offset of 0 from being the largest
peak. Defining the normalized autocorrelation as in Equation 26 has the added advantage that the
value of nt (τ ) at the peak can be used to score periodicity, which the authors call clarity. Values
closer to 1 indicate more perfect periodicities. This technique will sometimes pick integer multiples
of the period, so we multiply nt (τ ) by a slowly decaying envelope which is 1 for 0 lag and 0.9 for the
maximum lag to emphasize smaller periods. Figure 14 shows the result of this algorithm on a periodic
video, and Figure 15 shows the algorithm on an irregular video.
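The following sketch (ours, with t = 0 and an envelope endpoint of 0.9 as illustrative choices) implements the normalized autocorrelation of Equations 23-26 and the peak-picking rule described above; given the resulting period estimate ℓ and a chosen dimension d, Equation 22 with the window size set to the period suggests τ ≈ ℓ/(d + 1).

```python
import numpy as np

def normalized_autocorrelation(x):
    """McLeod-style normalized autocorrelation n_t (Equations 23-26) of a
    1D surrogate signal x (here taken over the whole signal, t = 0)."""
    N = len(x)
    n = np.zeros(N)
    for lag in range(N):
        xa, xb = x[:N - lag], x[lag:]
        r = np.dot(xa, xb)                       # Equation 23
        m = np.dot(xa, xa) + np.dot(xb, xb)      # Equation 25
        n[lag] = 2 * r / m if m > 0 else 0.0     # Equation 26
    return n

def estimate_period(x, envelope_end=0.9):
    """Pick the largest peak of n to the right of a zero crossing, after
    multiplying by a slowly decaying envelope (1 at lag 0, envelope_end at
    the maximum lag) to discourage integer multiples of the period."""
    n = normalized_autocorrelation(x)
    n = n * np.linspace(1.0, envelope_end, len(n))
    crossings = np.where((n[:-1] > 0) & (n[1:] <= 0))[0]
    if len(crossings) == 0:
        return None, 0.0
    start = crossings[0] + 1
    peak = start + np.argmax(n[start:])
    return peak, n[peak]          # period estimate (in samples) and "clarity"

# Given an estimated period ell and an embedding dimension d, a delay of
# roughly tau = ell / (d + 1) sets the window size to one period.
```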
Figure 14: Diffusion maps + normalized autocorrelation fundamental frequency estimation on a periodic vocal folds video (Section 4.3). The chosen period length is 32, as indicated by the red dot over
the peak. This matches with the visually inspected period length.
Figure 15: Diffusion maps + normalized autocorrelation fundamental frequency estimation on a video
of vocal folds with irregular oscillations (Section 4.3).
4 Experimental Evaluation
Next we evaluate the effectiveness of the proposed (Modified) Periodicity and Quasiperiodicity scores
on three different tasks. First, we provide estimates of accuracy for the binary classifications periodic/not-periodic or quasiperiodic/not-quasiperiodic in the presence of several noise models and noise levels.
The results illustrate the robustness of our method. Second, we quantify the quality of periodicity
rankings from machine scores, as compared to those generated by human subjects. In a nutshell,
and after comparing with several periodicity quantification algorithms, our approach is shown to be
the most closely aligned with the perception of human subjects. Third, we demonstrate that our
methodology can be used to automatically detect the physiological manifestations of certain speech
pathologies (e.g., normal vs. biphonation), directly from high-speed videos of vibrating vocal folds.
4.1 Classification Under Varying Noise Levels/Models
As shown empirically in [8], a common source of noise in videos comes from camera shake (blur); this
is captured by point spread functions resembling directed random walks [8, Figure 1] and the amount
of blur (i.e. noise level) is controlled by the extent in pixels of the walk. Other sources are additive
white Gaussian noise (awgn), controlled by the standard deviation of the Gaussian kernel, and MPEG
bit errors quantified by the percentage of corrupted information. Figure 16 shows examples of these
noise types.
For classification purposes we use three main recurrence classes. Three types of periodic videos
(True periodic, TP): an oscillating pendulum, a bird flapping its wings, and an animation of a beating
heart. Two types of quasiperiodic videos (True quasiperiodic, TQ): one showing two solid disks which
oscillate sideways at non-commensurate rates, and the second showing two stationary Gaussian pulses
with amplitudes non-commensurately modulated by cosine functions. Two videos without significant
recurrence (True non-recurrent, TN): a video of a car driving past a landscape, and a video of an
explosion. Each one of these seven videos is then corrupted by the three noise models at three different
noise levels (blur = 20, 40, 80, awgn σ = 1, 2, 3, bit error = 5, 10, 20%) as follows: given a particular
video, a noise model and noise level, 600 instances are generated by sampling noise independently at
random.
Results: We report in Table 1 the area under the Receiver Operating Characteristic (ROC) curve,
or AUROC for short, for the classification task TP vs. TN (resp. TQ vs. TN) and binary classifier
furnished by Periodicity (resp. Quasiperiodicity) Score.
For instance, for the Blur noise model with noise level of 80 × 80 pixels, the AUROC from using
the Periodicity Score to classify the 600 instances of the Heartbeat video as periodic, and the 600
(a) Original  (b) 20 × 20 blur  (c) awgn σ = 2  (d) 5% Bit Err
Figure 16: The results of applying motion blur, additive white Gaussian noise, and MPEG bit corruption to a video frame.
Table 1: AUROC values for different levels of noise, from the binary classification task: periodic (bird
flapping, heart beating, pendulum) vs. non-recurrent (driving - left subcell, explosions - right subcell)
based on the periodicity score (Equation 19). Also two synthetic quasiperiodic videos (sideways disks,
modulated pulses) are compared to the same two non-recurrent videos based on the quasiperiodicity
score (Equation 20).
Awgn Awgn
σ=1 σ=2
Awgn
σ=3
Bit Err
5%
Bit Err
10%
1
1
1
1
1
1 0.97
1
0.91 0.94 1
1
1
1
1 0.94 0.84 0.85 0.43 0.68 1
1
1
1
1
1
1
Blur
20
Bird
Flapping
Heart
Beat
Pendulum
Blur
40
Blur
80
1
1
1
1
1
1
1
1
QuasiPeriodic
Disks
1 0.85
QuasiPeriodic
Pulses
1 0.9
1
1
1
Bit Err
15%
0.92 0.75 0.79
0.9 0.89 1 0.91 0.98 0.87 0.56 0.62
1
1
0.99 0.98 0.97
1
0.83 0.75 0.39 1
1
1
1
1
1
1 0.92 0.99 0.93 0.82 0.82
1
0.87
1
1
1
1
1
1 0.96 0.99 0.95 0.95 0.95
1
0.83 1
instances of the Driving video as not periodic is 0.91. Similarly, for the MPEG bit corruption model
with 5% of bit error, the AUROC from using the Quasiperiodicity score to classify the 600 instances
of the Quasiperiodic Sideways Disks as quasiperiodic and the 600 instances of the Explosions video
as not quasiperiodic is 0.92. To put these numbers in perspective, AUROC = 1 is associated with a
perfect classifier and AUROC = 0.5 corresponds to classification by a random coin flip.
Overall, the type of noise that degrades performance the most across videos is the bit error, which
makes sense, since this has the effect of randomly freezing, corrupting, or even deleting frames, which
all interrupt periodicity. The blur noise also affects videos where the range of motion is small. The
pendulum video, for instance, only moves over a range of 60 pixels at the most extreme end, so an
80x80 pixel blur almost completely obscures the motion.
4.2 Comparing Human and Machine Periodicity Rankings
Next we quantify the extent to which rankings obtained from our periodicity score (Equation 19), as
well as three other methods, agree with how humans rank videos by periodicity. The starting point is
a dataset of 20 different creative commons videos, each 5 seconds long at 30 frames per second. Some
17
videos appear periodic, such as a person waving hands, a beating heart, and spinning carnival rides.
Some of them appear nonperiodic, such as explosions, a traffic cam, and drone view of a boat sailing.
And some of them are in between, such as the pendulum video with simulated camera shake.
It is known that humans are notoriously bad at generating globally consistent rankings of sets
with more than 5 or 7 elements [27]. However, when it comes to binary comparisons of the type
“should A be ranked higher than B?”, few systems are as effective as human perception, especially for
the identification of recurrent patterns in visual stimuli. We will leverage this to generate a globally
consistent ranking of the 20 videos in our initial data set.
We use Amazon’s Mechanical Turk (AMT) [5] to present each pair of videos in the set of 20, \(\binom{20}{2} = 190\) pairs in total, each to three different users, for a total of 570 pairwise rankings. 15 unique AMT workers
contributed to our experiment, using an interface as the one shown in Figure 17.
Figure 17: The interface that humans are given on AMT for pairwise ranking videos by periodicity.
In order to aggregate this information into a global ranking which is as consistent as possible with
the pairwise comparisons, we implement a technique known as Hodge rank aggregation [18]. Hodge
rank aggregation finds the closest consistent ranking to a set of preferences, in a least squares sense.
More precisely, given a set of objects X, and given a set of comparisons P ⊂ X × X, we seek a scalar
function s on all of the objects that minimizes the following sum
\[ \sum_{(a,b)\in P} | v_{ab} - (s_b - s_a) |^2 \tag{27} \]
where v_ab is a real number which is positive if b is ranked higher than a and negative otherwise. Thus, s is a function whose discrete gradient best matches the set of preferences with respect to an L2 norm. Note that the preferences that we feed the algorithm are based on the pairwise rankings returned from AMT. If video b is ranked above video a, then we assign v_ab = 1, or -1 otherwise. Since we have 3 rankings for each pair, we actually assign weights of +3, +1, -1, or -3. The +/- 3 are if all rankings agree in one direction, and the +/- 1 are if one of the rankings disagrees with the other two. Figure 18 shows a histogram of all of the weighted scores from users on AMT. They are mostly in agreement, though there are a few +/- 1 scores.
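A minimal least-squares rendering of Equation 27 (our own sketch; the helper name, item indexing and toy example are ours) solves for the global scores s directly with numpy:

```python
import numpy as np

def hodge_rank(num_items, comparisons):
    """comparisons: list of (a, b, v) with v > 0 if item b is preferred to
    item a (here v in {+3, +1, -1, -3} from the aggregated AMT votes).
    Solves Equation 27 in the least squares sense and returns a score per
    item; a higher score means ranked as more periodic."""
    A = np.zeros((len(comparisons), num_items))
    v = np.zeros(len(comparisons))
    for row, (a, b, vab) in enumerate(comparisons):
        A[row, b] = 1.0     # coefficient of s_b
        A[row, a] = -1.0    # coefficient of s_a
        v[row] = vab
    s, *_ = np.linalg.lstsq(A, v, rcond=None)
    return s - s.mean()     # scores are defined only up to an additive constant

# Toy example: item 1 beats item 0 unanimously, item 2 beats item 1 by 2 votes to 1
scores = hodge_rank(3, [(0, 1, 3), (1, 2, 1), (0, 2, 3)])
print(np.argsort(scores))   # indices from least to most periodic
```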
As comparison to the human scores, we use three different classes of techniques for machine ranking
of periodicity.
Sliding Windows (SW): We sort the videos in decreasing order of Periodicity Score (Equation 19).
We fix the window size at 20 frames and the embedding dimension at 20 frames (which is enough to
capture 10 strong harmonics). We also apply a time derivative of width 10 to every frame.
Figure 18: The histogram of scores that the workers on AMT gave to all pairwise videos.
Cutler-Davis [6]: The authors of this work present two different techniques to quantify periodicity
from a self-similarity matrix (SSM) of video frames. The first is a frequency domain technique based
on the peak of the average power spectral density over all columns (rows) of the SSM after linearly
de-trending and applying a Hann window. To turn this into a continuous score, we report the peak minus the mean, divided by the standard deviation. This method will be referred to as Frequency Score.
As the authors warn, the frequency peak method has a high susceptibility to false positives. This
motivated the design of a more robust technique in [6], which works by finding peaks in the 2D
normalized autocorrelation of the Gaussian smoothed SSMs. For videos with mirror symmetry, the
peaks will lie on a diamond lattice, while for videos without mirror symmetry, they will lie on a
square lattice. After peak finding within neighborhoods, one simply searches over all possible lattices
at all possible widths to find the best match with the peaks. Since each lattice is centered at the
autocorrelation point (0, 0), no translational checks are necessary.
To turn this into a continuous score, let E be the sum of Euclidean distances of the matched peaks
in the autocorrelation image to the best fit lattice, let r1 be the proportion of lattice points that have
been matched, and let r2 be the proportion of peaks which have been matched to a lattice point.
Then we give the final periodicity score as
\[ \mathrm{CDscore} = (1 + E/r_1) / (r_1 r_2)^3 \tag{28} \]
A lattice which fits the peaks perfectly (r1 = 1) with no error (E = 0) and no false positive peaks
(r2 = 1) will have a score of 1, and any video which fails to have a perfectly matched lattice will have
a score greater than 1. Hence, we sort in increasing order of the score to get a ranking.
As we will show, this technique agrees second best with humans, after our Periodicity Score ranking. One of its main drawbacks is the numerical instability of finding maxima at non-isolated critical points around nearly diagonal regions in square lattices, which will erroneously inflate the score. Also, the lattice searching only occurs over an integer grid, but there may be periods that aren’t an integer number of frames, so there will always be a nonzero E for such videos. By contrast, our sliding window scheme can work for any real-valued period length.
Diffusion Maps + Normalized Autocorrelation “Clarity”: Finally, we apply the technique from Section 3.6 to get an autocorrelation function, and we report the value of the maximum peak of the
normalized autocorrelation to the right of a zero crossing, referred to as “clarity” by [25]. Values
closer to 1 indicate more perfect repetitions, so we sort in descending order of clarity to get a ranking.
Figure 19 shows an example of these three different techniques on a periodic video. There is a dot which rises above the diagonal in the persistence diagram, a lattice is found which nearly matches the critical points in the autocorrelation image, and the autocorrelation function on diffusion maps has a clear peak.
Figure 19: An example of the SW score (top), the clarity score (bottom left), and the CDscore (bottom
right, matched peaks in green and lattice in blue), on a periodic video of a man waving his arms from
the KTH dataset ( [36]).
By contrast, for a nonperiodic video (Figure 20), there is hardly any persistent homology, there is
no well matching lattice, and the first diffusion coordinate has no apparent periodicities.
Figure 20: An example of the SW score (top), the clarity score (bottom left), and the CDscore (bottom right, matched peaks in green and lattice in blue), on a video of an explosion, which is nonperiodic.
Results: Once we have the global human rankings and the global machine rankings, we can compare them using the Kendall τ score [20]. Given a set X of N objects and two total orders >_1 and >_2, where >(x_a, x_b) = 1 if x_a > x_b and >(x_a, x_b) = −1 if x_a < x_b, the Kendall τ score is defined as
\[ \tau = \frac{1}{N(N-1)/2} \sum_{i<j} \big( >_1(x_i, x_j) \big)\big( >_2(x_i, x_j) \big) \tag{29} \]
For two rankings which agree exactly, the Kendall τ score will be 1. For two rankings which are exactly the reverse of each other, the Kendall τ score will be -1. In this way, it is analogous to a Pearson correlation between rankings.
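For completeness, a direct transcription of Equation 29 (our own sketch, assuming score vectors without ties) is:

```python
import numpy as np
from itertools import combinations

def kendall_tau(scores1, scores2):
    """Equation 29 for two total orders given as score vectors over the
    same N objects (no ties assumed)."""
    N = len(scores1)
    total = 0.0
    for i, j in combinations(range(N), 2):
        total += np.sign(scores1[i] - scores1[j]) * np.sign(scores2[i] - scores2[j])
    return total / (N * (N - 1) / 2)

# scipy.stats.kendalltau computes the same statistic (with tie corrections).
```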
Table 2: The Kendall τ scores between all of the machine rankings and the Hodge aggregated human
rankings.
              Human     SW+TDA    Freq [6]   CDscore [6]   Clarity [7]
Human         1         0.663     -0.295     0.347         0.284
SW+TDA        0.663     1         -0.316     0.221         0.516
Freq [6]      -0.295    -0.316    1          -0.0842       -0.189
CDscore [6]   0.347     0.221     -0.0842    1             0.411
Clarity [7]   0.284     0.516     -0.189     0.411         1
Table 3: Average runtimes, in milliseconds, per video for all of the algorithms
SW+TDA    Freq [6]   CDscore [6]   Clarity [7]
3101ms    73ms       176ms         154ms
Table 2 shows the Kendall τ scores between all of the different machine rankings and the human
rankings. Our sliding window video methodology (SW+TDA) agrees with the human ranking more
than any other pair of ranking types. The second most similar are the SW and the diffusion clarity,
which is noteworthy as they are both geometric techniques. Table 3 also shows the average run times,
in milliseconds, of the different algorithms on each video on our machine. This does highlight one
potential drawback of our technique, since TDA algorithms tend to be computationally intensive.
However, at this scale (videos with at most several hundred frames), performance is reasonable.
4.3 Periodicity And Biphonation in High Speed Videos of Vocal Folds
In this final task we apply our methodology to a real world problem of interest in medicine. We
show that our method can automatically detect certain types of voice pathologies from high-speed
glottography, or high speed videos (4000 fps) of the left and right vocal folds in the human vocal
tract [9, 44, 45]. In particular, we detect and differentiate quasiperiodicity from periodicity by using
our geometric sliding window pipeline. Quasiperiodicity is a special case of what is referred to as
“biphonation” in the biological context, where nonlinear phenomena cause a physical process to bifurcate into two different periodic modes, often during a transition to chaotic behavior [15]. The torus
structure we sketched in Figure 9 has long been recognized in this context [28], but we provide a novel
way of quantifying it.
Similar phenomena exist in audio [14, 15], but the main reason for studying laryngeal high speed
video is understanding the biomechanical underpinnings of what is perceived in the voice. In particular,
this understanding can potentially lead to practical corrective therapies and surgical interventions.
On the other hand, the presence of biphonation in sound is not necessarily the result of a physiological
phenomenon; it has been argued that it may come about as the result of changes in states of arousal [?].
In contrast with our work, the existing literature on video-based techniques usually employs an
inherently Lagrangian approach, where different points on the left and right vocal folds are tracked,
and coordinates of these points are analyzed as 1D time series (e.g. [13, 24, 28, 35], [26]). This is a natural approach, since those are the pixels where all of the important signal resides, and well-understood 1D signal processing techniques can be used. However, edge detectors often require tuning,
and they can suddenly fail when the vocal folds close [24]. In our technique, we give up the ability
to localize the anomalies (left/right, anterior/posterior) since we are not tracking them, but in return
we do virtually no preprocessing, and our technique is domain independent.
Results: We use a collection of 7 high-speed videos for this analysis, drawn from a variety of different
sources [49], [26], [28], [13]. There are two videos which correspond to “normal” periodic vocal folds,
three which correspond to biphonation [28], and two which correspond to irregular motion⁵. We manually extracted 400 frames per video (100 milliseconds) and autotuned the window size based on
autocorrelation of 1D diffusion maps (Section 3.6). We then chose an appropriate τ and chose a time
spacing so that each point cloud would have 600 points. As shown in Table 4, our technique is able to
differentiate between the four classes. We also show PCA and persistence diagrams for one example
for each class. In Figure 21, we see what appears to be a loop in PCA, and one strong 1D persistent
dot confirms this. In Figure 22, we see a prominent torus in the persistence diagram. In Figure 23, we
don’t see any prominent structures in the persistence diagram, even though PCA looks like it could
be a loop or a torus. Note, however, that PCA only preserves 13.7% of the variance in the signal,
which is why high dimensional techniques are important to draw quantitative conclusions.
Table 4: Results of our sliding window pipeline on videos of periodic vocal folds, biphonation, and irregularities. We give the max persistence periodicity score (PS), the modified periodicity score (MPS), and the quasiperiodicity score (QPS) presented in Section 3.4. We also show the window size (Win) that the autocorrelation technique in Section 3.6 gives. We have bolded the top three MPS and QPS scores across all videos. The max modified periodicity scores include the two periodic videos and one of the biphonation videos. The max quasiperiodicity scores are all of the biphonation videos, which means the one with a high periodicity score could be ruled out of the periodicity category.
Video Name                         Win    PS      MPS     QPS
Periodic 1 ([13])                  16     0.816   0.789   0.011
Periodic 2 ([26], Figure 21)       32     0.601   0.533   0.009
Biphonation 1 ([28])               53     0.638   0.294   0.292
Biphonation 2 ([28])               42     0.703   0.583   0.116
Biphonation 3 ([28], Figure 22)    67     0.515   0.076   0.426
Mucus Perturbed Periodic ([49])    94     0.028   0.019   0.004
Irregular ([13], Figure 23)        232    0.18    0.097   0.04
5 Discussion
We have shown in this work how applying sliding window embeddings to videos can be used to translate properties of the underlying dynamics into geometric features of the resulting point cloud representation. Moreover, we also showed how topological/geometric tools such as persistent homology can be leveraged to quantify the geometry of these embeddings. The pipeline was evaluated extensively, showing robustness to several noise models, high quality in the produced periodicity rankings, and applicability to the study of speech conditions from high-speed video data.
Moving forward, an interesting avenue related to medical applications is the difference between
biphonation which occurs from quasiperiodic modes and biphonation which occurs from harmonic
modes. [31] shows that Z3 field coefficients can be used to indicate the presence of a strong harmonic,
so we believe a geometric approach is possible. This could be used, for example, to differentiate
between subharmonic anomalies and quasiperiodic transitions [44].
⁵ Please refer to supplementary material for an example video from each of these three classes.
Figure 21: Video frames and sliding window statistics on a video of vocal folds undergoing normal
periodic vibrations [26]. One strong loop is visible in PCA and in the persistence diagrams
Figure 22: Video frames and sliding window statistics on a video of vocal folds undergoing biphonation,
courtesy of Juergen Neubauer [28]. PCA suggests a possible torus, and the persistence diagram indeed
has the signature of a torus (two strong independent 1-cycles and one 2-cycle)
Acknowledgments
The authors would like to thank Juergen Neubauer, Dimitar Deliyski, Robert Hillman, Alessandro
de Alarcon, Dariush Mehta, and Stephanie Zacharias for providing videos of vocal folds. We also
thank Matt Berger at ARFL for discussions about sliding window video efficiency, and we thank the
15 anonymous workers on the Amazon Mechanical Turk who ranked periodic videos.
Figure 23: Video frames and sliding window statistics of “irregular” vocal fold vibrations [13]. Though
2D PCA looks similar to Figure 22, no apparent 1D or 2D topological features are apparent in the
high dimensional state space.
References
[1] Mark Allmen and Charles R Dyer. Cyclic motion detection using spatiotemporal surfaces and
curves. In Pattern Recognition, 1990. Proceedings., 10th International Conference on, volume 1,
pages 365–370. IEEE, 1990.
[2] John Atanbori, Peter Cowling, John Murray, Belinda Colston, Paul Eady, Dave Hughes, Ian
Nixon, and Patrick Dickinson. Analysis of bat wing beat frequency using fourier transform. In
International Conference on Computer Analysis of Images and Patterns, pages 370–377. Springer,
2013.
[3] Ulrich Bauer. Ripser: a lean C++ code for the computation of Vietoris-Rips persistence barcodes. http://ripser.org, 2015-2017.
[4] Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and computational harmonic
analysis, 21(1):5–30, 2006.
[5] Matthew JC Crump, John V McDonnell, and Todd M Gureckis. Evaluating amazon’s mechanical
turk as a tool for experimental behavioral research. PloS one, 8(3):e57410, 2013.
[6] Ross Cutler and Larry S. Davis. Robust real-time periodic motion detection, analysis, and
applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):781–796,
2000.
[7] Alain De Cheveigné and Hideki Kawahara. Yin, a fundamental frequency estimator for speech
and music. The Journal of the Acoustical Society of America, 111(4):1917–1930, 2002.
[8] Mauricio Delbracio and Guillermo Sapiro. Removing camera shake via weighted fourier burst
accumulation. IEEE Transactions on Image Processing, 24(11):3293–3307, 2015.
[9] Dimitar D Deliyski, Pencho P Petrushev, Heather Shaw Bonilha, Terri Treman Gerlach, Bonnie
Martin-Harris, and Robert E Hillman. Clinical implementation of laryngeal high-speed videoendoscopy: challenges and evolution. Folia Phoniatrica et Logopaedica, 60(1):33–44, 2007.
[10] Roman Goldenberg, Ron Kimmel, Ehud Rivlin, and Michael Rudzsky. Behavior classification by
eigendecomposition of periodic motions. Pattern Recognition, 38(7):1033–1043, 2005.
[11] Jerry P Gollub and Harry L Swinney. Onset of turbulence in a rotating fluid. Physical Review
Letters, 35(14):927, 1975.
[12] Allen Hatcher. Algebraic topology. University Press Ltd., 2002.
[13] Christian T Herbst, Jakob Unger, Hanspeter Herzel, Jan G Švec, and Jörg Lohscheller. Phasegram analysis of vocal fold vibration documented with laryngeal high-speed video endoscopy.
Journal of Voice, 30(6):771–e1, 2016.
[14] Hanspeter Herzel, David Berry, Ingo R Titze, and Marwa Saleh. Analysis of vocal disorders
with methods from nonlinear dynamics. Journal of Speech, Language, and Hearing Research,
37(5):1008–1019, 1994.
[15] Hanspeter Herzel, Robert Reuter, and Richard A Katz. Biphonation in voice signals. In AIP
Conference Proceedings, volume 375, pages 644–657. AIP, 1996.
[16] Peng Huang, Adrian Hilton, and Jonathan Starck. Shape similarity for 3d video sequences of
people. International Journal of Computer Vision, 89(2-3):362–381, 2010.
[17] Shiyao Huang, Xianghua Ying, Jiangpeng Rong, Zeyu Shang, and Hongbin Zha. Camera calibration from periodic motion of a pedestrian. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pages 3025–3033, 2016.
[18] Xiaoye Jiang, Lek-Heng Lim, Yuan Yao, and Yinyu Ye. Statistical ranking and combinatorial
hodge theory. Mathematical Programming, 127(1):203–244, 2011.
[19] Holger Kantz and Thomas Schreiber. Nonlinear time series analysis, volume 7. Cambridge
university press, 2004.
[20] Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
[21] Matthew B Kennel, Reggie Brown, and Henry DI Abarbanel. Determining embedding dimension
for phase-space reconstruction using a geometrical construction. Physical review A, 45(6):3403,
1992.
[22] Orrawan Kumdee and Panrasee Ritthipravat. Repetitive motion detection for human behavior
understanding from video images. In Signal Processing and Information Technology (ISSPIT),
2015 IEEE International Symposium on, pages 484–489. IEEE, 2015.
[23] Ofir Levy and Lior Wolf. Live repetition counting. In Proceedings of the IEEE International
Conference on Computer Vision, pages 3020–3028, 2015.
[24] Jörg Lohscheller, Hikmet Toy, Frank Rosanowski, Ulrich Eysholdt, and Michael Döllinger. Clinically evaluated procedure for the reconstruction of vocal fold vibrations from endoscopic digital
high-speed videos. Medical image analysis, 11(4):400–413, 2007.
[25] Philip McLeod and Geoff Wyvill. A smarter way to find pitch. In Proceedings of the International Computer Music Conference (ICMC), pages 138–141, 2005.
[26] Daryush D Mehta, Dimitar D Deliyski, Thomas F Quatieri, and Robert E Hillman. Automated
measurement of vocal fold vibratory asymmetry from high-speed videoendoscopy recordings.
Journal of Speech, Language, and Hearing Research, 54(1):47–54, 2011.
[27] George A Miller. The magical number seven, plus or minus two: some limits on our capacity for
processing information. Psychological review, 63(2):81, 1956.
[28] Jürgen Neubauer, Patrick Mergell, Ulrich Eysholdt, and Hanspeter Herzel. Spatio-temporal
analysis of irregular vocal fold oscillations: Biphonation due to desynchronization of spatial
modes. The Journal of the Acoustical Society of America, 110(6):3179–3192, 2001.
[29] Sourabh A Niyogi, Edward H Adelson, et al. Analyzing and recognizing walking figures in xyt.
In CVPR, volume 94, pages 469–474, 1994.
[30] Jose A Perea. Persistent homology of toroidal sliding window embeddings. In Acoustics, Speech
and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 6435–6439.
IEEE, 2016.
[31] Jose A Perea and John Harer. Sliding windows and persistence: An application of topological
methods to signal analysis. Foundations of Computational Mathematics, 15(3):799–838, 2015.
[32] Mark A Pinsky. Introduction to Fourier analysis and wavelets, volume 102. American Mathematical Soc., 2002.
[33] Aaron M Plotnik and Stephen M Rock. Quantification of cyclic motion of marine animals from
computer vision. In OCEANS’02 MTS/IEEE, volume 3, pages 1575–1581. IEEE, 2002.
[34] Ramprasad Polana and Randal C Nelson. Detection and recognition of periodic, nonrigid motion.
International Journal of Computer Vision, 23(3):261–282, 1997.
[35] Qingjun Qiu, HK Schutte, Lide Gu, and Qilian Yu. An automatic method to quantify the
vibration properties of human vocal folds via videokymography. Folia Phoniatrica et Logopaedica,
55(3):128–136, 2003.
[36] Christian Schuldt, Ivan Laptev, and Barbara Caputo. Recognizing human actions: a local svm
approach. In Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, volume 3, pages 32–36. IEEE, 2004.
[37] Steven M Seitz and Charles R Dyer. View-invariant analysis of cyclic motion. International
Journal of Computer Vision, 25(3):231–251, 1997.
[38] Floris Takens. Detecting strange attractors in turbulence. In Dynamical systems and turbulence,
Warwick 1980, pages 366–381. Springer, 1981.
[39] Christopher Tralie. High-dimensional geometry of sliding window embeddings of periodic videos.
In LIPIcs-Leibniz International Proceedings in Informatics, volume 51. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2016.
[40] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of cognitive neuroscience,
3(1):71–86, 1991.
[41] Mikael Vejdemo-Johansson, Florian T Pokorny, Primoz Skraba, and Danica Kragic. Cohomological learning of periodic motion. Applicable Algebra in Engineering, Communication and Computing, 26(1-2):5–26, 2015.
[42] V Venkataraman and P Turaga. Shape descriptions of nonlinear dynamical systems for video-based inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[43] Ping Wang, Gregory D Abowd, and James M Rehg. Quasi-periodic event analysis for social
game retrieval. In Computer Vision, 2009 IEEE 12th International Conference on, pages 112–
119. IEEE, 2009.
[44] Inka Wilden, Hanspeter Herzel, Gustav Peters, and Günter Tembrock. Subharmonics, biphonation, and deterministic chaos in mammal vocalization. Bioacoustics, 9(3):171–196, 1998.
[45] Thomas Wittenberg, Manfred Moser, Monika Tigges, and Ulrich Eysholdt. Recording, processing,
and analysis of digital high-speed sequences in glottography. Machine vision and applications,
8(6):399–404, 1995.
[46] Or Yair, Ronen Talmon, Ronald R Coifman, and Ioannis G Kevrekidis. No equations, no parameters, no variables: data, and the reconstruction of normal forms by learning informed observation
geometries. arXiv preprint arXiv:1612.03195, 2016.
[47] Jing Yang, Hong Zhang, and Guohua Peng. Time-domain period detection in short-duration
videos. Signal, Image and Video Processing, 10(4):695–702, 2016.
[48] Guoshen Yu, Guillermo Sapiro, and Stéphane Mallat. Solving inverse problems with piecewise
linear estimators: From gaussian mixture models to structured sparsity. IEEE Transactions on
Image Processing, 21(5):2481–2499, 2012.
[49] Stephanie RC Zacharias, Charles M Myer, Jareen Meinzen-Derr, Lisa Kelchner, Dimitar D
Deliyski, and Alessandro de Alarcón. Comparison of videostroboscopy and high-speed videoendoscopy in evaluation of supraglottic phonation. Annals of Otology, Rhinology & Laryngology,
page 0003489416656205, 2016.
[50] A. Zomorodian and G. Carlsson. Computing persistent homology. Discrete & Computational
Geometry, 33(2):249–274, 2005.
arXiv:1802.04834v1 [cs.LG] 13 Feb 2018
Challenging Images For Minds and Machines
Amir Rosenfeld, John K. Tsotsos
Department of Electrical Engineering and Computer Science
York University, Toronto, ON, Canada
[email protected],[email protected]
February 15, 2018
Abstract
There is no denying the tremendous leap in the performance of machine learning methods in the past half-decade. Some might even say that specific sub-fields
in pattern recognition, such as machine-vision, are as good as solved, reaching
human and super-human levels. Arguably, lack of training data and computation
power are all that stand between us and solving the remaining ones. In this position paper we underline cases in vision which are challenging to machines and
even to human observers. This is to show limitations of contemporary models that
are hard to ameliorate by following the current trend to increase training data, network capacity or computational power. Moreover, we claim that attempting to do
so is in principle a suboptimal approach. We provide a taster of such examples, in the hope of encouraging and challenging the machine learning community to develop new directions to solve the said difficulties.
1 Introduction
Once known only to a few outside of academia, machine learning has become ubiquitous in both popular media and in industry. Superhuman capabilities are now being gradually recorded in various fields: in the game of Go ([1, 2]), in face verification ([3, 4]), in image categorization ([5]) and even in logical reasoning in simple scenes ([6, 7, 8]).
Most current leading methods involve some variant of deep learning. Consequently, they require large amounts of hand-labeled data (with the exception of [2], which used self-play to gain experience). This has ushered in a data-hungry era, with increasingly large-scale datasets painstakingly labeled for object classification/detection/segmentation, image annotation, visual question-answering, and pose estimation ([9, 10, 11, 12, 13]), to name a few. This is accompanied by a growing demand for computational power.
We bring forward challenges in vision which do not seem to be solved by current methods - and, more importantly, by current popular methodologies - meaning that neither additional data nor added computational power will be the drivers of the solution.
Figure 1: A children’s puzzle where the goal is to find six hidden words: Book, words,
story, pages, read, novel. For a machine this is far from child’s play. Could this be
solved by providing a million similar examples to a deep-learning system? Does a
human need such training?
Related Work
Imbalanced or Small Data: datasets tend to be naturally imbalanced, and there is a long history of suggested remedies ([14, 15, 16]). Handling lack of training data has also been treated by attempting to use web-scale data of lesser quality than hand-annotated datasets [17], or by simulating data [cite data for cars, text recognition in the wild, captcha]. Transfer Learning: reusing features of networks trained on large datasets is a useful starting point (cf. [18]). One-Shot Learning: attempting to reduce the number of required training examples, in extreme cases to one or even zero examples ([19]). Deep-Learning Failures: recently, some simple cases where deep learning fails to work as one would possibly expect were introduced, along with theoretical justifications ([20]).
2 Challenging Cases
We present two examples and then discuss them. They have a few common characteristics: humans are able to solve them on the first “encounter” - despite not having
seen any such images before. Incidentally - but not critically - the two examples are
from the domain of visual text recognition. Moreover, though humans know how to
recognize text as seen in regular textbooks, street-signs, etc, the text in these images is
either hidden, rendered, or distorted in an uncharacteristic manner.
Children’s games: the first case is well exemplified by a child’s game, hidden
word puzzles. The goal is to find hidden words in an image. Fig. 1 shows an arbitrarily
selected example. For a human observer this is a solvable puzzle, though it may take a
few minutes to complete. We applied two state-of-the-art methods for text recognition
Sub Image            [21]         [22]
(entire image)       “sned”       “score”
(sub-image 2)        “vvoz”       ∅
(sub-image 3)        “novees”     ∅
(sub-image 4)        “teg”        ∅
Table 1: Text detected by two state-of-the-art scene-text recognition methods applied to sub-images of a children’s puzzle (the entire image in the top row). ∅ means no text was detected by the method (images scaled to fit figure).
Figure 2: Variants of textual CAPTCHA. Captchas are becoming increasingly difficult
(reproduced from [24])
in the wild with available code ([21]) or an online demo ([22], http://east.zxytim.com) on the image in Fig. 1. As this did not work immediately, we focused on the word “NOVEL” (the “N” is below the forearm of the left person, ending with an “L” below his foot), by cropping it and rotating it so the text is level, cropping more tightly, and even cropping only the letter “L”. See Table 1 for the corresponding sub-images (including the entire image at
the top row) and the results output by the two methods.
This is by no means a systematic test and some may even claim that it isn’t fair and they would be right: these systems were not trained on such images; [21] was only
trained on a photo-realistic dataset of 8 million synthetic training images, and [22] was
only trained on tens of thousands of images from coco-text ([23]), or used powerful
pre-trained networks where training data was less available.
CAPTCHA: a well-known mechanism to thwart automated misuse of websites by distinguishing between humans and machines ([25]). Textual captchas involve presenting an image of text which has to be read and typed in by the user. We focus on this type of captcha, though others exist ([26]). The introduction of captchas immediately triggered the invention of new automatic ways to break them ([27]), which eventually sparked an “arms race” between increasingly complex captchas and correspondingly powerful automated methods ([28]). This has led to a state where, on one hand, the leading textual captcha-solving methods involve training DNNs on data with distortion characteristics similar to those of the desired type of captcha - and still these systems have limited success rates (at times less than 50%) - while on the other hand the level of distortion has become such that humans have a hard time solving some of them.
3 Machines vs Humans as Supervised Learners
One can rule out the suggested examples by saying that they are simply out-of-sample datapoints from a statistical learner's perspective. Yet it seems that with whatever supervision human beings receive - they are usually able to solve them despite
not being especially exposed to this kind of stimulus. Moreover, precisely these kinds
of images are used routinely in human IQ testing, so they are a universally accepted
indicator of human performance. If these examples seem esoteric, we can turn to more common cases: as a child, how often is one exposed to bounding boxes of objects? How often to delineations of objects with precise segmentation masks? How often to pose configurations, facial and bodily key-points, and dense meshes of 3D objects overlaid on their field of view ([13])? More critically, for how many different
object types does this happen (if any), for how many different instances, with what
level of precision of annotation, and in how many modalities?
The granularity of visual supervision given to machines seems to be much finer
than that given to humans. As for the amount of directly supervised data, it does not really seem to be the main limiting factor; as already noted several times, performance either saturates with training data ([29, 30]) or at best grows logarithmically ([17, 31], increasing mAP from 53% to 58% when growing from 10M to 300M examples), making "more data" an impractical route to better performance, even for those with the most resources. And this is for "common" problems, such as object detection.
Humans who only ever read street-signs and textbooks are able to solve captchas
of various kinds without any special training on their first encounter with them. The
same is true for the “picture puzzles” mentioned above, as it is for other cases not
mentioned here. We do not claim that humans are not subject to supervised learning
in their early life, and in later stages. On the contrary, supervisory signals arise from
multiple sources: caretakers who provide supervisory signals by teaching, “internal supervision” provided by innate biases ([32]) and finally rewards stemming from results
of behaviour, such as suffering pain from hitting an object. But any such supervision is
interspersed within a vast, continuous stream of unsupervised data, most of which does not have an easily measurable supervisory effect on the observer.
There is something fundamentally different about the way humans construct or
use internal representations, enabling them to reason about and solve new pattern-recognition tasks. We hypothesize that these are approached by generating procedures of a compositional nature when presented with a novel - or known - task (as suggested by the Visual Routines of [33] or the Cognitive Programs of [34]). We intend to maintain
a collection of examples beyond the ones suggested above, to encourage the community to attempt to solve them, not by learning from vast amounts of similar examples,
but by learning from related, simpler subtasks and learning to reason and solve them
by composing the appropriate solutions.
References
[1] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche,
J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering
the game of Go with deep neural networks and tree search,” Nature, vol. 529, no.
7587, pp. 484–489, 2016. 1
[2] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez,
T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of go without
human knowledge,” Nature, vol. 550, no. 7676, p. 354, 2017. 1
[3] C. Lu and X. Tang, “Surpassing Human-Level Face Verification Performance on
LFW with GaussianFace.” 2015. 1
[4] X. Qi and L. Zhang, “Face Recognition via Centralized Coordinate Learning,”
arXiv preprint arXiv:1801.05678, 2018. 1
[5] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification,” in Proceedings of the IEEE
international conference on computer vision, 2015, pp. 1026–1034. 1
[6] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia,
and T. Lillicrap, “A simple neural network module for relational reasoning,” in
Advances in neural information processing systems, 2017, pp. 4974–4983. 1
[7] E. Perez, H. De Vries, F. Strub, V. Dumoulin, and A. Courville, “Learning visual
reasoning without strong priors,” arXiv preprint arXiv:1707.03017, 2017. 1
[8] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville, “Film: Visual
reasoning with a general conditioning layer,” arXiv preprint arXiv:1709.07871,
2017. 1
[9] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang,
A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp.
211–252, 2015. 1
[10] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár,
and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European
conference on computer vision. Springer, 2014, pp. 740–755. 1
[11] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma et al., “Visual genome: Connecting language and
vision using crowdsourced dense image annotations,” International Journal of
Computer Vision, vol. 123, no. 1, pp. 32–73, 2017. 1
[12] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and
D. Parikh, “Vqa: Visual question answering,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2425–2433. 1
[13] R. A. Güler, N. Neverova, and I. Kokkinos, “DensePose: Dense Human Pose
Estimation In The Wild,” arXiv preprint arXiv:1802.00434, 2018. 1, 3
[14] J. J. Lim, R. R. Salakhutdinov, and A. Torralba, “Transfer learning by borrowing examples for multiclass object detection,” in Advances in neural information
processing systems, 2011, pp. 118–126. 1
[15] X. Zhu, D. Anguelov, and D. Ramanan, “Capturing long-tail distributions of object subcategories,” in Computer Vision and Pattern Recognition (CVPR), 2014
IEEE Conference on. IEEE, 2014, pp. 915–922. 1
[16] Y.-X. Wang, D. Ramanan, and M. Hebert, “Learning to Model the Tail,” in Advances in Neural Information Processing Systems, 2017, pp. 7032–7042. 1
[17] C. Sun, A. Shrivastava, S. Singh, and A. Gupta, “Revisiting unreasonable effectiveness of data in deep learning era,” in 2017 IEEE International Conference on
Computer Vision (ICCV). IEEE, 2017, pp. 843–852. 1, 3
[18] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 806–813. 1
[19] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems, 2017, pp. 4080–
4090. 1
[20] S. Shalev-Shwartz, O. Shamir, and S. Shammah, “Failures of deep learning,”
arXiv preprint arXiv:1703.07950, 2017. 1
[21] B. Shi, X. Bai, and C. Yao, “An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 11, pp. 2298–2304, 2017. 2
[22] X. Zhou, C. Yao, H. Wen, Y. Wang, S. Zhou, W. He, and J. Liang, “EAST: an efficient and accurate scene text detector,” arXiv preprint arXiv:1704.03155, 2017.
2
[23] A. Veit, T. Matera, L. Neumann, J. Matas, and S. Belongie, “Coco-text: Dataset
and benchmark for text detection and recognition in natural images,” arXiv
preprint arXiv:1601.07140, 2016. 2
[24] T. A. Le, A. G. Baydin, R. Zinkov, and F. Wood, “Using synthetic data to train
neural networks is model-based reasoning,” in Neural Networks (IJCNN), 2017
International Joint Conference on. IEEE, 2017, pp. 3514–3521. 2
[25] L. Von Ahn, M. Blum, N. J. Hopper, and J. Langford, “CAPTCHA: Using hard
AI problems for security,” in International Conference on the Theory and Applications of Cryptographic Techniques. Springer, 2003, pp. 294–311. 2
[26] V. P. Singh and P. Pal, “Survey of different types of CAPTCHA,” International
Journal of Computer Science and Information Technologies, vol. 5, no. 2, pp.
2242–2245, 2014. 2
[27] G. Mori and J. Malik, “Recognizing objects in adversarial clutter: Breaking a
visual CAPTCHA,” in Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, vol. 1. IEEE, 2003, pp. I–I.
2
[28] J. Chen, X. Luo, Y. Guo, Y. Zhang, and D. Gong, “A Survey on Breaking Technique of Text-Based CAPTCHA,” Security and Communication Networks, vol.
2017, 2017. 2
[29] X. Zhu, C. Vondrick, D. Ramanan, and C. C. Fowlkes, “Do We Need More Training Data or Better Models for Object Detection?.” in BMVC, vol. 3. Citeseer,
2012, p. 5. 3
[30] X. Zhu, C. Vondrick, C. C. Fowlkes, and D. Ramanan, “Do we need more training
data?” International Journal of Computer Vision, vol. 119, no. 1, pp. 76–92,
2016. 3
[31] J. Hestness, S. Narang, N. Ardalani, G. Diamos, H. Jun, H. Kianinejad, M. Patwary, M. Ali, Y. Yang, and Y. Zhou, “Deep Learning Scaling is Predictable, Empirically,” arXiv preprint arXiv:1712.00409, 2017. 3
[32] S. Ullman, D. Harari, and N. Dorfman, “From simple innate biases to complex
visual concepts,” Proceedings of the National Academy of Sciences, vol. 109,
no. 44, pp. 18 215–18 220, 2012. 3
[33] S. Ullman, “Visual routines,” Cognition, vol. 18, no. 1-3, pp. 97–159, 1984. 3
[34] J. K. Tsotsos and W. Kruijne, “Cognitive programs: software for attention’s executive,” Frontiers in Psychology, vol. 5, 2014. 3
Some Theory for Ordinal Embedding
Ery Arias-Castro∗
Abstract
Motivated by recent work on ordinal embedding (Kleindessner and von Luxburg, 2014), we
derive large sample consistency results and rates of convergence for the problem of embedding
points based on triple or quadruple distance comparisons. We also consider a variant of this
problem where only local comparisons are provided. Finally, inspired by (Jamieson and Nowak,
2011), we bound the number of such comparisons needed to achieve consistency.
Keywords: ordinal embedding, non-metric multidimensional scaling (MDS), dissimilarity comparisons, landmark multidimensional scaling.
1 Introduction
The problem of ordinal embedding, also called non-metric multidimensional scaling (Borg and Groenen,
2005), consists of finding an embedding of a set of items based on pairwise distance comparisons.
Specifically, suppose that δij ≥ 0 is some dissimilarity measure between items i, j ∈ [n] ∶= {1, . . . , n}.
We assume that δii = 0 and δij = δji for all i, j ∈ [n]. These dissimilarities are either directly available
but assumed to lack meaning except for their relative magnitudes, or only available via comparisons
with some other dissimilarities, meaning that we are only provided with a subset C ⊂ [n]4 such that
δij < δkℓ ,  ∀(i, j, k, ℓ) ∈ C.  (1)
Note that the latter setting encompasses the former. Given C and a dimension d, the goal is to
embed the items as points p1 , . . . , pn ∈ Rd in a way that is compatible with the available information,
specifically
δij < δkℓ ⇒ ∥pi − pj ∥ ≤ ∥pk − pℓ ∥, ∀(i, j, k, ℓ) ∈ C,  (2)
where ∥⋅∥ denotes the Euclidean norm. The two most common situations are when all the quadruple
comparisons are available, meaning C = [n]4 , or all triple comparisons are available, meaning
C = {(i, j, i, k) ∶ i, j, k ∈ [n]}, which can be identified with [n]3 . This problem has a long history
surveyed in (Young and Hamer, 1987), with pioneering contributions from Shepard (1962a,b) and
Kruskal (1964).
The main question we tackle here is that of consistency. Suppose that the items are in fact points
x1 , . . . , xn ∈ Rd and δij = ∥xi −xj ∥. (When the δij ’s are available, suppose that δij = g(∥xi −xj ∥) where
g is an unknown increasing function.) Provided with a subset C = Cn of dissimilarity comparisons as
in (2), is it possible to reconstruct the original points in the large-sample limit n → ∞? Clearly, the
reconstruction can only be up to a similarity transformation — that is, a transformation f ∶ Rd ↦ Rd
such that, for some λ > 0, ∥f (x) − f (y)∥ = λ∥x − y∥ for all x, y ∈ Rd , or equivalently, of the form
f (x) = λR(x) + b where R is an orthogonal transformation and b is a constant vector — since such
∗ Department of Mathematics, University of California, San Diego, USA
a transformation leaves the distance comparisons unchanged. This question is at the foundation of
non-metric multidimensional scaling.
Early work only addressed the continuous case, where the x’s span a whole convex subset
U ⊂ Rd . In that setting, the goal becomes to characterize isotonic functions on U , that is, functions
f ∶ U ↦ Rd satisfying
∥x − y∥ < ∥x′ − y ′ ∥ ⇒ ∥f (x) − f (y)∥ ≤ ∥f (x′ ) − f (y ′ )∥,  ∀x, y, x′ , y ′ ∈ U.  (3)
Shepard (1966) argues that such functions must be similarities, and cites earlier work (Aumann and Kruskal,
1958; Suppes and Winet, 1955) dealing with the case d = 1.
Only recently has the finite sample case been formally considered. Indeed, Kleindessner and von Luxburg
(2014) prove a consistency result, showing that if x1 , . . . , xn ∈ U ⊂ Rd , where U is a bounded, connected, and open subset of Rd satisfying some additional conditions — for example, a finite union
of open balls — and C = [n]4 , then in the large sample limit with x1 , . . . , xn becoming dense in U ,
it is possible to recover the x’s up to a similarity transformation. (Note that U is then uniquely
defined as the interior of {xi ∶ i ≥ 1}.) We note that Kleindessner and von Luxburg (2014) focus on
the strictly isotonic case, where the second inequality in (3) is strict. Our first contribution is an
extension of this consistency result for quadruple learning to triple learning where C = [n]3 . In the
process, we greatly simplify the arguments of Kleindessner and von Luxburg (2014) and weaken the
conditions on the sampling domain U . We note that Terada and Von Luxburg (2014) have partially
solved this problem by a reduction to the problem of embedding a nearest-neighbor graph. However, their arguments are based on an apparently incomplete proof in (Von Luxburg and Alamgir,
2013), which is itself based on a rather sophisticated approach. Our proofs are comparatively much
simpler and direct.
Our second contribution is to provide rates of convergence, a problem left open by Kleindessner and von Luxburg
(2014). In the context of quadruple learning, we obtain a rate in O(εn ), where εn is the Hausdorff
distance between the underlying sample {x1 , . . . , xn } and U , meaning, εn ∶= supx∈U mini∈[n] ∥x − xi ∥.
This is the first convergence rate for exact ordinal embedding that we know of. (We are not able
to obtain the same rate in the context of triple learning.) Compared to establishing consistency,
the proof is much more involved.
The last decade has seen a surge of interest in ordinal embedding, motivated by applications
to recommender systems and large-scale psychometric studies made available via the internet, for
example, databases for music artists similarity (Ellis et al., 2002; McFee and Lanckriet, 2011). Sensor localization (Nhat et al., 2008) is another possible application. Modern datasets being large,
all quadruple or triple comparisons are rarely available, motivating the proposal of embedding
methods based on a sparse set of comparisons (Agarwal et al., 2007; Borg and Groenen, 2005;
Jamieson and Nowak, 2011; Terada and Von Luxburg, 2014). Terada and Von Luxburg (2014)
study what they call local ordinal embedding, which they define as the problem of embedding
an unweighted K-nearest neighbor (K-NN) graph. With our notation, this is the situation where
C = {(i, j, k) ∶ δij ≤ δi(K) < δik }, δi(K) being the dissimilarity between item i and its Kth nearestneighbor. Terada and Von Luxburg (2014) argue that, when the items are points x1 , . . . , xn sampled
from a smooth density on a bounded, connected, convex, and open subset U ⊂ Rd with smooth
boundary, then K = Kn ≫ n2/(2+d) (log n)d/(2+d) is enough for consistency. Our third contribution
is to consider the related situation where C = {(i, j, k, ℓ) ∶ δij < δkℓ and max(δij , δik , δiℓ ) ≤ δi(K) },
which provides us with the K-NN graph and also all the quadruple comparisons between the nearest neighbors. In this setting, we are only able to show that Kn ≫ √(n log n) is enough.
Beyond local designs, which may not be feasible in some settings, Jamieson and Nowak (2011)
consider the problem of adaptively (i.e., sequentially) selecting triple comparisons in order to minimize the number of such comparisons and yet deduce all the other triple comparisons. They
consider a few methods, among which a non-metric version of the landmark MDS method of
De Silva and Tenenbaum (2004). Less ambitious is the problem of selecting few comparisons in
order to consistently embed the items when these are points in a Euclidean space. Our fourth
contribution is to show that one can obtain a consistent embedding with a landmark design based on an ⋅ n queries, where an is any diverging sequence. Moreover, the embedding can be computed in (expected) time ζ(an ) ⋅ n, for some function ζ ∶ R+ ↦ R+ .
The rest of the paper is organized as follows. In Section 2, we state our theoretical results and
prove the simpler ones. We then gather the remaining proofs in Section 3. Section 4 concludes the
paper with a short discussion.
2 Theory
In this section we present our theoretical findings. Most proofs are gathered in Section 3.
We already defined isotonic functions in (3). Following (Kleindessner and von Luxburg, 2014),
we say that a function f ∶ U ⊂ Rd ↦ Rd is weakly isotonic if
∥x − y∥ < ∥x − z∥ ⇒ ∥f (x) − f (y)∥ ≤ ∥f (x) − f (z)∥,  ∀x, y, z ∈ U.  (4)
Obviously, if a function is isotonic (3), then it is weakly isotonic (4). Weak isotonicity is in fact not
much weaker than isotonicity. Indeed, let P be a property (e.g., ‘isotonic’), and say that a function
f ∶ U ⊂ Rd ↦ Rd has the property P locally if for each x ∈ U there is r > 0 such that f has property
P on B(x, r) ∩ U , where B(x, r) denotes the open ball with center x and radius r.
Lemma 1. Any locally weakly isotonic function on an open U is also locally isotonic on U .
Proof. This is an immediate consequence of (Kleindessner and von Luxburg, 2014, Lem 6), which
implies that a weakly isotonic function on B(x, r) is isotonic on B(x, r/4).
Suppose we have data points x1 , . . . , xn ∈ Rd . Define
Ωn = {x1 , . . . , xn },   Ω = ⋃n≥1 Ωn = {xn ∶ n ≥ 1}.  (5)
Let δij = ∥xi − xj ∥ and suppose that we are only provided with a subset Cn ⊂ [n]4 of distance
comparisons as in (1). To an (exact) ordinal embedding p ∶ [n] ↦ Rd — which by definition satisfies
(2) — we associate the map φn ∶ Ωn ↦ Rd defined by φn (xi ) = pi for all i ∈ [n]. We crucially observe
that, in the case of all quadruple comparisons (Cn = [n]4 ), the resulting map φn is isotonic on Ωn ; in
the case of all triple comparisons (Cn = [n]3 ), φn is only weakly isotonic on Ωn , instead. In light of
this, and the fact that the location, orientation and scale are all lost when only ordinal information
is available, the problem of proving consistency of (exact) ordinal embedding reduces to showing
that any such embedding is close to a similarity transformation as the sample size increases, n → ∞.
This is exactly what Kleindessner and von Luxburg (2014) do under some assumptions.
2.1 Ordinal embedding based on all triple comparisons
Our first contribution is to extend the consistency results of Kleindessner and von Luxburg (2014)
on quadruple learning to triple learning. Following their presentation, we start with a result where
the sample is infinite, which is only a mild generalization of (Kleindessner and von Luxburg, 2014,
Th 3).
Theorem 1. Let U ⊂ Rd be bounded, connected and open. Suppose Ω is dense in U and consider
a locally weakly isotonic function φ ∶ Ω ↦ Rd . Then there is a similarity transformation S that
coincides with φ on Ω.
The proof is largely based on that of (Kleindessner and von Luxburg, 2014, Th 3), but a bit
simpler; see Section 3.1.
We remark that there can only be one similarity with the above property, since similarities are
affine transformations, and two affine transformations of Rd that coincide on d+1 affine independent
points are necessarily identical.
In this theorem, the set Ω is dense in an open subset of Rd , and therefore infinite. In fact,
Kleindessner and von Luxburg (2014) use this theorem as an intermediary result for proving consistency as the sample size increases. Most of their paper is dedicated to establishing this, as their
arguments are quite elaborate. We found a more direct route by ‘tending to the limit as soon as
possible’, based on Lemma 2 below, which is at the core of the Arzelà-Ascoli theorem.
For the remainder of this section, we consider the finite sample setting:
U ⊂ Rd is bounded, connected and open,
Ωn = {x1 , . . . , xn } ⊂ U is such that Ω ∶= {xn ∶ n ≥ 1} is dense in U ,
and φn ∶ Ωn ↦ Q ⊂ Rd is a function with values in a bounded set Q.  (6)
In the context of (6), we implicitly extend φn to Ω, for example, by setting φn (x) = q for all
x ∈ Ω ∖ Ωn , where q is a given point in Q, although the following holds for any extension.
Lemma 2. Consider Ωn ⊂ Rd finite and φn ∶ Ωn ↦ Q ⊂ Rd , where Q is bounded. Then there is
N ⊂ N infinite such that φ(x) ∶= limn∈N φn (x) exists for all x ∈ Ω ∶= ⋃n Ωn .
This is called the diagonal process in (Kelley, 1975, Problem D, Ch 7). Although the result is
classical, we provide a proof for completeness.
Proof. Without loss of generality, suppose Ωn = {x1 , . . . , xn }. Let N0 = N. Since (φn (x1 ) ∶ n ∈
N0 ) ∈ Q and Q is bounded, there is N1 ⊂ N0 infinite such that limn∈N1 φn (x1 ) exists. In turn, since
(φn (x2 ) ∶ n ∈ N1 ) is bounded, there is N2 ⊂ N1 infinite such that limn∈N2 φn (x2 ) exists. Continuing
this process — which formally corresponds to a recursion — we obtain ⋯ ⊂ Nk+1 ⊂ Nk ⊂ ⋯ ⊂ N1 ⊂
N0 = N such that, for all k, Nk is infinite and limn∈Nk φn (xk ) exists. Let nk denote the kth element
(in increasing order) of Nk and note that (nk ∶ k ≥ 1) is strictly increasing. Define N = {nk ∶ k ≥ 1}.
Since {np , p ≥ k} ⊂ Nk , we have limn∈N φn (xk ) = limn∈Nk φn (xk ), and this is valid for all k ≥ 1.
Corollary 1. Consider the setting (6) and assume that φn is weakly isotonic. Then (φn ) is sequentially pre-compact for the topology of pointwise convergence of functions on Ω, and all of its accumulation points are similarity transformations restricted to Ω.
The corresponding result (Kleindessner and von Luxburg, 2014, Th 4) was obtained for isotonic
(instead of weakly isotonic) functions and for domains U that are finite unions of balls, and the
convergence was uniform instead of pointwise. For now, we provide a proof of Corollary 1, which
we derive as a simple consequence of Theorem 1 and Lemma 2.
Proof. Lemma 2 implies that (φn ) is sequentially pre-compact for the pointwise convergence topology. Let φ be an accumulation point of (φn ), meaning that there is N ⊂ N infinite such that
φ(x) = limn∈N φn (x) for all x ∈ Ω. Take x, y, z ∈ Ω such that ∥x − y∥ < ∥x − z∥. By definition, there
is m such that x, y, z ∈ Ωm , and therefore ∥φn (x) − φn (y)∥ ≤ ∥φn (x) − φn (z)∥ for all n ≥ m. Passing
to the limit along n ∈ N , we obtain ∥φ(x) − φ(y)∥ ≤ ∥φ(x) − φ(z)∥. Hence, φ is weakly isotonic on
Ω and, by Theorem 1, it is therefore the restriction of a similarity transformation to Ω.
It is true that (Kleindessner and von Luxburg, 2014, Th 4) establishes a uniform convergence
result. We do the same in Theorem 2 below, but with much simpler arguments. The key is the following pair of results bounding the modulus of continuity of a (resp. weakly) isotonic function.
We note that the second result (for weakly isotonic functions) is very weak but sufficient for our
purposes here. For Λ ⊂ V ⊂ Rd , define δH (Λ, V ) = supy∈V inf x∈Λ ∥y − x∥, which is their Hausdorff
distance. We say that (yi ∶ i ∈ I) ⊂ Rd is an η-packing if ∥yi − yj ∥ ≥ η for all i ≠ j. We recall that
the size of the largest η-packing of a Euclidean ball of radius r is of exact order (r/η)d . For a set
V ⊂ Rd , let diam(V ) = supx,y∈V ∥x − y∥ be its diameter and let
ρ(V ) = arg supr>0 {∃v ∈ V ∶ B(v, r/2) ⊂ V },  (7)
which is the diameter of a largest ball inscribed in V .
Everywhere in the paper, d is fixed, and in fact implicitly small as we assume repeatedly that
the sample (of size n) is dense in a full-dimensional domain of Rd . In particular, all the implicit
constants of proportionality that follow depend solely on d.
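As a purely illustrative aside (ours), the quantity δH (Λ, V ) defined above can be approximated numerically by replacing V with a fine grid; the names in the sketch are of our choosing.

    import numpy as np

    def one_sided_hausdorff(Lambda, V_grid):
        # Approximates delta_H(Lambda, V) = sup_{y in V} inf_{x in Lambda} ||y - x||
        # by replacing V with the finite grid V_grid.
        d = np.linalg.norm(V_grid[:, None, :] - Lambda[None, :, :], axis=-1)
        return d.min(axis=1).max()

    rng = np.random.default_rng(1)
    Lambda = rng.uniform(size=(200, 2))                    # a sample in U = (0, 1)^2
    g = np.linspace(0.0, 1.0, 101)
    V_grid = np.array([(a, b) for a in g for b in g])      # grid proxy for U
    print(one_sided_hausdorff(Lambda, V_grid))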
Lemma 3. Let V ⊂ Rd be open. Consider Λ ⊂ V and set ε = δH (Λ, V ). Let ψ ∶ Λ ↦ Q be isotonic,
where Q ⊂ Rd is bounded. There is C ∝ diam(Q)/ρ(V ), such that
∥ψ(x) − ψ(x′ )∥ ≤ C(∥x − x′ ∥ + ε),  ∀x, x′ ∈ Λ.  (8)
Proof. The proof is based on the fact that an isotonic function transforms a packing into a packing.
Take x, x′ ∈ Λ such that ξ ∶= ∥ψ(x) − ψ(x′ )∥ > 0, and let η = ∥x − x′ ∥. Since V is open it contains
an open ball of diameter ρ(V ). Let y1 , . . . , ym be an (η + 3ε)-packing of such a ball with m ≥
C1 (ρ(V )/(η + ε))d for some constant C1 depending only on d. Then let x1 , . . . , xm ∈ Λ such that
maxi ∥yi − xi ∥ ≤ ε. By the triangle inequality, for all i ≠ j we have
∥xi − xj ∥ ≥ ∥yi − yj ∥ − 2ε ≥ η + ε > ∥x − x′ ∥.
Because ψ is isotonic, we have ∥ψ(xi ) − ψ(xj )∥ ≥ ξ, so that ψ(x1 ), . . . , ψ(xm ) is a ξ-packing.
Therefore, there is a constant C2 depending only on d such that m ≤ C2 (diam(Q)/ξ)d . We conclude
that ξ ≤ (C2 /C1 )1/d (diam(Q)/ρ(V ))(η + ε).
For V ⊂ Rd and h > 0, let V h = {x ∈ V ∶ ∃y ∈ V s.t. x ∈ B(y, h) ⊂ V }. We note that V h is the
complement of the h-convex hull of V c ∶= Rd ∖ V — see (Cuevas et al., 2012) and references therein.
Lemma 4. In the context of Lemma 3, if ψ is only weakly isotonic, then there is C ∝ diam(Q),
such that for all h > 0,
∥ψ(x) − ψ(x′ )∥ ≤ C(∥x − x′ ∥/h + √(ε/h))1/d ,  ∀x ∈ Λ ∩ V h , ∀x′ ∈ Λ.  (9)
Proof. Assume that V h ≠ ∅, for otherwise there is nothing to prove. Take x ∈ Λ ∩ V h and x′ ∈ Λ
such that ξ ∶= ∥ψ(x) − ψ(x′ )∥ > 0, and let η = ∥x − x′ ∥. Because ψ is bounded, it is enough to prove
the result when η, ε < h/2. Let y ∈ V be such that x ∈ B(y, h) ⊂ V . There is y ′ ∈ B(y, h) such
that y ∈ [xy ′ ] and ∥x − y ′ ∥ ≥ 2h/3. Define u = (y ′ − x)/∥y ′ − x∥. Let z0 = x, and for j ≥ 1, define
zj = zj−1 + (η + 5jε)u. Let k ≥ 0 be maximum such that ∑kj=1 (η + 5jε) < h/2. Since k satisfies
kη + 5k2 ε ≥ h/2, we have k ≥ min(h/(4η), √(h/(10ε))). By construction, for all j ∈ [k], zj ∈ [xy ′ ] and
B(zj , 2ε) ⊂ B(y, h). Let x−1 = x′ , x0 = x and take x1 , . . . , xk ∈ Λ such that maxj ∥xj − zj ∥ ≤ ε. By
the triangle inequality, for j = 2, . . . , k,
∥xj − xj−1 ∥ ≥ ∥zj − zj−1 ∥ − 2ε ≥ ∥zj−1 − zj−2 ∥ + 3ε ≥ ∥xj−1 − xj−2 ∥ + ε,
which implies by induction that
∥xj − xj−1 ∥ ≥ ∥x1 − x0 ∥ + ε ≥ ∥z1 − z0 ∥ = η + 5ε > ∥x − x′ ∥.
By weak isotonicity, this implies that ∥ψ(xj ) − ψ(xj−1 )∥ ≥ ∥ψ(x) − ψ(x′ )∥ = ξ. We also have, for
any i, j ∈ [k] such that 1 ≤ i ≤ j − 2,
∥xj − xi ∥ ≥ ∥zj − zi ∥ − 2ε ≥ ∥zj − zj−1 ∥ + η + 5ε − 2ε ≥ ∥xj − xj−1 ∥ + η + ε.
By weak isotonicity, this implies that ∥ψ(xj ) − ψ(xi )∥ ≥ ∥ψ(xj ) − ψ(xj−1 )∥ for all 0 ≤ i < j ≤ k.
Consequently, (ψ(xj ) ∶ j ∈ [k]) forms a ξ-packing of Q. Hence, k ≤ C ′ (diam(Q)/ξ)d , for some
constant C ′ . We conclude with the lower bound on k.
From this control on the modulus of continuity, we obtain a stronger version of Corollary 1.
Theorem 2. Under the same conditions as Corollary 1, we have the stronger conclusion that there
is a sequence (Sn ) of similarities such that, for all h > 0, maxx∈Ωn ∩U h ∥φn (x) − Sn (x)∥ → 0 as
n → ∞. If in fact each φn is isotonic, then this remains true when h = 0.
We remark that when U is a connected union of a possibly uncountable number of open balls of
radius at least h > 0, then U = U h . This covers the case of a finite union of open balls considered in
(Kleindessner and von Luxburg, 2014). We also note that, if U is bounded and open, and ∂U has
bounded curvature, then there is h > 0 such that U = U h . This follows from the fact that, in this
case, U c has positive reach (Federer, 1959), and is therefore h-convex when h is below the reach
by1 (Cuevas et al., 2012, Prop 1). Moreover, our arguments can be modified to accommodate sets
U with boundaries that are only Lipschitz, by reasoning with wedges in Lemma 4.
Theorem 2 now contains (Kleindessner and von Luxburg, 2014, Th 4), and extends it to weakly
isotonic functions and to more general domains U . Overall, our proof technique is much simpler,
shorter, and elementary.
Define εn = δH (Ωn , U ), which quantifies the density of Ωn in U . Because Ωn ⊂ Ωn+1 and Ω is dense in U , we have εn ↘ 0 as n → ∞.
Proof. Let φ be an accumulation point of (φn ) for the pointwise convergence topology, meaning
there is N ⊂ N infinite such that φ(x) = limn∈N φn (x) for all x ∈ Ω. We show that, in fact, the
convergence is uniform.
First, suppose that each φn is isotonic. In that case, Lemma 3 implies the existence of a constant
C > 0 such that ∥φn (x)−φn (x′ )∥ ≤ C(∥x−x′ ∥+εn ) for all x, x′ ∈ Ωn , and for all n. Passing to the limit
along n ∈ N , we get ∥φ(x) − φ(x′ )∥ ≤ C∥x − x′ ∥ for all x, x′ ∈ Ω. (In fact, we already knew this from
Corollary 1, since we learned there that φ coincides with a similarity, and is therefore Lipschitz.)
Fix ε > 0. There is m such that εm ≤ ε. Then there is m′ ≥ m such that maxi∈[m] ∥φn (xi )−φ(xi )∥ ≤ ε
for all n ∈ N with n ≥ m′ . For such an n, and x ∈ Ωn , let i ∈ [m] be such that ∥x − xi ∥ ≤ εm . By the
triangle inequality,
∥φn (x) − φ(x)∥ ≤ ∥φn (x) − φn (xi )∥ + ∥φn (xi ) − φ(xi )∥ + ∥φ(xi ) − φ(x)∥
≤ C(∥x − xi ∥ + εn ) + ∥φn (xi ) − φ(xi )∥ + C∥xi − x∥
≤ C(εm + εn ) + ε + Cεm ≤ (3C + 1)ε.
Since x ∈ Ωn is arbitrary and ε can be taken as small as desired, this shows that the sequence (φn ∶ n ∈ N ) converges uniformly to φ over (Ωn ∶ n ∈ N ).
1 This proposition is stated for compact sets (which is not the case of U c ) but easily extends to the case where the set is closed with compact boundary.
When the φn are only weakly isotonic, we use Lemma 4 to get a constant C > 0 depending on diam(Q) and h > 0 such that ∥φn (x) − φn (x′ )∥ ≤ C(∥x − x′ ∥ + √εn )1/d for all x ∈ Ωn ∩ U h and all
x′ ∈ Ωn , and for all n. Passing to the limit along n ∈ N , we get ∥φ(x) − φ(x′ )∥ ≤ C∥x − x′ ∥1/d for all
x, x′ ∈ Ω. (In fact, ∥φ(x) − φ(x′ )∥ ≤ C∥x − x′ ∥ for all x, x′ ∈ Ω from Corollary 1, as explained above.)
The rest of the arguments are completely parallel. We conclude that (φn ∶ n ∈ N ) converges uniformly to φ over (Ωn ∩ U h ∶ n ∈ N ).
Let S denote the similarities of Rd . For any functions φ, ψ ∶ Ω ↦ Rd , define δn (φ, ψ) =
maxx∈Ωn ∩U h ∥φ(x) − ψ(x)∥, and also δn (φ, S) = inf S∈S δn (φ, S). Our end goal is to show that
δn (φn , S) → 0 as n → ∞. Suppose this is not the case, so that there is η > 0 and N ⊂ N
infinite such that δn (φn , S) ≥ η for all n ∈ N . By Corollary 1, there is N1 ⊂ N and S ∈ S
such that S(x) = limn∈N1 φn (x) for all x ∈ Ω. As we showed above, the convergence is in fact
uniform over (Ωn ∩ U h ∶ n ∈ N1 ), meaning limn∈N1 δn (φn , S) = 0. At the same time, since N1 ⊂ N , we have δn (φn , S) ≥ inf S ′ ∈S δn (φn , S ′ ) ≥ η for all n ∈ N1 . We therefore have a contradiction.
2.2 Rates of convergence
Beyond consistency, we are able to derive convergence rates. We do so for the isotonic case, i.e.,
the quadruple comparison setting. Recall that εn = δH (Ωn , U ).
Theorem 3. Consider the setting (6) with φn isotonic. There is C depending only on (d, U ), and
a sequence of similarities Sn such that maxx∈Ωn ∥φn (x) − Sn (x)∥ ≤ C diam(Q)εn . If U = U h for
some h > 0, then C = C ′ / diam(U ) where C ′ is a function of (d, h/ diam(U ), ρ(U )/ diam(U )).
The proof of Theorem 3 is substantially more technical than the previous results, and thus
postponed to Section 3. Although Kleindessner and von Luxburg (2014) are not able to obtain
rates of convergence, the proof of Theorem 3 bears resemblance to their proof technique, and in
particular, is also based on a result of Alestalo et al. (2001) on the approximation of ε-isometries;
see Lemma 18. We will also make use of a related result of Vestfrid (2003) on the approximation of
approximately midlinear functions; see Lemma 17. We mention that we know of a more elementary
proof that only makes use of (Alestalo et al., 2001), but yields a slightly slower rate of convergence.
We note that there is a constant c depending only on d such that εn ≥ cn−1/d . This is because
U being open, it contains an open ball, and this lower bound trivially holds for an open ball. And
such a lower bound is achieved when the xi ’s are roughly regularly spread out over U . If instead
the xi ’s are iid uniform in U , and U is sufficiently regular — for example, U = U h for some h > 0 —
then εn = O(log(n)/n)1/d , as is well-known. This would give the rate, and we do not know whether
it is optimal, even in dimension d = 1.
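This behaviour is easy to observe in simulation. The sketch below (ours, with d = 2 and U the unit square) estimates εn over a grid and compares it with (log(n)/n)^(1/2); it is only a sanity check, not part of the argument.

    import numpy as np

    rng = np.random.default_rng(2)
    g = np.linspace(0.0, 1.0, 81)
    grid = np.array([(a, b) for a in g for b in g])        # discretization of U = (0, 1)^2

    for n in [100, 1000, 10000]:
        X = rng.uniform(size=(n, 2))
        eps_n = 0.0                                        # sup over the grid of min_i ||x - x_i||
        for chunk in np.array_split(grid, 50):
            dmin = np.linalg.norm(chunk[:, None, :] - X[None, :, :], axis=-1).min(axis=1)
            eps_n = max(eps_n, dmin.max())
        rate = (np.log(n) / n) ** 0.5                      # (log(n)/n)^(1/d) with d = 2
        print(n, round(float(eps_n), 4), round(float(eps_n) / rate, 2))   # ratio stays of order one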
Remark 1. We are only able to get a rate in √εn for the weakly isotonic case. We can do so by adapting the arguments underlying Theorem 3, but only after assuming that U = U h for some
h > 0 and resolving a few additional complications.
2.3 Ordinal embedding with local comparisons
Terada and Von Luxburg (2014) consider the problem of embedding an unweighted nearest-neighbor
graph, which as we saw in the Introduction, is a special case of ordinal embedding. Their arguments — which, as we explained earlier, seem incomplete at the time of writing — indicate that
K = Kn ≫ n2/(2+d) (log n)d/(2+d) is enough for consistently embedding a K-NN graph.
We consider here a situation where we have more information, specifically, all the distance
comparisons between K-nearest-neighbors. Formally, this is the situation where
Cn = {(i, j, k, ℓ) ∶ δij < δkℓ and {j, k, ℓ} ⊂ NKn (i)},
where NK (i) denotes the set of the K items nearest item i. If the items are points Ωn =
{x1 , . . . , xn } ⊂ Rd , an exact ordinal embedding φn is only constrained to be locally weakly isotonic as we explain now. We start by stating a standard result which relates a K-NN graph to an
r-ball graph.
Lemma 5. Let U ⊂ Rd be bounded, connected and open, and such that U = U h for some h > 0.
Sample x1 , . . . , xn iid from a density f supported on U with (essential) range in (0, ∞) strictly.
There is a constant C such that, if K ∶= [nr d ] ≥ C log n, then with probability tending to 1,
NeighK/2 (xi ) ⊂ {xj ∶ ∥xj − xi ∥ ≤ r} ⊂ Neigh2K (xi ),
∀i ∈ [n],
where NeighK (xi ) denotes the set of the K points in {xj ∶ j ∈ [n]} nearest xi .
The proof is postponed to Section 3 and only provided for completeness. Therefore, assuming
that K ≥ C log n, where C is the constant of Lemma 5, we may equivalently consider the case where
Cn = {(i, j, k, ℓ) ∶ δij < δkℓ and max(δij , δik , δiℓ ) < rn },
for some given rn > 0. An exact embedding φn ∶ Ωn ↦ Rd in that case is isotonic on Ωn ∩ B(x, rn )
for any x ∈ Ωn . We require in addition that
∥x − x′ ∥ < rn ≤ ∥x† − x‡ ∥ ⇒ ∥φn (x) − φn (x′ )∥ ≤ ∥φn (x† ) − φn (x‡ )∥,  ∀x, x′ , x† , x‡ ∈ Ωn .  (10)
This is a reasonable requirement since it is possible to infer it from Cn . Indeed, for k, ℓ ∈ [n], we
have δkℓ < rn if, and only if, (k, k, k, ℓ) ∈ Cn or (ℓ, ℓ, ℓ, k) ∈ Cn . (Here we assume that δii = 0 for
all i and δij > 0 if i ≠ j, as is the case for Euclidean distances.) We can still infer this even if the
quadruples in Cn must include at least three distinct items. Indeed, suppose k, ℓ ∈ [n] are such
that there is no i such that (i, k, i, ℓ) ∈ Cn or (i, ℓ, i, k) ∈ Cn , then (a) δik = δiℓ for all i such that
max(δik , δiℓ ) < rn , or (b) δkℓ ≥ rn . Assume that rn ≥ Cεn with C > 0 sufficiently large, so that
situation (a) does not happen. Conversely, if (k, ℓ) is such that δkℓ < rn , then when (a) does not
happen, there is i such that (i, k, i, ℓ) ∈ Cn or (i, ℓ, i, k) ∈ Cn .
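To make the local design concrete (our illustration, with names of our choosing), the comparison set Cn above can be generated by brute force for a small sample:

    import numpy as np
    from itertools import product

    def local_quadruples(X, r):
        # C_n = {(i, j, k, l) : d_ij < d_kl and max(d_ij, d_ik, d_il) < r}, 0-indexed.
        n = len(X)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return [(i, j, k, l) for i, j, k, l in product(range(n), repeat=4)
                if D[i, j] < D[k, l] and max(D[i, j], D[i, k], D[i, l]) < r]

    rng = np.random.default_rng(4)
    X = rng.uniform(size=(15, 2))
    C = local_quadruples(X, r=0.3)
    print(len(C), "local comparisons out of", 15 ** 4, "possible quadruples")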
Theorem 4. Consider the setting (6) and assume in addition that U = U h for some h > 0, and that
φn is isotonic over balls of radius rn and satisfies (10). There is a constant C > 0 depending only on
(d, h, ρ(U ), diam(U ), diam(Q)) and similarities Sn such that maxx∈Ωn ∥φn (x) − Sn (x)∥ ≤ Cεn /rn2 .
Assume the data points are generated as in Lemma 5. In that case, we have εn = O(log(n)/n)1/d
and Theorem 4 implies consistency when rn ≫ (log(n)/n)1/2d . By Lemma 5, this corresponds to
the situation where we are provided with comparisons among Kn -nearest neighbors with Kn ≫ √(n log n). If the result of Terada and Von Luxburg (2014) holds in all rigor, then this is a rather weak result.
2.4 Landmark ordinal embedding
Inspired by (Jamieson and Nowak, 2011), we consider the situation where there are landmark items
indexed by Ln ⊂ [n], and we are given all distance comparisons from any point to the landmarks.
Formally, with triple comparisons, this corresponds to the situation where
Cn = {(i, j, k) ∈ [n] × L2n ∶ δij < δik }.
If the items are points Ωn = {x1 , . . . , xn } ⊂ Rd , an exact ordinal embedding φn is only constrained
to be weakly isotonic on the set of landmarks and, in addition, is required to respect the ordering of
the distances from any point to the landmarks. The following is an easy consequence of Theorem 2.
Corollary 2. Theorem 2 remains valid in the landmark triple comparisons setting (meaning with
φn as just described) as long as the landmarks become dense in U .
Jamieson and Nowak (2011) study the number of triple comparisons that are needed for exact
ordinal embedding. With a counting argument, they show that at least Cn log n comparisons are
needed, where C is a constant depending only on d. If we only insist that the embedding respects
the comparisons that are provided, then Corollary 2 implies that a landmark design is able to
be consistent as long as the landmarks become dense in U . This consistency implies that, as the
sample size increases, an embedding that respects the landmark comparisons also respects all other
comparisons approximately. This is achieved with O(nℓ2n + ℓ3n ) triple comparisons, where ℓn ∶= ∣Ln ∣
is the number of landmarks, and the conditions of Corollary 2 can be fulfilled with ℓn → ∞ at any
speed, so that the number of comparisons is nearly linear in n.
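As an illustration of this count (ours), the following sketch builds the landmark triple comparisons for a small sample, assuming for simplicity that the first ℓn points serve as landmarks:

    import numpy as np

    def landmark_triples(X, L):
        # C_n = {(i, j, k) in [n] x L^2 : d_ij < d_ik} for the landmark index set L, 0-indexed.
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return [(i, j, k) for i in range(len(X)) for j in L for k in L if D[i, j] < D[i, k]]

    rng = np.random.default_rng(5)
    n, ell = 300, 10
    X = rng.uniform(size=(n, 2))
    L = list(range(ell))                       # assume the first ell points are the landmarks
    C = landmark_triples(X, L)
    print(len(C))                              # roughly n * ell * (ell - 1) / 2, i.e. O(n * ell^2)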
Proof. We focus on the weakly isotonic case, where we assume that U = U h for some h > 0. Let Λn =
{xl ∶ l ∈ Ln } denote the set of landmarks. Since Λn becomes dense in U , meaning ηn ∶= δH (Λn , U ) →
0, by Theorem 2, there is a sequence of similarities Sn such that ζn ∶= maxx∈Λn ∥φn (x) − Sn (x)∥ → 0.
Now, for x ∈ Ωn , let x̃ ∈ Λn such that ∥x − x̃∥ ≤ ηn . We have
∥φn (x) − Sn (x)∥ ≤ ∥φn (x) − φn (x̃)∥ + ∥φn (x̃) − Sn (x̃)∥ + ∥Sn (x̃) − Sn (x)∥.  (11)
The first term is bounded by Cηn^(1/2d) by Lemma 4, for some constant C. The middle term is bounded by ζn . For the third term, express Sn in the form Sn (x) = βn Rn (x) + bn , where βn ∈ R, Rn is an orthogonal transformation, and bn ∈ Rd . Take two distinct landmarks x† , x‡ ∈ Λn such that ∥x† − x‡ ∥ ≥ diam(U )/2, which exist when n is sufficiently large. Since
∥Sn (x† ) − Sn (x‡ )∥ = βn ∥x† − x‡ ∥ ≥ βn diam(U )/2
and, at the same time,
∥Sn (x† ) − Sn (x‡ )∥ ≤ ∥Sn (x† ) − φn (x† )∥ + ∥φn (x† ) − φn (x‡ )∥ + ∥φn (x‡ ) − Sn (x‡ )∥ ≤ ζn + diam(Q) + ζn ≤ 2 diam(Q), eventually,
we have βn ≤ β̄ ∶= 4 diam(Q)/ diam(U ). Hence, the third term on the RHS of (11) is bounded by β̄ηn . Thus, the RHS of (11) is bounded by Cηn^(1/2d) + ζn + β̄ηn , which tends to 0 as n → ∞. This
being valid for any x ∈ Ωn , we conclude.
We remark that at the very end of the proof, we obtained a rate of convergence as a function
of the density of the landmarks and the convergence rate implicit in Theorem 2. This leads to the
following rate for the quadruple comparisons setting, which corresponds to the situation where
Cn = {(i, j, k) ∈ [n] × L2n ∶ δij < δik } ⋃ {(i, j, k, ℓ) ∈ L4n ∶ δij < δkℓ }.
Here, φn is constrained to be isotonic on the set of landmarks and, as before, is required to respect
the ordering of the distances from any data point to the landmarks.
Corollary 3. Consider the setting (6) in the landmark quadruple comparisons setting (meaning
with φn as just described). Let Λn denote the set of landmarks and set ηn = δH (Λn , U ). There is a
constant C > 0 and a sequence of similarities Sn such that maxx∈Ωn ∥φn (x) − Sn (x)∥ ≤ Cηn .
Proof. The proof is parallel to that of Corollary 2. Here, we apply Theorem 3 to get ζn ≤ C0 ηn .
This bounds the second term on the RHS of (11). The first term is bounded by C1 ηn by Lemma 3,
while the third term is bounded by β̄ηn as before. (C0 , C1 are constants.)
Computational complexity. We now discuss the computational complexity of ordinal embedding
with a landmark design. The obvious approach has two stages. In the first stage, the landmarks
are embedded. This is the goal of (Agarwal et al., 2007), for example. Here, we use brute force.
Proposition 1. Suppose that m items are in fact points in Euclidean space and their dissimilarities
are their pairwise Euclidean distances. Then whether in the triple or quadruple comparisons setting,
an exact ordinal embedding of these m items can be obtained in finite expected time.
Proof. The algorithm we discuss is very naive: we sample m points iid from the uniform distribution
on the unit ball, and repeat until the ordinal constraints are satisfied. Since checking the latter can
be done in finite time, it suffices to show that there is a strictly positive probability that one such
sample satisfies the ordinal constraints. Let Xm denote the set of m-tuples (x1 , . . . , xm ) ∈ B(0, 1)
that satisfy the ordinal constraints, meaning that ∥xi − xj ∥ < ∥xk − xℓ ∥ when (i, j, k, ℓ) ∈ C. Seeing
Xm as a subset of B(0, 1)m ⊂ Rdm , it is clearly open. And sampling x1 , . . . , xm iid from the uniform
distribution on B(0, 1) results in sampling (x1 , . . . , xm ) from the uniform distribution on B(0, 1)m ,
which assigns a positive mass to any open set.
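The proof is constructive, if extremely inefficient. The following Python sketch (ours) implements the rejection-sampling procedure it describes for a handful of items; names and parameters are of our choosing.

    import numpy as np

    def brute_force_embedding(C, m, d, rng, max_tries=200000):
        # Rejection sampling as in the proof: draw m points iid uniform in the unit
        # ball of R^d and keep the first draw satisfying every comparison in C.
        for _ in range(max_tries):
            G = rng.normal(size=(m, d))
            P = G / np.linalg.norm(G, axis=1, keepdims=True)
            P *= rng.uniform(size=(m, 1)) ** (1.0 / d)      # uniform radius in the unit ball
            Dp = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
            if all(Dp[i, j] < Dp[k, l] for (i, j, k, l) in C):
                return P
        return None

    rng = np.random.default_rng(6)
    X = rng.uniform(size=(4, 2))                            # 4 "items" that are planar points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    C = [(i, j, k, l) for i in range(4) for j in range(4) for k in range(4) for l in range(4)
         if i != j and k != l and D[i, j] < D[k, l]]
    print(brute_force_embedding(C, m=4, d=2, rng=rng) is not None)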
In the second stage, each point that is not a landmark is embedded based on the order of
its distances to the landmarks. We quickly mention the work of Davenport (2013), who develops a convex method for performing this task. Here, we are content with knowing that this can be done, for each point, in finite time as a function of the number of landmarks. For example, a brute
force approach starts by computing the Voronoi diagram of the landmarks, and iteratively repeats
within each cell, creating a tree structure. Each point that is not a landmark is placed by going
from the root to a leaf, and choosing any point in that leaf cell, say its barycenter.
Thus, if there are ℓ landmarks, the first stage is performed in expected time F (ℓ), and the
second stage is performed in time (n − ℓ)G(ℓ). The overall procedure is thus computed in expected
time F (ℓ) + (n − ℓ)G(ℓ).
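To fix ideas, here is a naive stand-in for the second stage (ours; it is neither the Voronoi-tree procedure just described nor the method of Davenport (2013)): a non-landmark point is placed at the grid candidate whose distance ranking to the already-embedded landmarks best matches the observed ranking.

    import numpy as np

    def place_by_landmark_order(order, landmark_points, grid):
        # order: landmark indices sorted by increasing dissimilarity to the new item.
        # Returns the grid candidate whose distances to landmark_points produce the
        # fewest pairwise rank disagreements with that order.
        best, best_cost = None, np.inf
        for q in grid:
            d = np.linalg.norm(landmark_points - q, axis=1)
            cost = sum(1 for a in range(len(order)) for b in range(a + 1, len(order))
                       if d[order[a]] > d[order[b]])
            if cost < best_cost:
                best, best_cost = q, cost
        return best

    rng = np.random.default_rng(7)
    landmarks = rng.uniform(size=(8, 2))       # stage 1 assumed done: landmarks already embedded
    x = rng.uniform(size=2)                    # a non-landmark point, known only through its ranks
    order = np.argsort(np.linalg.norm(landmarks - x, axis=1))
    g = np.linspace(0.0, 1.0, 41)
    grid = np.array([(a, b) for a in g for b in g])
    print(place_by_landmark_order(order, landmarks, grid), x)   # typically close to each other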
Remark 2. The procedure described above is not suggested as a practical means to perform ordinal
embedding with a landmark design. The first stage, described in Proposition 1, has finite expected
time, but likely not polynomial in the number of landmarks. For a practical method, we can suggest
the following:
1. Embed the landmarks using the method of Agarwal et al. (2007) (which solves a semidefinite program) or the method of Terada and Von Luxburg (2014) (which uses an iterative
minimization-majorization strategy).
2. Embed the remaining points using the method of Davenport (2013) (which solves a quadratic
program).
Although practical and reasonable, we cannot provide any theoretical guarantees for this method.
3 More proofs
In this section we gather the remaining proofs and some auxiliary results. We introduce some
additional notation and basic concepts. For z1 , . . . , zm ∈ Rd , let Aff(z1 , . . . zm ) denote their affine
hull, meaning the affine subspace they generate in Rd . For a vector x in a Euclidean space, let
∥x∥ denote its Euclidean norm. For a matrix M ∈ Rp×q , let ∥M ∥ denote its usual operator norm, meaning, ∥M ∥ = max{∥M x∥ ∶ ∥x∥ ≤ 1}, and ∥M ∥F = √(tr(M ⊺ M )) its Frobenius norm.
Regular simplexes. These will play a central role in our proofs. We say that z1 , . . . , zm ∈ Rd , with
m ≥ 2, form a regular simplex if their pairwise distances are all equal. We note that, necessarily,
m ≤ d + 1, and that regular simplexes in the same Euclidean space and with same number of
(distinct) nodes m are similarity transformations of each other — for example, segments (m = 2),
equilateral triangles (m = 3), tetrahedron (m = 4). By recursion on the number of vertices, m, it is
easy to prove the following.
Lemma 6. Let z1 , . . . , zm form a regular simplex with edge length 1 and let µ denote the barycenter of z1 , . . . , zm . Then ∥µ − zi ∥ = √((m − 1)/2m), and if z, z1 , . . . , zm form a regular simplex, then ∥z − µ∥ = √((m + 1)/2m). (In dimension m, there are exactly two such points z.)
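These constants are easy to verify numerically; in the sketch below (ours), the regular simplex is realized as the scaled, centered standard basis of Rm.

    import numpy as np

    def regular_simplex(m):
        # m points with all pairwise distances equal to 1, lying in an
        # (m-1)-dimensional affine subspace of R^m.
        Z = np.eye(m) / np.sqrt(2.0)
        return Z - Z.mean(axis=0)

    for m in [2, 3, 4, 5]:
        Z = regular_simplex(m)
        mu = Z.mean(axis=0)
        print(m,
              round(float(np.linalg.norm(Z[0] - Z[1])), 6),          # edge length 1
              round(float(np.linalg.norm(mu - Z[0])), 6),            # observed ||mu - z_i||
              round(np.sqrt((m - 1) / (2 * m)), 6))                  # sqrt((m-1)/2m), as stated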
3.1 Proof of Theorem 1
We assume d ≥ 2. See (Kleindessner and von Luxburg, 2014) for the case d = 1. We divide the
proof into several parts.
Continuous extension. Lemma 4 implies that φ is locally uniformly continuous. Indeed, take
x0 ∈ Ω and let r > 0 such that B(x0 , r) ⊂ U and φ is weakly isotonic on B(x0 , r) ∩ Ω. Applying
Lemma 4 with V = B(x0 , r) and Λ = Ω ∩ B(x0 , r) — so that δH (Λ, V ) = 0 because Λ is dense in V
— and noting that V r = V , yields a constant Cr > 0 such that ∥φ(x) − φ(x′ )∥ ≤ Cr ∥x − x′ ∥1/d , for all
x, x′ ∈ Ω ∩ B(x0 , r). Being locally uniformly continuous, we can uniquely extend φ to a continuous
function on U , also denoted by φ. By continuity, this extension is locally weakly isotonic on U .
Isosceles preservation. Sikorska and Szostok (2004) say that a function f ∶ V ⊂ Rd → Rd preserves isosceles triangles if
∥x − y∥ = ∥x − z∥ ⇒ ∥f (x) − f (y)∥ = ∥f (x) − f (z)∥,
∀x, y, z ∈ V.
In our case, by continuity, we also have that φ preserves isosceles triangles locally. Indeed, for the
sake of pedagogy, let u ∈ U and r > 0 such that B(u, r) ⊂ U and φ is weakly isotonic on B(u, r).
Take x, y, z ∈ B(u, r/2) such that ∥x − y∥ = ∥x − z∥. For t ∈ R, define zt = (1 − t)x + tz. Let t > 1
such that zt ∈ B(u, r). Because ∥x − y∥ < t∥x − z∥ = ∥x − zt ∥, we have ∥φ(x) − φ(y)∥ ≤ ∥φ(x) − φ(zt )∥.
Letting t ↘ 1, we get ∥φ(x) − φ(y)∥ ≤ ∥φ(x) − φ(z)∥ by continuity of φ. Since y and z play the same
role, the converse inequality is also true, and combined, yield an equality.
Midpoint preservation. Let V ⊂ Rd be convex. We say that a function f ∶ V ↦ Rd preserves
midpoints if
f ((x + y)/2) = (f (x) + f (y))/2,  ∀x, y ∈ V.
We now show that φ preserves midpoints, locally. Kleindessner and von Luxburg (2014) also do
that, however, our arguments are closer to those of Sikorska and Szostok (2004), who make use of
regular simplexes. The important fact is that a function that preserves isosceles preserves regular
simplexes. Let u ∈ U and r > 0 such that B(u, r) ⊂ U and φ preserves isosceles on B(u, r). Take
x, y ∈ B(u, r/2), and let µ = (x + y)/2. Let z1 , . . . , zd form a regular simplex with barycenter µ and
side length s, and such that ∥x−zi ∥ = s for all i. In other words, x, z1 , . . . , zd forms a regular simplex
placed so that µ is the barycenter of z1 , . . . , zd . By symmetry, y, z1 , . . . , zd forms a regular simplex also. By Lemma 6, we have ∥zi − µ∥/∥x − µ∥ = √((d − 1)/(d + 1)), so that z1 , . . . , zd ∈ B(µ, r/2) ⊂
B(u, r), by the triangle inequality and the fact that ∥x − µ∥ < r/2. Hence, φ(x), φ(z1 ), . . . , φ(zd )
and φ(y), φ(z1 ), . . . , φ(zd ) are regular simplexes. If one of them is singular, so is the other one, in
which case φ(x) = φ(y) = φ(µ). Otherwise, necessarily φ(x) is the symmetric of φ(y) with respect
to Aff(φ(z1 ), . . . , φ(zd )); the only other possibility would be that φ(x) = φ(y), but in that case we would still have that φ(zi ) = φ(x) for all i ∈ [d], since ∥x − zi ∥/∥x − µ∥ = √(2d/(d + 1)) by Lemma 6 —
implying that ∥x − zi ∥ < ∥x − y∥ — and φ is weakly isotonic in that neighborhood. So assume that
φ(x) is the symmetric of φ(y) with respect to Aff(φ(z1 ), . . . , φ(zd )). For a ∈ {x, y, µ}, ∥a − zi ∥ is
constant in i, and therefore so is ∥φ(a)−φ(zi )∥, so that φ(a) belongs to the line of points equidistant
to φ(z1 ), . . . , φ(zd ). This implies that φ(x), φ(y), φ(µ) are collinear. And because ∥µ − x∥ = ∥µ − y∥, we also
have ∥φ(µ) − φ(x)∥ = ∥φ(µ) − φ(y)∥, so that φ(µ) is necessarily the midpoint of φ(x) and φ(y).
Conclusion. We arrived at the conclusion that φ can be extended to a continuous function on
U that preserves midpoints locally. We then use the following simple results in sequence: with
Lemma 7, we conclude that φ is locally affine; with Lemma 8, we conclude that φ is in fact affine
on U ; and with Lemma 9, we conclude that φ is in fact a similarity on U .
Lemma 7. Let V be a convex set of a Euclidean space and let f be a continuous function on V
with values in a Euclidean space that preserves midpoints. Then f is an affine transformation.
Proof. This result is in fact well-known, and we only provide a proof for completeness. It suffices
to prove that f is such that f ((1 − t)x + ty) = (1 − t)f (x) + tf (y) for all x, y ∈ V and all t ∈ [0, 1].
Starting with the fact that this is true when t = 1/2, by recursion we have that this is true when
t is dyadic, meaning, of the form t = k2−j , where j ≥ 1 and k ≤ 2j are both integers. Since dyadic
numbers are dense in [0, 1], by continuity of f , we deduce the desired property.
Lemma 8. A locally affine function over an open and connected subset of a Euclidean space is the
restriction of an affine function over the whole space.
Proof. Let U be the domain and f the function. Cover U with a countable number of open balls
Bi , i ∈ I such that f coincides with an affine function fi on Bi . Take i, j ∈ I distinct. Since U
is connected, there must be a sequence i = k1 , . . . , km = j, all in I, such that Bks ∩ Bks+1 ≠ ∅ for
s ∈ [m − 1]. Since Bks ∩ Bks+1 is an open set, we must have fks = fks+1 , and this being true for all s,
it implies that fi = fj .
Lemma 9. An affine function that preserves isosceles locally is a similarity transformation.
Proof. Let f be an affine function that preserves isosceles in an open ball. Without loss of generality,
we may assume that the ball is B(0, 2) and that f (0) = 0 (so that f is linear). Fix u0 ∈ ∂B(0, 1)
and let a = ∥f (u0 )∥. Take x ∈ Rd different from 0 and let u = x/∥x∥. We have ∥f (x)∥/∥x∥ = ∥f (u)∥ =
∥f (u) − f (0)∥ = ∥f (u0 ) − f (0)∥ = ∥f (u0 )∥ = a. Hence, ∥f (x)∥ = a∥x∥, valid for all x ∈ Rd , and f
being linear, this implies that f is a similarity.
3.2 Auxiliary results
We list here a number of auxiliary results that will be used in the proof of Theorem 3.
The following result is a perturbation bound for trilateration, which is the process of locating
a point based on its distance to landmark points. For a real matrix Z, let σk (Z) denote its k-th
largest singular value.
Lemma 10. Let z1 , . . . , zd+1 ∈ Rd such that Aff(z1 , . . . , zd+1 ) = Rd and let Z denote the matrix with
columns z1 , . . . , zd+1 . Consider p, q ∈ Rd and define ai = ∥p − zi ∥ and bi = ∥q − zi ∥ for i ∈ [d + 1]. Then
∥p − q∥ ≤ (1/2)√d σd (Z)−1 maxi ∣a2d+1 − a2i − b2i + b2d+1 ∣ ≤ √d σd (Z)−1 maxi ∣a2i − b2i ∣.
Proof. Assume without loss of generality that zd+1 = 0. In that case, note that ad+1 = ∥p∥ and
bd+1 = ∥q∥. Also, redefine Z as the matrix with columns z1 , . . . , zd , and note that the first d
singular values remain unchanged. Since Aff(z1 , . . . , zd+1 ) = Rd , there is α = (α1 , . . . , αd ) ∈ Rd
and β = (β1 , . . . , βd ) ∈ Rd such that p = ∑i∈[d] αi zi = Zα and q = ∑i∈[d] βi zi = Zβ. For p, we
have a2i = ∥p − zi ∥2 = ∥p∥2 + ∥zi ∥2 − 2zi⊺ Zα for all i ∈ [d], or in matrix form, Z ⊺ Zα = (1/2)u, where u = (u1 , . . . , ud ) and ui = a2d+1 − a2i + ∥zi ∥2 . Similarly, we find Z ⊺ Zβ = (1/2)v, where v = (v1 , . . . , vd ) and vi = b2d+1 − b2i + ∥zi ∥2 . Hence, we have
∥Z ⊺ (p − q)∥ = ∥Z ⊺ Zα − Z ⊺ Zβ∥ = (1/2)∥u − v∥ = (1/2)√(∑i∈[d] (a2d+1 − b2d+1 − a2i + b2i )2 )
≤ (1/2)√d maxi ∣a2d+1 − a2i − b2i + b2d+1 ∣ ≤ √d maxi ∣a2i − b2i ∣.
Simultaneously, ∥Z ⊺ (p − q)∥ ≥ σd (Z)∥p − q∥. Combining both inequalities, we conclude.
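The proof is constructive: with zd+1 = 0, the point p can be recovered from its distances to z1 , . . . , zd+1 by solving Z ⊺ Zα = (1/2)u and setting p = Zα. A small numerical sketch (ours, with names of our choosing):

    import numpy as np

    def trilaterate(Z, a):
        # Z: (d, d) matrix with columns z_1, ..., z_d (and z_{d+1} = 0 implicitly);
        # a: distances (a_1, ..., a_{d+1}) from the unknown p to z_1, ..., z_d, 0.
        d = Z.shape[1]
        u = a[d] ** 2 - a[:d] ** 2 + np.sum(Z ** 2, axis=0)   # u_i = a_{d+1}^2 - a_i^2 + ||z_i||^2
        alpha = np.linalg.solve(Z.T @ Z, u / 2.0)             # Z^T Z alpha = u / 2
        return Z @ alpha

    rng = np.random.default_rng(8)
    d = 3
    Z = rng.normal(size=(d, d))                               # columns z_1, ..., z_d; z_{d+1} = 0
    p = rng.normal(size=d)
    a = np.array([np.linalg.norm(p - Z[:, i]) for i in range(d)] + [np.linalg.norm(p)])
    print(np.allclose(trilaterate(Z, a), p))                  # True up to numerical error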
For η ∈ [0, 1), we say that z1 , . . . , zm ∈ Rd form an η-approximate regular simplex if
mini≠j ∥zi − zj ∥ ≥ (1 − η) maxi≠j ∥zi − zj ∥.
Lemma 11. Let z1 , . . . , zm form an η-approximate regular simplex with maximum edge length λ achieved by ∥z1 − z2 ∥. There is a constant Cm and z1′ , . . . , zm′ ∈ Aff(z1 , . . . , zm ) with z1′ = z1 and z2′ = z2 , forming a regular simplex with edge length λ, such that maxi ∥zi′ − zi ∥ ≤ λCm η.
Proof. By scale equivariance, we may assume that λ = 1. We use an induction on m. In what follows, Cm , Cm′ , Cm′′ , etc., are constants that depend only on m. For m = 2, the statement is trivially true. Suppose that it is true for m ≥ 2 and consider an η-approximate regular simplex z1 , . . . , zm+1 ∈ Rd with maximum edge length 1. By changing d to m if needed, without loss of generality, assume that Aff(z1 , . . . , zm+1 ) = Rd . In that case, z1 , . . . , zm is an η-approximate regular simplex with maximum edge length achieved by ∥z1 − z2 ∥ = 1, and by the inductive hypothesis, this implies the existence of z1′ , . . . , zm′ ∈ A ∶= Aff(z1 , . . . , zm ) with z1′ = z1 and z2′ = z2 and forming a regular simplex of edge length 1, such that maxi∈[m] ∥zi′ − zi ∥ ≤ Cm η for some constant Cm . Let p be the orthogonal projection of zm+1 onto A. Before continuing, let P be the set of such p obtained when fixing z1′ , . . . , zm′ and then varying zi ∈ B(zi′ , Cm η) for i ∈ [m] and zm+1 among the points that make an η-approximate regular simplex with z1 , . . . , zm . Let µ′ be the barycenter of z1′ , . . . , zm′ and note that µ′ ∈ P. Now, set δ = ∥zm+1 − p∥. By the Pythagoras theorem, we have ∥p − zi ∥2 = ∥zm+1 − zi ∥2 − δ2 , with 1 − η ≤ ∥zm+1 − zi ∥ ≤ 1, so that 0 ≤ 1 − δ2 − ∥p − zi ∥2 ≤ 2η. By the triangle inequality, ∣∥p − zi′ ∥ − ∥p − zi ∥∣ ≤ ∥zi − zi′ ∥ ≤ Cm η, so that
∣∥p − zi′ ∥2 − ∥p − zi ∥2 ∣ = ∣∥p − zi′ ∥ − ∥p − zi ∥∣(∥p − zi′ ∥ + ∥p − zi ∥) ≤ Cm η(2 + Cm η) ≤ Cm′ η,
using the fact that ∥p − zi ∥ ≤ ∥zm+1 − zi ∥ ≤ 1. Hence,
P ⊂ {q ∶ ∥q − zi′ ∥2 = 1 − δ2 ± Cm′′ η, ∀i ∈ [m]}.
Since µ′ ∈ P, we must therefore have ∥p − zi′ ∥2 = ∥µ′ − zi′ ∥2 ± 2Cm′′ η. By Lemma 10, this implies that ∥p − µ′ ∥ ≤ √(m − 1) σm−1 ([z1′ ⋯zm′ ])−1 2Cm′′ η =∶ Cm′′′ η. Let zm+1′ be on the same side of A as zm+1 and such that z1′ , . . . , zm′ , zm+1′ form a regular simplex. Note that µ′ is the orthogonal projection of zm+1′ onto A. By the Pythagoras theorem, applied multiple times, we obtain the following. First, we have
∥zm+1′ − zm+1 ∥2 = ∥zm+1′ − µ′ + µ′ − p + p − zm+1 ∥2
= ∥zm+1′ − µ′ ∥2 − 2(zm+1′ − µ′ )⊺ (zm+1 − p) + ∥p − zm+1 ∥2 + ∥µ′ − p∥2
= (∥zm+1′ − µ′ ∥ − ∥p − zm+1 ∥)2 + ∥µ′ − p∥2 ,
because zm+1′ − µ′ and zm+1 − p are orthogonal to A, and therefore parallel to each other and both orthogonal to µ′ − p. For the second term, we already know that ∥µ′ − p∥ ≤ Cm′′′ η, while the first term is bounded by (2Cm′′ + 2)2 η2 since, on the one hand,
∥zm+1′ − µ′ ∥2 = ∥zm+1′ − z1′ ∥2 − ∥µ′ − z1′ ∥2 = 1 − ∥µ′ − z1′ ∥2
while, on the other hand,
∥p − zm+1 ∥2 = ∥zm+1 − z1 ∥2 − ∥p − z1 ∥2 = 1 ± 2η − ∥p − z1 ∥2 ,
and we know that ∥µ′ − z1′ ∥2 = ∥p − z1 ∥2 ± 2Cm′′ η. Hence, we find that ∥zm+1′ − zm+1 ∥2 ≤ C2m+1 η2 for some constant Cm+1 , function of m only. This shows that the induction hypothesis holds for m + 1.
Lemma 12. There are constants Cm , Cm′ > 0 such that, if z1 , . . . , zm form an η-approximate regular simplex with maximum edge length λ, then σm−1 ([z1 ⋯zm ]) ≥ λCm (1 − Cm′ η).
Proof. By scale equivariance, we may assume that λ = 1. By Lemma 11, there is a constant Cm′′ and z1′ , . . . , zm′ ∈ Aff(z1 , . . . , zm ) forming a regular simplex with edge length 1 such that maxi ∥zi′ − zi ∥ ≤ Cm′′ η. By Weyl's inequality (Horn and Johnson, 1990, Cor 7.3.8), σm−1 (Z) ≥ σm−1 (Z ′ ) − ∥Z − Z ′ ∥. On the one hand, σm−1 (Z ′ ) is a positive constant depending only on m, while on the other hand, ∥Z − Z ′ ∥ ≤ ∥Z − Z ′ ∥F = √(∑i ∥zi − zi′ ∥2 ) ≤ √m Cm′′ η.
Lemma 13. Let z1 , . . . , zm form an η-approximate regular simplex with maximum edge length λ and barycenter µ. Let p ∈ Aff(z1 , . . . , zm ) and define γ = maxi ∥p − zi ∥2 − mini ∥p − zi ∥2 . There is a constant Cm ≥ 1 depending only on m such that ∥p − µ∥ ≤ Cm γ/λ when η ≤ 1/Cm .
Proof. By scale equivariance, we may assume that λ = 1. By Lemma 10, we have
∥p − µ∥ ≤ (1/2)√(m − 1) σm−1 ([z1 ⋯zm ])−1 maxi ∣∥p − zm ∥2 − ∥p − zi ∥2 ∣.
′
−1
′
′
By Lemma 12, there is a constant Cm
. And we
when η ≤ 1/Cm
such that σm−1
([z1 ⋯zm ]) ≤ Cm
2
2
also have maxi ∣∥p − zm ∥ − ∥p − zi ∥ ∣ ≤ γ. From this, we conclude.
Lemma 14. Let ψ ∶ Λ ↦ Q be isotonic, where Λ, Q ⊂ Rd . Let v ∈ Rd and r > 0, and set ε =
δH (Λ, B(v, r)). There is C ∝ diam(Q)/r such that, for all x, x′, x†, x‡ ∈ Λ with x, x′ ∈ B(v, 3r/4)
and for all η ∈ (0, r/4 − 2ε),
∥x − x′ ∥ = ∥x† − x‡ ∥ ± η ⇒ ∥ψ(x) − ψ(x′ )∥ = ∥ψ(x† ) − ψ(x‡ )∥ ± C(η + ε).
(12)
Proof. Let ξ = ∥x − x′ ∥ and ξ † = ∥x† − x‡ ∥. Suppose that ξ < η + 2ε, which implies that ξ † < 2η + 2ε.
In that case, Lemma 3 — where the constant there is denoted here by C1 ∝ diam(Q)/r — yields
∥ψ(x) − ψ(x′ )∥ ≤ C1 (ξ + ε) ≤ C1 (η + 3ε) and, similarly, ∥ψ(x† ) − ψ(x‡ )∥ ≤ C1 (2η + 3ε). This proves
(12). Henceforth, we assume that ξ ≥ η + 2ε.
First assume that ξ > ξ † . In that case, we immediately have ∥ψ(x) − ψ(x′ )∥ ≥ ∥ψ(x† ) − ψ(x‡ )∥.
For the reverse, let yt = (1 − t)x + tx′ , and note that ∥yt − x∥ = tξ. Take t = 1 − (η + 2ε)/ξ and note
that t ∈ [0, 1], so that yt ∈ [xx′ ] ⊂ B(v, r), and therefore there is x⋆ ∈ Λ such that ∥x⋆ − yt ∥ ≤ ε.
We have ∥x⋆ − x∥ ≤ ∥yt − x∥ + ∥x⋆ − yt ∥ ≤ ξ − η − ε < ξ † , so that ∥ψ(x) − ψ(x⋆ )∥ ≤ ∥ψ(x† ) − ψ(x‡ )∥.
Applying the triangle inequality and Lemma 3, we then have
∥ψ(x) − ψ(x⋆ )∥ ≥ ∥ψ(x) − ψ(x′ )∥ − ∥ψ(x′ ) − ψ(x⋆ )∥
≥ ∥ψ(x) − ψ(x′ )∥ − C1 (∥x′ − x⋆ ∥ + ε),
with ∥x′ − x⋆ ∥ ≤ ∥x′ − yt ∥ + ∥yt − x⋆ ∥ ≤ η + 3ε.
When ξ < ξ†, we choose t = 1 + (η + 2ε)/ξ. Because x, x′ ∈ B(v, 3r/4), we still have yt ∈ B(v, r)
because of the constraint on η. The remaining arguments are analogous.
When ξ = ξ † , repeating what we just did both ways and with η = 0 yields the result.
Lemma 15. Consider ψ ∶ Λ ↦ Rd isotonic, where Λ ⊂ Rd . Let V denote the convex hull of Λ. Set
ε = δH (Λ, V ) and c = diam(ψ(Λ))/5 diam(Λ). Then ∥ψ(x) − ψ(x′ )∥ ≥ c∥x − x′ ∥ for all x, x′ ∈ Λ such
that ∥x − x′ ∥ ≥ 4ε.
Proof. We first prove that, if c > 0 and η ≥ 4ε are such that ∥ψ(x) − ψ(x′ )∥ ≤ cη for all x, x′ ∈ Λ with
∥x − x′ ∥ < η, then diam(ψ(Λ)) < c(4 diam(Λ) + η). Indeed, take x, x′ ∈ Λ. Let u = (x′ − x)/∥x′ − x∥
and L = ∥x − x′ ∥, and define yj = x + sj u where sj = j(η − 3ε) for j = 0, . . . , J ∶= ⌊L/(η − 3ε)⌋, and then
let sJ+1 = L. By construction, yj ∈ [xx′ ] ⊂ V , with y0 = x and yJ+1 = x′ . Let xj ∈ Λ be such that
∥xj − yj ∥ ≤ ε, with x0 = x and xJ+1 = x′ . By the triangle inequality, ∥xj+1 − xj ∥ ≤ ∥yj+1 − yj ∥ + 2ε =
sj+1 − sj + 2ε < η. Hence,
∥ψ(x) − ψ(x′)∥ ≤ ∑j=0..J ∥ψ(xj) − ψ(xj+1)∥ ≤ (J + 1)cη ≤ c Lη/(η − 3ε) + cη < c(4 diam(Λ) + η),
since η − 3ε ≥ η − 3η/4 = η/4 and L ≤ diam(Λ).
Now assume that ψ is isotonic and suppose that ∥ψ(x) − ψ(x′ )∥ < c∥x − x′ ∥ for some x, x′ ∈ Λ
such that η ∶= ∥x − x′ ∥ ≥ 4ε. Then we have ∥ψ(x† ) − ψ(x‡ )∥ ≤ cη when x† , x‡ ∈ Λ satisfy ∥x† − x‡ ∥ < η.
We just showed that this implies that diam(ψ(Λ)) < c(4 diam(V ) + η), and we conclude using the
fact that η ≤ diam(Λ).
The following result is on 1-nearest neighbor interpolation.
Lemma 16. Let Λ be a subset of isolated points in V ⊂ Rd and set ε = δH (Λ, V ). For any function
ψ ∶ Λ ↦ Rd, define its 1-nearest neighbor interpolation ψ̂ ∶ V ↦ Rd as
ψ̂(y) = (1/∣NΛ(y)∣) ∑x∈NΛ(y) ψ(x),  where NΛ(y) ∶= arg minx∈Λ ∥x − y∥.  (13)
Consider the modulus of continuity of ψ, which for η > 0 is defined as ω(η) = sup{∥ψ(x) − ψ(x′ )∥ ∶
x, x′ ∈ Λ, ∥x − x′ ∥ ≤ η}. Then the modulus of continuity of ψ̂, denoted ω̂, satisfies ω̂(η) ≤ ω(η + 2ε).
Moreover, for any y, y ′ ∈ V and any x, x′ ∈ Λ such that ∥x − y∥ ≤ ε and ∥x′ − y ′ ∥ ≤ ε,
∥ψ̂(y) − ψ̂(y ′ )∥ = ∥ψ(x) − ψ(x′ )∥ ± 2ω(2ε).
Proof. Fix η > 0 and take y, y ′ ∈ V such that ∥y − y ′ ∥ ≤ η. We have ∥x − y∥ ≤ ε for all x ∈ NΛ (y) and
∥x′ − y ′ ∥ ≤ ε for all x′ ∈ NΛ (y ′ ), so that ∥x − x′ ∥ ≤ ∥y − y ′ ∥ + 2ε for all such x and x′ , by the triangle
inequality. Therefore,
∥ψ̂(y) − ψ̂(y ′ )∥ ≤ sup {∥ψ(x) − ψ(x′ )∥ ∶ x ∈ NΛ (y), x′ ∈ NΛ (y ′ )}
≤ sup {∥ψ(x) − ψ(x′ )∥ ∶ x, x′ ∈ Λ, ∥x − x′ ∥ ≤ η + 2ε} = ω(η + 2ε).
Since this is true for all y, y ′ ∈ V such that ∥y − y ′ ∥ ≤ η, we conclude that ω̂(η) ≤ ω(η + 2ε).
For the second part of the lemma, we have
∥ψ̂(y) − ψ̂(y ′ )∥ = ∥ψ(x) − ψ(x′ )∥ ± ∥ψ̂(y) − ψ(x)∥ ± ∥ψ̂(y ′ ) − ψ(x′ )∥,
where the second term is bounded by
∥ψ̂(y) − ψ(x)∥ ≤ sup {∥ψ(x̃) − ψ(x)∥ ∶ x̃ ∈ NΛ (y)}
≤ sup {∥ψ(x̃) − ψ(x)∥ ∶ ∥x̃ − x∥ ≤ 2ε} ≤ ω(2ε),
using the fact that ∥x̃ − x∥ ≤ ∥x̃ − y∥ + ∥y − x∥ ≤ 2ε, and similarly for the third term.
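As an aside, the interpolation (13) is easy to compute on a finite sample. The following is a minimal sketch (ours, not part of the paper, and not used in any proof): points are plain coordinate lists, and ties in the arg min are collected up to a numerical tolerance.

-- Sketch of the 1-NN interpolation (13): average psi over the points of
-- Lambda closest to the query y; illustration only.
type Point = [Double]

dist :: Point -> Point -> Double
dist x y = sqrt (sum [(a - b) ^ (2 :: Int) | (a, b) <- zip x y])

nnInterpolate :: [(Point, Point)] -> Point -> Point
nnInterpolate lambda y = centroid [v | (x, v) <- lambda, dist x y <= dMin + eps]
  where
    dMin = minimum [dist x y | (x, _) <- lambda]
    eps  = 1e-12   -- tolerance for ties in the arg min
    centroid vs = map (/ fromIntegral (length vs)) (foldr1 (zipWith (+)) vs)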
Let V ⊂ Rd be convex. In our context, we say that f ∶ V ↦ Rd is η-approximately midlinear if
∥f((x + y)/2) − (1/2)(f(x) + f(y))∥ ≤ η,  ∀x, y ∈ V.
Lemma 17. Let V ⊂ Rd be star-shaped with respect to some point in its interior. There is a
constant C depending only on V such that, for any η-approximately midlinear function f ∶ V ↦ Rd ,
there is an affine function T ∶ Rd ↦ Rd such that supx∈V ∥f(x) − T(x)∥ ≤ Cη.
Note that, if V is a ball, then by invariance considerations, C only depends on d.
Proof. This is a direct consequence of (Vestfrid, 2003, Th 1.4).
We say that f ∶ V ⊂ Rd ↦ Rd is an ε-isometry if
∥x − y∥ − ε ≤ ∥f (x) − f (y)∥ ≤ ∥x − y∥ + ε,
∀x, y ∈ V.
For a set V ⊂ Rd , define its thickness as
θ(V ) = inf { diam(u⊺ V ) ∶ u ∈ Rd , ∥u∥ = 1}.
Recalling the definition of ρ in (7), we note that θ(V ) ≥ ρ(V ), but that the two are distinct in
general.
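For a finite planar point set, the thickness can be estimated directly from this definition by scanning unit directions; the following rough sketch (an illustration only, not used in the sequel) minimizes the projected diameter over a one-degree grid.

-- Estimate the thickness of a finite planar point set by minimizing the
-- projected diameter over a grid of directions; illustration only.
thickness2D :: [(Double, Double)] -> Double
thickness2D pts =
  minimum [ projDiam (cos t, sin t)
          | k <- [0 .. 359 :: Int], let t = fromIntegral k * pi / 180 ]
  where
    projDiam (ux, uy) =
      let ps = [ux * x + uy * y | (x, y) <- pts]
      in  maximum ps - minimum ps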
Lemma 18. Let V ⊂ Rd be compact and such that θ(V ) ≥ η diam(V ) for some η > 0. There is a
constant C depending only on d such that, if f ∶ V ↦ Rd is an ε-isometry, then there is an isometry
R ∶ Rd ↦ Rd such that maxx∈V ∥f (x) − R(x)∥ ≤ Cε/η.
Proof. This is a direct consequence of (Alestalo et al., 2001, Th 3.3).
Lemma 19. Let T ∶ Rd ↦ Rd be an affine function that transforms a regular simplex of edge length
1 into an η-approximate regular simplex of maximum edge length λ > 0. There is a constant C,
depending only on d, and an isometry R, such that ∥T (x) − λR(x)∥ ≤ Cλη for all x ∈ B(0, 1).
Proof. By invariance, we may assume T is linear and that the regular simplex is formed by
0, z1, . . . , zd and has edge length 1. Letting wi = T(zi), we have that 0, w1, . . . , wd form an η-approximate regular simplex of maximum edge length λ ∶= maxi ∥wi∥. Lemma 11 gives 0, w1′, . . . , wd′
forming a regular simplex of edge length λ such that maxi ∥wi − wi′ ∥ ≤ C1 λη for some constant
C1 . Let R be the orthogonal transformation such that R(zi ) = wi′ /λ for all i ∈ [d]. We have
∥T (zi ) − λR(zi )∥ = ∥wi − wi′ ∥ ≤ C1 λη for all i. In matrix notation, letting Z ∶= [z1 . . . zd ], we have
∥TZ − λRZ∥ ≤ ∥TZ − λRZ∥F = √(∑i=1..d ∥Tzi − λRzi∥²) ≤ √d maxi∈[d] ∥Tzi − λRzi∥ ≤ √d C1 λη.
At the same time, ∥TZ − λRZ∥ ≥ ∥T − λR∥/∥Z⁻¹∥ with ∥Z⁻¹∥ = 1/σd(Z) = 1/σd([0 z1⋯zd]) being a
positive constant depending only on d. Hence, ∥T − λR∥ ≤ (√d/σd(Z)) C1 λη =∶ C2 λη.
Lemma 20. Suppose that S1 , S2 ∶ Rd ↦ Rd are two affinities such that maxx∈B(y,r) ∥S1 (x)−S2 (x)∥ ≤
η for some y ∈ Rd and r > 0. Then ∥S1 (x) − S2 (x)∥ ≤ 2η∥x − y∥/r + η for all x ∈ Rd .
Proof. By translation and scale invariance, assume that y = 0 and r = 1. Let Li = Si − Si (0).
For x ∈ B(0, 1), we have ∥L1(x) − L2(x)∥ ≤ ∥S1(x) − S2(x)∥ + ∥S1(0) − S2(0)∥ ≤ 2η. Hence, for x ∈ Rd,
∥L1 (x) − L2 (x)∥ ≤ 2η∥x∥, which in turn implies that ∥S1 (x) − S2 (x)∥ ≤ ∥L1 (x) − L2 (x)∥ + ∥S1 (0) −
S2 (0)∥ ≤ 2η∥x∥ + η.
3.3 Proof of Theorem 3
Without loss of generality, we may assume that Dn ∶= diam(φn (Ωn )) ≥ 1. Indeed, suppose that
Dn < 1, but different from 0, for otherwise φn is a degenerate similarity and the result follows.
Let φ̃n = Dn−1 φn , which is isotonic on Ωn and satisfies diam(φ̃n (Ωn )) = 1. If the result is true for
φ̃n , there is a similarity S̃n such that maxx∈Ωn ∣φ̃n (x) − S̃n (x)∣ ≤ Cεn for some constant C. (We
implicitly assume that the set φn (Ωn ) contains the origin, so that φ̃n (Ωn ) remains bounded.) We
then have maxx∈Ωn ∣φn (x) − Sn (x)∣ ≤ CDn εn ≤ Cεn , where Sn ∶= Dn S̃n is also a similarity.
Let r = ρ(U ), so that there is some u⋆ such that B(u⋆ , r) ⊂ U . Let Λn = Ωn ∩ B(u⋆ , r/2) and
δn = diam(φn (Λn )). Let w be any unit-norm vector and define y± = u⋆ ± (r/2− εn )w. Let x± ∈ Ωn be
such that ∥x± − y± ∥ ≤ εn . Necessarily, x± ∈ Λn because the distance from y± to ∂B(u⋆ , r/2) exceeds
εn . Note that ∥x− − x+ ∥ ≥ r1 ∶= r − 4εn . By isotonicity,
∥φn (x) − φn (x′ )∥ ≤ ∥φn (x− ) − φn (x+ )∥ ≤ δn , whenever ∥x − x′ ∥ < r1 .
(14)
Let y1 , . . . , yK be a (r1 /3)-packing of U , so that K ≤ C(diam(U )/r)d for some constant C > 0. Let
xik ∈ Ωn be such that ∥xik − yk ∥ ≤ εn , so that U ⊂ ⋃k∈[K] B(yk , r1 /3) ⊂ ⋃k∈[K] B(xik , r2 ), where
r2 ∶= r1 /3 + εn . Let zk = xik for clarity. Take x, x′ ∈ Ωn . Because U is open, it is path-connected, so
there is a continuous curve γ ∶ [0, 1] ↦ U such that γ(0) = x and γ(1) = x′ . Let k0 ∈ [K] be such
that x ∈ B(zk0 , r2 ) and s0 = 0. Then for j ≥ 0, let sj+1 = inf{s > sj ∶ γ(s) ∉ ⋃l∈[j] B(zkl , r2 )}, and let
kj+1 ∈ [K] be such that ∥zkj+1 − γ(sj+1 )∥ ≤ εn . Let J = min{j ∶ sj+1 = ∞}, which is indeed finite. By
construction, ∥zkj − zkj+1 ∥ ≤ 2r2 < r1 when εn < r/10. By (14), we have ∥φn (zkj ) − φn (zkj+1 )∥ ≤ δn .
Thus, by the triangle inequality, ∥φn (x) − φn (x′ )∥ ≤ Jδn ≤ Kδn . This being true for all x, x′ ∈ Ωn ,
this proves that δn ≥ Dn /K ∝ Dn (diam(U )/r)−d .
1-NN interpolation. Let φ̂n denote the 1-NN interpolation of φn as in (13). We claim that there
is a C0⋆ ∝ Dn /r and c⋆0 ∝ (diam(U )/r)−d Dn /r such that φ̂n satisfies the following properties: for
all y, y ′ , y † , y ‡ ∈ U ,
∥φ̂n(y) − φ̂n(y′)∥ ≤ C0⋆(∥y − y′∥ + εn),  (15)
and
∥y − y′∥ < ∥y† − y‡∥ − 4εn ⇒ ∥φ̂n(y) − φ̂n(y′)∥ ≤ ∥φ̂n(y†) − φ̂n(y‡)∥ + C0⋆ εn,  (16)
and also
∥φ̂n(y) − φ̂n(y′)∥ ≥ c⋆0 ∥y − y′∥ − C0⋆ εn, if y, y′ ∈ B(u⋆, r/2) satisfy ∥y − y′∥ ≥ 10εn,  (17)
and
∥y − y′∥ = ∥y† − y‡∥ ± η ⇒ ∥φ̂n(y) − φ̂n(y′)∥ = ∥φ̂n(y†) − φ̂n(y‡)∥ ± C0⋆(η + εn),
if y, y′ ∈ B(u⋆, r/2), εn < r/120 and 0 ≤ η ≤ r/5.  (18)
Indeed, let x, x′, x†, x‡ ∈ Ωn be such that ∥x − y∥, ∥x′ − y′∥, ∥x† − y†∥, ∥x‡ − y‡∥ ≤ εn.
For (15), we start by applying Lemma 16 to get
∥φ̂n (y) − φ̂n (y ′ )∥ = ∥φn (x) − φn (x′ )∥ ± 2ωn (2εn )
≤ ωn (∥x − x′ ∥) + 2ωn (2εn ) ≤ ωn (∥y − y ′ ∥ + 2εn ) + 2ωn (2εn ),
where ωn is the modulus of continuity of φn . We then use Lemma 3, which gives that ωn (η) ≤ Cη
for all η and some C ∝ Dn /r, to get ωn (∥y − y ′ ∥ + 2εn ) ± 2ωn (2εn ) ≤ C(∥y − y ′ ∥ + 6εn ).
For (16), we first note that ∥x − x′ ∥ < ∥x† − x‡ ∥ by the triangle inequality, which in turn implies
that ∥φn (x) − φn (x′ )∥ ≤ ∥φn (x† ) − φn (x‡ )∥ since φn is isotonic. We then apply Lemma 16 to get
that ∥φ̂n (y) − φ̂n (y ′ )∥ ≤ ∥φ̂n (y † ) − φ̂n (y ‡ )∥ + 4ωn (2εn ), and conclude with Lemma 3 as for (15).
For (17), we may apply Lemma 15 with Λn . Let V be the convex hull of Λn , so that V ⊂
B(u⋆ , r/2). Let z be a point in that ball. If z ≠ u⋆ , let w = (u⋆ − z)/∥u⋆ − z∥, and if z = u⋆ , let w
be any unit-norm vector. Define z ′ = z + εn w and notice that the distance from z ′ to ∂B(u⋆ , r/2)
exceeds εn . Therefore, if x ∈ Ωn is such that ∥z ′ − x∥ ≤ εn , then necessarily, x ∈ Λn . We then note
that ∥z − x∥ ≤ 2εn . We conclude that δH (Λn , V ) ≤ 2εn . Since ∥x − x′ ∥ ≥ ∥y − y ′ ∥ − 2εn ≥ 4(2εn ), we
get that ∥φn (x) − φn (x′ )∥ ≥ c∥x − x′ ∥, with c ∶= diam(φn (Λn ))/5 diam(Λn ) ≥ δn /5r. We then apply
Lemma 16 to obtain ∥φ̂n (y) − φ̂n (y ′ )∥ ≥ c∥x − x′ ∥ − 2ωn (2εn ) ≥ c∥y − y ′ ∥ − 2(c + C)εn , using Lemma 3
as for (15).
For (18), note that x, x′ ∈ B(u⋆ , r/2 + εn ) ⊂ B(u⋆ , 3r/4), and ∥x − x′ ∥ = ∥x† − x‡ ∥ ± (η + 4εn ) by
the triangle inequality. By Lemma 14 — where the constant there is denoted here by C ′ ∝ Dn /r
— this implies that
∥φn (x) − φn (x′ )∥ = ∥φn (x† ) − φn (x‡ )∥ ± C ′ (η + εn )
when η + 4εn < r/4 − 2εn , which is true when εn < r/120 and η ≤ r/5. We then apply Lemma 16
together with Lemma 3, as for (15).
CASE d = 1. This case is particularly simple. Note that U is a bounded open interval of R. We
show that the function φ̂n is approximately midlinear on U . Take x, y ∈ U and define µ = (x + y)/2.
By the fact that φ̂n takes its values in R, and (18), we have
∣(1/2)(φ̂n(x) + φ̂n(y)) − φ̂n(µ)∣ = (1/2)∣∣φ̂n(x) − φ̂n(µ)∣ − ∣φ̂n(y) − φ̂n(µ)∣∣ ≤ C0⋆ εn/2,
when εn /r is small enough. Hence, φ̂n is (C0⋆ εn )-approximate midlinear on U . By the result of
Vestfrid (2003), namely Lemma 17, there is C ∝ 1 — since U is a ball — and an affine function
Tn such that maxy∈U ∣φ̂n (y) − Tn (y)∣ ≤ CC0⋆εn . Since all affine transformations from R to R are
(possibly degenerate) similarities, we conclude.
CASE d ≥ 2. For the remaining of this subsection, we assume that d ≥ 2.
Approximate midlinearity. We show that there is a constant C such that φ̂n is locally Cεn-approximately midlinear. Take x, y ∈ B(u⋆, r/4), and let µ = (x + y)/2. Let t > 0 be a constant to
be set large enough later.
If ∥x − y∥ ≤ tεn , then by (15), φ̂n (x), φ̂n (y) ∈ B(φ̂n (µ), C0⋆ (t/2 + 1)εn ), so that
∥φ̂n(µ) − (1/2)(φ̂n(x) + φ̂n(y))∥ ≤ C0⋆(t/2 + 1)εn.
Therefore, assume that ∥x − y∥ ≥ tεn . Let z1 , . . . , zd be constructed as in the proof of Theorem 1.
By construction, both x, z1 , . . . , zd and y, z1 , . . . , zd form regular simplexes, and µ is the barycenter
of z1 , . . . , zd . By Lemma 6, for any i ≠ j,
∥zi − µ∥ = √((d − 1)/2d) ∥zi − zj∥ = √((d − 1)/2d) √(2d/(d + 1)) ∥x − µ∥ ≤ ∥x − y∥/2,
which coupled with the fact that x, y ∈ B(u⋆, r/4) yields that zi ∈ B(u⋆, r/2) for all i. Now, let z0 = x.
By (18), we have mini≠j ∥φ̂n(zi) − φ̂n(zj)∥ ≥ maxi,j ∥φ̂n(zi) − φ̂n(zj)∥ − C0⋆ εn. Let cd = √(d/(2d + 2)).
By (17) and Lemma 6,
∥φ̂n (zi ) − φ̂n (zj )∥ ≥ c⋆0 ∥zi − zj ∥ − C0⋆ εn = c⋆0 cd ∥x − y∥ − C0⋆εn ≥ (c⋆0 cd t − C0⋆ )εn .
Hence, assuming t ≥ 2C0⋆ /c⋆0 cd , we have mini≠j ∥φ̂n (zi ) − φ̂n (zj )∥ ≥ (1 − η) maxi,j ∥φ̂n (zi ) − φ̂n (zj )∥,
where η ∶= 2C0⋆ /(c⋆0 cd t). In that case, φ̂n (x), φ̂n (z1 ), . . . , φ̂n (zd ) form an η-approximate regular
simplex. By symmetry, the same is true of φ̂n (y), φ̂n (z1 ), . . . , φ̂n (zd ).
Define λ = ∥φ̂n (x) − φ̂n (y)∥. By Lemma 6, ∥zi − zj ∥ = cd ∥x − y∥ < ∥x − y∥ − 4εn when t > 4/(1 − cd ),
since ∥x − y∥ ≥ tεn and cd < 1. By (16), this implies that ∥φ̂n (zi ) − φ̂n (zj )∥ ≤ λ + C0⋆εn . By (17),
λ ≥ (c⋆0 t − C0⋆ )εn , so that λ + C0⋆ εn ≤ 2λ since we already assumed that t ≥ 2C0⋆ /c⋆0 cd > 2C0⋆ /c⋆0 .
For a ∈ {x, y, µ}, ∥a − zi ∥ is constant in i ∈ [d]. Therefore, by (18), mini ∥φ̂n (a) − φ̂n (zi )∥ ≥
maxi ∥φ̂n (a) − φ̂n (zi )∥ − C0⋆ εn . Define ξa as the orthogonal projection of φ̂n (a) onto the affine
space A ∶= Aff(φ̂n (z1 ), . . . , φ̂n (zd )) and let δa = ∥φ̂n (a) − ξa ∥. By the Pythagoras theorem, we have
∥ξa − φ̂n (zi )∥2 = ∥φ̂n (a) − φ̂n (zi )∥2 − δa2 . In particular,
maxi ∥ξa − φ̂n(zi)∥² − mini ∥ξa − φ̂n(zi)∥² = maxi ∥φ̂n(a) − φ̂n(zi)∥² − mini ∥φ̂n(a) − φ̂n(zi)∥²
≤ 2C0⋆ εn mini ∥φ̂n(a) − φ̂n(zi)∥ + (C0⋆ εn)² ≤ C1 εn,
where C1 ∶= 2C0⋆ Dn + C0⋆ r, once εn ≤ r. Let ζ denote the barycenter of φ̂n(z1), . . . , φ̂n(zd). Assume
that t is sufficiently large that η ≤ 1/C2, where C2 ∝ 1 is the constant of Lemma 13. By that
lemma, and the fact that φ̂n(z1), . . . , φ̂n(zd) form an η-approximate regular simplex of maximum
edge length bounded by λ, we have ∥ξa − ζ∥ ≤ C2 λC1 εn . Let L be the line passing through ζ and
perpendicular to A. We just proved that φ̂n (x), φ̂n (y), φ̂n (µ) are within distance C3 λεn from L,
where C3 ∶= C1 C2 .
Let ξ denote the orthogonal projection of φ̂n (µ) onto (φ̂n (x)φ̂n (y)). Since ∥x − µ∥ = ∥y − µ∥, we
can apply (18) to get
∣∥ξ − φ̂n (x)∥2 − ∥ξ − φ̂n (y)∥2 ∣
= ∣∥φ̂n (µ) − φ̂n (x)∥2 − ∥φ̂n (µ) − φ̂n (y)∥2 ∣
= ∣∥φ̂n (µ) − φ̂n (x)∥ + ∥φ̂n (µ) − φ̂n (y)∥∣ × ∣∥φ̂n (µ) − φ̂n (x)∥ − ∥φ̂n (µ) − φ̂n (y)∥∣
≤ 4C0⋆ λεn ,
using the fact that max(∥φ̂n (µ) − φ̂n (x)∥, ∥φ̂n (µ) − φ̂n (y)∥) ≤ λ + C0⋆ εn ≤ 2λ, due to (16) and
∥x − µ∥ = ∥y − µ∥ = (1/2)∥x − y∥ < ∥x − y∥ − 4εn when t is large enough. By Lemma 13, we then
obtain ∥ξ − (1/2)(φ̂n(x) + φ̂n(y))∥ ≤ C4 λεn for some constant C4 ∝ C0⋆. In particular, recalling that
λ = ∥φ̂n(x) − φ̂n(y)∥, this implies that ξ ∈ [φ̂n(x)φ̂n(y)] when εn ≤ 1/(2C4).
It remains to argue that φ̂n (µ) is close to ξ. We already know that φ̂n (x), φ̂n (y), φ̂n (µ) are
within distance C3 λεn from L, and by convexity, the same must be true of ξ. Let M = (φ̂n (x)φ̂n (y))
and θ = ∠(L, M ). Let PM denote the orthogonal projection onto M , when M is a linear subspace.
By Pythagoras theorem,
λ2 = ∥φ̂n (x) − φ̂n (y)∥2 = ∥PL (φ̂n (x) − φ̂n (y))∥2 + ∥PL⊥ (φ̂n (x) − φ̂n (y))∥2
≤ (cos θ)2 λ2 + (2C3 λεn )2 ,
implying that sin θ ≤ 2C3 εn. Since ∥PL − PM∥ = sin θ and φ̂n(µ) − ξ is parallel to M, we also have
∥φ̂n (µ) − ξ∥2 = ∥PL (φ̂n (µ) − ξ)∥2 + ∥PL⊥ (φ̂n (µ) − ξ)∥2
≤ (sin θ)2 ∥φ̂n (µ) − ξ∥2 + (2C3 λεn )2 ,
so that ∥φ̂n(µ) − ξ∥ ≤ 2C3 λεn/cos θ ≤ 2C3 λεn/√(1 − (2C3 εn)²) ≤ C5 λεn, for some constant C5 ∝ C3,
once C3 εn is small enough.
We conclude that ∥φ̂n(µ) − (1/2)(φ̂n(x) + φ̂n(y))∥ ≤ (C4 + C5)λεn, by the triangle inequality.
Approximate affinity. We now know that φ̂n is Cεn -approximate midlinear on B(u⋆ , r/4) for
some constant C ∝ C0⋆(Dn + r) ∝ C0⋆ (diam(Q) + r). This implies, by the result of Vestfrid (2003),
that is Lemma 17, that there is an affine function Tn such that ∥φ̂n(x) − Tn(x)∥ ≤ C1⋆ εn for all x ∈ W,
for some constant C1⋆ ∝ rC ∝ rC0⋆(diam(Q) + r).
Approximate similarity. (Reinitialize the constants Ck, k ≥ 1.) We saw above that φ̂n transforms the regular simplex z0 (= x), z1, . . . , zd with height denoted h satisfying h ≥ tεn/2 into an
η-approximate one, where η = 2C0⋆ /(c⋆0 cd t). In what follows, choose these points so that they are all
in B(u⋆ , r/2) and the simplex has height h ≥ r/8. (From here on, reinitialize the variables x, y, λ,
etc.) We can then take t = r/4εn , yielding η = C1 εn for a constant C1 ∝ C0⋆ /(c⋆0 r). By the triangle
inequality, we have
mini≠j ∥Tn(zi) − Tn(zj)∥ ≥ mini≠j ∥φ̂n(zi) − φ̂n(zj)∥ − 2C1⋆ εn
≥ (1 − C1 εn) maxi≠j ∥φ̂n(zi) − φ̂n(zj)∥ − 2C1⋆ εn
≥ maxi≠j ∥Tn(zi) − Tn(zj)∥ − (4C1⋆ + C1 δn)εn.
By the triangle inequality and (17),
γn ∶= maxi,j ∥Tn(zi) − Tn(zj)∥ ≥ maxi,j ∥φ̂n(zi) − φ̂n(zj)∥ − 2C1⋆ εn
≥ c⋆0 maxi,j ∥zi − zj∥ − C0⋆ εn − 2C1⋆ εn ≥ c⋆0 r/8 − (C0⋆ + 2C1⋆)εn.
Hence, we find that Tn (z0 ), . . . , Tn (zd ) form a C2 εn -approximate regular simplex, where C2 ∶=
(4C1⋆ + C1 δn )/(c⋆0 r/8 − (C0⋆ + 2C1⋆ )εn ). Note that its maximum edge length is bounded as follows:
γn ≤ maxi,j ∥φ̂n(zi) − φ̂n(zj)∥ + 2C1⋆ εn ≤ δn + 2C1⋆ εn ≤ 2δn,
when 2C1⋆ εn ≤ δn . By Lemma 19, there is a constant C3 > 0 and an isometry Rn⋆ , such that we have
maxx∈W ∥Tn (x) − λn Rn⋆ (x)∥ ≤ C3 λn C2 εn , where λn ∶= γn /h. Because r/8 ≤ h ≤ r and the bounds on
γn above, there is a constant C2⋆ ≥ 1 such that
1/C2⋆ ≤ λn ≤ C2⋆ .
(19)
This implies that
∥φ̂n (x) − λn Rn⋆ (x)∥ ≤ ∥φ̂n (x) − Tn (x)∥ + ∥Tn (x) − λn Rn⋆ (x)∥ ≤ (C1⋆ + C3 C2 C2⋆ )εn =∶ C3⋆ εn .
(20)
Covering and conclusion. (Reinitialize the constants Ck , k ≥ 1.) Let u1 = u⋆ and let u2 , . . . , uK ∈
U be such that u1 , . . . , uK form a maximal (r/16)-packing of U . (The number 16 is not essential
here, but will play a role in the proof of Theorem 4.) Note that U = U1 ∪ ⋯ ∪ UK where Uk ∶=
U ∩ B(uk , r/4), and note that U⋆ ∶= U1 ⊂ U . For u, u′ ∈ Uk , there are w, w′ ∈ U⋆ such that
∥w − w′ ∥ = ∥u − u′ ∥. Define φ̃n = φ̂n /λn . By (18), and then (19)-(20), we have
∥φ̃n (u) − φ̃n (u′ )∥ = ∥φ̃n (w) − φ̃n (w′ )∥ ± C0⋆ εn /λn
= ∥w − w′ ∥ ± (C0⋆ + C3⋆ )εn /C2⋆ =∶ ∥w − w′ ∥ ± C1 εn .
Let
ξ1 = mink θ(Uk)/diam(Uk),  (21)
which is strictly positive. The result of Alestalo et al. (2001), namely Lemma 18, gives a constant
C2 ∝ ξ1 and an isometry Rk such that maxu∈Uk ∥φ̃n(u) − Rk(u)∥ ≤ C2 εn.
Let
ξ2 = (1/2) min {ρ(Uk ∩ Uk′) ∶ Uk ∩ Uk′ ≠ ∅}.  (22)
Take k, k′ ∈ [K] such that Uk ∩ Uk′ ≠ ∅, so that there is u ∈ U such that B(u, ξ2) ⊂ Uk ∩ Uk′. Since
maxx∈B(u,ξ2) ∥Rk(x) − Rk′(x)∥ ≤ maxx∈Uk∩Uk′ (∥Rk(x) − φ̃n(x)∥ + ∥φ̃n(x) − Rk′(x)∥)
≤ maxx∈Uk ∥Rk(x) − φ̃n(x)∥ + maxx∈Uk′ ∥φ̃n(x) − Rk′(x)∥ ≤ 2C2 εn,
we have ∥Rk (x) − Rk′ (x)∥ ≤ (2∥x − u∥/ξ2 + 1)2C2 εn for all x ∈ Rd , by Lemma 20. Hence, ∥Rk (x) −
Rk′ (x)∥ ≤ (2 diam(U )/ξ2 + 1)2C2 εn =∶ C3 εn for all x ∈ U . If instead Uk ∩ Uk′ = ∅, we do as follows.
Since U is connected, there is a sequence k0 = k, k1 , . . . , km = k′ in [K], such that Uki ∩ Uki+1 ≠ ∅.
We thus have maxx∈U ∥Rki (x) − Rki+1 (x)∥ ≤ C3 εn . By the triangle inequality, we conclude that
maxx∈U ∥Rk (x) − Rk′ (x)∥ ≤ KC3 εn for any k, k′ ∈ [K]. Noting that R1 = Rn⋆ (since U1 = U⋆ ), for
any k ∈ [K] and x ∈ Uk ,
∥φ̃n (x) − Rn⋆ (x)∥ ≤ ∥Rk (x) − R1 (x)∥ + C2 εn ≤ (KC3 + C2 )εn .
We conclude that, for any x ∈ U ,
∥φ̂n (x) − λn Rn⋆ (x)∥ ≤ (KC3 + C2 )λn εn ≤ (KC3 + C2 )C2⋆ εn =∶ C4 εn .
(23)
This concludes the proof when d ≥ 2.
A refinement of the constant. Assume now that U = U h for some h > 0. Tracking the constants
above, we see that they all depend only on (d, ρ(U ), diam(U ), diam(Q)), as well as ξ1 and ξ2
defined in (21) and (22), respectively. We note that diam(Uk ) ≤ r and ρ(Uk ) ≥ min(r/2, h) by
Lemma 21, so that ξ1 ≥ min(r/2, h)/r. To bound ξ2 , we can do as we did at the beginning of this
section, so that at the end of that section, we can restrict our attention to chains k0 , . . . , km where
∥ukj − ukj+1 ∥ ≤ 2r/16 = r/8. To be sure, fix k, k′ ∈ [K] and let γ ∶ [0, 1] ↦ U be a curve such that
γ(0) = uk and γ(1) = uk′ . Define s0 = 0 and then sj+1 = inf{s > sj ∶ ∥γ(s) − ukj ∥ > r/16}, and
let kj+1 ∈ [K] be such that ∥γ(sj+1 ) − ukj+1 ∥ ≤ r/16, which is well-defined since (uk , k ∈ [K]) is a
(r/16)-packing of U . We then have
∥ukj − ukj+1∥ ≤ ∥ukj − γ(sj+1)∥ + ∥γ(sj+1) − ukj+1∥ ≤ r/16 + r/16 = r/8.
We can therefore redefine ξ2 in (22) as (1/2) min{ρ(Uk ∩ Uk′) ∶ ∥uk − uk′∥ ≤ r/8}. Because U = U h,
for each k ∈ [K], there is vk such that uk ∈ B(vk , min(r/16, h)) ⊂ U . By the triangle inequality,
B(vk , min(r/16, h)) ⊂ Uk′ when ∥uk − uk′ ∥ ≤ r/8, so that ξ2 ≥ min(r/16, h). So we see that everything depends on (d, h, ρ(U ), diam(U ), diam(Q)). The second part of the theorem now follows by
invariance considerations.
3.4 Proof of Lemma 5
Let c = ess inf U f and C = ess supU f , which by assumption belong to (0, ∞). Fix i ∈ [n] and let
Ni = #{j ≠ i ∶ ∥xj − xi ∥ ≤ r}. For j ≠ i, pi (j) ∶= P(∥xj − xi ∥ ≤ r) = ∫B(xi ,r) f (u)du. For an upper
bound, we have
pi (j) ≤ C Vol(B(xi , r) ∩ U ) ≤ C Vol(B(xi , r)) = Cζd r d =∶ Q,
where Vol denotes the Lebesgue measure in Rd and ζd is the volume of the unit ball in Rd . Hence,
P(Ni > 2(n − 1)Q) ≤ P(Bin(n − 1, Q) > 2(n − 1)Q) ≤ e−(n−1)Q/3 by Bennett’s inequality for the
binomial distribution. By the union bound, we conclude that maxi Ni ≤ 2(n − 1)Q with probability
at least 1 − ne−(n−1)Q/3 , which tends to 1 if nr d ≥ C0 log n and C0 > 0 is sufficiently large.
For a lower bound, we use the following lemma.
Lemma 21. Suppose U ⊂ Rd is open and such that U = U h for some h > 0. Then for any x ∈ U
and any r > 0, B(x, r) ∩ U contains a ball of radius min(r, h)/2. Moreover, the closure of that ball
contains x.
Proof. By definition, there is y ∈ U such that x ∈ B(y, h) ⊂ U . We then have B(x, r) ∩ U ⊃
B(x, r) ∩ B(y, h), so it suffices to show that the latter contains a ball of radius min(r, h)/2. By
symmetry, we may assume that r ≤ h. If ∥x − y∥ ≤ r/2, then B(x, r/2) ⊂ B(y, h) and we are done.
Otherwise, let z = (1−t)x+ty with t ∶= r/2∥x−y∥ ∈ (0, 1), and note that B(z, r/2) ⊂ B(x, r)∩B(y, h)
and x ∈ ∂B(z, r/2).
Now that Lemma 21 is established, we apply it to get
pi (j) ≥ c Vol(B(xi , r) ∩ U ) ≥ cζd (min(r, h)/2)d =∶ q.
Hence, P(Ni < (n − 1)q/2) ≤ P(Bin(n − 1, q) < (n − 1)q/2) ≤ e−(6/7)(n−1)q . By the union bound, we
conclude that mini Ni ≥ (n − 1)q/2 with probability at least 1 − ne−(6/7)(n−1)q , which tends to 1 if
nr d ≥ C1 log n and C1 > 0 is sufficiently large. (Recall that h is fixed.)
3.5 More auxiliary results
We list here a few additional auxiliary results that will be used in the proof of Theorem 4.
For V ⊂ Rd and x, x′ ∈ V , define the intrinsic metric
δV (x, x′ ) = inf {L ∶ ∃γ ∶ [0, L] ↦ V, 1-Lipschitz, with γ(0) = x, γ(L) = x′ },
where γ is 1-Lipschitz if ∥γ(s) − γ(t)∥ ≤ ∣s − t∣ for all s, t ∈ [0, L]. If no such curve exists, set
δV (x, x′ ) = ∞. The intrinsic diameter of V is defined as sup{δV (x, x′ ) ∶ x, x′ ∈ V }. We note that,
if L ∶= δV (x, x′ ) < ∞, then there is a curve γ ⊂ V̄ with length L joining x and x′ . Recall that a
curve with finite length is said to be rectifiable. See (Burago et al., 2001) for a detailed account of
intrinsic metrics.
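As a computational aside (not part of the proofs), the intrinsic metric of a sampled set is often approximated by shortest-path distances in an r-neighborhood graph built on the sample; a rough sketch using Floyd–Warshall on coordinate lists follows.

-- Approximate intrinsic distances of a sampled set by shortest paths in
-- the r-neighborhood graph (Floyd-Warshall); illustration only.
intrinsicApprox :: Double -> [[Double]] -> [[Double]]
intrinsicApprox r pts = foldl relax d0 [0 .. n - 1]
  where
    n   = length pts
    inf = 1 / 0
    d p q = sqrt (sum [(a - b) ^ (2 :: Int) | (a, b) <- zip p q])
    d0 = [ [ if i == j then 0
             else let e = d (pts !! i) (pts !! j)
                  in  if e <= r then e else inf
           | j <- [0 .. n - 1] ]
         | i <- [0 .. n - 1] ]
    relax m k = [ [ min (m !! i !! j) (m !! i !! k + m !! k !! j)
                  | j <- [0 .. n - 1] ]
                | i <- [0 .. n - 1] ]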
For U ⊂ Rd and h > 0, let U ⊖h = {x ∈ U ∶ B(x, h) ⊂ U }. This is referred to as an erosion (of the
set U ) in mathematical morphology.
Lemma 22. If U ⊂ Rd is open and connected, then for each pair of points x, x′ ∈ U , there is h > 0
and a rectifiable curve within U ⊖h joining x and x′ .
Proof. Take x, x′ ∈ U . By taking an intersection with an open ball that contains x, x′ , if needed,
we may assume without loss of generality that U is bounded. Since every connected open set in a
Euclidean space is also path-connected (Waldmann, 2014, Example 2.5.13), there is a continuous
curve γ ∶ [0, 1] ↦ U such that γ(0) = x and γ(1) = x′ . A priori, γ could have infinite length. However,
γ (≡ γ([0, 1])) is compact. For each t ∈ [0, 1], let r(t) > 0 be such that Bt ∶= B(γ(t), r(t)) ⊂ U .
Since γ ⊂ ⋃t∈[0,1] Bt , there is 0 ≤ t1 < ⋅ ⋅ ⋅ < tm ≤ 1 such that γ ⊂ ⋃j∈[m] Btj . Since γ is connected,
necessarily, for all j ∈ [m−1] there is sj ∈ [tj , tj+1 ] such that γ(sj ) ∈ Btj ∩Btj+1 . Let s0 = 0 and sm = 1.
Then [γ(sj )γ(sj+1 )] ⊂ Btj+1 ⊂ U for all j ∈ 0, . . . , m − 1, and therefore the polygonal line defined by
x = γ(s0 ), γ(s1 ), . . . , γ(sm−1 ), γ(sm ) = x′ is inside ⋃j∈[m] Btj ⊂ U ⊖r where r ∶= minj∈[m] r(tj ) > 0. By
construction, this polygonal line joins x and x′ , and is also rectifiable since it has a finite number
of vertices.
Lemma 23. Suppose U ⊂ Rd is bounded, connected, and such that U = U h for some h > 0. Then
there is h‡ > 0 such that, for all h′ ∈ [0, h‡], the intrinsic diameter of U ⊖h′ is finite.
Proof. Let V = U ⊖h . By assumption, for all x ∈ U , there is y ∈ V such that x ∈ B(y, h) ⊂ U . In
particular, U ⊃ V ≠ ∅.
Let V1 be a connected component of V . Pick y1 ∈ V1 and note that B1 ∶= B(y1 , h) ⊂ U by
definition, and also B1 ⊂ V1 because B1 is connected. Let ζd be the volume of the unit ball in Rd .
Since the connected components are disjoint and each has volume at least ζd hd while U has volume
at most ζd (diam(U )/2)d , V can have at most ⌈(diam(U )/2h)d ⌉ connected components, which we
now denote by V1 , . . . , VK . Pick yk ∈ Vk for each k ∈ [K]. Applying Lemma 22, for each pair of
distinct k, k ′ ∈ [K], there is a rectifiable (i.e., finite-length) path γk,k′ ⊂ U joining yk and yk′ . By
Lemma 22, the length of γk,k′ , denoted Dk,k′ , is finite, and there is hk,k′ > 0 such that γk,k′ ⊂ U ⊖hk,k′ .
Let D‡ = maxk,k′ ∈[K] Dk,k′ and h‡ = mink,k′∈[K] hk,k′ .
We now show that each connected component Vk has finite diameter in the intrinsic metric
of V ′ ∶= U ⊖h/2 . Since Vk is bounded, there are x1 , . . . , xmk ∈ Vk such that Vk ⊂ ⋃j∈[mk ] Qj , where
Qj ∶= B(xj , h/2) ⊂ V ′ . Take any x, x′ ∈ Vk . Let j, j ′ ∈ [mk ] be such that x ∈ Qj and x′ ∈ Qj ′ . Since
Vk is connected, there is a sequence j = j0 , j1 , . . . , jSk = j ′ ∈ [mk ] such that Qjs ∩ Qjs+1 ≠ ∅ for all
s = 0, . . . , Sk . Choose zs ∈ Qjs ∩ Qjs+1 and let z0 = x and zSk = x′ . Then [zs zs+1 ] ⊂ Qjs+1 for all s.
Let L be the polygonal line formed by z0, . . . , zSk. By construction, L ⊂ ⋃s=0..Sk Qjs ⊂ V′, it joins x
and x′, and has length at most (Sk + 1)2h. Hence, δV′(x, x′) ≤ (Sk + 1)2h ≤ 2(mk + 1)h. This being
valid for all x, x′ ∈ Vk , we proved that Vk has diameter at most Dk ∶= 2(mk + 1)h in the intrinsic
metric of V ′ . Let D⋆ = maxk∈[K] Dk .
Now take h† ∈ [0, h‡ ] and any x, x′ ∈ U ⊖h† . Let y, y ′ ∈ V be such that x ∈ B(y, h) and x′ ∈
B(y ′ , h). Let k, k ′ ∈ [K] be such that y ∈ Vk and y ′ ∈ Vk′ . There are curves γ, γ ′ ⊂ V ′ of length at
most D⋆ such that γ joins y and yk, while γ′ joins y′ and yk′. We then join yk and yk′ with γk,k′.
Altogether, we have the curve [xy] ∪ γ ∪ γk,k′ ∪ γ′ ∪ [y′x′], which joins x and x′, lies entirely in
U ⊖h† , and has length bounded by h + D⋆ + D‡ + D⋆ + h =∶ D. And this is true for any pair of such
points.
Lemma 24. Suppose that S1 , S2 ∶ Rd ↦ Rd are two affinities such that maxj ∥S1 (zj ) − S2 (zj )∥ ≤ ε,
where z0 , . . . , zd form an η-approximate regular simplex with minimum edge length at least λ.
There is C > 0 depending only on d such that, if η ≤ 1/C, then ∥S1 (x) − S2 (x)∥ ≤ Cε∥x − z0 ∥/λ + ε
for all x ∈ Rd .
Proof. Note that this is closely related to Lemma 20. By translation and scale invariance, assume
that z0 = 0 and λ = 1. Let Li = Si − Si (0). We have ∥L1 (zj ) − L2 (zj )∥ ≤ ∥S1 (zj ) − S2 (zj )∥ + ∥S1 (0) −
S2 (0)∥ ≤ 2ε. Let Z denote the matrix with columns z1 , . . . , zd . In matrix notation, we have
∥(L1 − L2)Z∥F = √(∑j ∥(L1 − L2)zj∥²) ≤ 2√d ε.
We also have ∥(L1 − L2 )Z∥F ≥ ∥(L1 − L2 )Z∥ ≥ σd (Z)∥L1 − L2 ∥, and by Lemma 12, σd (Z) =
σd ([z0 , Z]) ≥ 1/C1 when η ≤ 1/C1 , where C1 depends only on d. In that case, ∥L1 − L2 ∥ ≤ C2 ε
for another constant C2 . Equivalently, for x ∈ Rd , ∥L1 (x) − L2 (x)∥ ≤ C2 ε∥x∥, which in turn implies
that ∥S1 (x) − S2 (x)∥ ≤ ∥L1 (x) − L2 (x)∥ + ∥S1 (0) − S2 (0)∥ ≤ C2 ε∥x∥ + ε.
3.6 Proof of Theorem 4
Because φn is bounded independently of n, we may assume without loss of generality that C0 εn ≤ rn
and C0 rn ≤ h for all n, where C0 ≥ 1 will be chosen large enough later on.
Take y ∈ U and let Ωy = Ωn ∩ B(y, rn ) and Qy = φn (Ωy ). We first show that there is C1 ∝
diam(Q)/ρ(U ) such that, for any y ∈ U , diam(Qy ) ≤ C1 rn . For this, we mimic the proof of Lemma 3.
Take x, x′ ∈ Ωy such that ξ ∶= ∥φn (x) − φn (x′ )∥ = diam(Qy ). Let u be such that B(u, ρ(U )) ⊂ U .
Let y1 , . . . , ym be an (rn + 2εn )-packing of B(u, ρ(U )) with m ≥ A1 (ρ(U )/rn )d for some A1 ∝ 1.
Then let {xis ∶ s ∈ [m]} ⊂ Ωn be such that maxs∈[m] ∥ys − xis ∥ ≤ εn . By the triangle inequality, for
all s ≠ t, we have ∥xis − xit ∥ ≥ ∥ys − yt∥ − 2εn ≥ rn > ∥x − x′ ∥. By (10), we have ∥φn (xis ) − φn (xit )∥ ≥ ξ,
so that φn (xi1 ), . . . , φn (xim ) form a ξ-packing. Therefore m ≤ A2 (diam(Q)/ξ)d for some A2 ∝ 1.
We conclude that ξ ≤ (A2 /A1 )1/d (diam(Q)/ρ(U ))rn =∶ C1 rn .
We apply Theorem 3 to Uy ∶= B(y, rn ) and Ωy . With the fact that δH (Ωy , Uy ) ≤ 2εn — as
we saw in the proof of (17) — and invariance considerations, we obtain a constant C ∝ 1 and a
similarity Sy such that maxx∈Ωy ∥φn (x) − Sy (x)∥ ≤ C(diam(Qy )/rn )εn ≤ CC1 εn =∶ C2 εn . (Note that
all the quantities with subscript y depend also on n, but this will be left implicit.)
Fix y⋆ ∈ U ⊖rn . For x ∈ Ωn , there is y ∈ U ⊖rn such that x ∈ Uy . Let h‡ be given by Lemma 23 and
let D denote the intrinsic diameter of U ⊖h‡ . Then, assuming rn ≤ h‡ , there is a curve γ ⊂ U ⊖rn of
length L ≤ D joining y⋆ and y; assume γ is parameterized by arc length. Let y0 = y⋆ , yj = γ(jrn )
for j = 0, . . . , J ∶= ⌊L/rn ⌋, and then yJ+1 = y. We have maxz∈Uyj ∩Uyj+1 ∥Syj (z) − Syj+1 (z)∥ ≤ 2C2 εn
by the triangle inequality. We also have ρ(Uyj ∩ Uyj+1 ) ≥ rn , because ∥yj − yj+1 ∥ ≤ rn . Let vj be
such that B(vj , rn /2) ⊂ Uyj ∩ Uyj+1 . Fix j and let vj,0 , . . . , vj,d denote a regular simplex inscribed
in the ball B(vj , rn /4). Let λn ∝ rn denote its edge length. Then let xj,0 , . . . , xj,d ∈ Ωn be such
that maxk ∥xj,k − vj,k ∥ ≤ εn . When C0 is large enough, xj,0 , . . . , xj,d ∈ B(vj , rn /2) by the triangle
inequality. Moreover, maxk,l ∥xj,k −xj,l ∥ ≤ λn +2εn , as well as mink≠l ∥xj,k −xj,l ∥ ≥ λn −2εn . When C0
is large enough, Fj ∶= {xj,0 , . . . , xj,d } is therefore an η-approximate regular simplex, with η ∝ εn /rn ,
and minimum edge length ∝ rn . Now, since maxk ∥Syj (xj,k ) − Syj+1 (xj,k )∥ ≤ 2C2 εn , by Lemma 24,
for all z ∈ Rd , ∥Syj (z)−Syj+1 (z)∥ ≤ CC2 εn ∥z−xj,0 ∥/rn +2C2 εn for some C ∝ 1, assuming εn /rn ≤ 1/C.
In particular, by the fact that ∥x − xj,0 ∥ ≤ diam(U ), this gives ∥Syj (x) − Syj+1 (x)∥ ≤ C3 εn /rn for
some C3 ∝ diam(U )C2 . Hence,
∥Sy⋆(x) − Sy(x)∥ ≤ (J + 1)C3 εn/rn ≤ C4 εn/rn²,
since J ≤ L/rn ≤ D/rn .
This being true for any arbitrary x ∈ Ωn , we conclude that
maxx∈Ωn ∥φn(x) − Sy⋆(x)∥ ≤ C4 εn/rn² + C2 εn ≤ C5 εn/rn².
4 Discussion
This paper builds on (Kleindessner and von Luxburg, 2014) to provide some theory for ordinal
embedding, an important problem in multivariate statistics (aka unsupervised learning). We leave
open two main problems:
• What are the optimal rates of convergence for ordinal embedding with all triple and quadruple comparisons?
• What is the minimum size of K = Kn for consistency of ordinal embedding based on the
K-nearest neighbor distance comparisons?
We note that we only studied the large sample behavior of exact embedding methods. In particular, we did not discuss or propose any methodology for producing such an embedding. For this,
we refer the reader to (Agarwal et al., 2007; Borg and Groenen, 2005; Terada and Von Luxburg,
2014) and references therein. In fact, the practice of ordinal embedding raises a number of other
questions in terms of theory, for instance:
• How many flawed comparisons can be tolerated?
Acknowledgements
We are grateful to Vicente Malave for introducing us to the topic and for reading a draft of
this paper. We also want to thank an associate editor and two anonymous referees for pertinent
comments, and for pointing out some typos and errors. We learned of the work of Ulrike von
Luxburg and her collaborators at the Mathematical Foundations of Learning Theory Workshop
held in Barcelona in June 2014. We are grateful to the organizers, in particular Gábor Lugosi, for
the invitation to participate. This work was partially supported by the US Office of Naval Research
(N00014-13-1-0257).
References
Agarwal, S., J. Wills, L. Cayton, G. Lanckriet, D. J. Kriegman, and S. Belongie (2007). Generalized
non-metric multidimensional scaling. In International Conference on Artificial Intelligence and
Statistics, pp. 11–18.
Alestalo, P., D. Trotsenko, and J. Väisälä (2001). Isometric approximation. Israel Journal of
Mathematics 125 (1), 61–82.
Aumann, R. J. and J. Kruskal (1958). The coefficients in an allocation problem. Naval Research
Logistics Quarterly 5 (2), 111–123.
Borg, I. and P. J. Groenen (2005). Modern multidimensional scaling: Theory and applications.
Springer.
Burago, D., Y. Burago, and S. Ivanov (2001). A course in metric geometry, Volume 33. American
Mathematical Society Providence.
Cuevas, A., R. Fraiman, and B. Pateiro-López (2012). On statistical properties of sets fulfilling
rolling-type conditions. Adv. in Appl. Probab. 44 (2), 311–329.
Davenport, M. A. (2013). Lost without a compass: Nonmetric triangulation and landmark multidimensional scaling. In Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP),
2013 IEEE 5th International Workshop on, pp. 13–16. IEEE.
De Silva, V. and J. B. Tenenbaum (2004). Sparse multidimensional scaling using landmark points.
Technical report, Stanford University.
Ellis, D. P., B. Whitman, A. Berenzweig, and S. Lawrence (2002). The quest for ground truth in
musical artist similarity. In Proceedings of the International Symposium on Music Information
Retrieval (ISMIR), pp. 170–177.
Federer, H. (1959). Curvature measures. Trans. Amer. Math. Soc. 93, 418–491.
Horn, R. A. and C. R. Johnson (1990). Matrix analysis. Cambridge University Press, Cambridge.
Corrected reprint of the 1985 original.
Jamieson, K. G. and R. D. Nowak (2011). Low-dimensional embedding using adaptively selected
ordinal data. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton
Conference on, pp. 1077–1084. IEEE.
Kelley, J. L. (1975). General topology, Volume 27 of Graduate Texts in Mathematics. Springer-Verlag.
Kleindessner, M. and U. von Luxburg (2014). Uniqueness of ordinal embedding. In Proceedings of
The 27th Conference on Learning Theory, pp. 40–67.
Kruskal, J. B. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric
hypothesis. Psychometrika 29, 1–27.
McFee, B. and G. Lanckriet (2011). Learning multi-modal similarity. The Journal of Machine
Learning Research 12, 491–523.
Nhat, V. D. M., N. Vo, S. Challa, and S. Lee (2008). Nonmetric mds for sensor localization. In 3rd
International Symposium on Wireless Pervasive Computing (ISWPC), pp. 396–400.
Shepard, R. N. (1962a). The analysis of proximities: multidimensional scaling with an unknown
distance function. I. Psychometrika 27, 125–140.
Shepard, R. N. (1962b). The analysis of proximities: multidimensional scaling with an unknown
distance function. II. Psychometrika 27, 219–246.
Shepard, R. N. (1966). Metric structures in ordinal data. Journal of Mathematical Psychology 3 (2),
287–315.
Sikorska, J. and T. Szostok (2004). On mappings preserving equilateral triangles. Journal of
Geometry 80 (1-2), 209–218.
Suppes, P. and M. Winet (1955). An axiomatization of utility based on the notion of utility
differences. Management Science, 259–270.
Terada, Y. and U. Von Luxburg (2014). Local ordinal embedding. In Proceedings of the 31st
International Conference on Machine Learning (ICML-14), pp. 847–855.
Vestfrid, I. A. (2003). Linear approximation of approximately linear functions. aequationes mathematicae 66 (1-2), 37–77.
Von Luxburg, U. and M. Alamgir (2013). Density estimation from unweighted k-nearest neighbor
graphs: a roadmap. In Advances in Neural Information Processing Systems, pp. 225–233.
Waldmann, S. (2014). Topology: An Introduction. Springer International Publishing.
Young, F. W. and R. M. E. Hamer (1987). Multidimensional scaling: History, theory, and applications. Lawrence Erlbaum Associates, Inc.
A Framework for Datatype Transformation
Jan Kort 1 and Ralf Lämmel 2,3
1 Universiteit van Amsterdam
2 Centrum voor Wiskunde en Informatica
3 Vrije Universiteit van Amsterdam
arXiv:cs/0204018v3 [cs.PL] 24 Feb 2003
Abstract
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We
cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta
programs that encode datatype transformations are functional programs.
1 Introduction
We study operators for the transformation of the datatype declarations in a program.
The presentation will be biased towards the algebraic datatypes in Haskell, but the
concepts are of relevance for many typed declarative languages, e.g., Mercury and
SML, as well as frameworks for algebraic specification or rewriting like ASF+SDF,
CASL, Elan, and Maude. Our transformations are rather syntactical in nature as
opposed to more semantical concepts such as data refinement. Our transformations
contribute to the more general notion of functional program refactoring [TR01].
The following introductory example is about extracting a new datatype from constructor components of an existing datatype. This is illustrated with datatypes that
represent the syntax of an imperative language. The following extraction identifies
a piece of syntax to enable its reuse in later syntax extensions:
-- Datatypes with focus on two constructor components
data Prog  = Prog ProgName [Dec] [Stat]
data Dec   = VDec Id Type
data Stat  = Assign Id Expr | If Expr Stat Stat | ...
-- After extraction of [Dec] [Stat] to constitute a new datatype Block
data Prog  = Prog ProgName Block
data Block = Block [Dec] [Stat]
In the present paper, we describe the design of a framework for datatype transformations including the operators for the above extraction. In Sec. 2, we identify
all the concerns addressed by the framework. In Sec. 3, we describe all the basic
operators for datatype transformations. In Sec. 4, these operators are lifted from
datatypes to complete programs. Related work is discussed in Sec. 5. The paper is
concluded in Sec. 6.
2 Concerns in datatype transformation
The central contribution of the present paper is a simple, well-defined, and ‘editing-complete’ suite of operators for datatype transformations. Before we embark on
this suite, we identify the concerns addressed by our approach:
• Datatype transformations via scripting or interactive tool support.
• Well-defined primitives for datatype transformations.
• Generic meta-programming for conciseness of datatype transformations.
• Flexible means of referring to fragments of interest in datatype transformations.
We will now discuss these concerns in some depth.
2.1 Scripting vs. interactive tool support
From the point of view of a programmer, datatype transformations should be founded
on intuitive scenarios for adaptation. To actually perform (datatype) transformations, there are two modes of operation. The first mode is scripting: the programmer encodes the desired transformation as an expression over basic or higher-level
operators. The second mode is interactive transformation based on a corresponding
GUI. The benefits of an interactive tool are rather obvious. Such a tool is useful to
issue a transformation on the basis of an operator-specific dialogue, and to provide
a tailored list of options for transformations that make sense in a given context. A
crucial benefit of interactive transformation is that the GUI can be used to provide
feedback to the programmer: Which locations were changed? Where is the programmer’s attention needed to complete the issued transformation scenario? The
apparent benefits of scripting such as the opportunities to revise transformations
and to replay them can be also integrated into an interactive setting.
In Fig. 1, we illustrate the interactive treatment of the introductory example using
our prototypical tool TH — Transform Haskell. As the snapshot indicates, we use
a designated fold dialogue to perform the extraction of the piece of syntax. (Folding is the basic transformation underlying extraction.) This dialogue combines
several transformation steps and side conditions in a convenient way. The figure
shows the following situation. The user has selected two consecutive types “[Dec]
[Stat]” and initiated the fold dialogue. The user has also typed in “Block” in the
“type name” field. The introduction check-box is marked automatically since the
given type name does not yet exist. The user has also selected the “kind” radiobutton to be “data” and filled in “Block” in the “cons name” field. After this, the
user would press “Replace” to make the change. If there had been more than one
occurrence, the user could replace them all with “Replace All”, or step through all
occurrences with “Next”, and replace only specific ones with “Replace” as with
ordinary find and replace in text editors.
Fig. 1. A snapshot related to the interactive treatment of the introductory example
Here is an open-ended list of further common transformation scenarios:
• Renaming type and constructor names.
• Permuting type arguments and constructor components.
• The dual of extracting datatypes, i.e., inlining datatypes.
• Including a constructor declaration together with associated functionality.
• Excluding a constructor declaration together with associated functionality.
• Inserting a constructor component together with associated functionality.
• Deleting a constructor component together with associated functionality.
2.2 Well-defined transformation primitives
The core asset of our framework is a suite of basic operators, which can be either
used as is, or they can be completed into more complex, compound transformations. In the design of this suite, we reuse design experience from a related effort
on grammar adaptation [Läm01]. Indeed, there is an obvious affinity of grammar
transformations and datatype transformations. A challenging problem that we did
not need to address in this previous work, is the completion of datatype transformations to apply to entire (functional) programs in which evolving datatypes reside.
We list the required properties of our basic transformation operators:
Correctness Mostly, we insist on ‘structure preservation’, that is, the resulting
datatype is of the same shape as the original datatype. This is enforced by the
pre- and postconditions of the operators.
Completeness The operators are ‘editing-complete’, that is, they capture all scenarios of datatype evolution that are otherwise performed by plain text editors.
Semantics-preserving adaptations are defined in terms of disciplined primitives.
Orthogonality The operators inhabit well-defined, non-overlapping roles. Higher-level scenarios for interactive transformation are derivable. Operators for datatype
transformations are complementary to expression-level transformations.
Locality The basic operators operate on small code locations as opposed to ‘global’
or ‘exhaustive’ operators, which iterate over the entire program. Note that some
operators are necessarily exhaustive, e.g., an operator to rename a type name.
Implementability The operators are implemented as syntactical transformations
that are constrained by simple analyses to check for pre- and postconditions, but
which otherwise do not necessitate any offline reasoning.
Universality While the present paper focuses on datatype transformations, the
principles that are embodied by our operators are universal in the sense that they
also apply to other abstractions than datatypes, e.g., functions or modules.
We do not list these properties to announce a formal treatment. This would be
very challenging as we opt for the complex language setup of Haskell. The above
properties provide merely a design rationale. A formal approach is an important
subject for future work, but it does not contribute anything to the narrow goal of the
present paper: to compile an inventory of the basic roles in datatype transformation.
2.3 Generic meta-programming
We implement transformation operators and compound meta-programs in Haskell.
We reuse a publicly available abstract syntax for Haskell. 1 We rely on generic
programming techniques to perform meta-programming on the non-trivial Haskell
syntax in Haskell. We use the Strafunski-style 2 of generic programming that
allows us to complete functions on specific syntactical sorts into generic traversals that process subterms of the specific sorts accordingly. This style of meta-programming is known to be very concise because one only provides functionality
for the types and constructors that are immediately relevant for the given problem.
All our datatype transformations are of type Trafo which is defined as follows:
type Trafo = HsModule → Maybe HsModule
That is, a datatype transformation is a partial function on HsModule — the abstract
syntactical domain for Haskell modules. Partiality is expressed by means of the
Maybe type constructor that wraps the result type. Partiality is needed to model
side conditions.
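Since a Trafo may fail, transformations compose by propagating failure. The paper later assumes a sequential composition operator seqTrafo (see Fig. 7); a plausible definition — an assumption on our part, not necessarily the framework's actual code — is the following sketch.

-- Sequential composition of partial transformations: if either step
-- fails, the composed transformation fails.
seqTrafo :: Trafo -> Trafo -> Trafo
seqTrafo t1 t2 = \m -> t1 m >>= t2

-- The always-succeeding and always-failing transformations, for
-- completeness of the sketch.
idTrafo, failTrafo :: Trafo
idTrafo   = Just
failTrafo = const Nothing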
In Fig. 2, we illustrate generic meta-programming by giving the definition of a
simple operator for replacing type names. The specification formalises the fact that
1 The used abstract syntax is part of the Haskell Core Libraries — in the haskell-src package.
2 http://www.cs.vu.nl/Strafunski/
-- Replace a type name
replaceTypeId :: TypeId → TypeId → Trafo
replaceTypeId n n′ = full_tdTP (adhocTP (adhocTP idTP declSite) refSite)
  where
    -- Transform declaring occurrences of type names
    declSite :: HsDecl → Maybe HsDecl
    declSite (HsTypeDecl l n0 ps t)         | n0 ≡ n = return (HsTypeDecl l n′ ps t)
    declSite (HsDataDecl l c n0 ps cds d)   | n0 ≡ n = return (HsDataDecl l c n′ ps cds d)
    declSite (HsNewTypeDecl l c n0 ps cd d) | n0 ≡ n = return (HsNewTypeDecl l c n′ ps cd d)
    declSite decl = return decl
    -- Transform using occurrences of type names
    refSite :: HsType → Maybe HsType
    refSite (HsTyCon (UnQual n0)) | n0 ≡ n = return (HsTyCon (UnQual n′))
    refSite tpe = return tpe
Fig. 2. Specification of the replacement operation underlying renaming of type names
type names can occur in two kinds of locations: either on a declaration site, when
we declare the type, or on a using site, when we refer to the type in a type expression. So we need to synthesise a transformation which pays special attention to the
syntactical domains for declaring and using sites. Indeed, in the figure, there are
two type-specific ‘ad-hoc’ cases which customise the identity function idTP. In the
given context, we choose the traversal scheme full tdTP for ‘full top-down traversal
in Type-Preserving manner’. This way, we will reach each node in the input tree
to transform type names on declaring and using sites. The operator replaceTypeId,
by itself, is a total function. (So the Maybe in its type is not really needed here.)
Partiality would be an issue if we derived an operator for renaming type names.
This necessitates adding a side condition to insist on a fresh new name.
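A minimal sketch of such a derived renaming operator, built on replaceTypeId from Fig. 2: the freshness condition is checked against an explicitly passed list of names in use (in the framework, that list would be computed from the module by a helper not shown here, so the signature below is a simplification of ours).

-- Renaming as replacement guarded by a freshness side condition.
renameTypeIdWith :: [TypeId] -> TypeId -> TypeId -> Trafo
renameTypeIdWith namesInUse n n' m
  | n' `elem` namesInUse = Nothing              -- the new name must be fresh
  | otherwise            = replaceTypeId n n' m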
2.4 Means of referring to fragments of interest
Both the basic operators for datatype transformation but also actual transformation
scenarios in scripts or in interactive sessions need to refer to program fragments of
interest. Recall our introductory example. Extracting a type necessitates referring
to the constructor components that are meant to constitute the new type. In our
framework, we use three ways to refer to fragments of interest:
Focus markers on subterms This approach is particularly suited for interactive
transformations. Here, relevant fragments can be directly marked. In Fig. 3,
we extend Haskell’s abstract syntax to include term constructors for focusing on
relevant fragments in datatype transformations. That is, we are prepared to focus
on names of types, on type expressions, and on lists of constructor components.
Selectors of subterms This approach is particularly suited for scripting transformations. Selectors for Haskell’s type expressions are defined in Fig. 4. The three
forms of TypeSel represent the three kinds of declarations that involve types. The
helper TypeSel′ allows one to select any part of a given type expression.
-- Focus on names
data HsName = ... | HsNameFocus HsName
-- Focus on type expressions
data HsType = ... | HsTypeFocus HsType
-- Focus on lists of constructor components
data HsConDecl = HsConDecl SrcLoc HsName [HsFocusedBangType]
               | HsRecDecl SrcLoc HsName [([HsName], HsBangType)]
data HsFocusedBangType = HsUnfocusedBangType HsBangType
                       | HsFocusedBangType [HsBangType]
Fig. 3. Kinds of focus for datatype transformation
data TypeSel
  = AliasRef TypeId TypeSel′        -- Refer to a type alias
  | ConRef ConPos TypeSel′          -- Refer to a constructor component
  | SigRef [FunId] TypeSel′         -- Refer to a function signature
data TypeSel′
  = SelStop                         -- Reference stops here
  | SelDom TypeSel′                 -- Refer to domain of function type
  | SelCod TypeSel′                 -- Refer to co-domain of function type
  | SelIth ParaPos TypeSel′         -- Refer to products component
  | SelFun TypeSel′                 -- Refer to type constructor
  | SelArg TypeSel′                 -- Refer to type argument
type TypeId  = HsName               -- Refer to a type
type ConId   = HsName               -- Refer to a constructor
type FunId   = HsName               -- Refer to a function name
type ConPos  = (ConId, ParaPos)     -- Refer to a component of a constructor
type ParaPos = Int                  -- Refer to a parameter position
data HsName  = ...                  -- Syntactical sort for all kinds of names
Fig. 4. Selectors that refer to type expressions, and others
Predicates on subterms Such predicates typically constrain the type of a term or
the top-level pattern. This approach is particularly suited for the repeated application of a transformation to different focuses that match a given predicate.
There are ways to mediate between these different ways of referring to subterms.
For example, given a term with a focus marker on a type expression, one can
compute the selector that refers to the focused subterm. Given a predicate on type
expressions, one can compute the list of all selectors so that an operator that is
defined on selectors can be used with predicates as well. Finally, given a selector,
one can also add the corresponding focus marker in the input at hand.
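To illustrate how such selectors could be interpreted, the following sketch resolves the TypeSel′ part of a selector against a type expression (written TypeSel' below, with an ASCII prime). It is our own illustration, not the framework's code, and it only uses the haskell-src constructors HsTyFun, HsTyTuple, and HsTyApp.

-- Navigate an HsType according to a selector; Nothing if the selector
-- does not match the shape of the type.
selectType :: TypeSel' -> HsType -> Maybe HsType
selectType SelStop      t               = Just t
selectType (SelDom s)   (HsTyFun dom _) = selectType s dom
selectType (SelCod s)   (HsTyFun _ cod) = selectType s cod
selectType (SelIth i s) (HsTyTuple ts)
  | i >= 1 && i <= length ts            = selectType s (ts !! (i - 1))
selectType (SelFun s)   (HsTyApp f _)   = selectType s f
selectType (SelArg s)   (HsTyApp _ a)   = selectType s a
selectType _            _               = Nothing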
3 Basic operators for datatype transformation
We will now describe the themes that constitute our operator suite:
• Renaming type and constructor names.
• Permutation of type parameters and constructor components.
• Swapping types on use sites.
• Introduction vs. elimination of type declarations.
• Folding vs. unfolding of type declarations.
-- Sample input datatype
data ConsList a = Nil | Cons a (ConsList a)
-- Renamed and permuted datatype
data SnocList a = Lin | Snoc (SnocList a) a
Fig. 5. Illustration of renaming and permutation
renameTypeId  :: TypeId → TypeId → Trafo      -- Rename a type declaration
renameConId   :: ConId → ConId → Trafo        -- Rename a constructor
permuteTypeId :: TypeId → [ParaPos] → Trafo   -- Permute type parameters
permuteConId  :: ConId → [ParaPos] → Trafo    -- Permute constructor components
Fig. 6. Operators for renaming and parameter permutation
renameTypeId (HsIdent "ConsList") (HsIdent "SnocList") ‘seqTrafo‘
renameConId  (HsIdent "Nil")      (HsIdent "Lin")      ‘seqTrafo‘
renameConId  (HsIdent "Cons")     (HsIdent "Snoc")     ‘seqTrafo‘
permuteConId (HsIdent "Snoc")     [2, 1]
Fig. 7. Script for the scenario in Fig. 5
• Wrapping vs. unwrapping of constructor components.
• Inclusion vs. exclusion of entire constructor declarations.
• Insertion vs. deletion of constructor components.
As this list makes clear, we group an operator with its inverse such as in “folding
vs. unfolding”, unless the operator can be used to invert itself. This is the case for
renaming, permutation, and swapping. The operators from the first six groups are
(almost) structure-preserving. The last two groups deal with structure-extending
and -reducing transformations. We will now explain the operators in detail including illustrative examples. We will only explain the effect of the operators on
datatype declarations while we postpone lifting the operators to the level of complete programs until Sec. 4.
3.1 Renaming and permutation
Let us start with the simplest datatype refactorings one can think of. These are
transformations to consistently rename type or constructor names, and to permute
parameters of type and constructor declarations. In Fig. 5, a simple example is
illustrated. We rename the type name ConsList, the constructor names Nil and
Cons, and we permute the two parameter positions of Cons. The resulting datatype
specifies a SnocList as opposed to the ConsList before.
In Fig. 6, we declare the operators for renaming names and permuting parameter
lists. In Fig. 7, we include the script that encodes the ConsList-to-SnocList sample
as a sequence of basic renaming and permuting transformations. To this end, we
assume a sequential composition operator seqTrafo for datatype transformations.
(In the script, seqTrafo is used as an infix operator ‘seqTrafo‘.)
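As an illustration of the kind of check a permutation operator needs, here is a hedged sketch of a list-permutation helper that permuteConId might rely on; the helper is our own, not part of the paper's figures, and it requires the positions to be a permutation of 1..n (sort is from Data.List).

-- Permute a list according to a list of 1-based positions.
permuteList :: [ParaPos] -> [a] -> Maybe [a]
permuteList ps xs
  | sort ps == [1 .. length xs] = Just [xs !! (p - 1) | p <- ps]
  | otherwise                   = Nothing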
data HsDecl = ...               -- Syntactical sort for (type) declarations
introTypes :: [HsDecl] → Trafo  -- Introduction of type declarations
elimTypes  :: [TypeId] → Trafo  -- Elimination of type declarations
Fig. 8. Operators for introduction and elimination of datatypes
type TypeHdr = (TypeId, [TypeVar])         -- Header (LHS) of type declaration
type TypeVar = HsName                      -- Type variables
foldAlias   :: TypeSel → TypeHdr → Trafo   -- Folding the referred type
unfoldAlias :: TypeSel → Trafo             -- Unfolding the referred type
Fig. 9. Operators for folding and unfolding
3.2 Introduction vs. elimination
The next group of operators deals with the introduction and elimination of type
declarations (see Fig. 8). Introduction means that the supplied types are added
while their names must not be in use in the given program. Elimination means
that the referenced types are removed while their names must not be referred to
anymore in the resulting program. The two operators take lists of types as opposed
to single ones because types can often only be introduced and eliminated in groups,
say mutually recursive systems of datatypes. All kinds of type declarations make
sense in this context: aliases, newtypes, and proper datatypes. The operators for
introduction and elimination are often essential in compound transformations. This
will be illustrated below when we reconstruct the introductory example in full detail
(see Sec. 3.4).
3.3 Folding vs. unfolding
Instantiating the folklore notions of unfolding and folding for datatypes basically
means to replace a type name by its definition and vice versa. Extra provisions
are needed for parameterised datatypes. The prime usage scenarios for the two
operators are the following:
• extraction = introduction of a type followed by its folding.
• inlining = unfolding a type followed by its elimination.
To give an example, the introductory example basically extracts the structure of
imperative program blocks. To actually reconstruct this example, we need a few
more operators. So we postpone scripting the example (see Sec. 3.4).
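As a minimal sketch of our own (the alias Pair and the datatype T are hypothetical), folding replaces a component that matches the right-hand side of an alias by the alias name:

type Pair = (Int , Bool )
data T    = MkT (Int , Bool )   -- before folding
data T    = MkT Pair            -- after folding the referred type; unfolding reverses the step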
The operators for folding and unfolding are declared in Fig. 9. The operators make
a strict assumption: the type which is subject to folding or unfolding is necessarily a type alias as opposed to a proper datatype. This assumption simplifies the
treatment of the operators considerably since type aliases and their definitions are
equivalent by definition. Extra operators for so-called wrapping and unwrapping
allow us to use proper datatypes during folding and unfolding as well. This will be
addressed below. In the type of the foldAlias operator, we do not just provide a type
type ConRange = (ConPos , Int )   -- Refer to consecutive components

groupConRange :: ConRange → Trafo          -- Group constructor components
ungroupConPos :: ConPos → Trafo            -- Inline product
alias2newtype :: TypeId → ConId → Trafo    -- Turn type alias into newtype
newtype2data  :: TypeId → Trafo            -- Turn newtype into datatype
data2newtype  :: TypeId → Trafo            -- Turn datatype into newtype
newtype2alias :: TypeId → Trafo            -- Turn newtype into type alias

Fig. 10. Operators for wrapping and unwrapping
0. Original syntax
data Prog = Prog ProgName [Dec ] [Stat ]
data Dec  = VDec Id Type
data Stat = Assign Id Expr | If Expr Stat Stat
data Expr = Var Id | Const Int
1. After grouping [Dec] and [Stat]
data Prog
= Prog ProgName ([Dec ], [Stat ])
2. After introduction of Block to prepare folding
data Prog
= Prog ProgName ([Dec ], [Stat ])
type Block = ([Dec ], [Stat ])
3. After folding away the type expression ([Dec], [Stat])
data Prog
= Prog ProgName Block
type Block = ([Dec ], [Stat ])
4. After turning Block into a proper datatype with the constructor Block
data Prog
= Prog ProgName Block
data Block = Block ([Dec ], [Stat ])
5. After ungrouping the product ([Dec], [Stat])
data Prog
= Prog ProgName Block
data Block = Block [Dec ] [Stat ]
Fig. 11. Illustration of wrapping, unwrapping, and extraction
name but also a list of type variables (cf. helper type TypeHdr). This is needed for
parameterised datatypes, where we want to specify how the free type variables in
the selected type expression map to the argument positions of the type alias.
The preconditions for the operators are as follows. In the case of foldAlias, we need
to check if the referenced type expression and the right-hand side of the given alias
declaration coincide. In the case of unfolding, we need to check that the referenced
type expression corresponds to an application of a type alias.
3.4 Wrapping vs. unwrapping
We will now consider operators that facilitate certain forms of wrapping and unwrapping of datatype constructors (see Fig. 10). There are operators for grouping
and ungrouping, that is, to turn consecutive constructor components into a single
component that is of a product type, and vice versa. There are also operators to
mediate between the different kinds of type declarations, namely type aliases, newtypes and datatypes. This will allow us to toggle the representation of datatypes
in basic ways. As a result, the normal forms assumed by other operators can be
established; recall, for example, the use of type aliases in folding and unfolding.
This separation of concerns serves orthogonality.
groupConRange ((HsIdent "Prog", 2), 2)                                   ‘seqTrafo‘
introTypes [HsTypeDecl noLoc "Block" [ ]
  (HsTyTuple [HsTyApp (HsTyCon (UnQual (HsIdent "List")))
                      (HsTyCon (UnQual (HsIdent "Dec"))),
              HsTyApp (HsTyCon (UnQual (HsIdent "List")))
                      (HsTyCon (UnQual (HsIdent "Stat")))])]             ‘seqTrafo‘
foldAlias (ConRef (HsIdent "Prog", 2) SelStop) ((HsIdent "Block"), [ ])  ‘seqTrafo‘
alias2newtype (HsIdent "Block") (HsIdent "Block")                        ‘seqTrafo‘
newtype2data (HsIdent "Block")                                           ‘seqTrafo‘
ungroupConPos ((HsIdent "Block"), 1)

Fig. 12. Script for the scenario in Fig. 11
data Maybe a    = Nothing  | Just a

data Maybe ′ a  = Nothing ′ | Just ′ a

data Maybe ′ a  = Nothing ′ | Just ′ a (Maybe ′ a)

data ConsList a = Nil | Cons a (ConsList a)

Fig. 13. Illustration of the generalisation of Maybe to ConsList
In Fig. 11, we show the steps that implement the introductory example. As one
can see, we basically implement extraction, but extra steps deal with grouping and
ungrouping the two components subject to extraction. Also, the extracted type
should be a proper datatype as opposed to a type alias (see transition from 3. to 4.).
For completeness’ sake, the transformation script is shown in Fig. 12. The script
precisely captures the steps that underlie the interactive transformation in Fig. 1.
Some of the operators are not completely structure-preserving, that is, strictly speaking, the structures of the datatypes before and after transformation are not fully
equivalent. For example, a newtype and a datatype are semantically distinguished,
even if the defining constructor declaration is the very same. (This is because a constructor of a datatype involves an extra lifting step in the semantical domain, i.e.,
there is an extra ‘bottom’ element.) The operators for grouping and ungrouping
also deviate from full structure preservation.
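The following sketch of our own hints at the semantic gap for a hypothetical component type Int:

newtype N = N Int   -- N ⊥ coincides with ⊥; matching the pattern (N x) never forces the value
data    D = D Int   -- D ⊥ is distinct from ⊥; matching the pattern (D x) forces the value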
3.5 Swapping types on use sites
We will now deal with transformations that eliminate or establish type distinctions
by what we call swapping types on use sites. In Fig. 13, we illustrate a typical application of swapping. In the example, we want to generalise the standard datatype
Maybe to allow for lists instead. In fact, we do not want to change the general
definition of the library datatype Maybe, but we only want to change it on one use
site (not shown in the figure). This is where swapping helps: as an intermediate
step, we can replace Maybe on the use site by a newly introduced datatype Maybe ′
with equivalent structure. The figure illustrates how subsequent adaptations derive
type DataNames   = (TypeId , [ConId ])
type DataUnifier = (DataNames , DataNames)

swapAlias :: TypeSel → TypeId → TypeId → Trafo
swapData  :: TypeSel → [DataUnifier ] → Trafo

Fig. 14. Operators for swapping types on use sites
type ConDecl = (ConId , [HsType ])   -- Constructor declaration
data HsType  = ...                   -- Syntactical sort for type expressions

includeConDecl :: TypeId → ConDecl → Trafo
excludeConDecl :: ConId → Trafo

Fig. 15. Operators for inclusion and exclusion of constructor declarations
Syntax as of Fig. 11
data Prog  = Prog ProgName Block
data Block = Block [Dec ] [Stat ]
data Dec   = VDec Id Type
data Stat  = Assign Id Expr | If Expr Stat Stat
data Expr  = Var Id | Const Int

After syntax extension by statement blocks
data Stat  = Assign Id Expr | If Expr Stat Stat | SBlock Block

Fig. 16. Illustration of constructor inclusion
the ConsList datatype from the clone of the Maybe datatype. In particular, we add
the extra recursive constructor component (Maybe ′ a) to the constructor Just ′ .
The swapping operators are declared in Fig. 14. There is one operator for type
aliases and another for datatype declarations. In the case of proper datatypes, one
needs to match the constructors in addition to just the names of the types. This is
modelled by the helper datatype DataUnifier. The type of the operator swapData
clarifies that we are prepared to process a list of DataUnifiers. This is necessary if
we want to swap mutually recursive systems of datatypes.
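For instance, a DataUnifier for the Maybe-to-Maybe ′ swap of Fig. 13 could plausibly be written as follows (our illustration; plain strings stand in for the identifier representation actually used):

unifier :: DataUnifier
unifier = (("Maybe" , ["Nothing" , "Just" ]),
           ("Maybe'", ["Nothing'", "Just'"]))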
3.6 Inclusion vs. exclusion
We now leave the ground of structure-preserving transformations. That is, we
will consider transformations where input and output datatypes are not structurally
equivalent. In fact, we consider certain ways to extend or reduce the structure of the
datatype. The first couple of structure-extending and -reducing transformations is
about inclusion and exclusion of constructor declarations (see Fig. 15). These operators are only feasible for proper datatypes and not for type aliases or newtypes.
(This is because a type alias involves no constructor at all, and a newtype is defined
in terms of precisely one constructor declaration.)
In Fig. 16, we show an example for constructor inclusion. In fact, we just continue the introductory example to make use of the extracted block structure in a
language extension for statement blocks. That is, we include a constructor declaration for Stat to capture Block as another statement form. This continuation of the
insertConComp :: ConPos → HsType → Trafo
deleteConComp :: ConPos → Trafo

Fig. 17. Operators for insertion and deletion of constructor components
A datatype for a transition relation / function, and helpers
type TransRel a = a → Maybe a
data Maybe a    = Nothing | Just a
data ConsList a = Nil | Cons a (ConsList a)

Introduction of a substitute for Maybe
data Maybe ′ a  = Nothing ′ | Just ′ a

Swapping Maybe and Maybe ′ in TransRel
type TransRel a = a → Maybe ′ a

Extension of Maybe ′ to fit with shape of ConsList
data Maybe ′ a  = Nothing ′ | Just ′ a (Maybe ′ a)

Swapping Maybe ′ and ConsList in TransRel
type TransRel a = a → ConsList a

Fig. 18. Illustration of component insertion and type swapping
introductory example amplifies the intended use of our operator suite: for program
evolution in the sense of datatype refactoring and adaptation.
3.7 Insertion vs. deletion
Inclusion and exclusion of constructor declarations is about the branching structure
of datatypes. We will now discuss operators that serve for the insertion or deletion
of constructor components (see Fig. 17). Insertion of a component c into a constructor declaration C c1 · · · cn proceeds as follows. Given the target position for
the new component, be it i ≤ n + 1, the new constructor declaration is simply
of the form C c1 · · · ci−1 c ci · · · cn . In general, c might need to refer to type
parameters of the affected datatype. Deletion of a constructor component relies on
the identification of the obsolete component.
In Fig. 18, we elaborate on the earlier example for generalising ‘maybies’ to lists
(recall Fig. 13). At the top of Fig. 18, we see three datatypes TransRel, Maybe,
and ConsList. The idea is indeed to replace Maybe by ConsList in the using occurrence in TransRel. (That is, we want to allow for a function from a to a list of
a’s instead of a partial function from a to a.) We call this adaptation a generalisation
because a list is more general than an optional. In the initial phase of the generalisation of Maybe, we disconnect the relevant occurrence of Maybe in TransRel
from other possible occurrences in the program. So we introduce a copy Maybe ′ of
Maybe, and we perform type swapping so that TransRel refers to Maybe ′ instead
of the ‘read-only’ Maybe. Now we need to make Maybe ′ structurally equivalent to
ConsList. This amounts to adding a recursive component to the second constructor Just ′ . Then, we can again swap types to refer to ConsList in the co-domain of
TransRel.
4 Datatype transformation meets program transformation
We will now re-iterate over the groups of operators to investigate their impact on
functional programs. It would be utterly complex to formalise the link between
datatype and program transformation. The mere specification of the transformations is already intractable for a publication because of its size and the number of
details. So we will describe the implied program transformations informally while
omitting less interesting details.
4.1 Renaming
Type names only occur inside type declarations and type annotations. So there is
no need to adapt expressions or function declarations except for their signatures, or
the type annotations of expressions. Constructor names can very well occur inside
patterns and expressions that contribute to function declarations. Renaming these
occurrences is completely straightforward.
4.2 Permutation
The permutation of type parameters does not necessitate any completion at the
level of function declarations. The permutation of constructor components, however, needs to be realized in patterns and expressions as well. This is particularly
simple for pattern-match cases because all components are matched by definition.
Hence, we can directly permute the sub-patterns in an affected constructor pattern.
Witnessing permutations of constructor components in expression forms is slightly
complicated by currying and higher-order style. Instead of permuting components
in possibly incomplete constructor applications, we could first get access to all
components by ‘λ-pumping’: given a constructor C with say n potential components according to its declaration, we first replace C by λx1 · · · xn . C x1 · · · xn
as justified by η-conversion. Then, we witness the permutation by permuting the
arguments x1 , . . . , xn in the pumped-up expression. In the presence of a nonstrict language with an evaluation order on patterns, the permutation of constructor
components might actually change the behaviour of the program regarding termination. We neglect this problem. We should also mention that it is debatable if
the described kind of η-conversion is really what the programmer wants because it
obscures the code.
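To make the mechanics tangible, here is a schematic sketch of our own (the constructor C and the function g are hypothetical, and we assume the transformation swaps C’s two components):

g = map (C e) xs                        -- before: partial application of C
g = map ((λx1 x2 → C x2 x1 ) e) xs      -- after λ-pumping and witnessing the swap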
4.3 Introduction vs. elimination
Introduction does not place any obligations on the functions defined in the same
program. In the case of elimination, we have to ensure that the relevant types are not
used by any function. If we assume that all function declarations are annotated by
programmer-supplied or inferred signatures, then the precondition for elimination
can be checked by looking at these signatures. There is an alternative approach that
does not rely on complete type annotations: we check that no constructor of the
relevant types is used.
4.4 Folding vs. unfolding
The restriction of folding and unfolding to type aliases guarantees that these operators do not necessitate any adaptation of the function declarations. This is simply
because interchanging a type alias and its definition is completely structure- and
semantics-preserving, by definition. This is extremely convenient: despite the crucial role of the operators for folding and unfolding, they do not raise any issue at
the level of function declarations.
4.5 Wrapping vs. unwrapping
Grouping and ungrouping These operators are handled using the same overall approach as advocated for the permutation of constructor components. That is, in
patterns we witness grouping or ungrouping by inserting or removing the enclosing
“( . . . )”; in expressions, we perform η-conversion to access the relevant components, and then we group or ungroup them in the pumped-up constructor application.
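A small sketch of our own for a hypothetical constructor MkT whose two components get grouped into a pair:

f (MkT i b)     = ...                     -- pattern before grouping
f (MkT (i , b)) = ...                     -- pattern after grouping: enclosing “( . . . )” inserted
e = MkT 1 True                            -- expression before grouping
e = (λx1 x2 → MkT (x1 , x2 )) 1 True      -- after η-conversion and grouping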
Mediation between newtypes and datatypes These datatype transformations do not
imply any adaptations of the functions that involve the datatype in question. (As
we indicated earlier, the extra bottom value of a datatype, when compared to a
newtype, allows a program to be ‘undefined’ in one more way.)
Newtype to alias migration We simply remove all occurrences of the associated
constructor both in pattern and expression forms. We require that the relevant newtype is not covered by any instance declaration of some type class or constructor
class. Otherwise, we would have to inline these members in a non-obvious way prior to the
removal of the constructor. If we neglected this issue, the resulting program would either
become untypeable, or a different instance would be applied accidentally, which would
be hazardous regarding semantics preservation.
Alias to newtype migration This operator requires a non-trivial treatment for function declarations. The crucial issue is how to know the following:
• What expressions have to be wrapped with the newtype constructor?
• In what patterns does the newtype constructor need to be stripped?
Our approach is as simple as possible. We observe that the newly introduced newtype might be
used in the declarations of other datatypes. The corresponding patterns and expressions can be easily located and adapted as in the case of permutation, grouping, and
ungrouping (recall η-conversion etc.). We also need to adapt function declarations
if their argument or result types are known to refer to the relevant alias. This basically means that we need to access the affected arguments and result expressions
in all relevant equations to unwrap the arguments and wrap the result expressions.
These adaptations are slightly complicated by the fact that the affected type alias
can occur in arbitrarily nested locations.
In Fig. 19, we illustrate the effect of the alias2newtype operator in the introductory
example. We show the top-level interpreter function that maps over the statements
Top-level interpreter function before the illustrative extraction
run :: Prog → State ()
run (Prog name decs stats) = mapM interpret stats
The same function after extraction
run :: Prog → State ()
run (Prog name (Block decs stats)) = mapM interpret stats
Fig. 19. Function adaptation triggered by alias-to-newtype migration
Input program
type TransRel a = a → Maybe a
data Maybe ′ a = Nothing ′ | Just ′ a
deadEnd :: TransRel a → a → Bool
deadEnd r a = case r a of Nothing → True
                          Just _ → False

Output program
type TransRel a = a → Maybe ′ a
data Maybe ′ a = Nothing ′ | Just ′ a
deadEnd :: TransRel a → a → Bool
deadEnd r a = case toMaybe (r a) of Nothing → True
                                    Just _ → False

Induced helper for type swapping
toMaybe :: Maybe ′ a → Maybe a
toMaybe Nothing ′ = Nothing
toMaybe (Just ′ a) = Just a

Fig. 20. Function adaptation triggered by type swapping
of the program. (The program name and the declarations do not carry any semantics
here.) The type of the function run exhibits that the meaning of a program is a
computation that involves a State for the program variables. The adapted version
of run refers to the extra constructor Block, which resulted from extraction.
4.6 Swapping types on use sites
This operator relies on the same techniques as alias2newtype. However, instead of
wrapping and unwrapping a constructor, we invoke conversion functions that mediate between the two structurally equivalent types. These mediators merely map
old to new constructors and vice versa, and hence they are immediately induced by
the datatype transformation itself, namely by the DataUnifiers passed to the swap
operator. This approach implies that we only perform very local changes. The
program code will still work on the old datatypes thanks to the mediators.
The impact of swapping types at the function level is illustrated in Fig. 20. We
deal with the initial steps of the Maybe-to-ConsList migration in Fig. 18, where
we replace the occurrence of Maybe within TransRel by a structurally equivalent
Maybe ′ . We show an illustrative function deadEnd which tests whether the
given transition relation allows for a transition from a given state a.
The adapted function deadEnd refers to the conversion function toMaybe prior to
performing pattern matching on the obsolete Maybe type.
Input program
data Stat = Assign Id Expr | If Expr Stat Stat
interpret :: Stat → State ()
interpret (Assign i e) = envLookup i >>= λr → ...
interpret (If e s1 s2 ) = reval e >>= λv → ...

Output program
data Stat = Assign Id Expr | If Expr Stat Stat | SBlock Block
interpret :: Stat → State ()
interpret (Assign i e) = envLookup i >>= λr → ...
interpret (If e s1 s2 ) = reval e >>= λv → ...
interpret (SBlock _ ) = ⊥

Fig. 21. Inclusion of a constructor declaration
4.7 Inclusion vs. exclusion
Intuitively, the inclusion of a constructor should be complemented by the extension
of all relevant case discriminations. This normally means to add a pattern-match
equation (or a case to a case expression) for the new constructor. Dually, exclusion
of a constructor should be complemented by the removal of all pattern-match equations (or cases) that refer to this constructor. In the case of added pattern-match
equations, we view the right-hand sides of these equations as a kind of ‘hot spot’
to be resolved by subsequent expression-level transformations. To this end, we use
“undefined”, i.e., “⊥”, as a kind of to-do marker. Dually, in the case of removed
constructors, we also need to replace occurrences of the constructor within expressions by “⊥”. When using interactive tool support, these to-do markers are useful
to control further steps in a transformation scenario.
In Fig. 21, we progress with our running example of an interpreter for an imperative language. We illustrate the step where blocks are turned into another form of
statements. Hence, the shown output program involves a new pattern-match equation that interprets statement blocks. This added equation reflects that the meaning
of such blocks is as yet undefined, subject to subsequent adaptations.
4.8 Insertion vs. deletion
Inserting a component into a declaration for a constructor C means that all patterns
with C as outermost constructor must be adapted to neglect the added component,
and all applications of C must be completed to include “⊥” for the added component. Dually, deletion of a component from C means that all applications of C and
all patterns with C as outermost constructor need to be cleaned up to project away
the obsolete component. Any reference to a pattern variable for the obsolete component is replaced by “⊥”. As in the case of permutation and others, η-conversion
is needed to actually get access to constructor components in expressions.
In Fig. 22, the insertion of a constructor component is illustrated by continuing
the scenario from Fig. 20. The adapted equation of toMaybe involves an extended
pattern. As the don’t care pattern “_” indicates, the definition of toMaybe does not
make use of the added component. In fact, the definition of the function deadEnd
does not need to be adapted; it only tests for the availability of a transition step.
Output program
type TransRel a = a → Maybe ′ a
data Maybe ′ a = Nothing ′ | Just ′ a (Maybe ′ a)
deadEnd :: TransRel a → a → Bool
deadEnd r a = case toMaybe (r a) of Nothing → True
                                    Just _ → False

Induced helper for type swapping
toMaybe :: Maybe ′ a → Maybe a
toMaybe Nothing ′ = Nothing
toMaybe (Just ′ a _) = Just a

Fig. 22. Illustration of the insertion of a constructor component
Normally, other functions will start to rely on the richer pattern.
5 Related work
Transformational program development Formal program transformation [BD77]
separates two concerns: the development of an initial, maybe inefficient program,
the correctness of which can easily be shown, and the stepwise derivation of a better implementation in a semantics-preserving manner. Partsch’s textbook [Par90]
describes the formal approach to this kind of software development. Pettorossi
and Proietti study typical transformation rules for functional and logic programs
in [PP96]. Formal program transformation, in part, also addresses datatype transformation [dRE98], say data refinement. Here, one gives different axiomatisations
or implementations of an abstract datatype which are then related by well-founded
transformation steps. This typically involves some amount of mathematical program calculation. By contrast, we deliberately focus on the more syntactical transformations that a programmer uses anyway to adapt evolving programs.
Database schema evolution There is a large body of research addressing the related problem of database schema evolution [BKKK87] as relevant, for example, in
database re- and reverse engineering [HTJC93]. The schema transformations themselves can be compared with our datatype transformations only at a superficial level
because of the different formalisms involved. There exist formal frameworks for
the definition of schema transformations and various formalisms have been investigated [MP97]. An interesting aspect of database schema evolution is that schema
evolution necessitates a database instance mapping [BCN92]. Compare this with
the evolution of the datatypes in a functional program. Here, the main concern is to
update the function declarations for compliance with the new datatypes. It seems
that the instance mapping problem is a special case of the program update problem.
Refactoring The transformational approach to program evolution is nowadays called
refactoring [Opd92,Fow99], but the idea is not new [ABFP86,GN90]. Refactoring means to improve the structure of code so that it becomes more comprehensible, maintainable, and adaptable. Interactive refactoring tools are being studied
and used extensively in the object-oriented programming context [Moo96,RBJ97].
Typical examples of functional program refactorings are described in [Läm00], e.g.,
the introduction of a monad in a non-monadic program. The precise inhabitation of
the refactoring notion for functional programming is being addressed in a project
at the University of Kent by Thompson and Reinke; see [TR01]. There is also
related work on type-safe meta-programming in a functional context, e.g., by Erwig [ER02]. Previous work did not specifically address datatype transformations.
The refactorings for object-oriented class structures are not directly applicable because of the different structure and semantics of classes vs. algebraic datatypes.
Structure editing Support for interactive transformations can be seen as a sophistication of structure editing [RT88,Koo94,KS98]. This link between transformation
and editing is particularly appealing for our “syntactical” transformations. Not surprisingly, concepts that were developed for structure editing are related to our work.
For example, in [SdM99], primitives of structure editing are identified based on the
notion of focus to select subtrees, and on navigation primitives left, right, up and
down. Trees, subtrees and paths are here defined as follows:
data Tree    = Fork Label [Tree ]
type SubTree = (Path, Tree)
type Path    = [Layer ]
type Layer   = (Label , [Tree ], [Tree ])
The t in a subtree (p, t) is the currently selected tree and it is between the left and
right trees in the top layer (the head of the path p). This approach does not account for
the heterogeneous character of language syntaxes, but it shows that whether a
focus resides in a term can be encoded in types.
6 Concluding remarks
Contribution We identified the fundamental primitives for datatype transformation.
These operators are meant to support common scenarios of program adaptation in
functional programming, or other settings where algebraic datatypes play a role.
In fact, all the identified operators are universal in the sense that they are also
meaningful for other program abstractions than just datatypes, e.g., function declarations. We deliberately focused on adaptations of datatypes because a vast body of
previous work addressed fold/unfold transformations for recursive functions. Despite the focus on datatype transformations, we had to consider program transformations that are necessitated by the modification of datatypes. Regarding the
executable specification of the operator suite, we adhered to the formula: metaprograms = object-programs = Haskell programs. We employed generic functional
programming in the interest of conciseness. We also employed designated means
of referring to fragments of interest, e.g., a focus concept.
Partial project failure We are confident that the identified operators are sufficient
and appropriate for actual datatype transformations. We have attempted to complement this framework development by actual interactive tool support. We initially
thought that using Haskell for this interactive tooling as well would be a good idea.
Since the actual transformation operators are implemented in Haskell anyway, and
the interactive dialogues need to cooperate with the operator framework to perform
analyses, Haskell indeed seems to be the obvious choice. To make a long story
18
Kort & Lämmel
short, there are many GUI libraries for Haskell, but none of them is suitable for
developing a sophisticated GUI for interactive program transformation at the moment. It seems that environments for interactive language tools would provide a
better starting point, e.g., environments based on attribute grammars [RT88,KS98].
Perspective To cover full Haskell, a few further operators would have to be added
to our suite, in particular, operators that support type and constructor classes. We
should also pay full attention to some idiosyncrasies of Haskell; cf. refutable vs.
irrefutable patterns. Then, there are also transformation techniques that seem to
go beyond our notion of program evolution but it is interesting to cover them anyway. We think of techniques like turning a system of datatypes into functorial style,
or threading a parameter through a system of datatypes. The ultimate perspective
for the presented work is to integrate the datatype transformations into a complete,
well-founded, and user-friendly refactoring tool for functional programming along
the lines of Thompson’s and Reinke’s research project [TR01]. Another perspective for our research is to further pursue the intertwined character of datatype and
program transformations in the context of XML format and API evolution.
References
[ABFP86] G. Arango, I. Baxter, P. Freeman, and C. Pidgeon. TMM: Software maintenance by transformation. IEEE Software, 3(3):27–39, May 1986.
[BCN92] C. Batini, S. Ceri, and S.B. Navathe. Conceptual database design. Benjamin/Cummings, Redwood City, US, 1992.
[BD77] R. M. Burstall and John Darlington. A transformation system for developing recursive programs. Journal of the ACM, 24(1):44–67, January 1977.
[BKKK87] J. Banerjee, W. Kim, H.-J. Kim, and H.F. Korth. Semantics and Implementation of Schema Evolution in Object-Oriented Databases. SIGMOD Record (Proc. Conf. on Management of Data), 16(3):311–322, May 1987.
[dRE98] Willem-Paul de Roever and Kai Engelhardt. Data Refinement: Model-Oriented Proof Methods and their Comparison, volume 47 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1998.
[ER02] M. Erwig and D. Ren. A rule-based language for programming software updates. In Proceedings of the 2002 ACM SIGPLAN workshop on Rule-based programming, pages 67–78. ACM Press, 2002.
[Fow99] M. Fowler. Refactoring—Improving the Design of Existing Code. Addison Wesley, 1999.
[GN90] W. G. Griswold and D. Notkin. Program restructuring as an aid to software maintenance. Technical report, Seattle, WA, USA, August 1990.
[HTJC93] J.-L. Hainaut, C. Tonneau, M. Joris, and M. Chandelon. Schema Transformation Techniques for Database Reverse Engineering. In Proc. of the 12th Int. Conf. on ER Approach, Arlington-Dallas, 1993. E/R Institute.
[Koo94] J.W.C. Koorn. Generating uniform user-interfaces for interactive programming environments. PhD thesis, University of Amsterdam, 1994.
[KS98] M. Kuiper and J. Saraiva. Lrc — A generator for Incremental Language-Oriented Tools. In K. Koskimies, editor, Compiler Construction CC’98, volume 1383 of LNCS, pages 298–301. Springer-Verlag, April 1998. Tool demonstration.
[Läm00] R. Lämmel. Reuse by Program Transformation. In Greg Michaelson and Phil Trinder, editors, Functional Programming Trends 1999, pages 143–152. Intellect, 2000.
[Läm01] R. Lämmel. Grammar Adaptation. In J.N. Oliveira and P. Zave, editors, Proc. Formal Methods Europe (FME) 2001, volume 2021 of LNCS, pages 550–570. Springer-Verlag, 2001.
[Moo96] I. Moore. Automatic Inheritance Hierarchy Restructuring and Method Refactoring. In OOPSLA ’96 Conference Proceedings: Object-Oriented Programming Systems, Languages, and Applications, pages 235–250. ACM Press, 1996.
[MP97] P. McBrien and A. Poulovassilis. A Formal Framework for ER Schema Transformation. In D.W. Embley and R.C. Goldstein, editors, Conceptual Modeling - ER ’97, 16th International Conference on Conceptual Modeling, Los Angeles, California, USA, November 3-5, 1997, Proc., volume 1331 of LNCS, pages 408–421. Springer-Verlag, 1997.
[Opd92] W. F. Opdyke. Refactoring Object-Oriented Frameworks. PhD thesis, University of Illinois at Urbana-Champaign, 1992.
[Par90] H.A. Partsch. Specification and Transformation of Programs. Springer-Verlag, 1990.
[PP96] A. Pettorossi and M. Proietti. Rules and Strategies for Transforming Functional and Logic Programs. ACM Computing Surveys, 28(2):360–414, June 1996.
[RBJ97] D. Roberts, J. Brant, and R.E. Johnson. A Refactoring Tool for Smalltalk. Theory and Practice of Object Systems (TAPOS), 3(4):253–263, 1997.
[RT88] T.W. Reps and T. Teitelbaum. The Synthesizer Generator: A System for Constructing Language-Based Editors. Springer-Verlag, 1988.
[SdM99] B.A. Sufrin and O. de Moor. Modeless structure editing. In A.W. Roscoe and J.C.P. Woodcock, editors, Proceedings of the Oxford-Microsoft symposium in Celebration of the work of Tony Hoare, September 1999.
[TR01] S. Thompson and C. Reinke. Refactoring Functional Programs. Technical Report 16-01, Computing Laboratory, University of Kent at Canterbury, October 2001. Also see http://www.cs.ukc.ac.uk/people/staff/sjt/Refactor/.
| 6 |
arXiv:1702.03590v2 [cs.IT] 2 Jun 2017

On the Capacity of a Class of Signal-Dependent Noise Channels
Hamid Ghourchian, Gholamali Aminian, Amin Gohari, Mahtab Mirmohseni, and Masoumeh Nasiri-Kenari∗
Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran.
E-mails: {h ghourchian, aminian}@ee.sharif.edu, {aminzadeh, mirmohseni, mnasiri}@sharif.edu
Abstract
In some applications, the variance of additive measurement noise depends on the signal that
we aim to measure. For instance, additive Gaussian signal-dependent noise (AGSDN) channel
models are used in molecular and optical communication. Herein we provide lower and upper
bounds on the capacity of additive signal-dependent noise (ASDN) channels. The first
lower bound is based on an extension of the majorization inequality, and the second one
uses some calculations based on the fact that h (Y ) ≥ h (Y |Z). Both of them are valid for
all additive signal-dependent noise (ASDN) channels defined in the paper. The upper bound
is based on a previous idea of the authors (“symmetric relative entropy”) and is used for the
additive Gaussian signal-dependent noise (AGSDN) channels. These bounds indicate that in
ASDN channels (unlike the classical AWGN channels), the capacity does not necessarily become
larger by making the variance function of the noise smaller. We also provide sufficient conditions
under which the capacity becomes infinity. This is complemented by a number of conditions
that imply capacity is finite and a unique capacity achieving measure exists (in the sense of the
output measure).
Keywords: Signal-dependent noise channels, molecular communication, channels with infinite capacity, existence of capacity-achieving distribution.
1 Introduction
An additive Gaussian signal-dependent noise (AGSDN) channel with input x and output y is defined
by
fY |X (y|x) = (1/√(2πσ(x)²)) e^{−(y−x)²/(2σ(x)²)} ,
where σ(·) is a given function from R to [0, ∞). Alternatively, we may describe the AGSDN channel
by Y = X + σ(X) · Z where Z ∼ N (0, 1) is a standard Gaussian random variable and independent
of the input X. For constant function σ(x) = c, the AGSDN channel reduces to a simple additive
Gaussian channel. More generally, we may relax the Gaussian assumption on Z and consider an
additive signal-dependent noise (ASDN) channel defined by
Y = X + σ(X) · Z,
(1)
where noise Z is assumed to be a continuous random variable with a given pdf fZ (z), and to be
independent of the input X.¹ For instance, one can consider an ASDN with Z being a truncated
version of the Gaussian distribution as a better model in an application if we know that the output
Y has minimum and maximum values in that application.

∗ This work was supported by INSF Research Grant on “Nano-Network Communications”. The first two authors contributed equally to this work.
¹ See Definition 3 for the definition of continuous random variables.
Below we provide a number of applications in which the ASDN channel arises.
1. The AGSDN channel appears in optical communications when modeling the shot noise or the
optical amplification noise for σ(x) = √(c0² + c1²x) [1].
2. In molecular communication, the AGSDN channel with σ(x) = c√x arises in the ligand
receptor model, the particle sampling noise, the particle counting noise and the Poisson model
for an absorbing receiver [2, 3, 4]. In all cases, the reason for the appearance of a Gaussian signal-dependent noise is the approximation of a binomial or Poisson distribution with a Gaussian
distribution. Observe that the mean and variance of a binomial distribution with parameters
(n, p) relate to each other: the mean is np and the variance is np(1 − p) respectively. As a
result, the mean and variance of the approximated Gaussian distribution also relate to each
other (see [5, Section II.B] for a detailed overview).
3. Besides the above applications of ASDN in molecular communications we shall provide two
other cases where this channel model is helpful: Consider the Brownian motion of a particle
with no drift over a nonhomogeneous medium with σ(x) denoting the diffusion coefficient of
the medium at location x. The diffusion coefficient σ(x) describes the movement variance of
a particle when in location x. More specifically, the motion of the particle is described by the
stochastic differential equation
dXt = σ(Xt ) dBt ,
where Bt is the standard Wiener process (standard Brownian motion). Alternatively, we can
express the above equation using the following Itô integral
Xt+s − Xt = ∫_t^{t+s} σ(Xu ) dBu .        (2)
Let us denote the position of the particle at time 0 by X = X0 , and its position after t seconds
by Y = Xt . If t is a small and fixed number, (2) reduces to
Y = X + tσ(X) · Z,
where Z ∼ N (0, 1). Thus, the movement of the particle follows an AGSDN channel law if t is
small.
4. As another example, consider the molecular timing channel in a time-varying medium. In a
molecular timing channel, information is encoded in the release time of molecules. A molecule
released at time X hits the receiver after a delay Z at time Y = X + Z. Molecules are
absorbed once they hit the receiver. As such, the distribution of Z is that of the first arrival
time. The existing literature only studies this problem when the medium is time-invariant (see
[6, 7, 8, 9]): if the medium is uniform, time-invariant and one-dimensional, Z is distributed
according to the inverse Gaussian distribution (if there is a flow in the medium) or the Lévy
distribution (if there is no flow in the medium). As a result, the channel is called the additive
inverse Gaussian noise channel, or the additive Lévy noise channel, in the literature. However,
in a time-varying medium (or when the distance between the transmitter and receiver varies
over time), the distribution of Z depends on the release time X. As a result, we obtain a
signal-dependent noise additive component. For instance, the additive noise can have a Lévy
2
distribution with a scale parameter that depends on input X. Using the scaling property of the
Lévy distribution, we can express this as σ(X) · Z where Z is the standard Lévy distribution,
and σ(X) is the scale parameter. This would be an ASDN channel.
5. In the third item, we discussed Brownian motion after a small time elapse. A Brownian motion
with no drift is an example of a martingale. Now let us consider a martingale after a large
time elapse. Here, the AGSDN channel also arises as a conditional distribution in any process
that can be modeled by a discrete time martingale with bounded increments. Assume that
X0 , X1 , X2 , · · · is such a martingale. Then E [Xn ] = E [X0 ]. Furthermore, by the martingale
central limit theorem, the conditional distribution of Xn given X0 = x for large values of n
can be approximated by a Gaussian distribution with mean X0 = x and a variance σn (x) that
depends on X0 = x.
6. Finally, we relate the ASDN channel to real fading channels with a direct line of sight. Consider
a scalar Gaussian fading channel
Y = X + HX + N,
(3)
where X is the input, H ∼ N (0, c1 ) is the Gaussian fading coefficient and N ∼ N (0, c0 ) is
the additive environment noise. The first X term on the right-hand side of (3) corresponds
to the direct line of sight, while the HX term is the fading term. The distribution of Y given
X = x is N (x, c1 x2 + c0 ). Thus (3) can be expressed as Y = X + σ(X) · Z where
σ(x) = √(c1 x² + c0 ),        Z ∼ N (0, 1).
A fast fading setting in which H varies independently over each channel use corresponds to a
memoryless ASDN channel.
The purpose of this paper is to study the capacity of a memoryless additive signal-dependent
noise (ASDN) channel defined via
Y = X + σ(X) · Z,
under input cost constraints. The memoryless assumption implies that the noise Z is drawn independently from fZ (z) in each channel use.
Related works: In [10], vector AGSDN channels subject to cost constraints are studied. It is
shown that under some assumptions, the capacity achieving distribution is a discrete distribution.
The AGSDN channel with σ(x) = √(c0² + c1²x) is investigated in [1] wherein capacity upper and lower
bounds are derived considering peak and average constraints.
Note that the memoryless AGSDN includes the additive white Gaussian noise (AWGN) channel
as its special case. The capacity of the AWGN channel under a power constraint is classical and is
achieved by a Gaussian input. Its capacity under both average and peak
power constraints is quite different, as the capacity achieving input distribution is discrete with a
finite number of mass points [11]. See [12, 13] for further results on the capacity of the AWGN
channel with both average and peak power constraints.
Our contributions: Our contributions in this work can be summarized as follows:
• We provide a new tool for bounding the capacity of continuous input/output channels. Note
that
I (X; Y ) = h (Y ) − h (Y |X) .
We provide two sufficient conditions under which h (Y ) ≥ h (X), which results in
I (X; Y ) ≥ h (X) − h (Y |X) ,
and leads to lower bounds on the channel capacity of an ASDN channel.
• It is known that increasing the noise variance of an AWGN channel decreases its capacity.
However, we show that this is no longer the case for signal-dependent noise channels: the
constraint σ1 (x) ≥ σ2 (x) for all x does not necessarily imply that the capacity of an AGSDN
channel with σ1 (x) is less than or equal to the capacity of an AGSDN with σ2 (x).
• We identify conditions under which the capacity of the ASDN channel becomes infinity. In
particular, this implies that the capacity of an AGSDN channel with σ(x) = √(c1 x² + c0 )
tends to infinity as c0 tends to zero. Thus, the capacity of the real Gaussian fast fading channel
given earlier in this section tends to infinity as c0 tends to zero. This parallels a similar result
given in [14] for complex Gaussian fading channels.
• We provide a new upper bound for the AGSDN channel based on the KL symmetrized upper
bound of [15]. This upper bound is suitable for the low SNR regime, when σ(x) is large. This
is in contrast with the upper bound of [1, Theorems 4, 5] for AGSDN channels with σ(x) =
√(c0² + c1²x) which is suitable for large values of peak and average constraints. Furthermore,
we give our upper bound for a large class of functions σ(x) while the technique of [1] is tuned
for σ(x) = √(c0² + c1²x).
This paper is organized as follows. Section 2 includes some of the primary definitions and notations.
In Section 3, our main results are given. This includes two lower bounds and one upper bound on
the capacity of the ASDN channel. Section 4 contains some useful lemmas that are used in the paper.
The numerical results and plots are given in Section 5. The proofs of our results are given in Section
6.
2 Definitions and Notations
In this section we review the definitions of continuous and discrete random variables, as well as
entropy and differential entropy, relative entropy and mutual information.
Throughout this paper all the logarithms are in base e. Random variables are denoted by
capital letters, and probability measure functions are denoted by letter µ. The collection of Borel
measurable sets in R is denoted by B(R). We sometimes use a.e. and µ-a.e. as a short-hand for
“almost everywhere” and “µ-almost everywhere”, respectively. The set A is µ-a.e. when
∫_{Aᶜ} dµ = 0.
The set A is a.e. if it is µ-a.e. when µ is the Lebesgue measure.
Definition 1 (Relative Entropy). [16, Section 1.4] For random variables X and Y with probability
measures µX and µY , the relative entropy between X and Y is defined as follows:
D (µX ‖ µY ) = D (X ‖ Y ) := E [log (dµX /dµY )(X)]   if µX ≪ µY ,   and +∞ otherwise,
where dµX /dµY is the Radon–Nikodym derivative and µX ≪ µY means that µX is absolutely continuous w.r.t.
µY , i.e. µX (A) = 0 for all A ∈ B with µY (A) = 0, where B is the Borel σ-field of the space over which
the measures are defined.
Definition 2 (Mutual Information). [16, Section 1.6] For random variables X, Y with joint probability measure µX,Y , the mutual information between X and Y is defined as follows:
I (X; Y ) = D (µX,Y ‖ µX µY ) ,
where µX µY is the product measure defined as
(µX µY )(A, C) = µX (A) µY (C),
where A ∈ BX , the Borel σ-field of the space over which µX is defined, and C ∈ BY , the Borel σ-field
of the space over which µY is defined.
Similarly, for three random variables X, Y, Z with joint measure µX,Y,Z , conditional mutual information I (X; Y |Z) is defined as I (X; Y, Z) − I (X; Z).
Definition 3 (Continuous Random Variable). [10] Let X be a real-valued random variable
that is measurable with respect to B(R). We call X a continuous random variable if its probability
measure µX , induced on (R, B), is absolutely continuous with respect to the Lebesgue measure for
B(R) (i.e., µ(A) = 0 for all A ∈ B with zero Lebesgue measure). We denote the set of all absolutely
continuous probability measures by AC. Note that the Radon-Nikodym theorem implies that for each
random variable X with measure µX ∈ AC there exists a B(R)-measurable function fX : R → [0, ∞),
such that for all A ∈ B(R) we have that
µX (A) = Pr {X ∈ A} = ∫_A fX (x) dx.        (4)
The function fX is called the probability density function (pdf ) of X [16, p. 21]. We denote pdf of
absolutely continuous probability measures by letter f .
Definition 4 (Discrete Random Variable). [10] A random variable X is discrete if it takes values
in a countable alphabet set X ⊂ R.
Probability mass function (pmf) for discrete random variable X with probability measure µX
is denoted by pX and defined as follows:
pX (x) := µX ({x}) = Pr {X = x} ,
∀x ∈ X .
Definition 5 (Entropy and Differential Entropy). [17, Chapter 2] We define entropy H (X), for a
discrete random variable X with measure µX and pmf pX as
H (X) = H (µX ) = H (pX ) := Σ_x pX (x) log (1/pX (x)),
if the summation converges. Observe that
H (X) = E [log (1/pX (X))] .
For a continuous random variable X with measure µX and pdf fX , we define differential entropy
h (X) as
h (X) = h (µX ) = h (fX ) := ∫_{−∞}^{+∞} fX (x) log (1/fX (x)) dx,
if the integral converges. Similarly, the differential entropy is the same as
h (X) = E [log (1/fX (X))] .
Similarly, for two random variables X, Y , with measure µX,Y , if for all x, µY |X (·|x) is absolutely
discrete with pmf pY |X (·|x), the conditional entropy H (Y |X) is defined as
H (Y |X) = E [log (1/pY |X (Y |X))] .
Likewise, for two random variables X, Y , with measure µX,Y , if for all x, µY |X (·|x) is absolutely
continuous with pdf fY |X (·|x), the conditional differential entropy h (Y |X) is defined as
h (Y |X) = E [log (1/fY |X (Y |X))] .
We allow for differential entropy to be +∞ or −∞ if the integral is convergent to +∞ or −∞,
i.e., we say that h (X) = +∞, if and only if
∫_{A+} fX (x) log (1/fX (x)) dx = +∞,   and   ∫_{A−} fX (x) log (1/fX (x)) dx converges to a finite number,
where
A+ = {x : fX (x) ≤ 1},        A− = {x : fX (x) > 1}.
Similarly, we define h (X) = −∞. When we write that h (X) > −∞, we mean that the differential
entropy of X exists and is not equal to −∞. The following example, from [14], demonstrates the
differential entropy can be +∞ or −∞.
Example 1. Differential entropy becomes plus infinity for the following pdf defined over R [14]:
f (x) = 1/(x (log x)²)   for x > e,   and   f (x) = 0   for x ≤ e.
On the other hand, as shown in [14], differential entropy is minus infinity for
g(x) = −1/(x log x (log(− log x))²)   for 0 < x < e^{−e} ,   and   g(x) = 0   otherwise.
Definition 6 (Riemann integrable functions). Given −∞ ≤ ` < u ≤ +∞, in this work, we utilize
Riemann integrable functions g : (`, u) 7→ R on open interval (`, u). Such functions satisfy the
property that for any c ∈ (`, u), the function
h(x) = ∫_c^x g(t) dt,
is well-defined. By the fundamental theorem of calculus, h(·) is continuous on (`, u) (but not necessarily differentiable unless g is continuous).
As an example, consider the function g(x) = 1/x for x ≠ 0, and g(0) = 0 otherwise. This
function is Riemann integrable on the restricted domain (0, ∞), but not integrable on (−1, 1).
3 Main Results
We are interested in the capacity of an ASDN channel with the input X taking values in a set
X and satisfying the cost constraint E[gi (X)] ≤ 0, ∀i = 1, 2, · · · , k for some functions gi (·). The
common power constraint corresponds to gi (x) = x2 − p for some p ≥ 0, but we allow for more
general constraints. Then, given a density function fZ (z) for the noise Z and function σ(·), we
consider the following optimization problem:
C = sup_{µX ∈F} I (X; Y ),        (5)
where X and Y are related via (1) and
F = {µX : supp(µX ) ⊆ X , E[gi (X)] ≤ 0 for all i = 1, · · · , k}.        (6)
We sometimes use supp(X) to denote the support of measure µX , supp(µX ), when the probability
measure on X is clear from the context.
As an example, if, in an application, input X satisfies ` ≤ X ≤ u, the set X can be taken
to be [`, u] to reflect this fact; similarly, the constraint 0 < X ≤ u reduces to X = (0, u], and
0 ≤ ` ≤ |X| ≤ u reduces to X = [−u, −`] ∪ [`, u].
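For instance, a peak power constraint |X| ≤ A together with an average power constraint E[X²] ≤ p (a standard setting, stated here only to illustrate the notation) is captured by choosing X = [−A, A] and the single cost function g1 (x) = x² − p.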
The rest of this section is organized as follows: in Section 3.1, we provide conditions that imply
finiteness of the capacity of an ASDN channel. In Section 3.2, we review the ideas used for obtaining
lower bounds in previous works and also in this work. Then, based on the new ideas introduced in
this work, we provide two different lower bounds in Sections 3.3 and 3.4. Finally, in Section 3.5, we
provide an upper bound for AGSDN channels.
3.1 Existence and Finiteness of Channel Capacity
Theorem 1. Assume that an ASDN channel satisfies the following properties:
• X is a closed and also bounded subset of R, i.e., there exists u ≥ 0 such that X ⊆ [−u, u];
• Real numbers 0 < σ` < σu exist such that σ` ≤ σ(x) ≤ σu for all x ∈ X ;
• Positive real m and γ exist such that fZ (z) ≤ m < ∞ (a.e.), and E [|Z|γ ] = α < ∞;
• The cost constraint functions gi (·) are bounded over X .
Then, the capacity of the ASDN channel is finite. Furthermore there is a capacity achieving probability measure; in other words, the capacity C can be expressed as a maximum rather than a
supremum:
C = max_{µX ∈F} I (X; Y ).
Moreover, the output distribution is unique, i.e. if µX1 and µX2 both achieve the capacity, then
fY1 (y) = fY2 (y),        ∀y ∈ R,
where fY1 and fY2 are the pdfs of the output of the channel when the input probability measures are
µX1 and µX2 , respectively.
Remark 1. The above theorem is a generalization of that given in [10, Theorem 1] for the special
case of Gaussian noise Z.
The proof can be found in Section 6.1. To give a partial converse of the above theorem, consider
the case that the second assumption of the above theorem fails, i.e., when there is a sequence {xi }
of elements in X such that σ(xi ) converges to zero or infinity. The following theorem shows that
input/output mutual information can be infinity in such cases.
Theorem 2. Consider an ASDN channel with σ : X 7→ [0, +∞) where X is not necessarily a closed
set. Suppose one can find a sequence {x̃i } of elements in X such that σ(x̃i ) converges to 0 or +∞
such that
• As a sequence of real numbers, {x̃i } has a limit (possibly outside X ), which we denote by c.
The limit c can be plus or minus infinity.
• One can find another real number c0 ≠ c such that the open interval E = (c, c0 ) (or E = (c0 , c)
depending on whether c0 > c or c0 < c) belongs to X . Furthermore, x̃i ∈ E, and σ(·) is
monotone and continuous over E.²
Then one can find a measure µX defined on E such that I (X; Y ) = ∞ provided that Z is a continuous
random variable and has the following regularity conditions:
|h (Z) | < ∞,        and        ∃δ > 0 : Pr {Z > δ} > 0, Pr {Z < −δ} > 0.
Furthermore, there is more than one measure µX that makes I (X; Y ) = ∞. In fact, input X can
be either a continuous or a discrete random variable, i.e., one can find both an absolutely continuous
measure with pdf fX and discrete pmf pX such that I (X; Y ) is infinity when the measure on input
is either fX or pX .
The proof can be found in Section 6.2 and uses some of the results that we prove later in the
paper.
Remark 2. As an example, consider an AGSDN channel with X = (0, u) for an arbitrary u > 0,
and σ(x) = x^α for α ≠ 0. For this channel, we have C = +∞ if we have no input cost constraints.
Setting α = 1, this shows that the capacity of the fast-fading channel given in (3) is infinity if
c0 = 0; that is when there is no additive noise. This parallels a similar result given in [14] for
complex Gaussian fading channels.
² We only require monotonicity here, and not strict monotonicity.
Remark 3. It is known that increasing the noise variance of an AWGN channel decreases its
capacity. However, we show that this is no longer the case for signal-dependent noise channels:
Consider two AGSDN channels with parameters σ1 (x) and σ2 (x), respectively, which are defined
over X = (0, 1) with the following formulas:
σ1 (x) = 1,        σ2 (x) = 1/x.
No input cost constraints are imposed. It is clear that σ2 (x) > σ1 (x) for all x ∈ X . However,
by considering the constraint 0 < X < 1, from Theorem 1 we obtain that the capacity of the first
channel is finite, while from Theorem 2, we obtain that the capacity of the second channel is ∞.
Therefore, the constraint σ2 (x) > σ1 (x) for all x ∈ X does not necessarily imply that the capacity
of an AGSDN channel with σ2 (x) is less than or equal to the capacity of an AGSDN with σ1 (x).
3.2 Lower Bounds on Capacity
To compute capacity from (5), one has to take maximum over probability measures in a potentially
large class F. Practically speaking, one can only find a finite number of measures µ1 , µ2 , · · · , µk
in F and evaluate input/output mutual information for them. Ideally, {µi } should form an ε-covering of the entire F (with an appropriate distance metric), so that mutual information at
every arbitrary measure in F can be approximated with one of the measures µi . This can be
computationally cumbersome, even for measures defined on a finite interval. As a result, it is
desirable to find explicit lower bounds on the capacity. Observe that I (X; Y ) = h (Y ) − h (Y |X).
To compute the term h (Y |X), observe that given X = x, we have Y = x + σ(x) · Z and thus
h (Y |X = x) = log σ(x) + h (Z) (see Lemma 2). Thus,
h (Y |X) = E [log σ(X)] + h (Z) .
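(This identity is immediate from the translation invariance and scaling property of differential entropy: for fixed x, h (x + σ(x) · Z) = h (σ(x) · Z) = log σ(x) + h (Z), and averaging over X gives the stated expression.)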
However, the term h (Y ) is more challenging to handle. Authors in [1] consider an AGSDN channel
with σ(x) = √(c0² + c1²x) for x ≥ 0, and show that h (Y ) ≥ h (X) and hence I (X; Y ) ≥
h (X)−h (Y |X). This implies that instead of maximizing I (X; Y ), one can maximize h (X)−h (Y |X)
to obtain a lower bound.
The proof of the relation h (Y ) ≥ h (X) in [1] is non-trivial; we review it here to motivate our
own techniques in this paper. First consider the special case of c1 = 0. In this case, we get σ(x) = c0
and the AGSDN reduces to an AWGN channel Y = X + Z. In this special case, one obtains the desired
equation by writing
h (Y ) ≥ h (Y |Z) = h (X + Z|Z) = h (X|Z) = h (X) .
(7)
However, the above argument does not extend to the case of c1 > 0 since σ(x) = √(c0² + c1²x) depends
on x. As argued in [1], without loss of generality, one may assume that c0 = 0; this is because one
can express a signal-dependent noise channel with σ(x) = √(c0² + c1²x) as
Y = X + c1 √X Z1 + c0 Z0 ,
where Z0 and Z1 are independent standard normal variables. Thus, we can write Y = Y1 + c0 Z0
where Y1 = X + c1 √X Z1 . From the argument for AWGN channels, we have that h (Y ) ≥ h (Y1 ).
Thus, it suffices to show that h (Y1 ) ≥ h (X). This is the special case of the problem for c0 = 0 and
corresponds to σ(x) = c1 √x.
To show h (Y ) ≥ h (X) when Y = X + c1 √X Z, more advanced ideas are utilized in [1]. The
key observation is the following: assume that
X ∼ gX (x) = (1/α) e^{−x/α} 1[x ≥ 0],
i.e., X is exponentially distributed with mean E [X] = α. Then Y has density
gY (y) = (1/√(α(α + 2c1²))) exp( (√α y − √(α + 2c1²) |y|) / (√α c1²) ).
Then, for any arbitrary input distribution fX , from the data processing property of the relative
entropy, we have
D (fY ‖ gY ) ≤ D (fX ‖ gX ),
where fY is the output density for input density fX . Once simplified, this equation leads to h (fY ) ≥
h (fX ).
The above argument crucially depends on the particular form of the output distribution corresponding to the input exponential distribution. It is a specific argument that works for the specific choice of σ(x) = √(c0² + c1² x) and a normal distribution for Z, and cannot be readily extended to other
choices of σ(·) and fZ (z). In this paper, we propose two approaches to handle more general settings:
• (Idea 1:) We provide the following novel general lemma that establishes h (Y ) ≥ h (X) for a
large class of ASDN channels.
Lemma 1. Take an arbitrary channel characterized by the conditional pdf fY |X (·|x) satisfying
∫_X fY|X(y|x) dx ≤ 1,   ∀y ∈ Y,   (8)
where X and Y are the support of channel input X and channel output Y , respectively. Take
an arbitrary input pdf fX (x) on X resulting in an output pdf fY (y) on Y. Assuming that
h (X) and h (Y ) exist, we have
h (Y ) ≥ h (X)
The proof is provided in Section 6.7.
As an example, Lemma 1 yields an alternative proof for the result of [1] for an AGSDN channel. Note that, as we mentioned before, in order to prove that h(Y) ≥ h(X) for σ(x) = √(c0² + c1² x), we only need to prove it for σ(x) = c1 √x. To this end, observe that since X ⊆ [0, +∞), we have
∫_X fY|X(y|x) dx ≤ ∫_0^∞ (1/√(2π c1² x)) e^{−(y−x)²/(2c1² x)} dx
             = ∫_0^∞ √(2/(π c1²)) e^{−(y−v²)²/(2c1² v²)} dv
             = { 1 if y ≥ 0;  e^{2y/c1²} if y < 0 }
             ≤ 1,   (9)
where x = v², and v ≥ 0. The proof for equation (9) is given in Appendix A. (A short numerical check of (9) is sketched at the end of this subsection.)
• (Idea 2:) We provide a variation of the type of argument given in (7) by introducing a number
of new steps. This would adapt the argument to ASDN channels.
In the following sections, we discuss the above two ideas separately.
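As promised above, here is a small numerical check of equation (9) (our own illustration, not part of the original argument; the constant c1 = 1, the quadrature routine, and the grid of y values are arbitrary choices, and SciPy is assumed to be available). It evaluates ∫_0^∞ fY|X(y|x) dx for σ(x) = c1√x and Z ∼ N(0,1), and compares it with the closed form in (9).

```python
# Hypothetical sanity check of equation (9); c1 and the y-grid are arbitrary.
import numpy as np
from scipy.integrate import quad

c1 = 1.0

def integrand(x, y):
    # f_{Y|X}(y|x) for sigma(x) = c1*sqrt(x) and Z ~ N(0,1)
    var = c1**2 * x
    return np.exp(-(y - x)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for y in [-2.0, -0.5, 0.5, 2.0, 10.0]:
    val, _ = quad(integrand, 0, np.inf, args=(y,), limit=200)
    predicted = 1.0 if y >= 0 else np.exp(2 * y / c1**2)
    print(f"y={y:5.1f}  integral={val:.6f}  predicted={predicted:.6f}")
```

In every case the integral stays at or below 1, which is exactly the hypothesis (8) of Lemma 1.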
3.3
First Idea for Lower Bound
Theorem 3. Assume an ASDN channel defined in (1), where σ : (ℓ, u) ↦ (0, +∞) with −∞ ≤ ℓ < u ≤ +∞, and noise with pdf fZ(z) such that
∫_ℓ^u (1/σ(x)) fZ((y − x)/σ(x)) dx ≤ 1,   ∀y,   (10)
1/σ(x) is Riemann integrable on (ℓ, u).   (11)
Then, if X is a continuous random variable with pdf fX(x) supported over (ℓ, u),
I(X;Y) ≥ h(ϕ(X)) − h(Z),
provided that the integrals defining h (ϕ(X)) and h (Z) converge to a real number or ±∞. The
function ϕ(x) is an increasing function of x defined by
ϕ(x) = ∫_c^x (1/σ(t)) dt,   ∀x ∈ (ℓ, u),   (12)
where c ∈ (`, u) is arbitrary.
Remark 4. Note that for any c ∈ (`, u), ϕ(x) is well defined (see Definition 6). By selecting a
different c′ ∈ (ℓ, u) we obtain a different function ϕ′(x) such that
ϕ′(x) − ϕ(x) = ∫_{c′}^{c} (1/σ(t)) dt < ∞.
However, h (ϕ(X)) is invariant with respect to adding constant terms, and thus invariant with
respect to different choices of c ∈ (`, u).
The above theorem is proved in Section 6.3.
Corollary 1. Let W = ϕ(X). Since ϕ(·) is a one-to-one function (as σ(x) > 0), we obtain
max_{µX ∈ F ∩ AC} h(ϕ(X)) − h(Z) = max_{fW ∈ G} h(W) − h(Z),
where F is defined in (6), and W ∼ fW belongs to
G = { fW(·) : µW ∈ AC, supp(µW) ⊆ ϕ(X), E[gi(ϕ⁻¹(W))] ≤ 0 for all i = 1, · · · , k }.
Here ϕ(X) = {ϕ(x) : x ∈ X}. Hence, from Theorem 3 we obtain that
max_{µX ∈ F} I(X;Y) ≥ max_{fW ∈ G} h(W) − h(Z).
In order to find the maximum of h (W ) over fW ∈ G, we can use known results on maximum
entropy probability distributions, e.g., see [16, Chapter 3.1].
Corollary 2. Consider an ASDN channel satisfying (10) and (11). Assume that the only input
constraint is X = (ℓ, u), i.e., ℓ < X < u. Then, from Corollary 1, we obtain the lower bound
max_{fW ∈ G} h(W) − h(Z) = log( ∫_ℓ^u (1/σ(x)) dx ) − h(Z),
by taking a uniform distribution for fW (w) over ϕ(X ) if this set is bounded [16, Section 3.1]. Else,
if ϕ(X ) has an infinite length, the capacity is infinity by choosing a pdf for W whose differential
entropy is infinity (see Example 1). The equivalent pdf fX (x) for X is the pdf of ϕ−1 (W ).
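As an illustration of how the Corollary 2 bound can be evaluated in practice, the snippet below (a minimal sketch; the particular choice σ(x) = √(1 + x) on X = (0, 20) and the Gaussian noise are our own illustrative assumptions, not prescribed by the corollary, and SciPy is assumed) computes log ∫_ℓ^u (1/σ(x)) dx − h(Z) numerically.

```python
# Hypothetical evaluation of the Corollary 2 lower bound for an example sigma.
import numpy as np
from scipy.integrate import quad

def corollary2_lower_bound(sigma, lo, up, h_Z):
    """log( int_lo^up 1/sigma(x) dx ) - h(Z), in nats."""
    integral, _ = quad(lambda x: 1.0 / sigma(x), lo, up)
    return np.log(integral) - h_Z

# Example: ASDN channel with sigma(x) = sqrt(1 + x) on X = (0, 20), Z ~ N(0, 1).
h_gauss = 0.5 * np.log(2 * np.pi * np.e)      # differential entropy of N(0,1)
bound = corollary2_lower_bound(lambda x: np.sqrt(1.0 + x), 0.0, 20.0, h_gauss)
print(f"Corollary 2 lower bound: {bound:.4f} nats")
```

The same routine, applied with a larger interval, shows the bound (and hence the capacity) growing without limit when ϕ(X) has infinite length, consistent with the discussion above.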
For more insight, we provide the following example.
Example 2. Consider an AWGN channel (namely, an AGSDN channel with σ(x) = σ0) with X = R and Z ∼ N(0,1). Let us restrict to measures that satisfy the power constraint E[X²] ≤ P; that is, g1(x) = x² − P. Since
∫_R (1/σ(x)) fZ((y − x)/σ(x)) dx = ∫_R (1/√(2πσ0²)) e^{−(y−x)²/(2σ0²)} dx = 1,
we can apply Corollary 1. Here W = ϕ(X) = X/σ0; thus, the lower bound is
C ≥ max_{fW(·): E[W²] ≤ P/σ0²} h(W) − h(Z) = (1/2) log(P/σ0²),   (13)
where the maximum is achieved by the Gaussian distribution W ∼ N(0, √P/σ0) [17, Section 12.1]. It is well known that the capacity of the AWGN channel is
C = (1/2) log(1 + P/σ0²).   (14)
Comparing (14) and (13), we see that the lower bound is very close to the capacity in the high SNR
regime.
As another example, consider the constraints X ≥ 0, and E [X] ≤ α on admissible input measures. Here, we obtain the lower bound
max_{fW(·): W ≥ 0, E[W] ≤ α/σ0} h(W) − h(Z) = (1/2) log( α² e / (2π σ0²) ),
where we used the fact that the maximum is achieved by the exponential distribution fW (w) =
σ0 /α exp(−wσ0 /α) for w ≥ 0 and fW (w) = 0 for w < 0 [17, Section 12.1]. Unlike the first example
above, an exact capacity formula for this channel is not known.
3.4
Second Idea for Lower Bound
Now, we are going to provide another lower bound, which is more appropriate for channels in which Z is either non-negative or non-positive and σ(x) is a monotonic function. An example of
such channels is the molecular timing channel discussed in the introduction.
Theorem 4. Assume an ASDN channel defined in (1) with σ : (`, u) 7→ (0, ∞) for −∞ ≤ ` < u ≤
+∞. If X is a continuous random variable with pdf fX (x), and
σ(x) is continuous and monotonic over (`, u),
(15)
1/σ(x) is Riemann integrable on (ℓ, u),   (16)
then
I (X; Y ) ≥ αh (ψ(X)) − β,
provided that α, β are well-defined, and α > 0. In order to define the variables α, β, and the
function ψ(x), take some arbitrary δ > 0 and proceed as follows:
• If the function σ(x) is increasing over (ℓ, u), let
ψ(x) = δ log σ(x) + ∫_c^x (1/σ(t)) dt,   α = Pr{Z ≥ δ},   β = α h(Z|Z ≥ δ) + H2(α).
• If the function σ(x) is decreasing over (ℓ, u), let
ψ(x) = −δ log σ(x) + ∫_c^x (1/σ(t)) dt,   α = Pr{Z ≤ −δ},   β = α h(Z|Z ≤ −δ) + H2(α),
where c ∈ (`, u) is arbitrary, and
H2 (p) := −p log p − (1 − p) log (1 − p).
Remark 5. Observe that in both cases, ψ(x) is a strictly increasing function of x defined over
(`, u), as σ(x) > 0 and log(x) is increasing. Similar to Remark 4, the choice of c ∈ (`, u) does not
affect the value of h (ψ(X)), and hence the lower bound. However, the choice of δ > 0 affects the
lower bound.
The above theorem is proved in Section 6.4.
Corollary 3. Similar to Corollary 1, let V = ψ(X). Since ψ(·) is a one-to-one (strictly increasing)
function, we obtain
max_{µX ∈ F ∩ AC} α h(ψ(X)) − β = max_{fV ∈ G} α h(V) − β,
where F is defined in (6), and V ∼ fV belongs to
G = { fV(·) : µV ∈ AC, supp(µV) ⊆ ψ(X), E[gi(ψ⁻¹(V))] ≤ 0 for all i = 1, · · · , k }.
Hence, from Theorem 4 we obtain that
max_{µX ∈ F} I(X;Y) ≥ α max_{fV ∈ G} h(V) − β,
where α and β are constants defined in Theorem 4.
As mentioned earlier, to maximize h (V ) over fV ∈ G, we can use known results on maximum
entropy probability distributions, e.g., see [16, Chapter 3.1].
Corollary 4. Consider an ASDN channel satisfying (15) and (16). Assume that the only input
constraint is X = (ℓ, u), i.e., ℓ < X < u. Then, from Corollary 3, we obtain the lower bound
α max_{fV ∈ G} h(V) − β = α log( δ log( σ(u⁻)/σ(ℓ⁺) ) + ∫_ℓ^u (1/σ(x)) dx ) − β,
where α and β are defined in Theorem 4, and
σ(ℓ⁺) := lim_{x↓ℓ} σ(x),   σ(u⁻) := lim_{x↑u} σ(x).
The lower bound is achieved by taking a uniform distribution for fV(v) over ψ(X) if this set is bounded [16, Section 3.1]. Else, if ψ(X) has an infinite length, the capacity is infinity by choosing a pdf fV(v) such that h(V) = +∞ (see Example 1). The equivalent pdf fX(x) for X is the pdf of ψ⁻¹(V).
3.5
An Upper Bound
We begin by reviewing the upper bound given in [1] to motivate our own upper bound. The upper bound in [1] works by utilizing Topsoe's inequality [18] to bound the mutual information I(X;Y) from above as follows:
I(X;Y) ≤ E_{µX}[ D( f(y|x) ‖ q(y) ) ],
for any arbitrary pdf q(y) on the output Y. The distribution q(y) is chosen carefully to allow for calculation of the above KL divergence. The particular form of σ(x) = √(c0² + c1² x) makes explicit
calculations possible. The second difficulty in calculating the above expression is that we need to
take expected value over input measure µX . However, the capacity achieving input measure is not
known. This difficulty is addressed by the technique of “input distributions that escape to infinity”,
under some assumptions about the peak constraint.
In this part, we give an upper bound based on the KL symmetrized upper bound of [15]. The
idea is that
I(X;Y) = D(µX,Y ‖ µX µY)
       ≤ D(µX,Y ‖ µX µY) + D(µX µY ‖ µX,Y)
       =: Dsym(µX,Y ‖ µX µY).
Our upper bound has the advantage of being applicable to a large class of σ(x). To state this upper
bound, let Cov (X, Y ) := E [XY ] − E [X] E [Y ] be the covariance function between two random
variables X and Y .
Theorem 5. For any AGSDN channel defined in (1), we have
I(X;Y) ≤ −(1/2) Cov( X² + σ²(X), 1/σ²(X) ) + Cov( X, X/σ²(X) ),
provided that the covariance terms on the right hand side are finite.
The proof can be found in Section 6.5.
Corollary 5. For an AGSDN channel with parameters σ(x), Z ∼ N(0,1), and X = [0, u], if the functions σ(x) and x/σ²(x) are increasing over X, σ(0) > 0, and x² + σ²(x) is convex over X, then
max_{µX: 0 ≤ X ≤ u, E[X] ≤ α} I(X;Y) ≤ F/8 if α ≥ u/2, and ≤ (α/2u)(1 − α/u) F if α < u/2,
where
F = u²/σ²(u) + u²/σ²(0) + σ²(0)/σ²(u) + σ²(u)/σ²(0) − 2.
The corollary is proved in Section 6.6.
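For concreteness, the following sketch (our own illustration, not from the paper; the particular σ(x) = √(c0² + c1² x) and the values of u, α, c0, c1 are arbitrary assumptions) evaluates F and the resulting upper bound of Corollary 5 in nats.

```python
# Hypothetical evaluation of the Corollary 5 upper bound (natural logs / nats).
import numpy as np

def corollary5_upper_bound(sigma, u, alpha):
    s0, su = sigma(0.0), sigma(u)
    F = u**2 / su**2 + u**2 / s0**2 + s0**2 / su**2 + su**2 / s0**2 - 2.0
    if alpha >= u / 2:
        return F / 8.0
    return (alpha / (2.0 * u)) * (1.0 - alpha / u) * F

# Example: sigma(x) = sqrt(c0^2 + c1^2 x), peak u = 5, average constraint alpha = 2.5.
c0, c1 = 1.0, 1.0
sigma = lambda x: np.sqrt(c0**2 + c1**2 * x)
print(f"Upper bound: {corollary5_upper_bound(sigma, u=5.0, alpha=2.5):.4f} nats")
```

As Remark 6 notes, letting σ(0) → 0 in this expression drives F, and hence the bound, to infinity.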
Remark 6. Even though Corollary 5 is with the assumption σ(0) > 0, if we formally set σ(0) = 0,
we see that F and the upper bound on capacity becomes infinity. This is consistent with Theorem 2
when σ(0) = 0.
Corollary 6. The particular choice of σ(x) = √(c0² + c1² x) that was motivated by the applications discussed in the Introduction has the property that σ(x) and x/σ²(x) are increasing, and Theorem 5 can be
applied.
4
Some Useful Lemmas
In this section, we provide three lemmas used in the proof of theorems in this paper.
Lemma 2. In an ASDN channel defined in (1), with continuous random variable noise Z with pdf
fZ (·), and noise coefficient σ(x) > 0 (µX −a.e.), the conditional measure µY |X (·|x) has the following
pdf:
fY|X(y|x) = (1/σ(x)) fZ((y − x)/σ(x)),   µX,Y-a.e.
Moreover, Y is a continuous random variable with the pdf
fY(y) = E[ (1/σ(X)) fZ((y − X)/σ(X)) ].
Furthermore, if h(Z) exists, h(Y|X) can be defined and is equal to
h(Y|X) = E[log σ(X)] + h(Z).
The lemma is proved in Section 6.8.
Lemma 3. Let X be a continuous random variable with pdf fX (x). For any function σ : (`, u) 7→
[0, +∞) such that σ(x) is Riemann integrable over (`, u) and σ(x) > 0 (a.e), where −∞ ≤ ` < u ≤
+∞, we have that
h(X) + E[log σ(X)] = h(ϕ(X)),   (17)
where
ϕ(x) = ∫_c^x σ(t) dt,   (18)
where c ∈ (ℓ, u) is an arbitrary constant.
Note that if the left-hand side does not exist, or becomes ±∞, the same occurs for the right-hand
side and vice versa.
The lemma is proved in Section 6.9.
Lemma 4. Let X be a random variable with probability measure µX , and the functions w(x) and
v(x) be increasing over [`, u], where −∞ < ` < u < +∞. If v(x) is convex over [`, u], then
max_{µX: ℓ ≤ X ≤ u, E[X] ≤ α} Cov(w(X), v(X)) ≤ β [w(u) − w(ℓ)][v(u) − v(ℓ)],   (19)
where
β = 1/4 if α ≥ (ℓ + u)/2, and β = (u − α)(α − ℓ)/(u − ℓ)² if α < (ℓ + u)/2.
Furthermore, for the case α ≥ (ℓ + u)/2, a maximizer of (19) is the pmf
pX(ℓ) = pX(u) = 1/2.
For the case α < (ℓ + u)/2, if v(x) is linear, a maximizer of (19) is the pmf
pX(ℓ) = 1 − pX(u) = (u − α)/(u − ℓ).
The proof is given in Section 6.10.
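The two-point structure of the maximizer in Lemma 4 is easy to verify numerically. The sketch below is purely illustrative (our own choices w(x) = x², v(x) = x, ℓ = 0, u = 1, α = 0.3); it compares the covariance achieved by the claimed two-point pmf against randomly drawn feasible discrete distributions and against the bound (19).

```python
# Hypothetical numerical check of Lemma 4 (linear v, alpha < (l+u)/2 case).
import numpy as np

rng = np.random.default_rng(0)
l, u, alpha = 0.0, 1.0, 0.3
w = lambda x: x**2          # increasing on [l, u]
v = lambda x: x             # increasing and convex (linear)

def cov(xs, ps):
    ew, ev = np.dot(ps, w(xs)), np.dot(ps, v(xs))
    return np.dot(ps, w(xs) * v(xs)) - ew * ev

# Claimed maximizer: mass (u - alpha)/(u - l) at l, the rest at u.
p_l = (u - alpha) / (u - l)
best_claimed = cov(np.array([l, u]), np.array([p_l, 1.0 - p_l]))

# Random feasible 4-point distributions on [l, u] with E[X] <= alpha.
best_random = -np.inf
for _ in range(20000):
    xs = rng.uniform(l, u, size=4)
    ps = rng.dirichlet(np.ones(4))
    if np.dot(ps, xs) <= alpha:
        best_random = max(best_random, cov(xs, ps))

beta = (u - alpha) * (alpha - l) / (u - l)**2
print("claimed maximizer:", round(best_claimed, 4))
print("best random found:", round(best_random, 4))
print("Lemma 4 bound:    ", round(beta * (w(u) - w(l)) * (v(u) - v(l)), 4))
```

In this example the claimed two-point pmf attains the bound exactly, and no randomly drawn feasible distribution exceeds it.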
[Figure 1: Capacity and the symmetrized-divergence (KL) upper bound, in nats per channel use, in terms of c0² for an AGSDN channel with A = 5, α = 2.5, c1 = 1 and σ(x) = √(c0² + x). Legend: Capacity, KL Upper Bound.]
[Figure 2: Capacity and the lower bounds of Corollaries 2 and 4, in nats per channel use, in terms of A for an AGSDN channel with σ(x) = √(1 + x). Legend: Capacity, Corollary 2 Lower Bound, Corollary 4 Lower Bound.]
5
Numerical Results
In this section, some numerical results are given for σ(x) = √(c0² + x) and Z ∼ N(0,1). The upper bound of Corollary 5 and the capacity are depicted on a logarithmic scale in Fig. 1, where we have considered the peak constraint A = 5 and the average constraint α = 2.5. It can be observed
that the distance between the upper bound and the capacity is a small constant in the logarithmic
scale and low SNR regime. This is consistent with [15] that argues that the upper bound based on
symmetrized KL divergence is mostly suitable for the low SNR regime.
The lower bounds of Corollaries 2 and 4 are plotted in Fig. 2 for the function σ(x) = √(c0² + x) in terms of the peak constraint A. Here, c0 = 1 is assumed. The lower bound of Corollary 2 for 0 < X < A is computed by the following closed-form formula:
log( ∫_0^A (1/√(c0² + x)) dx ) − h(Z) = log( 2√(A + c0²) − 2c0 ) − (1/2) log(2πe),
while the lower bound of Corollary 4 equals
α log( δ log(σ(A)/σ(0)) + ∫_0^A (1/√(c0² + x)) dx ) − β = α log( δ log( √(A + c0²)/c0 ) + 2√(A + c0²) − 2c0 ) − β,
where δ > 0 and
α = Pr {Z ≥ δ} ,
β = αh (Z|Z ≥ δ) − α log α − (1 − α) log (1 − α).
We maximized over δ in order to find the lower bound of Corollary 4. The first lower bound is
better than the second one mainly because of the multiplicative coefficient α of the second lower
bound. Since the second lower bound is for a more general class of channels, we should consider the
positive (or negative) part of the support of Z, causing a multiplicative coefficient of 1/2 for the
Gaussian noise. However, if the support of Z is positive (or negative) reals, the two lower bounds
do not differ much.
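For reproducibility, the following sketch (our own, not from the paper; it assumes SciPy, hard-codes σ(x) = √(c0² + x) with c0 = 1 and Z ∼ N(0,1), and uses a simple grid search over δ) evaluates the two closed-form lower bounds above.

```python
# Hypothetical evaluation of the two lower bounds plotted in Fig. 2 (nats).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

c0 = 1.0
h_Z = 0.5 * np.log(2 * np.pi * np.e)                  # h(Z) for Z ~ N(0,1)

def lb_corollary2(A):
    return np.log(2 * np.sqrt(A + c0**2) - 2 * c0) - h_Z

def lb_corollary4(A, delta):
    alpha = norm.sf(delta)                            # Pr{Z >= delta}
    # h(Z | Z >= delta): entropy of the truncated Gaussian, by quadrature.
    f = lambda z: (norm.pdf(z) / alpha) * -np.log(norm.pdf(z) / alpha)
    h_trunc, _ = quad(f, delta, np.inf)
    H2 = -alpha * np.log(alpha) - (1 - alpha) * np.log(1 - alpha)
    beta = alpha * h_trunc + H2
    length = delta * np.log(np.sqrt(A + c0**2) / c0) + 2 * np.sqrt(A + c0**2) - 2 * c0
    return alpha * np.log(length) - beta

A = 100.0
deltas = np.linspace(0.01, 3.0, 300)
print("Corollary 2:", round(lb_corollary2(A), 4))
print("Corollary 4:", round(max(lb_corollary4(A, d) for d in deltas), 4))
```

Consistent with the discussion above, the Corollary 2 bound dominates here mainly because of the multiplicative factor α ≈ 1/2 in the Corollary 4 bound for symmetric Gaussian noise.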
6
Proofs
6.1
Proof of Theorem 1
Finiteness of capacity: The first step is to show that the capacity is finite:
sup_{µX ∈ F} I(X;Y) < ∞.   (20)
To prove this, it suffices to show that the supremum of both h (Y ) and h (Y |X) over µX ∈ F are
finite, i.e.,
|h (Y ) |, |h (Y |X) | < +∞, uniformly on µX ∈ F.
(21)
Utilizing Lemma 2, the existence and boundedness of h (Y |X) is obtained as follows:
|h (Y |X) | ≤ max{| log σ` |, | log σu |} + |h (Z) | < ∞,
uniformly on F. From Lemma 2, we obtain that Y is continuous with a pdf fY (y). To prove that
the integral defining h (Y ) is convergent to a finite value (existence of entropy), and furthermore
the integral is convergent to a value that is bounded uniformly on F, it is sufficient to show that
there are some positive real γ, m̄ and v such that for any µX ∈ F, we have [19]:
sup_{y ∈ R} fY(y) < m̄,   (22)
E[|Y|^γ] < v.   (23)
Also, from Lemma 2, we obtain that for any µX ∈ F
fY(y) ≤ m/σℓ.
Thus, (22) holds with m̄ = m/σ` . In order to prove (23), note that
E[|Y|^γ] ≤ E[(|X| + σu |Z|)^γ]
        ≤ 2^γ E[max{|X|^γ, σu^γ |Z|^γ}]
        ≤ 2^γ E[|X|^γ] + (2σu)^γ E[|Z|^γ]
        ≤ 2^γ u^γ + (2σu)^γ α,
uniformly on F. Thus, h (Y ) is well-defined and uniformly bounded on F.
Hence, from the definition of mutual information we obtain that
I (X; Y ) = h (Y ) − h (Y |X)
(24)
is bounded uniformly for µX ∈ F.
Existence of a maximizer: Let
C = sup_{µX ∈ F} I(X;Y) < ∞.   (25)
We would like to prove that the above supremum is a maximum. Equation (25) implies the existence of a sequence of measures {µX^(k)}_{k=1}^∞ in F such that
lim_{k→∞} I(Xk; Yk) = C,
where Xk ∼ µX^(k), and Yk ∼ µY^(k) is the output of the channel when the input is Xk. Furthermore, without loss of generality, we can assume that {µX^(k)}_{k=1}^∞ is convergent (in the Lévy measure) to a measure µX* ∈ F. The reason is that since X is compact, the set F is also compact with respect to the Lévy measure [10, Proposition 2]. Thus, any sequence of measures in F has a convergent subsequence. With no loss of generality we can take the subsequence as {µX^(k)}_{k=1}^∞. Thus, from
convergence in Lévy measure, we know that there is µ∗X ∈ F such that
lim_{k→∞} E[g(Xk)] = E[g(X*)],   (26)
for all g : R 7→ C such that supx∈R |g(x)| < +∞. We would like to prove that
I (X ∗ ; Y ∗ ) = C,
(27)
where Y ∗ ∼ µ∗Y is the output measure of the channel when the input measure is µ∗X . This will
complete the proof.
From the argument given in the first part of the proof on “Finiteness of capacity”, h (Y ∗ |X ∗ )
and h (Y ∗ ) are well-defined and finite. As a result to show (27), we only need to prove that
lim_{k→∞} h(Yk|Xk) = h(Y*|X*),   (28)
lim_{k→∞} h(Yk) = h(Y*).   (29)
Since −∞ < σ` < σu < +∞, (28) is obtained from (26) and Lemma 2.
In order to prove (29), we proceed as follows:
• Step 1: We begin by showing that the sequence {µY^(k)}_{k=1}^∞ is a Cauchy sequence with respect to total variation, i.e.,
∀ε > 0, ∃N : m, n ≥ N ⇒ ‖µY^(m) − µY^(n)‖_V ≤ ε,   (30)
where for any two arbitrary probability measures µA and µB, the total variation distance is defined by [16, p. 31]
‖µA − µB‖ := sup_∆ Σ_i |µA(Ei) − µB(Ei)|,
where the supremum is taken over all finite partitions ∆ = {E1, · · · , Em} ⊆ B(R).
• Step 2: Having established Step 1 above, we utilize the fact that the space of probability measures is complete with respect to the total variation metric. To show this, note that by Lemma 2, all the Yk's have a pdf, and hence the total variation can be expressed in terms of the ‖·‖_L1 norm between pdfs [16, Lemma 1.5.3]. From [20, p. 276] we obtain that this space of pdfs is complete with respect to the ‖·‖_L1 norm.
As a result, µY^(k) converges to some measure Ŷ ∼ µ̂Y with respect to the total variation metric.
We further claim that this convergence implies that
lim_{k→∞} h(Yk) = h(Ŷ).   (31)
The reason is that from (22) and (23), we see that {fY^(k)} and f_Ŷ are uniformly bounded and have finite γ-moments. Therefore, (31) follows from [19, Theorem 1]. Thus, in Step 2, we obtain that the sequence h(Yk) has a limit.
• Step 3: We show that the limit found in Step 2 is equal to h (Y ∗ ), i.e.,
h(Ŷ) = h(Y*).   (32)
This completes the proof of (29).
Hence, it only remains to prove (30) and (32).
Proof of (30): Since {I(Xk; Yk)}_{k=1}^∞ converges to C, for any ε′ > 0, there exists N such that:
|C − I(Xk; Yk)| ≤ ε′,   ∀k ≥ N.
Now, consider m, n ≥ N. Let Q be a uniform Bernoulli random variable, independent of all previously defined variables. When Q = 0, we sample from the measure µX^(m) and when Q = 1, we sample from the measure µX^(n). This induces the measure X̃ ∼ µ_X̃ defined as follows:
µ_X̃ = (1/2) µX^(m) + (1/2) µX^(n).
Let Ỹ ∼ µ_Ỹ be the output of the channel when the input is X̃. We have a Markov chain Q − X̃ − Ỹ. Note that
I(X̃; Ỹ | Q) = (1/2) I(Xm; Ym) + (1/2) I(Xn; Yn).
From the concavity of mutual information in the input measure, we obtain that:
I(X̃; Ỹ) ≥ (1/2) I(Xm; Ym) + (1/2) I(Xn; Yn) ≥ C − ε′.
Since F is an intersection of half spaces, it is convex and as a result µ_X̃ ∈ F. Thus, I(X̃; Ỹ) ≤ C, and we obtain that
I(X̃; Ỹ) − I(X̃; Ỹ | Q) ≤ ε′.
Because of the Markov chain Q − X̃ − Ỹ, we obtain I(Ỹ; Q | X̃) = 0 and as a result:
I(Ỹ; Q) ≤ ε′  ⇒  D(µ_Ỹ,Q ‖ µ_Ỹ µ_Q) ≤ ε′.
From Pinsker's inequality we obtain that
‖µ_Ỹ,Q − µ_Ỹ µ_Q‖_V ≤ √(2ε′),   (33)
where ‖µ_Ỹ,Q − µ_Ỹ µ_Q‖_V is the total variation between the measures µ_Ỹ,Q and µ_Ỹ µ_Q. Note that
‖µ_Ỹ,Q − µ_Ỹ µ_Q‖_V = (1/2) ‖µY^(m) − µ_Ỹ‖_V + (1/2) ‖µY^(n) − µ_Ỹ‖_V.   (34)
Therefore, from (33) and (34), we obtain that
‖µY^(m) − µ_Ỹ‖_V, ‖µY^(n) − µ_Ỹ‖_V ≤ 2√(2ε′).
As a result,
‖µY^(m) − µY^(n)‖_V ≤ 4√(2ε′).
Hence, by taking ε′ ≤ ε²/32, we obtain that {µY^(k)}_{k=1}^∞ is a Cauchy sequence.
Proof of (32): To this end, it suffices to prove that
Φ_Ŷ(ω) = Φ_{Y*}(ω),   ∀ω ∈ R,
where ΦX (ω) := E [exp(jωX)] is the characteristic function of the random variable X.
Since Yk converges to Ŷ in total variation, and convergence in total variation is stronger than weak convergence [16, p. 31], from (26) we obtain that their characteristic functions ΦYk(ω) also converge to Φ_Ŷ(ω) pointwise.
Hence, it suffices to prove that ΦYk(ω) converges to Φ_{Y*}(ω) pointwise. From (1), we obtain that
ΦYk(ω) = E[ e^{jω(Xk + σ(Xk)Z)} ] = E[ e^{jωXk} ΦZ(σ(Xk)ω) ].
Similarly,
Φ_{Y*}(ω) = E[ e^{jωX*} ΦZ(σ(X*)ω) ].
Since {Xk} converges to X* in Lévy measure and the function g(x) = e^{jωx} ΦZ(σ(x)ω) is bounded:
|g(x)| = |e^{jωx} ΦZ(σ(x)ω)| ≤ |e^{jωx}| |ΦZ(σ(x)ω)| ≤ 1,
from (26) we obtain that E[g(Xk)] = ΦYk(ω) converges to E[g(X*)] = Φ_{Y*}(ω) pointwise.
Uniqueness of the output pdf: The proof is the same as the first part of the proof of [10,
Theorem 1].
This completes the proof.
6.2
Proof of Theorem 2
For a continuous input measure, we utilize a later result in the paper, namely Theorem 4, by choosing ℓ = c, u = c′ when c′ > c, or ℓ = c′, u = c when c′ < c. To use Corollary 4, observe that the
image of E under ψ(·) has infinite length. This is because the sequence {x̃i } in E was such that the
monotone function σ(·) converged to zero or infinity on that sequence. Then, it is obtained that
any pdf fX (·) such that h (ψ(X)) = +∞, makes I (X; Y ) infinity if |h (Z|Z > δ) | < ∞ (which leads
to |β| < ∞), where ψ(x) is the bijective function of x defined in the statement of Theorem 4.
In order to prove that |h (Z|Z > δ) | < ∞, let the random variable Z̄ be Z conditioned to Z > δ.
Due to the continuity of Z and the fact that Pr {Z > δ} > 0, we obtain that Z̄ has a valid pdf fZ̄ (z)
defined by
f_Z̄(z) = (1/θ) fZ(z) for z > δ, and f_Z̄(z) = 0 for z ≤ δ,
where θ := Pr{Z > δ} > 0. Since h(Z) exists and |h(Z)| < ∞, we obtain that
E[ |log(1/fZ(Z))| ] < ∞.
Hence,
|h(Z̄)| ≤ E[ |log(1/f_Z̄(Z̄))| ] ≤ −log θ + (1/θ) E[ |log(1/fZ(Z))| ] < ∞.
Therefore, h(Z|Z > δ) exists and |h(Z|Z > δ)| < ∞. A similar treatment can be used to prove |h(Z|Z ≤ −δ)| < ∞.
It remains to construct a discrete pmf with infinite mutual information. The statement of the
theorem assumes the existence of a sequence {x̃i} in an open interval E = (c, c′) ⊂ X (or E = (c′, c) if c′ < c) such that
1. c is the limit of the sequence {x̃i},
2. σ(x̃i) converges to 0 or +∞,
3. σ(·) is monotone and continuous over E.
We now make the following claim about the existence of another sequence {xi}_{i=1}^∞ ⊆ E with certain nice properties:
Claim: Suppose that one cannot find a non-empty interval [x′, x″] ⊂ E such that σ(x) = 0 for all x ∈ [x′, x″]. Then, there exist 0 < a < b and a sequence {xi}_{i=1}^∞ ⊆ E, such that
• If σ(x) is increasing,
Pr{a < Z < b} > 0,   (35)
(xi + aσ(xi), xi + bσ(xi)) ∩ (xj + aσ(xj), xj + bσ(xj)) = ∅,   ∀i ≠ j ∈ N,   (36)
0 < σ(xi) < ∞,   ∀i ∈ N.   (37)
• If σ(x) is decreasing,
Pr{−b < Z < −a} > 0,
(xi − bσ(xi), xi − aσ(xi)) ∩ (xj − bσ(xj), xj − aσ(xj)) = ∅,   ∀i ≠ j ∈ N,
0 < σ(xi) < ∞,   ∀i ∈ N.
We continue with the proof assuming that this claim is correct; we give the proof of this claim
later. To show how this claim can be used to construct a discrete pmf with infinite mutual information, consider first the possibility that the assumption of the claim fails: if σ(x) = 0 for all x ∈ [x′, x″], then Y = X whenever X ∈ [x′, x″]. Therefore, we can choose any discrete distribution supported on that interval such that H(X) = ∞; as a result, I(X;Y) = I(X;X) = H(X) = ∞.
Thus, we should only consider the case that the assumption of the claim holds. Assume that
σ(x) is increasing. The construction when σ(x) is decreasing is similar. Fix a given a, b, {xi}_{i=1}^∞ satisfying (35) and (36). Take an arbitrary pmf {pi}_{i=1}^∞ such that
Σ_i pi log(1/pi) = +∞.   (38)
Then, we define a discrete random variable X, taking values in {xi}_{i=1}^∞, such that Pr{X = xi} = pi.
We claim that I (X; Y ) = +∞. To this end, it suffices to show
I (X; Y ) ≥ Pr {a < Z < b} I (X; Y |a < Z < b) − H2 (Pr {a < Z < b}),
(39)
I (X; Y |a < Z < b) = ∞.
(40)
Proof of (39): Define the random variable E as follows: E = 0 if Z ∈ (a, b), and E = 1 if Z ∉ (a, b).
From the definition of mutual information, we have that
I (X; Y |E) − I (X; Y ) = I (Y ; E|X) − I (Y ; E) ≤ H (E) ,
Since
I (X; Y |E) = Pr {E = 0} I (X; Y |E = 0) + Pr {E = 1} I (X; Y |E = 1) ,
we conclude (39).
Proof of (40): Since
I (X; Y |a < Z < b) = H (X) − H (X|Y, a < Z < b) ,
it suffices to show that
H (X) = ∞,
H (X|Y, a < Z < b) = 0.
(41)
The equality H(X) := −Σ_i pi log pi = +∞ follows from (38). To prove the other equality, note that
Y belongs to the interval (xi + aσ(xi ), xi + bσ(xi )) when X = xi . Therefore, since the intervals
(xi + aσ(xi ), xi + bσ(xi )) are disjoint, X can be found from Y . Thus, X is a function of Y when
a < Z < b. As a result, the second equality of (41) is proved.
Now, it only remains to prove our Claim on the existence of a, b, and {xi}_{i=1}^∞.
We assume that σ(x) is increasing. The proof when σ(x) is decreasing is similar. From the assumption on Z that Pr{Z ≥ δ} > 0, we obtain that there exists δ < b < ∞ such that Pr{δ < Z < b} > 0. As a result, we select a = δ.
Since σ(x) is monotone, we cannot have σ(x′) = σ(x″) = 0 for two arbitrary distinct x′ and x″ in E, since this would imply that σ(x) = 0 for all x in between x′ and x″. As a result, we shall not worry
about the constraint (37) on {xi } because σ(xi ) = 0 can occur for at most one index i and we can
delete that element from the sequence to ensure (37).
To show the existence of {xi}_{i=1}^∞, we provide a method to find xi+1 with respect to xi. The
method is described below and illustrated in Figure 3.
Take x1 to be an arbitrary element of E. Observe that since σ(x) is continuous and increasing over E, the functions x + aσ(x) and x + bσ(x) are continuous and strictly increasing over E, and
x + aσ(x) < x + bσ(x),   ∀x ∈ E.
Therefore, for the case c′ > c (happening when σ(x̃i) converges to 0),
lim_{x→c} x + aσ(x) = lim_{x→c} x + bσ(x) = c.
Hence, for a given xi ∈ E, due to the intermediate value theorem, there exists a unique xi+1 satisfying c < xi+1 < xi < c′ such that
xi+1 + bσ(xi+1) = xi + aσ(xi).
Similarly, for the case c′ < c (happening when σ(x̃i) converges to +∞), if xi ∈ E, there exists a unique xi+1 satisfying c > xi+1 > xi > c′ such that
xi+1 + aσ(xi+1) = xi + bσ(xi).
It can be easily verified that the intervals created this way are disjoint, and the process does not stop after finitely many steps. Therefore, the theorem is proved.
[Figure 3: Possible cases for σ(x) when |c| < ∞: the construction of the sequence {xi} via the curves x + aσ(x) and x + bσ(x) (or x − aσ(x) and x − bσ(x)), for σ increasing or decreasing and for σ → +∞ or σ → 0.]
6.3
Proof of Theorem 3
From Lemma 2 we obtain that h(Y|X) exists. Hence, utilizing Lemma 1, we can write
h(Y) ≥ h(X)  ⇒  I(X;Y) ≥ h(X) − h(Y|X),   (42)
provided that
∫_ℓ^u fY|X(y|x) dx ≤ 1,
which is satisfied due to
∫_ℓ^u fY|X(y|x) dx = ∫_ℓ^u (1/σ(x)) fZ((y − x)/σ(x)) dx ≤ 1,
where the last inequality comes from the assumption of the theorem. From Lemma 2, we have that
h (Y |X) = E [log σ(X)] + h (Z) .
Therefore, (42) can be written as
I (X; Y ) ≥ h (X) − E [log σ(X)] − h (Z) .
Exploiting Lemma 3 (applied to the function 1/σ(x)) we obtain that
h(X) − E[log σ(X)] = h(ϕ(X)),
where ϕ(x) is defined in (12). Hence, the proof is complete.
6.4
Proof of Theorem 4
We only prove the case that σ(x) is an increasing function over (`, u). The proof of the theorem
for decreasing functions is similar to the increasing case and we only need to substitute Z ≥ δ with
Z ≤ −δ. We claim that
I (X; Y ) ≥ αI (X; Y |Z ≥ δ) − H2 (α).
(43)
Consider the random variable E defined as follows: E = 0 if Z ≥ δ, and E = 1 if Z < δ.
From the definition of mutual information, we have that
I (X; Y |E) − I (X; Y ) = I (Y ; E|X) − I (Y ; E) ≤ H (E) ,
Therefore, since
I (X; Y |E) = Pr {Z ≥ δ} I (X; Y |Z ≥ δ) + Pr {Z < δ} I (X; Y |Z < δ) ,
we conclude (43).
Now, we find a lower bound for I(X;Y|Z ≥ δ). From Lemma 2 we obtain that Y is a continuous random variable. We claim that
I(X;Y|Z ≥ δ) = h(Y|Z ≥ δ) − h(Y|X, Z ≥ δ)
            = h(Y|Z ≥ δ) − ( E[log σ(X)] + h(Z|Z ≥ δ) )   (44)
            = h(Y|Z ≥ δ) − h(X) − E[ log(1 + Zσ′(X)) | Z ≥ δ ]
              + h(X) + E[ log( (1 + Zσ′(X))/σ(X) ) | Z ≥ δ ] − h(Z|Z ≥ δ),   (45)
where (44) is obtained from Lemma 2 and the fact that the random variable Z conditioned to Z ≥ δ is also continuous when Pr{Z ≥ δ} > 0. Moreover, (45) is obtained by adding and subtracting the term E[log(1 + Zσ′(X)) | Z ≥ δ]. Note that we have not assumed that σ(x) is differentiable; we have only assumed that σ : (ℓ, u) ↦ (0, ∞) is continuous and monotonic over (ℓ, u). However, every monotonic function is differentiable almost everywhere, i.e., the set of points at which σ(x) is not differentiable has Lebesgue measure zero. We define σ′(x) to be equal to zero wherever σ(x) is not differentiable, and we take σ′(x) to be the derivative of σ(x) wherever it is differentiable. With this definition of σ′(x) and from the continuity of σ(x), we have that the integral of σ′(x)/σ(x) gives us back the function log(σ(x)).
Since σ(x) is an increasing positive function and Z ≥ δ > 0, we conclude that
E[ log( (1 + Zσ′(X))/σ(X) ) | Z ≥ δ ] ≥ E[ log( (1 + δσ′(X))/σ(X) ) ].   (46)
From Lemma 3 and the fact that the integral of σ′(x)/σ(x) gives us back the function log(σ(x)), we obtain that
h(X) + E[ log( (1 + δσ′(X))/σ(X) ) ] = h(ψ(X)),
where ψ(x) is defined in Theorem 4. As a result from (45), we obtain that
I(X;Y|Z ≥ δ) ≥ h(Y|Z ≥ δ) − h(X) − E[ log(1 + Zσ′(X)) | Z ≥ δ ] + h(ψ(X)) − h(Z|Z ≥ δ).   (47)
Using this inequality in conjunction with (43), we obtain a lower bound on I(X;Y). The lower bound that we would like to prove in the statement of the theorem is that
I(X;Y) ≥ α h(ψ(X)) − α h(Z|Z ≥ δ) − H2(α).
As a result, it suffices to prove that for all continuous random variables X with pdf fX(x) we have
h(Y|Z ≥ δ) − h(X) − E[ log(1 + Zσ′(X)) | Z ≥ δ ] ≥ 0.
To this end, observe that h(Y|Z ≥ δ) ≥ h(Y|Z, Z ≥ δ). Thus, if we show that
h(Y|Z, Z ≥ δ) = h(X) + E[ log(1 + Zσ′(X)) | Z ≥ δ ],   (48)
the proof is complete. We can write that
h(Y|Z, Z ≥ δ) = ∫_δ^∞ f_{Z′}(z) h(Y|Z = z) dz,
where Z′ is Z conditioned to Z ≥ δ, and the pdf of Z′ is denoted by f_{Z′}(z). By defining the function rz(x) := x + zσ(x), we obtain that
Yz = rz(X),
where Yz is Y conditioned to Z = z ≥ δ. Since σ(x) is a continuous increasing function, rz(x) is a bijection for all z ≥ δ, and so its inverse function r_z^{-1}(y) exists. Moreover, since X is continuous and rz(·) is a bijection, Yz is also a continuous random variable with pdf f_{Yz}(y) defined as follows:
f_{Yz}(y) = ( 1/(1 + zσ′(x)) ) fX(x),
where x = r_z^{-1}(y).³
Thus, we have that
h(Y|Z = z) = E[ log(1/f_{Yz}(Yz)) ]
           = E[ log(1/fX(X)) ] + E[ log(1 + zσ′(X)) ]
           = h(X) + E[ log(1 + zσ′(X)) ].
By taking the expected value over Z ≥ δ on both sides, (48) is achieved. Therefore, the theorem is
proved.
6.5
Proof of Theorem 5
Based on [15] we obtain that
I(X;Y) ≤ Dsym(µX,Y ‖ µX µY).
Utilizing Lemma 2, we obtain that the pdfs fY(y) and fY|X(y|x) exist and are well-defined. Therefore,
Dsym(µX,Y ‖ µX µY) = D(µX,Y ‖ µX µY) + D(µX µY ‖ µX,Y)
  = E_{µX,Y}[ log( fY|X(Y|X)/fY(Y) ) ] + E_{µX µY}[ log( fY(Y)/fY|X(Y|X) ) ]
  = −E_{µX,Y}[ log(1/fY|X(Y|X)) ] + E[ log(1/fY(Y)) ] + E_{µX µY}[ log(1/fY|X(Y|X)) ] − E[ log(1/fY(Y)) ]
  = E_{µX µY}[ log(1/fY|X(Y|X)) ] − h(Y|X).
Again, from Lemma 2, since Z ∼ N(0,1), we obtain that
log( 1/fY|X(y|x) ) = log( σ(x)√(2π) ) + (y − x)²/(2σ²(x)).
Therefore, since Z = (Y − X)/σ(X), we obtain that
h(Y|X) = E[ log( σ(X)√(2π) ) ] + 1/2.   (49)
³ The points where σ(x) is not differentiable affect f_{Yz}(y) only on a set of measure zero. However, note that F_{Yz}(y) = FX(r_z^{-1}(y)) is always correct, and thus the values of f_{Yz}(y) on a measure-zero set of points are not important.
In addition,
E_{µX µY}[ log(1/fY|X(Y|X)) ] = E[ log( √(2π) σ(X) ) ] + E_{µX µY}[ (Y − X)²/(2σ²(X)) ].
By expanding, we obtain that
E_{µX µY}[ (Y − X)²/σ²(X) ] = E[Y²] E[1/σ²(X)] + E[X²/σ²(X)] − 2 E[Y] E[X/σ²(X)].
By substituting Y with X + σ(X)Z and simplifying, we can write
E_{µX µY}[ (Y − X)²/σ²(X) ]
  = E[X²] E[1/σ²(X)] + E[σ²(X)] E[1/σ²(X)] + E[X²/σ²(X)] − 2 E[X] E[X/σ²(X)]
  = E[X²] E[1/σ²(X)] + E[σ²(X)] E[1/σ²(X)] − E[ (X² + σ²(X))/σ²(X) ] + 1
    + 2 E[X²/σ²(X)] − 2 E[X] E[X/σ²(X)],
which equals
1 − Cov( X² + σ²(X), 1/σ²(X) ) + 2 Cov( X, X/σ²(X) ).
Therefore, from all above equations the theorem is proved.
6.6
Proof of Corollary 5
Observe that
(1/2) F = (1/2) [ u²/σ²(u) + u²/σ²(0) + σ²(0)/σ²(u) + σ²(u)/σ²(0) − 2 ]
        = (1/2) ( u² + σ²(u) − σ²(0) ) ( 1/σ²(0) − 1/σ²(u) ) + u²/σ²(u).
Then, using Theorem 5, it suffices to prove the following two inequalities:
Cov( X² + σ²(X), −1/σ²(X) ) ≤ β ( u² + σ²(u) − σ²(0) ) ( 1/σ²(0) − 1/σ²(u) ),   (50)
and
Cov( X, X/σ²(X) ) ≤ β u²/σ²(u),   (51)
where
β = 1/4 if α ≥ u/2, and β = (α/u)(1 − α/u) if α < u/2.
Since σ(x) is increasing, we obtain that x² + σ²(x) and −1/σ²(x) are also increasing. Therefore, from Lemma 4 (with v(x) = x² + σ²(x), which is convex by assumption), equation (50) is proved. Similarly, (51) is also obtained from Lemma 4 because x and x/σ²(x) are increasing functions.
6.7
Proof of Lemma 1
From Definition 5, we obtain that
h(X) − h(Y) = E[ log( fY(Y)/fX(X) ) ].
Now, utilizing the inequality log x ≤ x − 1, it suffices to prove that
E[ fY(Y)/fX(X) ] ≤ 1.
To this end, we can write
E[ fY(Y)/fX(X) ] = ∫_X ∫_Y fX,Y(x,y) ( fY(y)/fX(x) ) dy dx
               = ∫_X ∫_Y fY|X(y|x) fY(y) dy dx
               = ∫_Y fY(y) ( ∫_X fY|X(y|x) dx ) dy
               ≤ ∫_Y fY(y) dy = 1,
where the last inequality holds because of the assumption of the lemma. Therefore, the lemma is
proved.
6.8
Proof of Lemma 2
The conditional pdf fY|X(y|x) can be easily obtained from the definition of the channel in (1). In order to calculate h(Y|X), using Definition 5 we can write
h(Y|X) = E[ log( 1/fY|X(Y|X) ) ] = E[log σ(X)] + E[ log( 1 / fZ((Y − X)/σ(X)) ) ].
Exploiting the fact that (Y − X)/σ(X) = Z, h(Y|X) is obtained.
It only remains to prove that Y is continuous. To this end, from the definition of the channel
in (1), we obtain that
FY(y) = Pr{ Z ≤ (y − X)/σ(X) } = E_{µX}[ FZ( (y − X)/σ(X) ) ],
where FY (y) and FZ (z) are the cdfs of the random variables Y and Z, defined by FY (y) =
Pr {Y ≤ y} and FZ (z) = Pr {Z ≤ z}, respectively. In order to prove the claim about fY (y), we
must show that
∫_{−∞}^y E[ (1/σ(X)) fZ( (t − X)/σ(X) ) ] dt = E[ FZ( (y − X)/σ(X) ) ],
for all y ∈ R. Because of Fubini's theorem [20, Chapter 2.3], this is equivalent to
lim_{n→∞} E[ FZ( (−n − X)/σ(X) ) ] = 0.
Equivalently, we need to show that for any ε > 0, there exists m such that
E[ FZ( (−n − X)/σ(X) ) ] ≤ ε,   ∀n > m.   (52)
Since lim_{z→−∞} FZ(z) = 0, there exists ℓ ∈ R such that
FZ(z) ≤ ε/2,   ∀z ≤ ℓ.
Therefore, since FZ(z) ≤ 1 for all z, we can write
E[ FZ( (−n − X)/σ(X) ) ] ≤ ε/2 + Pr{ (−n − X)/σ(X) ≥ ℓ }.
We can write
Pr{ (−n − X)/σ(X) ≥ ℓ } = Pr{ X + ℓσ(X) ≤ −n }.
Now, we can take m large enough such that
Pr{ (−n − X)/σ(X) ≥ ℓ } ≤ ε/2,   ∀n > m.
As a result, (52) is proved.
6.9
Proof of Lemma 3
Since σ(x) is Riemann integrable, ϕ(x) is continuous, and since σ(x) > 0 (a.e.), ϕ(x) is a strictly increasing function over the support of X. It follows that ϕ(x) is an injective function and there exists an inverse function ϕ⁻¹(·) for ϕ(·). Now, define the random variable Y = ϕ(X). Assume that the pdf of X is fX(x). Since X is a continuous random variable and ϕ(x) is a bijection, Y is also a continuous random variable with the following pdf:
fY(y) = ( 1 / (dϕ(x)/dx) ) fX(x),
where x = ϕ⁻¹(y). Hence, we have that
fY(y) = ( 1/σ(ϕ⁻¹(y)) ) fX(ϕ⁻¹(y)).
Now, we can calculate the differential entropy of Y as follows:
h(Y) = E[ log(1/fY(Y)) ]
     = E[ log( σ(ϕ⁻¹(Y)) / fX(ϕ⁻¹(Y)) ) ]
     = E[ log( σ(X)/fX(X) ) ]
     = h(X) + E[log σ(X)].
Therefore, the lemma is proved.
6.10
Proof of Lemma 4
First, assume that v(x) = ax + b, with a > 0. We will prove the general case later. In this case, we
claim that the support of the optimal solution only needs to have two members. To this end, note
that the following problem is equivalent to the original problem defined in (19):
max_{γ ≤ α} max_{µX: ℓ ≤ X ≤ u, E[X] = γ} Cov(w(X), v(X)).
Since v(x) = ax + b, for a given γ, we would like to maximize
Cov (w(X), v(X)) = E [w(X)v(X)] − (aγ + b)E [w(X)] ,
which is a linear function of µX , subject to E [X] = γ which is also a linear function of µX . By
the standard cardinality reduction technique (Fenchel’s extension of the Caratheodory theorem),
we can reduce the support of µX to at most two members (see [21, Appendix C] for a discussion
of the technique). Assume that the support of µX is {x1 , x2 } where ` ≤ x1 ≤ x2 ≤ u with pmf
pX (x1 ) = 1 − pX (x2 ) = p. Thus, we can simplify Cov (w(X), v(X)) as
Cov(w(X), v(X)) = Σ_{i=1}^2 pX(xi) w(xi) v(xi) − Σ_{i=1}^2 Σ_{j=1}^2 pX(xi) pX(xj) w(xi) v(xj)
               = p(1 − p) (w(x2) − w(x1)) (v(x2) − v(x1)),
where the last equality can be obtained by expanding the sums. Thus, the problem defined in (19)
equals the following:
max_{p, x1, x2: 0 ≤ p ≤ 1, ℓ ≤ x1 ≤ x2 ≤ u, px1 + (1−p)x2 ≤ α} p(1 − p) (w(x2) − w(x1)) (v(x2) − v(x1)).
We claim that the optimal choice for x1 is x1 = `. To see this, observe that w(x) and v(x) are
increasing functions, and hence
p` + (1 − p)x2 ≤ px1 + (1 − p)x2 ≤ α
and
(w(x2 ) − w(`)) (v(x2 ) − v(`)) ≥ (w(x2 ) − w(x1 )) (v(x2 ) − v(x1 )) .
Hence, x1 = ℓ is optimal. Substituting v(x) = ax + b, we obtain that the problem is equivalent to the following:
a · max_{p, x: 0 ≤ p ≤ 1, ℓ ≤ x ≤ u, pℓ + (1−p)x ≤ α} p(1 − p) (w(x) − w(ℓ)) (x − ℓ).
Utilizing the KKT conditions, one obtains that the optimal solution is
p* = 1/2, x1* = ℓ, x2* = u if α ≥ (ℓ + u)/2;  and  p* = (u − α)/(u − ℓ), x1* = ℓ, x2* = u if α < (ℓ + u)/2.
Now, we consider the general case of v(x) being a convex function (but not necessarily linear).
Since v(x) is convex, we obtain that
v(x) ≤ v(ℓ) + (x − ℓ) (v(u) − v(ℓ))/(u − ℓ),   ∀x ∈ [ℓ, u].
The right hand side is the line that connects the two points (`, v(`)) and (u, v(u)); this line lies
above the curve x 7→ v(x) for any x ∈ [`, u]. Therefore,
E[v(X)] ≤ v(ℓ) + (E[X] − ℓ) (v(u) − v(ℓ))/(u − ℓ).
Thus, E[X] ≤ α implies that E[v(X)] ≤ ∆, where
∆ = v(ℓ) + (α − ℓ) (v(u) − v(ℓ))/(u − ℓ).
Now, we relax the optimization problem and consider
max_{µX: ℓ ≤ X ≤ u, E[v(X)] ≤ ∆} Cov(w(X), v(X)).
The solution of the above optimization problem is an upper bound for the original problem because
the feasible set of the original problem is a subset of the feasible set of the relaxed optimization
problem.
Now, using similar ideas as in the linear case, we conclude that the support of the optimal µX has at most two members, and the optimal solution is
p* = 1/2, x1* = ℓ, x2* = u if α ≥ (ℓ + u)/2;  and  p* = (v(u) − ∆)/(v(u) − v(ℓ)), x1* = ℓ, x2* = u if α < (ℓ + u)/2.
It can be verified that
(v(u) − ∆)/(v(u) − v(ℓ)) = (u − α)/(u − ℓ).
Note that in the case α > (ℓ + u)/2, we obtain that E[X*] = (ℓ + u)/2 < α, where X* is distributed according to the optimal probability measure. As a result, the constraint E[X] ≤ α is redundant. Therefore,
the support of the optimal µX has two members, which shows that the upper bound is tight in this
case.
7
Conclusion
In this paper, we studied the capacity of a class of signal-dependent additive noise channels. These
channels are of importance in molecular and optical communication; we also gave a number of new applications of such channels in the introduction. A set of necessary and a set of sufficient conditions for finiteness of capacity were given. We then introduced two new techniques for proving explicit lower bounds on the capacity. As a result, we obtained two lower bounds on the capacity. These lower bounds were helpful in identifying when the channel capacity becomes infinite. We also provided
an upper bound using the symmetrized KL divergence bound.
References
[1] S. M. Moser, “Capacity results of an optical intensity channel with input-dependent Gaussian
noise,” IEEE Transactions on Information Theory, vol. 58, no. 1, pp. 207–223, 2012.
[2] M. Pierobon and I. F. Akyildiz, “Diffusion-based noise analysis for molecular communication in
nanonetworks,” IEEE Transactions on Signal Processing, vol. 59, no. 6, pp. 2532–2547, 2011.
[3] G. Aminian, M. F. Ghazani, M. Mirmohseni, M. Nasiri-Kenari, and F. Fekri, “On the capacity
of point-to-point and multiple-access molecular communications with ligand-receptors,” IEEE
Transactions on Molecular, Biological and Multi-Scale Communications, vol. 1, no. 4, pp. 331–
346, 2016.
[4] H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and F. Bateni, “Diffusion-based nanonetworking: A
new modulation technique and performance analysis,” IEEE Communications Letters, vol. 17,
no. 4, pp. 645–648, 2013.
[5] A. Gohari, M. Mirmohseni, and M. Nasiri-Kenari, “Information theory of molecular communication: Directions and challenges,” to appear in IEEE Transactions on Molecular, Biological
and Multi-Scale Communications, 2016.
[6] K. V. Srinivas, A. W. Eckford, and R. S. Adve, “Molecular communication in fluid media: The
additive inverse gaussian noise channel,” IEEE Transactions on Information Theory, vol. 58,
no. 7, pp. 4678–4692, 2012.
[7] M. N. Khormuji, “On the capacity of molecular communication over the AIGN channel,” in
Information Sciences and Systems (CISS), 2011 45th Annual Conference on, pp. 1–4, IEEE,
2011.
[8] H. Li, S. M. Moser, and D. Guo, “Capacity of the memoryless additive inverse gaussian noise
channel,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 12, pp. 2315–2329,
2014.
[9] N. Farsad, Y. Murin, A. W. Eckford, and A. Goldsmith, “Capacity limits of diffusion-based
molecular timing channels,” arXiv:1602.07757, 2016.
[10] T. H. Chan, S. Hranilovic, and F. R. Kschischang, “Capacity-achieving probability measure
for conditionally gaussian channels with bounded inputs,” IEEE Transactions on Information
Theory, vol. 51, no. 6, pp. 2073–2088, 2005.
[11] J. G. Smith, “The information capacity of amplitude- and variance-constrained scalar Gaussian
channels,” Information and Control, vol. 18, no. 3, pp. 203–219, 1971.
[12] R. Jiang, Z. Wang, Q. Wang, and L. Dai, “A tight upper bound on channel capacity for visible
light communications,” IEEE Communications Letters, vol. 20, no. 1, pp. 97–100, 2016.
[13] A. Lapidoth, S. M. Moser, and M. A. Wigger, “On the capacity of free-space optical intensity
channels,” IEEE Transactions on Information Theory, vol. 55, no. 10, pp. 4449–4461, 2009.
[14] R. R. Chen, B. Hajek, R. Koetter, and U. Madhow, “On fixed input distributions for noncoherent communication over high-snr rayleigh-fading channels,” vol. 50, no. 12, pp. 3390–3396,
2004.
[15] G. Aminian, H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and U. Mitra, “Capacity of diffusionbased molecular communication networks over lti-poisson channels,” IEEE Transactions on
Molecular, Biological and Multi-Scale Communications, vol. 1, no. 2, pp. 188–201, 2015.
[16] S. Ihara, Information theory for continuous systems. Singapore: World Scientific, 1993.
[17] T. M. Cover and J. A. Thomas, Elements of information theory. New York: John Wiley &
Sons, 2nd ed., 2006.
[18] F. Topsoe, “An information theoretical identity and a problem involving capacity,” Studia
Scientiarum Math. Hungarica, vol. 2, no. 10, pp. 291–292, 1967.
[19] H. Ghourchian, A. Gohari, and A. Amini, “Existence and continuity of differential entropy for
a class of distributions,” IEEE Communications Letters, 2017.
[20] E. M. Stein and R. Shakarchi, Real analysis: measure theory, integration, and Hilbert spaces.
New Jersey: Princeton University Press, 2005.
[21] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2011.
A
Proof of Equation (9)
Take an arbitrary y. Then, equation (9) holds because
∫_0^∞ √(2/(πc²)) e^{−(y−v²)²/(2c²v²)} dv
  = ∫_0^∞ (1/√π) [ (v² + y)/(v²√(2c²)) + (v² − y)/(v²√(2c²)) ] e^{−(y−v²)²/(2c²v²)} dv
  = e^{2y/c²} ∫_0^∞ (1/√π) ( (v² − y)/(v²√(2c²)) ) e^{−(y+v²)²/(2c²v²)} dv
    + ∫_0^∞ (1/√π) ( (v² + y)/(v²√(2c²)) ) e^{−(y−v²)²/(2c²v²)} dv
  = e^{2y/c²} ∫_0^1 (1/√π) ( (v² − y)/(v²√(2c²)) ) e^{−(y+v²)²/(2c²v²)} dv
    + e^{2y/c²} ∫_1^∞ (1/√π) ( (v² − y)/(v²√(2c²)) ) e^{−(y+v²)²/(2c²v²)} dv
    + ∫_1^∞ (1/√π) ( (v² + y)/(v²√(2c²)) ) e^{−(y−v²)²/(2c²v²)} dv
    + ∫_0^1 (1/√π) ( (v² + y)/(v²√(2c²)) ) e^{−(y−v²)²/(2c²v²)} dv.
Now utilize the change of variables
v ↦ u1 = (y − v²)/(v√(2c²)),   v ↦ u2 = (y + v²)/(v√(2c²))
to re-express the above integrals. Note that
du1 = − ( (v² + y)/(v²√(2c²)) ) dv,   du2 = ( (v² − y)/(v²√(2c²)) ) dv.
For y > 0: if v = 0 then u1 = +∞, u2 = +∞; if v = +∞ then u1 = −∞, u2 = +∞; and if v = 1 then u1 = (y − 1)/√(2c²), u2 = (y + 1)/√(2c²). For y < 0: if v = 0 then u1 = −∞, u2 = −∞; if v = +∞ then u1 = −∞, u2 = +∞; and if v = 1 then u1 = (y − 1)/√(2c²), u2 = (y + 1)/√(2c²). Now for y > 0 we have
−e^{2y/c²} ∫_{(y+1)/√(2c²)}^{∞} (1/√π) e^{−u2²} du2 + e^{2y/c²} ∫_{(y+1)/√(2c²)}^{∞} (1/√π) e^{−u2²} du2
  + ∫_{(y−1)/√(2c²)}^{∞} (1/√π) e^{−u1²} du1 + ∫_{−∞}^{(y−1)/√(2c²)} (1/√π) e^{−u1²} du1
  = ∫_{−∞}^{∞} (1/√π) e^{−u1²} du1 = 1.
Similarly, for y < 0 we have
e^{2y/c²} ∫_{−∞}^{(y+1)/√(2c²)} (1/√π) e^{−u2²} du2 + e^{2y/c²} ∫_{(y+1)/√(2c²)}^{∞} (1/√π) e^{−u2²} du2
  − ∫_{−∞}^{(y−1)/√(2c²)} (1/√π) e^{−u1²} du1 + ∫_{−∞}^{(y−1)/√(2c²)} (1/√π) e^{−u1²} du1
  = e^{2y/c²} ∫_{−∞}^{∞} (1/√π) e^{−u2²} du2 = e^{2y/c²}.
Therefore, the proof is complete.
Stochastic bandits robust to adversarial corruptions
arXiv:1803.09353v1 [cs.LG] 25 Mar 2018
Thodoris Lykouris∗
Vahab Mirrokni†
Renato Paes Leme‡
Abstract
We introduce a new model of stochastic bandits with adversarial corruptions which aims to
capture settings where most of the input follows a stochastic pattern but some fraction of it can
be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam.
The goal of this model is to encourage the design of bandit algorithms that (i) work well in
mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as
we move from fully stochastic to fully adversarial models.
In our model, the rewards for all arms are initially drawn from a distribution and are then
altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully
degrades with the total corruption the adversary injected in the data, measured by the sum
across rounds of the biggest alteration the adversary made in the data in that round; this
total corruption is denoted by C. Our algorithm provides a guarantee that retains the optimal
guarantee (up to a logarithmic term) if the input is stochastic and whose performance degrades
linearly to the amount of corruption C, while crucially being agnostic to it. We also provide a
lower bound showing that this linear degradation is necessary if the algorithm achieves optimal
performance in the stochastic setting (the lower bound works even for a known amount of
corruption, a special case in which our algorithm achieves optimal performance without the
extra logarithm).
∗
Cornell University, [email protected]. Work supported under NSF grant CCF-1563714. Part of the work
was done while the author was interning at Google.
†
Google Research, [email protected]
‡
Google Research, [email protected]
1
Introduction
In online learning with bandit feedback, a learner needs to decide at each time between alternative
actions or arms of unknown quality, facing a trade-off between exploiting profitable past actions or
exploring new actions about which she has little information. Bandit problems are typically classified
according to how the rewards are generated. In stochastic bandits, rewards are drawn from fixed
but unknown distributions, which models settings where the alternatives follow particular patterns
and do not react to the learner. The other extreme is adversarial bandits, which are robust to
rewards that are specifically designed to trick the learner, as in game-theoretic settings.
In this paper, we focus on settings where the overall behavior is essentially stochastic but a small
fraction of the rewards can be adversarially changed. Classic stochastic bandit algorithms, like
Upper Confidence Bound (UCB) [ACBF02] or Active Arm Elimination (AAE) [EMM06], base most
of their decisions on a few observations made in an initial phase of the algorithm and therefore
can be easily tricked into incurring linear regret if very few arms are corrupted. Adversarial bandit
algorithms like EXP3 are not fooled by such tricks, but cannot exploit the fact that the input is
mostly stochastic.
Our goal is to robustify the stochastic setting by designing algorithms that can tolerate corruptions
and still be able to exploit the stochastic nature of the input. The algorithms we design are agnostic
to the corruption, i.e. they can tolerate any level of corruption, and the guarantee degrades gracefully
as more corruption is added. Moreover, we prove lower bounds showing that our results are tight up
to a logarithmic factor. Before we explain our technical contribution in detail, we describe examples
of settings we have in mind.
Click fraud In pay-per-click online advertising, the platform selects for each pageview an ad to
display and obtains a certain reward if the user clicks on the ad. The click probabilities are unknown.
The tension between repeatedly displaying a particular profitable ad that provides reliable revenue
and exploring other potentially more rewarding options is a major application of stochastic bandits
in the ads industry.
If it weren’t for a phenomenon known as click fraud, this would be a textbook example of stochastic
bandits. In click fraud, botnets maliciously simulate users clicking on an ad to trick learning
algorithms. One example is a bot consistently making searches to trigger some ad and not clicking
on it to make it seem like a certain ad has very low click-through-rate in order to boost its competitor.
Recommendation systems: A platform recommending activities or services to a user faces the
same trade-off. Suggesting new restaurants leads to faster learning of the best spots but may result to
dissatisfaction of the customers who are led to disappointing experiences. While most inputs follow
a stochastic pattern, some inputs are typically corrupted: either maliciously, e.g. fake reviews by
competitors, or non-maliciously, e.g. construction next-door makes the restaurant less desirable
in certain interval. This corruption may again exhibit arbitrary patterns and is not identically
distributed over time, yet it is dwarfed by the fact that most of the input is stochastic.
There are several other such examples: emails mostly follow a stochastic pattern except a fraction
of them which are spam and are designed to trick algorithms. Internet searches follow a predictable
pattern except certain spikes caused by unpredictable events. Data collection used in the econometric process often suffers from errors that affect a small part of the input. In all those cases, the vast
1
majority of the input follows a predictable pattern, but a fraction of the samples are corrupted.
1.1
Our contribution
Our model. In this paper, we introduce a new model of stochastic bandits with adversarial
corruptions. The goal of this model is to encourage the design of bandit algorithms that (i) work
well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully
as we move from fully stochastic to fully adversarial models.
In this model there are K arms, each associated with a fixed reward distribution F(a). At each round t, a random reward r_S^t(a) ∼ F(a) is drawn and an adversary can change the reward to r^t(a), possibly using information about the realizations of r_S^τ(a) from both the current and previous rounds τ ≤ t, as well as the probability that the learner puts on each arm. The learner then draws an arm a^t and obtains r^t(a^t) both as reward and feedback. We say that the adversary is C-corrupted if in every sample path we have Σ_t max_a |r^t(a) − r_S^t(a)| ≤ C.
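To make the model concrete, here is a small simulation sketch of a C-corrupted environment (our own illustrative code, not from the paper; the Bernoulli arms, the budget value, and the example adversary are arbitrary assumptions). Stochastic rewards are drawn first, the adversary then perturbs the reward vector subject to the budget C measured as Σ_t max_a |r^t(a) − r_S^t(a)|, and the learner observes only the (possibly corrupted) reward of the arm it pulls.

```python
# Hypothetical simulator for the corrupted stochastic bandit model described above.
import numpy as np

class CorruptedBanditEnv:
    def __init__(self, means, corruption_budget, rng=None):
        self.means = np.asarray(means)          # Bernoulli means of the K arms
        self.budget = corruption_budget         # total corruption C
        self.used = 0.0                         # corruption spent so far
        self.rng = rng or np.random.default_rng(0)

    def step(self, arm, adversary=None):
        # Stochastic rewards for all arms are drawn first.
        stochastic = (self.rng.random(len(self.means)) < self.means).astype(float)
        rewards = stochastic.copy()
        if adversary is not None:
            # The adversary perturbs the reward vector before the pull is revealed,
            # constrained by the remaining corruption budget.
            proposed = np.clip(adversary(stochastic), 0.0, 1.0)
            cost = np.max(np.abs(proposed - stochastic))
            if self.used + cost <= self.budget:
                self.used += cost
                rewards = proposed
        # Bandit feedback: only the pulled arm's (possibly corrupted) reward is observed.
        return rewards[arm]

# Example adversary: always zero out the reward of arm 0 (the best arm here).
def spiteful_adversary(stochastic_rewards):
    corrupted = stochastic_rewards.copy()
    corrupted[0] = 0.0
    return corrupted

env = CorruptedBanditEnv(means=[0.9, 0.5, 0.4], corruption_budget=50.0)
print(env.step(arm=0, adversary=spiteful_adversary))
```

With a budget of only a few dozen such corruptions, this adversary is already enough to mislead a naive elimination algorithm during its short initial exploration phase, which is exactly the vulnerability discussed above.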
Our results. The main result (Theorem 3 in Section 3) is a learning algorithm we term Multi-layer
Active Arm Elimination Race that with probability 1 − δ has regret
O( Σ_{a ≠ a⋆} [ (K · C · log(KT/δ) + log T) / ∆(a) ] · log(KT/δ) ),
where ∆(a) is the gap of arm a, i.e. the difference in stochastic means of arm a and the optimal
√
1
arm a⋆ . For arms with
√ very small gap, i.e. when ∆(a) ≤ / T , the inverse dependence on the gap
can be replaced by T . It is possible to improve the bound by
i.e.
Pa log factor for pseudo-regret,
K·C+log(T )
· log(KT ) . Two
maximum expected regret against any fixed arm, obtaining: O
a6=a∗
∆(a)
important features of the algorithm are that the guarantee is:
• Agnostic: The algorithm does not need to know the corruption level C. The guarantee is
provided with respect to how much corruption was added in retrospect. If the corruption level
is known, we can remove the dependence on K · log(KT/δ ) as shown in Theorem 1.
• High Probability: Our bounds hold with high probability which is important for practical
applications as the ones described above. In contrast, the weaker definition of pseudo-regret
often hides events with large regret that are offset by events with large negative regret.
The stochastic case corresponds to C = 0 in which case we recover
P a bound that is slightlyworse
T
than the guarantee provided by UCB. Our algorithm obtains O
a6=a⋆ log(T ) · log( /δ )/∆(a) with
probability 1 − δ, while UCB obtains this bound without the log(T ) term.
En routeto the result, inTheorem 2 we show an algorithm
that, for any fixed known C, provides
2
P
P
KT
log(KT/δ)
regret O
for stochastic input and O K · C · a6=a⋆ (log(∆(a)/δ )) if it is C-corrupted.
a6=a⋆
∆(a)
In other words, if we only need to tolerate either a known level C or zero corruptions, we save a
logarithmic factor from the bound, and match the bound provided by UCB in the stochastic case.
Another question is whether the linear dependence on the corruption level is tight. In Section 4,
we show that it cannot be improved upon without decay in the stochastic guarantee (i.e. while still
guaranteeing logarithmic regret when the input is stochastic). The lower bound is an adaptation
from the adversarial to the corrupted setting of a result from Auer and Chiang [AC16]. This holds
2
even for the case where the corruptions are either 0 or a known level C (where our algorithm provides
a matching upper bound). We prove in Theorem 4 that an algorithm with pseudo-regret O(log(T )/∆)
in the stochastic setting (C = 0) then for every constant ǫ > 0, there is a O(T ǫ )-corrupted instance
where the algorithm incurs regret Ω(T ǫ ) with constant probability.
Our algorithm can also be viewed through the lens of the best of both worlds literature [BS12, SS14,
AC16, SL17], where the goal is to design algorithms that simultaneously provide logarithmic regret
guarantees in the stochastic regime and square-root guarantees in the adversarial. In Section 5, we
sketch how our algorithm can be appropriately modified
to obtain, for any constant 0 < a < 1/2,
1/2
a
a+
e
e
O(C) pseudo-regret for C = O(T ) and O T
pseudo-regret otherwise. We observe that the
results in the best of both worlds literature correspond to the case where a = 0. We note that such
bounds are obtained for pseudo-regret and not regret with high-probability.
Our techniques. The starting point of our design are classical stochastic bandit learning algorithms
like UCB and Active Arm Elimination. Such algorithms are very susceptible to corruptions since
they base most of their decisions on a small initial exploration phase. Therefore, with a small
number of corruptions it is possible to completely trick the algorithm into eliminating the optimal
arm.
We address this issue by robustifying them using a multi-layer approach. The learning algorithm
consists of multiple layers running in parallel. The layers have decreasing speed and increasing
tolerance to corruption. The first layer finishes very fast selecting an arm as optimal, but provides
no tolerance to corruption. Subsequent layers are more robust but also slower.
The resulting algorithm is a race between different layers for picking the optimal arm. Once the
fastest layer finishes, it provides a first crude estimate of the optimal arm. Once slower layers finish,
we obtain finer and finer estimates of the optimal arm.
Our second main idea is that we can obtain more robust algorithms by subsampling. If a layer is only
selected with probability p, it only receives in expectation a p-fraction of the corruption injected by
the adversary. If p is low enough, the layer behaves almost as if it was stochastic.
Finally, we couple the different layers together by a process of global eliminations. This process
enables slower layers to eliminate arms in faster layers. Such a process is necessary for preventing
inaccurate layers from pulling suboptimal arms too often.
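The multi-layer idea can be sketched in code. The following skeleton is our own rough illustration of the description above (layer ℓ is sampled with exponentially decreasing probability and keeps its own confidence intervals, and eliminations made by slower, more robust layers propagate to faster ones); it is not the paper's exact Multi-layer Active Arm Elimination Race, whose precise rule and analysis appear later, and the 2^{-ℓ} sampling rates and confidence widths are placeholder choices.

```python
# Hypothetical skeleton of a multi-layer elimination scheme with subsampled layers.
import numpy as np

class MultiLayerElimination:
    def __init__(self, n_arms, n_layers, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.n_arms, self.n_layers = n_arms, n_layers
        self.sums = np.zeros((n_layers, n_arms))
        self.pulls = np.zeros((n_layers, n_arms))
        self.active = [set(range(n_arms)) for _ in range(n_layers)]

    def choose(self):
        # Layer l is selected with probability proportional to 2^{-l}:
        # slower layers receive fewer samples and hence, in expectation,
        # only a small share of the injected corruption.
        probs = 2.0 ** -np.arange(self.n_layers)
        layer = self.rng.choice(self.n_layers, p=probs / probs.sum())
        arm = min(self.active[layer], key=lambda a: self.pulls[layer, a])
        return layer, arm

    def update(self, layer, arm, reward, t):
        self.sums[layer, arm] += reward
        self.pulls[layer, arm] += 1
        means = self.sums[layer] / np.maximum(self.pulls[layer], 1)
        widths = np.sqrt(2 * np.log(max(t, 2)) / np.maximum(self.pulls[layer], 1))
        best_lcb = max(means[a] - widths[a] for a in self.active[layer])
        eliminated = {a for a in self.active[layer] if means[a] + widths[a] < best_lcb}
        # Global eliminations: an arm removed by this layer is also removed
        # from all faster (more frequently sampled) layers.
        for l in range(layer + 1):
            self.active[l] -= eliminated
            if not self.active[l]:
                self.active[l] = set(self.active[layer])  # keep layers non-empty
```

The intended behaviour mirrors the prose above: fast layers converge quickly but can be fooled, while slow, subsampled layers behave almost stochastically and eventually correct the faster layers through global eliminations.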
1.2
Related work
Online learning with stochastic rewards goes back to the seminal work of Lai and Robbins [LR85].
The case of adversarial rewards was introduced by Auer et al. [ACBFS03]. The reader is referred
to the books of Cesa-Bianchi and Lugosi [CBL06], Bubeck and Cesa-Bianchi [BCB12], and Slivkins
[Sli17] for an elaborate overview of the area. These two extremes suffer from orthogonal problems;
the one is overoptimistic expecting that all rewards come from the same distribution while the other
one is too pessimistic in order to be protected against malicious adversaries. Our work addresses
the middle ground: rewards come from distributions but are often adversarially corrupted. This is
motivated by the non-robustness of stochastic learning algorithms to even small corruption levels.
Closely related to our work lie the works on best of both worlds guarantees [BS12, SS14, AC16, SL17].
These works achieve (up to logarithmic factors) the optimal pseudo-regret guarantee for stochastic
rewards and the optimal pseudo-regret or actual regret guarantee for adversarial rewards. Bubeck
3
and Slivkins [BS12] and Auer and Chiang [AC16] begin from a stochastic algorithm and test whether
they encounter non-stochastic behavior, in which case they switch to an adversarial algorithm. In
contrast, Seldin et al. [SS14, SL17] begin from an adversarial algorithm with very optimistic learning
rate and adapt it if they encounter such behavior. Recently and independently of this work, Wei
and Luo [WL18] provide a best of both worlds result with a small-loss pseudo-regret guarantee
on the adversarial setting, via a novel analysis of the log-barrier OMD algorithm of Foster et al.
[FLL+ 16]. Although the aforementioned algorithms are very elegant, their analysis is not robust to
inputs that are slightly away from stochastic. Our work bridges this gap by designing algorithms
with smoother behavior on close-to-stochastic instances.
There have been other works that attempt to provide improved guarantees over the adversarial
setting when instances are well behaved. Hazan and Kale [HK09] offer regret guarantees that scale
with the variance of the losses instead of the time horizon. This guarantee is meaningful in settings
that have a very predictable nature and usually exhibit similar performance, such as routing. However,
they do not address most applications of stochastic bandits. In Click Fraud, for example, the rewards
come from Bernoulli distributions and the variance of such a distribution is high even if the input is
totally stochastic. Another approach is the work of Shamir and Szlak [SS17], who consider an input
that is adversarial but random local permutations are applied to obtain a more benign instance.
This approach is very relevant in settings like buffering, but is again not applicable to our settings.
On the opposite side, attempting to provide improved guarantees for the stochastic setting or
enhancing their range is a very active area of research. For instance, the MOSS algorithm [AB09]
of Audibert and Bubeck provides the optimal non-distribution-based upper bound for stochastic
bandits while retaining the optimal distribution-based stochastic guarantee. The KL-UCB algorithm
of Garivier and Cappé [GC11] provides improved constants in the upper bound of the stochastic
guarantee matching the lower bound of Lai and Robbins [LR85] for Bernoulli rewards. The Robust
UCB algorithm [BCBL13] extends these results to unbounded rewards, replacing boundedness with the weaker
assumption of bounded variance. However, all the above results are not robust to corruptions from
an adaptive adversary due to their deterministic nature. Since the adversary knows the arm the
learner will select, they can always corrupt the optimal arm whenever it is about to be selected and
therefore cause the learner to either play it multiple times even if it is suboptimal or decide against
playing it even with a small amount of corruption (similarly as in our lower bound).
There is also prior work on incorporating corruptions in online decision making. In the online
learning front, there are two such attempts, to the best of our knowledge. In their best of both
worlds result, Seldin and Slivkins [SS14] allow for some contamination in the data as long as they
are obliviously selected and they do not decrease the gap by more than a factor of 2. The second
work is a recent paper by Gajane et al. [GUK18] who suggest a model of corrupted feedback aiming
for differential privacy. Unlike our model, their corruptions are neither adversarial nor adaptive.
Both of these works make benign assumptions about the nature of corruption and do not address
the main roadblock in the settings we consider: an adversarial saboteur will try to add faulty
data in the beginning to change the order between the two arms and, with a minimal corruption,
she will achieve this goal. Closer to our model are the works on robust allocation such as online
matching with corrupted data [MGZ12, EKM15]; unlike online matching though, in online learning
we cannot evaluate the optimum at every round since the algorithm’s decisions affect the information
it observes.
Last, learning in the presence of corruptions has recently received great attention in the batch
learning setting. For instance, recent works study inference under the presence of adversarially
corrupted data [MRT15], designing estimators that are robust to corrupted data [DKK+ 16], and learning
in auctions with some faulty data due to econometric errors [CD17]. Our work suggests a
similar framework for the study of online learning that is robust to adversarial corruptions in the
more challenging problem of sequential decision making where decisions also affect the information
observed.
2
Model
Corrupted stochastic bandits. We study an online bandit learning setting with K arms. Each
arm a ∈ [K] is associated with a distribution F(a) with mean µ(a). The distributions are assumed
to have positive measure only on rewards in [0, 1] and are unknown to the learner. We refer to the
optimal arm as a⋆ = arg maxa µ(a) and define ∆(a) = µ(a⋆ ) − µ(a).1
We consider an adversary who can corrupt some of the stochastic rewards. The adversary is adaptive,
in the sense that the corrupted rewards can be a function of the realization of the stochastic rewards
up to that point and of the learner’s choices in previous rounds. More formally, the protocol between
learner and adversary, at each round t = 1 . . . T , is as follows:
1. The learner picks a distribution wt over the K arms.
2. Stochastic rewards are drawn for each arm: r^t_S(a) ∼ F(a).
3. The adversary observes the realizations of r^t_S(a), as well as the rewards and choices of the learner in previous steps, and returns a corrupted reward r^t(a) ∈ [0, 1].
4. The learner draws arm a_t ∼ w_t and observes r^t(a_t).
We refer to max_a |r^t(a) − r^t_S(a)| as the amount of corruption injected in round t. The instance is
C-corrupted if the total injected corruption is at most C for all realizations of the random variables:
Σ_t max_a |r^t(a) − r^t_S(a)| ≤ C.
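To make the interaction concrete, the following Python sketch (our own illustration, not code from the paper; the learner/adversary interfaces are assumptions) simulates the protocol above: Bernoulli stochastic rewards are drawn, an adaptive adversary may corrupt them subject to a total budget C, and the learner observes only the corrupted reward of the pulled arm.

    import random

    def run_corrupted_bandit(T, means, C, learner, adversary, seed=0):
        # means:     true Bernoulli means mu(a) of the K arms
        # learner:   object with choose(t) -> arm and update(t, arm, reward)   [assumed interface]
        # adversary: function (t, stoch_rewards, history) -> corrupted rewards in [0, 1]
        rng = random.Random(seed)
        K = len(means)
        budget = C                                   # total corruption the adversary may still inject
        history = []
        for t in range(T):
            arm = learner.choose(t)                  # the adversary is not shown this round's arm
            stoch = [1.0 if rng.random() < means[a] else 0.0 for a in range(K)]
            corrupted = list(adversary(t, list(stoch), list(history)))
            injected = max(abs(corrupted[a] - stoch[a]) for a in range(K))
            if injected > budget:                    # enforce the C-corruption constraint
                corrupted, injected = stoch, 0.0
            budget -= injected
            learner.update(t, arm, corrupted[arm])   # only the pulled arm's corrupted reward is revealed
            history.append((arm, corrupted[arm]))
        return history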
Note that the adversary is assumed to be adaptive, in the sense that she has access to all the
realizations of random variables for all rounds τ < t and the realization of rewards at round t but
only knows the player’s distribution at round t and not the arm a_t. Our guarantees gracefully
degrade with the total corruption injected by the adversary.
Regret notions. Regret corresponds to the difference between the reward obtained by the algorithm and the reward of the best arm in hindsight:
Reg = max_a Σ_t ( r^t(a) − r^t(a_t) )
The regret is a random variable that depends on the random rewards, the randomness used by
the learner, and the randomness of the adversary. We say that a regret bound R(T, δ) holds with
probability 1 − δ if
P[Reg < R(T, δ)] > 1 − δ
where the probability is taken over all the three sources of randomness described.
¹We note that a⋆ is one arm with optimal mean; this does not preclude the existence of other arms with the same mean. If more than one such arm exists, let a⋆ be an arbitrary arm with optimal mean; the other arms a ≠ a⋆ with optimal mean then have gap ∆(a) = 0.
Finally pseudo-regret is a weaker notion that compares the expected performance of the learner
with the arm with the highest expected performance. In other words:
"
#
X
PseudoReg = max E
r t (a) − r t at
a
t
We note that by Jensen’s inequality, PseudoReg ≤ E[Reg]. We often obtain improved bounds for
pseudo-regret since it allows us to offset large positive regret events with large negative regret
events.
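For illustration only (not from the paper), the following short snippet computes both quantities from a simulated run; the pseudo-regret line is a single-run proxy that uses the stochastic means, which is what the definition reduces to in the uncorrupted case after averaging over runs.

    import numpy as np

    def regret_and_pseudoregret(rewards, pulls, means):
        # rewards: (T, K) array of realized (possibly corrupted) rewards r^t(a)
        # pulls:   length-T integer array with the arm a_t pulled in each round
        # means:   length-K true means mu(a) of the stochastic part
        rewards = np.asarray(rewards, dtype=float)
        pulls = np.asarray(pulls, dtype=int)
        T = rewards.shape[0]
        obtained = rewards[np.arange(T), pulls].sum()
        regret = rewards.sum(axis=0).max() - obtained                  # best arm in hindsight
        pseudo = T * np.max(means) - np.asarray(means)[pulls].sum()    # expectation-based proxy
        return regret, pseudo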
3
The upper bound: Multi-layer Active Arm Elimination Race
Active arm elimination. The starting point of our design is the Active Arm Elimination algorithm
for stochastic bandits [EMM06], which can be viewed as an alternative presentation of the more
famous UCB algorithm [ACBF02]. It is based on the following idea: in an initial exploration phase,
we pull arms in a round-robin fashion and compute an estimate µ̃(a) as the average empirical reward
of arm a. After n(a) pulls of arm a, usual concentration arguments establish that, with probability
at least 1 − 1/T^{Ω(1)}, the difference between the empirical and actual means is at most wd(a) = O(√(log(T)/n(a))).
We say that [µ̃(a) − wd(a), µ̃(a) + wd(a)] is the confidence interval of arm a.
This means in particular that given two arms a and a′ , if the difference in empirical means becomes
larger than the sum of the widths of the confidence intervals, i.e., µ̃(a) − µ̃(a′) > wd(a) + wd(a′), then with
high probability arm a′ is not optimal. Once this happens, the algorithm eliminates arm a′ by
removing it from the round-robin rotation. After both arm a and the optimal arm a⋆ are pulled
O(log(T)/∆(a)²) times, the confidence intervals become small enough that arm a is eliminated.
Eventually all arms but the optimal one are eliminated and we enter what is called the exploitation
phase, in which we only pull the arm with optimal mean. Before entering exploitation we
pulled each suboptimal arm a at most O(log(T)/∆(a)²) times. Each of those suboptimal pulls incurs
regret ∆(a) in expectation, which leads to the pseudo-regret bound of O(Σ_{a≠a⋆} log(T)/∆(a)). This
bound can also be converted to a high-probability bound if we replace log(T) by log(T/δ).
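The following Python sketch is our own minimal rendering of active arm elimination as just described (constants and the reward-sampling callback are illustrative, not the paper's).

    import math

    def active_arm_elimination(T, sample_reward, K, delta=0.01):
        # Round-robin over active arms; eliminate a' once mu[best] - mu[a'] exceeds the summed widths.
        n = [0] * K                      # number of pulls n(a)
        mu = [0.0] * K                   # empirical means mu~(a)
        active = set(range(K))
        def wd(a):                       # confidence width wd(a) = sqrt(log(2KT/delta)/n(a))
            return float('inf') if n[a] == 0 else math.sqrt(math.log(2 * K * T / delta) / n[a])
        for t in range(T):
            a = min(active, key=lambda x: n[x])          # least-pulled active arm
            r = sample_reward(t, a)                      # observed (possibly corrupted) reward
            mu[a] = (n[a] * mu[a] + r) / (n[a] + 1)
            n[a] += 1
            best = max(active, key=lambda x: mu[x])
            for b in [b for b in active if b != best]:
                if mu[best] - mu[b] > wd(best) + wd(b):  # elimination rule
                    active.discard(b)
        return max(active, key=lambda x: mu[x]), active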
Arms with small ∆(a). We note that, for arms with ∆(a) < 1/√T, the inverse dependence
on the gap may initially seem vacuous; for instance, when there are two optimal arms a, a⋆ with
the same mean, the upper bound becomes infinite as ∆(a) = 0. However, the inverse dependence
on the gap can be replaced by ∆(a) · T in the case of pseudo-regret and √T in the case of actual
regret (due to variance reasons). For simplicity of exposition, we omit this in the current section
but we demonstrate how to perform this replacement in Section 5.
3.1
Enlarged confidence intervals
The active arm elimination algorithm is clearly not robust to corruption since by corrupting the
first O(log T ) steps, the adversary can cause the algorithm to eliminate the optimal arm. As the
algorithm never pulls the suboptimal arms after exploration, it is not able to ever recover. One
initial idea to fix this problem is to enlarge the confidence intervals. We can decompose the rewards
r^t(a) into two terms r^t_S(a) + c^t(a), where the first term comes from the stochastic reward and the
second is the corruption introduced by the adversary. If the total corruption introduced by the
adversary is at most C, then with width wd(a) = O(√(log(T)/n(a)) + C/n(a)), a similar analysis to
above gives us the following regret bound:
Theorem 1. If C is a valid upper bound for the total corruption, then active arm elimination with
wd(a) = √(log(2KT/δ)/n(a)) + C/n(a) has regret O( Σ_{a≠a⋆} (log(KT/δ) + C)/∆(a) ) with probability 1 − δ.
Proof sketch. The proof follows the standard analysis of active arm elimination. We first establish that, with high probability the optimal arm a⋆ is never inactivated (Lemma 3.1) and then
upper bound the number of times each suboptimal arm is played (Lemma 3.2). The pseudo-regret
guarantee directly follows by multiplying the number of plays for each arm by its gap ∆(a). For
the high-probability guarantee, we need to also show that the regret incurred in the meantime
is not much more than the above. We provide proof details about the theorem and lemmas in
Appendix A.
Lemma 3.1. With probability at least 1 − δ, arm a⋆ never becomes inactivated.
Lemma 3.2. With probability at least 1 − δ, all arms a ≠ a⋆ become inactivated after N(a) = (36·log(2KT/δ) + 6C)/∆(a)² plays.
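A small helper (our own sketch, mirroring the width used in Theorem 1) makes the enlarged confidence interval explicit: the usual concentration term plus a C/n(a) term that absorbs up to C total corruption.

    import math

    def enlarged_width(n_a, C, K, T, delta):
        # wd(a) = sqrt(log(2KT/delta)/n(a)) + C/n(a), as in Theorem 1
        if n_a == 0:
            return float('inf')
        return math.sqrt(math.log(2 * K * T / delta) / n_a) + C / n_a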
3.2
Stochastic bandits robust to known corruption
The drawback of the active arm elimination algorithm with enlarged confidence intervals (Theorem 1)
is that, even if there are no corruptions, it still incurs a regret proportional to C. As a warm-up to
the main theorem, we provide an algorithm that achieves the usual bound of O( Σ_{a≠a⋆} log(KT/δ)/∆(a) )
if the input is purely stochastic and, at the same time, achieves O( K · C · Σ_{a≠a⋆} log(KT/δ)²/∆(a) ) if the
input is C-corrupted for a known C. In the next subsection, we modify the algorithm to make it
agnostic to the corruption level C.
Two instances of Active Arm Elimination. The first idea is to run two instances of active arm
elimination: the first is supposed to select the correct arm if there is no corruption and the second
is supposed to select the right arm if there is C corruption. The first instance is very fast but it is
not robust to corruptions. The second instance is slower but more precise, in the sense that it can
tolerate corruptions. Since the second instance is more trustworthy, if the second instance decides
to eliminate a certain arm a, we eliminate the same arm in the faster instance.
Decrease corruption by sub-sampling. To keep the regret low if the input is stochastic, the
second instance of active arm elimination cannot pull a suboptimal arm too many times. Therefore,
the technique in Theorem 1 alone is not enough. The main idea of the algorithm is to make arm a
behave as if it was almost stochastic by running the second instance with low probability. If the
learner selects to run the second instance with probability 1/C then, when the adversary adds a
certain amount of corruption to a certain round, the second instance observes that corruption with
probability 1/C . Therefore, the expected amount of corruption the learner observes in the second
instance is constant. This makes the arms behave almost like stochastic arms in that instance.
Learning algorithm. We obtain our algorithm by combining those ideas. We have two instances
of active arm elimination which we denote by F (fast) and S (slow). Each instance keeps an
estimate of the mean, µ̃F(a) and µ̃S(a), corresponding to the average empirical reward of that arm,
and also keeps track of how many times each arm was pulled in that instance, nF(a) and nS(a). This
allows us to define a notion of confidence interval in each of the instances. We define wdF(a) =
O(√(log(T)/nF(a))) as usual, and for the slow instance we define slightly larger confidence intervals:
wdS(a) = O(√(log(T)/nS(a)) + log(T)/nS(a)) (the reason will be clear in a moment). Also, each instance
keeps a set of eliminated arms for that instance: I^F and I^S.
In each round, with probability 1 − 1/C we make a move in the fast instance: we choose the next
active arm a in the round-robin order, i.e., the arm a ∈ [K] \ I^F that was played least often, pull this
arm, increase nF(a), and update µ̃F(a) accordingly. As usual, if there are two active arms a and
a′ such that µ̃F(a) − µ̃F(a′) > wdF(a) + wdF(a′), we eliminate a′ by adding it to I^F.
With the remaining probability we make a move in the slow instance by executing the exact same
procedure as described for the other instance. There is only one difference (which causes the two
instances to be coupled): when we inactivate an arm a in S we also eliminate it in F. This leaves us
with a potential problem: it is possible that all arms in the F instance end up being eliminated. If
we reach that point, we play an arbitrary active arm of the slow instance, i.e., any arm a ∈ [K] \ I S .
The resulting algorithm is formally provided in Algorithm 1.
Algorithm 1 Fast-Slow Active Arm Elimination Race for known corruption C
1: Initialize nℓ(a) = 0, µ̃ℓ(a) = 0, Iℓ = ∅ for all a ∈ [K] and ℓ ∈ {F, S}
2: For rounds t = 1..T
3:   Sample algorithm ℓ: ℓ = S with probability 1/C; else ℓ = F.
4:   If [K] \ Iℓ ≠ ∅
5:     Play arm a_t ← argmin_{a ∈ [K]\Iℓ} nℓ(a)
6:     Update µ̃ℓ(a_t) ← [nℓ(a_t)·µ̃ℓ(a_t) + r^t(a_t)] / [nℓ(a_t) + 1] and nℓ(a_t) ← nℓ(a_t) + 1
7:     While there exist arms a, a′ ∈ [K] \ Iℓ with µ̃ℓ(a) − µ̃ℓ(a′) > wdℓ(a) + wdℓ(a′)
8:       Eliminate a′ by adding it to Iℓ
9:       If ℓ = S, then also eliminate a′ from the other algorithm by adding it to I^F
10:  Else
11:    Play an arbitrary arm in the set [K] \ I^S.
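The following Python sketch (our own rendering of Algorithm 1, assuming a known corruption level C ≥ 1 and an externally supplied reward callback) shows the fast and slow instances, the sub-sampling of S with probability 1/C, and the one-way elimination coupling.

    import math, random

    def fast_slow_race(T, C, sample_reward, K, delta=0.01, seed=0):
        rng = random.Random(seed)
        n = {l: [0] * K for l in 'FS'}
        mu = {l: [0.0] * K for l in 'FS'}
        inactive = {l: set() for l in 'FS'}
        logt = math.log(8 * K * T / delta)
        def wd(l, a):
            if n[l][a] == 0:
                return float('inf')
            w = math.sqrt(logt / n[l][a])
            return w + 2 * logt / n[l][a] if l == 'S' else w    # slow instance gets the enlarged width
        for t in range(T):
            l = 'S' if rng.random() < 1.0 / C else 'F'
            active = [a for a in range(K) if a not in inactive[l]]
            if not active:                                       # all arms killed in F: fall back to S
                a = rng.choice([a for a in range(K) if a not in inactive['S']])
                sample_reward(t, a)
                continue
            a = min(active, key=lambda x: n[l][x])
            r = sample_reward(t, a)
            mu[l][a] = (n[l][a] * mu[l][a] + r) / (n[l][a] + 1)
            n[l][a] += 1
            best = max(active, key=lambda x: mu[l][x])
            for b in list(active):
                if b != best and mu[l][best] - mu[l][b] > wd(l, best) + wd(l, b):
                    inactive[l].add(b)
                    if l == 'S':                                 # elimination in S overrules F
                        inactive['F'].add(b)
        return mu, n, inactive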
Towards the performance guarantee, Lemma 3.3 bounds the amount of corruption that actually
enters the slow active arm elimination algorithm, which enables the regret guarantee in Theorem 2.
Lemma 3.3. In Algorithm 1, the slow active arm elimination algorithm S observes, with probability
at least 1 − δ, corruption of at most ln(1/δ ) + 3 during its exploration phase (when picked with
probability 1/C ).
Proof sketch. If one cared just about the expected corruption that affects S, this is at most a constant
number since the total corruption is at most C and it affects S with probability 1/C . To prove a
high-probability guarantee we require a concentration inequality on martingale differences (since
the corruptions can be adaptively selected by the adversary). We provide the details in Appendix
B.
Theorem 2. Algorithm 1 run with widths wdS(a) = √(log(8KT/δ)/nS(a)) + 2·log(8KT/δ)/nS(a) and wdF(a) = √(log(8KT/δ)/nF(a))
has regret O( Σ_{a≠a⋆} log(KT/δ)/∆(a) ) for the stochastic case and O( K · C · Σ_{a≠a⋆} (log(KT/δ))²/∆(a) ) for
the C-corrupted case with probability at least 1 − δ.
Proof sketch. The result for the stochastic case follows standard arguments for stochastic algorithms
(since we obtain double the regret of this setting as we run two such algorithms with essentially
the same confidence intervals). For the C-corrupted case, we establish via Lemma 3.3 an upper
bound on the corruption that will affect the slow active arm elimination algorithm S. Thanks to
the sub-sampling, this upper bound is close to a constant instead of depending on C, which allows
us to avoid incurring a dependence on C in the stochastic case. Having this upper bound, we can apply it to
the algorithm of the previous section to get an upper bound on the number of plays of suboptimal
arms in S. Since the algorithms are coupled, such a bound implies an upper bound on the regret
that it can cause in F as well. This is because in expectation the arm is played at most K · C times
more in F as it may be selected every single time in F prior to getting eliminated by S and F is
selected C times more often than S. To obtain the above guarantee with high probability, we lose
an extra logarithmic factor. The details of the proof are provided in Appendix B.
3.3
Stochastic bandits robust to agnostic corruption
Multiple layers of active arm elimination. In the previous subsection we designed an algorithm
with two layers: one is faster but cannot tolerate corruptions and the second one is slower but more
robust. In order to be agnostic to corruption, we need to plan for all possible amounts of corruption.
To achieve this, we introduce log(T ) layers. Each layer is slower but more robust than the previous
one. We achieve that by selecting the ℓ-th layer with probability proportional to 2−ℓ . By the
argument in the last section, if the corruption level is at most C, then each layer with ℓ ≥ log C
will observe O(1) corruption in expectation and at most O(log T ) corruption with high probability.
Global eliminations. We couple the log T instances through what we call global eliminations. If
arm a is eliminated by the ℓ-th layer, then we eliminate a in all layers ℓ′ ≤ ℓ. This is important to
prevent us from pulling arm a too often. If arm a is suboptimal and the adversary is C-corrupted,
then arm a eventually becomes eliminated in the ℓ⋆ = ⌈log C⌉ layer after being pulled Õ(1/∆(a)²) times in
that layer. Since layer ℓ⋆ is played with probability 2^{−ℓ⋆}, it takes Õ(C/∆(a)²) iterations until the
arm is eliminated globally, in which case we will have total regret at most Õ(C/∆(a)) from that arm.
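As a small numeric illustration (with made-up values of C, ∆ and T, and the logarithmic factors kept informal), the tolerant layer index and the resulting per-arm regret contribution can be computed as follows.

    import math

    def tolerant_layer_and_regret(C, gap, T):
        # first layer whose tolerance ~2^l exceeds the corruption budget C
        layer = max(1, math.ceil(math.log2(max(C, 2))))
        pulls_in_layer = math.log(T) / gap ** 2               # O~(1/gap^2) pulls inside layer l*
        rounds_until_global_kill = (2 ** layer) * pulls_in_layer
        return layer, gap * rounds_until_global_kill          # ~ O~(C/gap) regret from this arm

    print(tolerant_layer_and_regret(C=1000, gap=0.1, T=10**6))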
Multi-layer active arm elimination race. We now describe our main algorithm in the paper.
We call it a race since we view it as multiple layers racing to pick the optimal arm. The less robust
layers are faster so they arrive first and we keep choosing (mostly) according to them until more
robust but slower layers finish and correct or confirm the current selection of the best arm.
The algorithm keeps ℓ = 1 . . . log(T ) different instances of active arm elimination. The ℓ-th instance
has as state the empirical mean of each arm µ̃ℓ(a), the number nℓ(a) of times each arm a was
pulled, and the set Iℓ of inactive arms. The width of the confidence interval for arm a in the ℓ-th
layer is implicitly defined as wdℓ(a) = O(√(log(T)/nℓ(a)) + log(T)/nℓ(a)).
In each round t we sample ℓ ∈ {1, . . . , log T} with probability 2^{−ℓ} (with the remaining probability we
pick layer 1). When layer ℓ is selected, we make a move in the active arm elimination instance
corresponding to that layer: we sample the active arm in that layer with the least number of pulls,
i.e., the arm a ∈ [K] \ Iℓ minimizing nℓ(a). In case [K] \ Iℓ is empty, we pull an arbitrary arm from
[K] \ Iℓ′ for the lowest ℓ′ such that [K] \ Iℓ′ is non-empty.
The way we couple different layers is that once arm a′ is eliminated in layer ℓ because there is
another active arm a in layer ℓ such that µ̃ℓ(a) − µ̃ℓ(a′) > wdℓ(a) + wdℓ(a′), we eliminate arm a′ in
all previous layers, keeping the invariant that I¹ ⊇ I² ⊇ I³ ⊇ . . .
Figure 1 provides an example of the state of the algorithm, which is formally defined in Algorithm
2.
[Figure 1 here: a grid with layers ℓ = 1, . . . , lg T as rows and arms 1, . . . , d as columns; each cell shows the pair µ̃ℓ(a), nℓ(a).]
Figure 1: Example of the state of the algorithm: for each layer ℓ and arm a we keep the estimated
mean µ̃ℓ(a) and the number of pulls nℓ(a). Red cells indicate arms that have been eliminated in
that layer. If an arm is eliminated in a layer, it is eliminated in all previous layers. If a layer in which
all arms are eliminated (like layer 1 in the figure) is selected, we play an arbitrary active arm
from the lowest layer that contains active arms.
Algorithm 2 Multi-layer Active Arm Elimination Race
1: Initialize nℓ(a) = 0, µ̃ℓ(a) = 0, Iℓ = ∅ for all a ∈ [K] and ℓ ∈ [log T]
2: For rounds t = 1..T
3:   Sample layer ℓ ∈ [log T] with probability 2^{−ℓ}. With the remaining probability, sample ℓ = 1
4:   If [K] \ Iℓ ≠ ∅
5:     Play arm a_t ← argmin_{a ∈ [K]\Iℓ} nℓ(a)
6:     Update µ̃ℓ(a_t) ← [nℓ(a_t)·µ̃ℓ(a_t) + r^t(a_t)] / [nℓ(a_t) + 1] and nℓ(a_t) ← nℓ(a_t) + 1
7:     While there exist arms a, a′ ∈ [K] \ Iℓ with µ̃ℓ(a) − µ̃ℓ(a′) > wdℓ(a) + wdℓ(a′)
8:       Eliminate a′ by adding it to Iℓ′ for all ℓ′ ≤ ℓ
9:   Else
10:    Find the minimum ℓ′ such that [K] \ Iℓ′ ≠ ∅ and play an arbitrary arm in that set.
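The Python sketch below is our own approximation of Algorithm 2 (the geometric layer sampling only approximates the 2^{−ℓ} distribution, and constants are illustrative); it shows the per-layer statistics and the global eliminations propagating to all faster layers.

    import math, random

    def multilayer_race(T, sample_reward, K, delta=0.01, seed=0):
        rng = random.Random(seed)
        L = max(1, int(math.log2(T)))
        n = [[0] * K for _ in range(L + 1)]
        mu = [[0.0] * K for _ in range(L + 1)]
        inactive = [set() for _ in range(L + 1)]
        logt = math.log(4 * K * T * max(1, L) / delta)
        def wd(l, a):
            if n[l][a] == 0:
                return float('inf')
            return math.sqrt(logt / n[l][a]) + logt / n[l][a]
        for t in range(T):
            l = 1
            while l < L and rng.random() < 0.5:          # layer l chosen with probability roughly 2^-l
                l += 1
            active = [a for a in range(K) if a not in inactive[l]]
            if not active:                               # sampled layer fully eliminated:
                lo = next(j for j in range(1, L + 1)     # play an arm from the lowest layer
                          if any(a not in inactive[j] for a in range(K)))
                a = rng.choice([a for a in range(K) if a not in inactive[lo]])
                sample_reward(t, a)
                continue
            a = min(active, key=lambda x: n[l][x])
            r = sample_reward(t, a)
            mu[l][a] = (n[l][a] * mu[l][a] + r) / (n[l][a] + 1)
            n[l][a] += 1
            best = max(active, key=lambda x: mu[l][x])
            for b in list(active):
                if b != best and mu[l][best] - mu[l][b] > wd(l, best) + wd(l, b):
                    for j in range(1, l + 1):            # global elimination in all faster layers
                        inactive[j].add(b)
        return mu, n, inactive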
We now provide the main result of the paper, a regret guarantee for Algorithm 2.
Theorem 3. Algorithm 2, which is agnostic to the corruption level C, when run with widths
wdℓ(a) = √(log(4KT·log T/δ)/nℓ(a)) + log(4KT·log T/δ)/nℓ(a), has regret:
O( Σ_{a≠a⋆} (K · C · log(KT/δ) + log(T)) / ∆(a) · log(KT/δ) ).
Proof sketch. Similarly to the previous theorem, the regret guarantee comes from the summation
between layers that are essentially stochastic (where the corruption is below their corruption level,
i.e. less than C ≤ 2^r for layer r). From each of these layers, we incur O( log(KT/δ)/∆(a) ) regret. Since
there are at most log(T ) such layers, the second term in the theorem is derived. The challenge
is to bound the regret incurred by layers that are not robust to the corruption. However, there
exists some layer ℓ⋆ that is above the corruption level. By bounding the amount of steps that this
level will require in order to inactivate each arm a 6= a⋆ in the incorrect layers (via Lemma 3.2),
we obtain similarly to Theorem 2 a bound on the regret caused by this arm in those layers. Since
we take the minimum such layer and the tolerance of layers is within powers of 2, the fact that its
corruption level does not match exactly the corruption that occurred only costs an extra factor of
2 in the regret. The details of the proof are provided in Appendix C.
4
The lower bound
For the two arms case where the gap between the arms is ∆ > 0, Theorem 2 presents an algorithm which achieves O(log T/∆) pseudo-regret if the input is stochastic and O(C log(T/δ )/∆) with
probability 1 − δ if the input is at most C-corrupted. We show below that this dependence is tight.
The lower bound (Theorem 4) adapts the technique of Auer and Chiang [AC16] from the adversarial
to the corrupted setting. The main idea is that an algorithm with logarithmic regret in the stochastic
setting cannot query the sub-optimal arm more than log(T)/∆² times. This implies a long time period
where the learner queries the input only a constant number of times. By corrupting all rounds before
this period, an adversary can make the optimal arm look sub-optimal and trick the learner into
not pulling the optimal arm for a long time, causing large regret. Theorem 5 adapts this argument
bounding the expected positive regret E[Reg+ ] where x+ = max{x, 0}; the high probability bounds
provided imply bounds on the expected positive regret. Both proofs are provided in Appendix D.
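For illustration (our own sketch with an illustrative gap; not a verbatim construction from the proof), the corrupted instance used in the lower bound can be generated as follows: during the first C rounds the adversary makes arm 2 look at least as good, and afterwards the rewards are left stochastic with arm 1 optimal. Since only the first C rounds differ from the stochastic instance and rewards lie in [0, 1], the total injected corruption is at most C.

    import random

    def lower_bound_instance(T, C, gap=1.0 / 6, seed=0):
        # Yields per-round reward pairs (r(1), r(2)) for the two-arm construction:
        # means (1/2 - gap, 1/2) during the first C (corrupted) rounds,
        # and (1/2 + gap, 1/2) for the remaining (stochastic) rounds.
        rng = random.Random(seed)
        for t in range(T):
            mean1 = 0.5 - gap if t < C else 0.5 + gap
            r1 = 1.0 if rng.random() < mean1 else 0.0
            r2 = 1.0 if rng.random() < 0.5 else 0.0
            yield (r1, r2)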
Theorem 4. Consider a multi-armed bandits algorithm that has the property that for any stochastic
input in the two-arm setting, it has pseudo-regret bounded by c·log(T)/∆, where ∆ = |µ1 − µ2|.
For any ǫ, ǫ′ ∈ (0, 1), there is a corruption level C with T^ǫ < C < T^{ǫ′} and a C-corrupted instance
such that with constant probability the regret is Ω(C).
Theorem 5. Consider a multi-armed bandit algorithm with the property that, for any stochastic
input in the two-arm setting, it has pseudo-regret bounded by c·log^{1+α}(T)/∆ for some α < 1. For any
ǫ, ǫ′ ∈ (0, 1), there is a corruption level C with T^ǫ < C < T^{ǫ′} and a C-corrupted instance such that
E[Reg+] = Ω(T^{ǫ−δ}) for all δ > 0.
5
Extensions
In this section, we discuss some extensions that our algorithm can accommodate.
Definition of corruption. We presented all results measuring the corruption as the sum over all
rounds of the maximum across arms of the corruption injected by the adversary:
Σ_t max_a |r^t(a) − r^t_S(a)| ≤ C.
In fact, all our results can be improved by using C(a) = Σ_t |r^t(a) − r^t_S(a)| and replacing C by
max(C(a), C(a⋆)) in the summand for arm a. More formally, our main theorem (Theorem 3) becomes:
Theorem 6. Algorithm 2, which is agnostic to the corruptions C(a) = Σ_t |r^t(a) − r^t_S(a)|, when run
with widths wdℓ(a) = √(log(4KT·log T/δ)/nℓ(a)) + log(4KT·log T/δ)/nℓ(a), has regret:
O( Σ_{a≠a⋆} (K · max(C(a⋆), C(a)) · log(KT/δ) + log(T)) / ∆(a) · log(KT/δ) ).
The proof follows the same arguments since it only compares each arm a with a⋆ . This result is nice
since the contribution of each arm to the regret is a function only of its own gap and the corruption
injected to it and the one injected to arm a⋆ . The latter dependence on the corruption on the
optimal arm is essential since the main attack we presented to the classical arguments only corrupts
arm a⋆ – the lower bound of the previous section also only adds corruption to a⋆ .
Dependence on the gap. In Section 3, all our guarantees have an inverse dependence on the
gap ∆(a) of all arms a. Note that such a guarantee is completely meaningless for arms with a very
small gap; for instance, if there exist two optimal arms then there is an arm a ≠ a⋆ with ∆(a) = 0,
which makes the presented bound infinite and therefore vacuous. As we hinted there though, this
inverse dependence can be improved for arms with small ∆(a) ≤ 1/√T. Our proofs generally relied
on setting an upper bound on the number of times that a suboptimal arm is played and thereby
providing an upper bound on the regret they cause.
For arms with ∆(a) ≤ 1/√T, an alternative analysis is to say that, even if they are erroneously
selected every single time, we can upper bound the loss in performance they cause. For pseudo-regret,
the performance loss if they were selected every single time is ∆(a)·T ≤ √T ≤ log T/∆(a). For
actual regret, one needs to also take the variance into consideration but, even if they are selected
every single time, a Hoeffding bound shows that their total reward is with high probability at most
√T lower than its expectation. As a result, the inverse dependence on ∆(a) in our bound can be
replaced by min(∆(a) · T, 1/∆(a)) for pseudo-regret and min(√T, 1/∆(a)) for actual regret.
Moreover, the careful reader may have noticed that in Theorem 1, the dependence on C/∆(a) can be
replaced by a sole dependence on C without the gap. However, this does not extend to the subsequent theorems since the dependence on C there does not come from the upper bound on the
corruption experienced (this is at most log T due to subsampling). Instead, the dependence on C
comes from projecting the correct layer (smallest layer robust to corruption) to the previous layers
via the number of times it will take to eliminate any suboptimal arm.
Uncorrupted objective. In applications such as spam, the corruptions should not be counted as
part of the rewards. Our algorithm provides the same guarantee in the case of uncorrupted rewards
(the difference between the performances in the two objectives is at most C). One can also observe
that the linear dependence on C is still necessary: consider 2 arms with ∆ = 1 and an adversary
that corrupts the first C steps making them look identical. The learner has no better option than
randomly selecting between the two which gives him a regret of C/2 under the uncorrupted objective.
We note that, in this setting, the linear dependence is necessary unconditionally of the performance
of the algorithm in the stochastic setting.
Towards best of all worlds. In the previous section, we showed that a logarithmic dependence
in the stochastic setting comes at the expense of linear dependence on C in the C-corrupted setting
if we focus on actual regret. A very interesting direction is to achieve such an improvement with
either a higher power on the logarithm in the stochastic setting or aiming for pseudo-regret instead.
In fact, we can combine our algorithm with the SAPO algorithm of Auer and Chiang [AC16] and
achieve a bicriteria guarantee for pseudo-regret. For an a < 1/2 specified by the algorithm, we
achieve our guarantee if the corruption is C ≤ T^a, and at most T^{1/2+a} otherwise; notice that the
case a = 0 corresponds to the best of both worlds. This is done by running the SAPO algorithm
at level a·log(T) with probability T^{−a} instead of having higher layers. The SAPO algorithm
guarantees that the pseudo-regret caused by any particular arm is at most logarithmic if the instance
is stochastic and at most √T if it is adversarial, via a beautiful analysis that keeps negative regret of
time intervals that have performed well to avoid testing eliminated arms too often. In our setting, if
the corruption level is less than T^a, the instance behaves as stochastic, causing at most logarithmic
regret. Otherwise the instance is corrupted and we can extrapolate the regret in this layer to the whole
algorithm, as arms that are eliminated in this layer are also eliminated earlier via global eliminations.
Since the regret there is at most √T and this is multiplied by T^a, this implies a bound of T^{1/2+a} on
pseudo-regret.
Acknowledgements The authors would like to thank Sid Banerjee whose lecture notes on stochastic bandits proved very helpful, Andrés Munoz Medina, Karthik Sridharan, and Éva Tardos for
useful discussions, Manish Raghavan for suggestions on the writeup, and the anonymous reviewers
for the valuable feedback they provided that improved the presentation of the paper.
References
[AB09]
Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and
stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009.
[AC16]
Peter Auer and Chao-Kai Chiang. An algorithm with nearly optimal pseudo-regret for
both stochastic and adversarial bandits. In Proceedings of the 29th Annual Conference
on Learning Theory (COLT), 2016.
[ACBF02]
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235–256, May 2002.
[ACBFS03] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, January 2003.
[BCB12]
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning,
5(1):1–122, 2012.
[BCBL13]
Sébastien Bubeck, Nicolo Cesa-Bianchi, and Gábor Lugosi. Bandits with heavy tail.
IEEE Transactions on Information Theory, 59(11):7711–7717, 2013.
[BLL+ 11]
Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E. Schapire.
Contextual bandit algorithms with supervised learning guarantees. In Proceedings of
the 14th International Conference on Artificial Intelligence and Statistics (AISTATS),
2011.
[BS12]
Sébastien Bubeck and Aleksandrs Slivkins. The best of both worlds: Stochastic and
adversarial bandits. In Proceedings of the 25th Annual Conference on Learning Theory
(COLT), 2012.
[CBL06]
Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge
University Press, New York, NY, USA, 2006.
[CD17]
Yang Cai and Constantinos Daskalakis. Learning multi-item auctions with (or without)
samples. Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer
Science (FOCS), 2017.
[DKK+ 16] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and
Alistair Stewart. Robust estimators in high dimensions without the computational
intractability. In Proceedings of the 57th IEEE Annual Symposium on Foundations of
Computer Science (FOCS), 2016.
[EKM15]
Hossein Esfandiari, Nitish Korula, and Vahab S. Mirrokni. Online allocation with
traffic spikes: Mixing adversarial and stochastic models. In Proceedings of the 16th
ACM Conference on Economics and Computation (EC), 2015.
[EMM06]
Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping
conditions for the multi-armed bandit and reinforcement learning problems. Journal of
Machine Learning Research, 7:1079–1105, 2006.
[FLL+ 16]
Dylan J Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, and Eva Tardos.
Learning in games: Robustness of fast convergence. In Advances in Neural Information
Processing Systems (NIPS), 2016.
[GC11]
Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic
bandits and beyond. In 24th Annual Conference on Learning Theory (COLT), 2011.
[GUK18]
Pratik Gajane, Tanguy Urvoy, and Emilie Kaufmann. Corrupt bandits for privacy
preserving input. In 29th International Conference on Algorithmic Learning Theory
(ALT), 2018.
[HK09]
Elad Hazan and Satyen Kale. Better algorithms for benign bandits. In Proceedings of
the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2009.
[LR85]
T.L Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Adv.
Appl. Math., 6(1):4–22, March 1985.
[MGZ12]
Vahab S. Mirrokni, Shayan Oveis Gharan, and Morteza Zadimoghaddam. Simultaneous
approximations for adversarial and stochastic online budgeted allocation. In Proceedings
of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2012.
[MRT15]
Yishay Mansour, Aviad Rubinstein, and Moshe Tennenholtz. Robust probabilistic inference. In Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA), 2015.
[SL17]
Yevgeny Seldin and Gábor Lugosi. An improved parametrization and analysis of the
EXP3++ algorithm for stochastic and adversarial bandits. In Proceedings of the 30th
Conference on Learning Theory (COLT), 2017.
[Sli17]
Aleksandrs Slivkins. Introduction to multi-armed bandits. 2017.
[SS14]
Yevgeny Seldin and Aleksandrs Slivkins. One practical algorithm for both stochastic
and adversarial bandits. In Proceedings of the 31th International Conference on Machine
Learning (ICML), 2014.
[SS17]
Ohad Shamir and Liran Szlak. Online learning with local permutations and delayed
feedback. In Proceedings of the 34th International Conference on Machine Learning
(ICML), 2017.
[WL18]
Chen-Yu Wei and Haipeng Luo. More adaptive algorithms for adversarial bandits.
CoRR, abs/1801.03265, 2018.
A
Supplementary material on Section 3.1
In this section we provide the proof of Theorem 1. Note that in the lemma statements the width is
defined as in the theorem: wd(a) = √(log(2KT/δ)/n(a)) + C/n(a) for any arm a ≠ a⋆.
Lemma 3.1 (restated) With probability at least 1 − δ, arm a⋆ never becomes eliminated.
Proof. The crux of the proof lies in establishing that, with high probability, the upper bound of the
confidence interval of a⋆ never becomes lower than the lower bound of the confidence interval of any
other arm a and therefore a⋆ does not become eliminated.
More formally, let µ̃_S(a) and µ̃(a) be the empirical mean after n(a) samples of the stochastic part
of the rewards and the empirical mean after n(a) samples of the corrupted rewards, respectively.
Recall that µ(a) is the mean of arm a. By the Hoeffding inequality, for any arm a, with probability at
least 1 − δ′:
|µ̃_S(a) − µ(a)| ≤ √(log(2/δ′)/n(a)).    (1)
We set δ′ = δ/KT to establish that this holds for all arms and all time steps (after arm a has been
played n(a) times). As a result, for any arm a and any time: µ̃_S(a) ≤ µ(a) + √(log(2KT/δ)/n(a)) and
µ̃_S(a⋆) ≥ µ(a⋆) − √(log(2KT/δ)/n(a⋆)).
Comparing now the actual (corrupted) empirical means, they can be altered by at most absolute
corruption C. Hence µ̃(a) ≤ µ̃_S(a) + C/n(a) and µ̃(a⋆) ≥ µ̃_S(a⋆) − C/n(a⋆).
⋆) .
Combining the above inequalities with the fact that the actual mean of a⋆ is higher than the one of
a, i.e. µ(a⋆ ) ≥ µ(a), we establish that µ
e(a) − µ
e(a⋆ ) ≤ wd(a) + wd(a⋆ ) and therefore arm a⋆ is not
eliminated. Since this holds for all times and arms, the lemma follows.
Lemma 3.2 (restated) With probability at least 1 − δ, all arms a 6= a⋆ become eliminated after
2KT
N (a) = 36·log(∆(a)/2δ )+6C plays.
Proof. The proof stems from the following observations. By Lemma 3.1, arm a⋆ is with high
probability never eliminated. After N (a) rounds, with high probability, the lower confidence interval
of arm a⋆ is above the upper confidence interval of arm a. This comes from the fact that, after N (a)
plays of arm a (and also of arm a⋆ since it is not eliminated), the empirical stochastic mean of a⋆ is,
with high probability, at most ∆(a)/6 below its actual mean and similarly the empirical stochastic
mean of arm a is at most ∆(a)/6 above its actual mean. Since the corruptions are upper bounded by
C, they can only contribute to a decrease in the average empirical (corrupted) means by at most
∆(a)/6 which is not enough to circumvent the gap ∆(a).
More formally, let µ
eS (a) and µ
e(a) denote the empirical means of the stochastic part of the rewards
and the corrupted rewards respectively after N (a) plays of arm a. By the same Hoeffding inequality
as in the proof of the previous lemma, with probability at least 1 − δ, it holds that |e
µS (a) − µ(a)| ≤
q
log(2KT/δ)
N (a) . Therefore, with the same probability,
µ
eS (a⋆ ) − µ(a⋆ ) ≤ ∆(a)
eS (a) ≤ ∆(a)
6 and µ(a) − µ
6 .
15
after
36·log(2KT/δ )
∆(a)2
plays for both arm a and a⋆ :
e(a) ≤ µ
eS (a) + NC(a) . By
The absolute corruption is at most C therefore µ
e(a⋆ ) ≥ µ
eS (a⋆ ) − NC(a) and µ
the choice of N (a), we have
C
N (a)
≤
∆(a)
6 .
Combining with the above argument, this also implies
that the widths are upper bounded by wd(a) ≤
∆(a)
3
and wd(a⋆ ) ≤
∆(a)
3 .
Combining the above with the fact that the actual mean of a⋆ is ∆(a) higher than the one of a, i.e.
µ(a⋆ ) − µ(a) = ∆(a), we establish
C
− wd(a) − wd(a⋆ )
N (a)
∆(a) ∆(a) ∆(a)
−
−
>0
> µ(a⋆ ) − µ(a) − 2 ·
6
3
3
µ
e(a⋆ ) − µ
e(a) − wd(a) − wd(a⋆ ) ≥ µ
eS (a⋆ ) − µ
eS (a) − 2 ·
As a result arm a becomes eliminated after N (a) plays if it is not already eliminated before.
Theorem 1 (restated)
If C is a valid upper bound for the total
corruption then arm elimination
q
P
log(2KT/δ)
log(KT/δ )+C
C
with wd(a) =
with probability 1 − δ.
+ n(a) has regret O
a6=a⋆
n(a)
∆(a)
Proof. The proof follows the classical stochastic bandit argument of measuring the regret caused
by each arm a 6= a⋆ as a function of its gap ∆(a) and the number of times N (a) it is played as
established by Lemma 3.2.
For simplicity of presentation, we first provide the pseudo-regret guarantee. Pseudo-regret compares
the expected performance of the algorithm to the expected performance one would have had, had
they selected a⋆ throughout the whole time horizon. The expected performance when one uses a⋆
is µ(a⋆ ). The loss compared to that every time a 6= a⋆ is used instead is equal to its gap ∆(a).
As a result, the expected contribution to pseudo-regret from suboptimal arm a 6= a⋆ is equal to
N (a) · ∆(a). Lemma 3.2 establishes that with probability 1 − δ any suboptimal arm a is played at
most N(a) = (36·log(2KT/δ) + 6C)/∆(a)² times. Each play of the suboptimal arm causes pseudo-regret of ∆(a).
Multiplying the times by the expected regret per time the guarantee (which equals to the gap) and
setting the failure probability δ to be some inverse polynomial of the time horizon T to ensure that
the expected regret due to the bad event is at most a constant leads to the pseudo-regret guarantee.
To turn the above into a high-probability guarantee, we need to show that the regret incurred during
the steps that we pull arm a is not significantly higher than the expectation (therefore bounding the
resulting variance). By the Hoeffding inequality
of Lemma 3.1, the empirical cumulative reward of
arm a is, with high probability, at most √(N(a)·log(2KT/δ)) less than its expectation. The same holds
for arm a⋆ for these steps (its realized performance is at most this much more than its expectation).
The probability that these statements do not hold for some arm or some time is at most δ.
Regarding arms a ≠ a⋆, the √(N(a)·log(2KT/δ)) term can be upper bounded by O(N(a)·∆(a)) by the
definition of N(a):
√(N(a)·log(2KT/δ)) ≤ N(a)·√(log(2KT/δ)/N(a)) ≤ N(a)·∆(a)·√(log(2KT/δ)/(36·log(2KT/δ) + 6C)) ≤ N(a)·∆(a)
Regarding arm a⋆ , let a′ be the arm with the smallest gap. By Lemma 3.1, a⋆ never gets eliminated
but it is not necessarily the ex post optimal arm. In fact some other arm with ∆(a) ≤ 1/√T may
be the ex post optimal arm (arms with higher gap are, with high probability, not the ex post optimal
arm by an analogous argument as in Lemma 3.2). However, by the same argument as above, arm a⋆
is with high probability at most N (a′ ) · ∆(a′ ) below its expectation and the ex post optimal arm is
at most this much above its expectation. This gives a bound of N (a′ )∆(a′ ) that is caused by the
case where a⋆ is not the ex post optimal arm.
Therefore the actual regret from times that arm a is played is at most 2N (a)∆(a) where the one
term comes from the expectation and the other from the aforementioned bounds on the variance.
The corruption can increase any cumulative reward by at most C which is already existing in the
regret bound. Replacing N (a) by Lemma 3.2, we obtain the high-probability guarantee. Note
that the failure probabilities of the two lemmas are coupled as they correspond to the same bad
events.
B
Supplementary material on Section 3.2
In this section, we provide the proof of Theorem 2. To handle the corruption, we bound with
high probability the total corruption experienced by the slow active arm elimination instance S
(Lemma 3.3). To deal with an adaptive adversary, we need a martingale concentration inequality;
specifically we apply a Bernstein-style inequality introduced in [BLL+ 11] (Lemma B.1).
Lemma B.1 (Lemma 1 in [BLL+ 11]). Let X1 , . . . , XT be a sequence of real-valued random numbers.
Assume, for all t, that Xt ≤ R and that E[Xt |X1 , . . . , Xt−1 ] = 0. Also let
V = Σ_{t=1}^{T} E[ X_t² | X_1, . . . , X_{t−1} ].
Then, for any δ > 0:
P[ Σ_{t=1}^{T} X_t > R·ln(1/δ) + (e − 2)·V/R ] ≤ δ
Lemma 3.3 (restated) In Algorithm 1, the slow active arm elimination algorithm S observes,
with probability at least 1 − δ, corruption of at most ln(1/δ ) + 3 during its exploration phase (when
picked with probability 1/C ).
Proof. The first observation is that the expected corruption encountered by algorithm S is at most
a constant (total corruption of C encountered with probability 1/C ). The rest of the proof focuses
on bounding the variance of this random variable (actual corruption encountered by the layer).
Crucially, since we want to allow the adversary to be adaptive, we should not assume independence
across rounds but only conditional independence (conditioned on the history) and this is why some
more involved concentration inequality is necessary. Therefore we create a martingale sequence
(actual corruption minus expected corruption) and apply a Bernstein-style concentration inequality.
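As an illustrative Monte Carlo check of this argument (our own sketch, not from the paper; the uniform spreading of the budget is just one convenient adversary choice), one can simulate a total corruption budget C spread over the horizon and measure how much of it a layer sampled with probability 1/C actually observes.

    import random

    def observed_corruption(T, C, trials=1000, seed=0):
        # Empirical check: a layer picked with probability 1/C sees only O(log) corruption
        # even though the adversary injects a total of C over the horizon.
        rng = random.Random(seed)
        per_round = C / T                      # adversary spreads its budget uniformly
        observed = []
        for _ in range(trials):
            seen = sum(per_round for t in range(T) if rng.random() < 1.0 / C)
            observed.append(seen)
        observed.sort()
        return observed[len(observed) // 2], observed[-10]   # median and ~99th percentile

    print(observed_corruption(T=10000, C=100))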
Let Z^t_a be the corruption that is observed by the exploration phase of the algorithm if arm a
is selected. For every round t, if the adversary selects corruption C^t_a then Z^t_a is a random
variable equal to C^t_a with probability 1/C and 0 otherwise. Given that the adversary is adaptive
and may select the corruptions based on the realizations of the previous rounds, we need to use an
appropriate concentration inequality. We use a Bernstein-style inequality, introduced in [BLL+ 11]
(Lemma B.1). Initially we resolve the randomness conditioning on ℓ = S (the slow algorithm is
selected). Since active arm elimination is deterministic, conditioned on selecting algorithm S, the
selected arm is deterministic. Let a(S, t) be the arm that would be selected if ℓ = S (which happens
with probability 1/C ). The martingale sequence is now
X_t = Z^t_{a(S,t)} − E[ Z^t_{a(S,t)} | H(1 : t − 1) ]
where H(1 : t) corresponds to the history up to round t. Note that
E[ X_t² | X_1, . . . , X_{t−1} ] = (1/C)·(C^t_{a(S,t)} − C^t_{a(S,t)}/C)² + ((C − 1)/C)·(C^t_{a(S,t)}/C)²
= ((C − 1)²/C³)·(C^t_{a(S,t)})² + ((C − 1)/C³)·(C^t_{a(S,t)})² = ((C − 1)/C²)·(C^t_{a(S,t)})² ≤ 2·C^t_{a(S,t)}/C.
The last inequality holds as C^t_{a(S,t)} ∈ [0, 1] and C^t_{a(S,t)} ≤ C by the definition of C.
Therefore, summing over all the rounds,
V = Σ_t E[ X_t² | X_1, . . . , X_{t−1} ] ≤ (2/C)·Σ_t C^t_{a(S,t)} ≤ (2/C)·Σ_t max_a C^t_a ≤ 2.
A trivial upper bound of |Xt | is R = 1, since the rewards are in [0, 1]. Applying Lemma B.1, we
show that, w.p. 1 − δ:
Σ_t X_t ≤ ln(1/δ) + 2(e − 2) ≤ ln(1/δ) + 2
The lemma then follows by adding the expected corruption E[ Σ_t Z^t_{a(S,t)} | H(1 : t − 1) ] ≤ 1, thereby
obtaining the bound of the statement on the corruption experienced:
Σ_t Z^t_{a(S,t)} = Σ_t X_t + Σ_t E[ Z^t_{a(S,t)} | H(1 : t − 1) ] ≤ ln(1/δ) + 3.
Theorem 2 (restated) Algorithm 1 run with widths wd^S(a) = √(log(8KT/δ)/n^S(a)) + 2·log(8KT/δ)/n^S(a) and
wd^F(a) = √(log(8KT/δ)/n^F(a)) has regret O( Σ_{a≠a⋆} log(KT/δ)/∆(a) ) for the stochastic case and
O( K · C · Σ_{a≠a⋆} (log(KT/δ))²/∆(a) ) for the C-corrupted case with probability at least 1 − δ.
Proof. For the stochastic case, the bound follows via standard stochastic bandit arguments (similarly
to the proof of Theorem 1 with C = 0): for each of the two active arm elimination algorithms
we incur, with probability 1 − δ_{ℓ,S}, regret O( Σ_{a≠a⋆} log(2KT/δ_{ℓ,S})/∆(a) ), where δ_{ℓ,S} = δ/4 is the failure
probability of inequality (1), which governs the results in Lemmas 3.1 and 3.2, for each of ℓ ∈ {F, S}.
The most interesting case is the C-corrupted setting. Let δS,C = δ/4 be the failure probability in
Lemma 3.3. By Lemma 3.3, with probability at least 1 − δS,C , the actual corruption experienced by
the slow active arm elimination algorithm is at most ln(1/δ) + 3 which is less than 2 log(2KT/δ ) for
non-trivial values of K and T . Therefore we can apply the analysis of Theorem 1 with corruption
level at least 2 log(2KT/δS,C ) and get a handle on the actual regret coming from the slow active arm
elimination algorithm.
What is left is to bound the regret coming from the fast active arm elimination algorithm. Towards
this goal, we bound the number of times that a suboptimal arm is played in the fast active arm
elimination by the expected time that it remains active at the slow active arm elimination. By
Lemma 3.2, arm a is played in the slow active arm elimination, with probability at least 1 − 3δ/4, at
most
N_S(a) = (16·log(2KT/δ_{S,S}) + 2·log(2KT/δ_{S,C}))/∆(a)² ≤ 18·log(8KT/δ)/∆(a)².
Having a bound on the number of plays of the arm in the slow active arm elimination instance, we
use this to bound the number of plays in the fast active arm elimination instance. In expectation,
this is at most K · C · NS (a) times as every move in the slow active arm elimination occurs with
probability 1/C and, at least 1/K of these moves are plays of a while it is still active. Since every
time arm a is played it incurs pseudo-regret ∆(a), this provides the pseudo-regret guarantee.
To obtain a high probability guarantee, let δm = δ/4KT and observe that with probability at least
1 − δm , we make one move at the slow arm elimination algorithm every O(C log(1/δm )) moves at the
fast arm elimination algorithm. This can be seen by thinking the following process: One tosses coins
with bias p = 1/C until she observes heads for the first time (heads is the p-biased event). After M
tosses of the coin, the probability that no heads have arrived is at most (1 − p)^M. To ensure that
this is less than δ_m, we need to wait M ≥ log(1/δ_m)/log(1/(1 − p)), which is achieved by M = log(1/δ_m)/(p/(1 − p)).
By union bound on the failure probabilities for each of those draws, we get that with failure probability δe = K · NS (a) · δm ≤ δ/4 (since NS (a) ≤ T as it is at most the time horizon), arm a gets
inactivated in F after
N_F(a) = K · N_S(a) · C · log(1/δ_e) = 18 · C · K · (log(8KT/δ))²/∆(a)².
The last part is to prove that the regret experienced throughout those rounds is not too large. This
follows by the two applications of Hoeffding inequality as before for arms a and a⋆ , analogously to
Theorem 1. Combining the above arguments the theorem follows. The total failure probability of
the guarantee is δS,S + δS,C + δF,S + δe ≤ δ.
C
Supplementary material on Section 3.3
Theorem 3 (restated) Algorithm 2, which is agnostic to the corruption level C, when run with
widths wdℓ(a) = √(log(4KT·log T/δ)/nℓ(a)) + log(4KT·log T/δ)/nℓ(a), has regret:
O( Σ_{a≠a⋆} (K · C · log(KT/δ) + log(T)) / ∆(a) · log(KT/δ) ).
Proof. The proof follows similar arguments to the proof of Theorem 2. Specifically, for the layers
that are above the corruption level C, by using the standard arguments described in Theorem
1, we establish an O( log(2KT/δ_{ℓ,S})/∆(a) ) bound on the regret caused by any suboptimal arm a, with failure
probability δ_{ℓ,S} = δ/(2 log T). Since there are log(T) such levels, the regret coming from these layers is
upper bounded by the second term of the theorem with failure probability δ/2.
For the layers ℓ that are not tolerant to the corruption, i.e. 2^ℓ < C, we apply the same argument
as in the proof of Theorem 2 and bound their regret via the number of times they are played before
being eliminated by the minimum layer that is robust to corruption, ℓ⋆ = argmin_ℓ {2^ℓ > C}. Similarly
as in the proof of that theorem, we upper bound the number of plays N_{ℓ⋆}(a) of each suboptimal arm a
at this layer (by exactly the same arguments), then bound the number of plays in the suboptimal layers
via the same coin-toss process as in that proof and, last, bound the regret they incur during this part. Since we
do not know the amount of corruption in advance (and this amount is adaptively selected), we also
need to take a union bound on the number of layers so that the guarantee on Nℓ⋆ (a) holds for all
layers simultaneously if they end up being correct; we therefore repeat the arguments in Theorem
2 with δℓ,C = δ/2 log T and δm ≤ δ/2KT log T .
Last, we note that, since we used powers of 2 to increase the corruption among layers, the fact that
we did not apply the arguments of Theorem 2 with the exact C but instead used a C ′ such that
C < C ′ < 2C causes just an extra constant factor on the regret.
D
Supplementary material on Section 4
Theorem 4 (restated) Consider a multi-armed bandits algorithm that has the property that for
any stochastic input in the two-arm setting, it has pseudo-regret bounded by c·log(T)/∆, where
∆ = |µ1 − µ2|. For any ǫ, ǫ′ ∈ (0, 1), there is a corruption level C with T^ǫ < C < T^{ǫ′} and a
C-corrupted instance such that with constant probability the regret is Ω(C).
Proof. The proof follows a sequence of steps.
Step 1: Analyze behavior in the stochastic case. Fix a constant ∆ ≤ 1/6 and observe how the
algorithm behaves for the stochastic input that has Bernoulli arms with means (µ1, µ2) = (1/2 − ∆, 1/2).
Since in that setting the expected regret is the same as ∆ · E[T1], where T1 is the number of pulls of
arm 1, it follows that E[T1] ≤ c·log(T)/∆².
Step 2: find a large interval that is hit with at most constant probability. We divide the
space between T^ǫ and T^{ǫ′} into O(log(T)/(ǫ′ − ǫ)) intervals I_i = [3^{i−1}·T^ǫ, 3^i·T^ǫ) such that the size of each
interval is twice the size of all the previous intervals combined. For each interval i, let T_{1,i} be the
number of times that arm 1 is pulled in the i-th interval. Then, there exists an interval I_i = [C, 3C)
such that E[T_{1,i}] ≤ c̃ := O(1/[(ǫ′ − ǫ)·∆²]).
Step 3: create an adversary that forces a lot of regret in interval i. The adversary is quite
simple: for the first C steps, the arms are Bernoulli with means (1/2 − ∆, 1/2) and for the remaining
timesteps, the arms are Bernoulli with means (1/2 + ∆, 1/2).
We use E and P to refer to the probability law when inputs are drawn with respect to (1/2 − ∆, 1/2) in
all timesteps, and E′ and P′ to refer to the probability law when the input is according to (1/2 − ∆, 1/2)
in the first C steps and according to (1/2 + ∆, 1/2) onwards.
Step 4: With constant probability arm 1 is pulled a constant number of times in Ii
under both P and P′ . Under the probability law P, this follows directly from Markov’s inequality:
c̃ ≥ E[T_{1,i}] ≥ 2c̃ · P[T_{1,i} ≥ 2c̃], so P[T_{1,i} ≤ 2c̃] ≥ 1/2.
Denote by A the event that T1,i ≤ 2c̃. We want to argue that P′ [A] is also constant. In order to
do that, let Z = (Z1 , Z2 , . . . , Z2c̃ ) be a vector storing in Zs the reward of arm 1 in the s-th time
it is pulled in interval Ii . Notice that in both the stochastic and corrupted scenarios if the learner
observes the same values of Z she acts the exact same way. Therefore, if we condition on Z, the
probability that she ends up pulling arm 1 for more than 2c̃ times is exactly the same. In other
words:
P[A|Z] = P′ [A|Z]
Therefore:
P′[A] = Σ_z P′[Z = z] · P[A|Z = z] ≥ ((1/2 − ∆)/(1/2 + ∆))^{2c̃} · Σ_z P[Z = z] · P[A|Z = z] ≥ (1/2) · ((1/2 − ∆)/(1/2 + ∆))^{2c̃},
which is a constant.
Step 5: concentration bounds for the regret incurred in each interval. We now define an
event B that occurs with probability 1 − o(1) that captures all the concentration bounds we need
for the proof. First, we require arm 1 to be the optimal arm. Let r^t(i) be the reward of arm i in
time step t. We know that E′[Σ_t r^t(1)] = T/2 + ∆(T − 2C) and E′[Σ_t r^t(2)] = T/2. Since all the
rewards are independent, we can use the Hoeffding bound to bound the probability P′(Σ_t ∆_t < 0),
where ∆_t = r^t(2) − r^t(1):
P′( Σ_t r^t(1) < Σ_t r^t(2) ) ≤ P′( Σ_t ∆_t − E′[Σ_t ∆_t] > ∆(T − 2C) ) ≤ 2·exp( −(T∆²/2)·(1 − 2C/T)² ) = o(1)
Now we establish some concentration on the regret that the learner achieves with respect to arm 1
in the intervals [1, C), [C, 3C) and [3C, T ). We note that if the learner pulls arm 1, she does not
incur any regret. If she pulls arm 2, she incurs regret ∆t = r t (2) − r t (1) which can be positive or
negative. To compute regret with respect to arm 1 in each of those intervals, we sample ∆t every
time that the arm 2 is pulled.
Step 5a: interval [1, C). In this interval, E′[∆_t] = −∆, so E′[Σ_{t∈[1,C)} ∆_t] = −C∆. If Y is the number
of times arm 2 is pulled, then the regret is given by Σ_{s=1}^{Y} ∆_s, where in the previous expression we
abuse notation and mean by ∆_s the regret in the s-th time that arm 2 is pulled instead of the regret
in the t-th period. Therefore:
P′( Σ_{s=1}^{Y} ∆_s < −1.1∆C ) ≤ P′( min_{t≤C} Σ_{s=1}^{t} ∆_s < −1.1∆C ) ≤ Σ_{t=1}^{C} P′( Σ_{s=1}^{t} ∆_s < −1.1∆C )
We then use the Hoeffding bound in the last expression and get:
Σ_{t=1}^{C} 2·exp( −(1/2)·(1.1∆C − ∆t)²/t ) ≤ C·exp( −(1/2)·(0.1∆)²·C ) = o(1)
Step 5b: interval [C, 3C). In this interval, E′ ∆t = ∆, so using the same bound as before, we get
P′( Σ_{t∈[C,3C)} ∆_t < 1.9∆C ) ≤ 2·exp( −C·(0.1∆)² ) = o(1)
Step 5c: interval [3C, T ] In this interval, pulling arm 2 has again positive expected regret. We
use the same technique used in 5a to argue that she cannot obtain large negative regret with high
probability: Let Y be the number of times arm 2 is pulled in that interval and again we abuse
notation and let ∆s be the difference in rewards in the s-th time the arm is pulled. Then:
P′( Σ_{s=1}^{Y} ∆_s < −log T ) ≤ P′( min_{t≤T} Σ_{s=1}^{t} ∆_s < −log T ) ≤ Σ_{t=1}^{T} P′( Σ_{s=1}^{t} ∆_s < −log T )
For t = 1, . . . , log T this probability is zero, since ∆_t ≥ −1. Now, for larger t, we can use the standard
Chernoff bound:
Σ_{t=log T}^{T} P′( Σ_{s=1}^{t} ∆_s < −log T ) ≤ 2T·exp( −(1/2)·log²(T)·∆ ) = o(1)
Now that all the concentration bounds have been established, we define the event B to be the event where
all those concentration bounds hold. More precisely, B is the event where the following four things
happen: (a) empirically arm 1 is better than arm 2; (b) in interval [1, C), the regret of the learner
is at least −1.1∆C; (c) in interval [C, 3C), the difference between the total rewards of both arms
is at least 1.9∆C; and (d) the regret of the learner in interval [3C, T ] is at least − log T . By the
discussion in step 5, we know that P′ (B) = 1 − o(1).
Step 6: putting it all together. Since P′ (A) = Ω(1) and P′ (B) = 1 − o(1), then by the union
bound, P′ (A and B) ≥ P′ (A) − o(1) = Ω(1). Now, we need to argue that in the constant probability
event (A and B), the regret of the learner is at least Ω(C).
We simply sum the regret of the learner in each of the intervals. For intervals [1, C) and [3C, T ]
we can use the bounds computed in steps 5a and 5c directly. For interval [C, 3C), we note that
conditioned on A, the learner probes arm 1 a constant number of times, so her total regret differs
from the regret of pulling arm 2 in all iterations by at most a constant; therefore the total regret
can be bounded by:
−1.1∆C + (1.9∆C − 4c̃) − log(T ) = Ω(∆C)
We can adapt the argument to provide a bound on the expected positive regret E[Reg+ ] where
x+ = max{x, 0}. Note that the high probability bounds provided also imply a bound on the expected positive regret.
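To make the construction tangible, here is a small simulation sketch (ours, not an experiment from the paper; the horizon, corruption length, gap and the UCB1 learner are all illustrative assumptions): arm 1 has mean 1/2 − ∆ during the corrupted prefix of length C and 1/2 + ∆ afterwards, arm 2 always has mean 1/2, and we track the learner's regret against arm 1.

import numpy as np

def simulate_corrupted_instance(T=20000, C=2000, delta=0.1, seed=0):
    # Illustrative simulation: arm 1 has mean 1/2 - delta for the first C
    # rounds (corruption), then 1/2 + delta; arm 2 always has mean 1/2.
    rng = np.random.default_rng(seed)
    counts = np.zeros(2)
    means = np.zeros(2)
    regret_vs_arm1 = 0.0
    for t in range(1, T + 1):
        p1 = 0.5 - delta if t <= C else 0.5 + delta
        p = np.array([p1, 0.5])
        if t <= 2:
            arm = t - 1                      # pull each arm once to initialize
        else:
            ucb = means + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))        # UCB1 index policy
        r = rng.binomial(1, p)               # draw both rewards for bookkeeping
        counts[arm] += 1
        means[arm] += (r[arm] - means[arm]) / counts[arm]
        regret_vs_arm1 += r[0] - r[arm]      # realized regret measured against arm 1
    return regret_vs_arm1, counts

if __name__ == "__main__":
    reg, counts = simulate_corrupted_instance()
    print("regret vs arm 1:", reg, "pull counts:", counts)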
Theorem 5. Consider a multi-armed bandits algorithm with the property that, for any stochastic
input in the two-arm setting, its pseudo-regret is bounded by c log^{1+α}(T)/∆ for some α < 1. Then for any
ǫ, ǫ′ ∈ (0, 1), there is a corruption level C with T^ǫ < C < T^{ǫ′} and a C-corrupted instance such that
E[Reg+ ] = Ω(T^{ǫ−δ}) for all δ > 0.
Proof. Modify the proof of Theorem 4 as follows. Define c̃ = O(logα (T )/[(ǫ′ − ǫ)∆2 ]) and again
select an interval such that E[Ti,1 ] ≤ c̃. Event A is defined in the same way. By Markov’s inequality:
P[A] ≥ 1/2 and
P′[A] ≥ (1/2) · ( (1/2 − ∆)/(1/2 + ∆) )^{2c̃} = exp(−O(log^α (T ))).
Step 5 remains unchanged and in step 6 note that 1 − P′ (B) ≪ P′ (A) since α < 1, so P′ (A and B) =
exp(−O(logα (T ))). Therefore, with probability at least exp(−O(logα (T ))) the regret is at least
Ω(C) = Ω(T ǫ ) and therefore, E[Reg+ ] = Ω(T ǫ · exp(−O(logα (T )))) = Ω(T ǫ−δ ) for all δ > 0.
| 8 |
Representation Learning and Recovery in the ReLU Model
arXiv:1803.04304v1 [stat.ML] 12 Mar 2018
Arya Mazumdar1 and Ankit Singh Rawat1,2
1 College
of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA
01003, USA.
2 Research Laboratory of Electronics, MIT, Cambridge, MA 02139, USA.
E-mail: [email protected], [email protected].
March 13, 2018
Abstract
Rectified linear units, or ReLUs, have become the preferred activation function for artificial neural
networks. In this paper we consider two basic learning problems assuming that the underlying data
follow a generative model based on a ReLU-network – a neural network with ReLU activations. As a
primarily theoretical study, we limit ourselves to a single-layer network. The first problem we study
corresponds to dictionary-learning in the presence of nonlinearity (modeled by the ReLU functions).
Given a set of observation vectors yi ∈ Rd , i = 1, 2, . . . , n, we aim to recover d × k matrix A and the
latent vectors {ci } ⊂ Rk under the model yi = ReLU(Aci + b), where b ∈ Rd is a random bias. We
show that it is possible to recover the column space of A within an error of O(d) (in Frobenius norm)
under certain conditions on the probability distribution of b.
The second problem we consider is that of robust recovery of the signal in the presence of outliers,
i.e., large but sparse noise. In this setting we are interested in recovering the latent vector c from its
noisy nonlinear sketches of the form v = ReLU(Ac) + e + w, where e ∈ Rd denotes the outliers with
sparsity s and w ∈ Rd denotes the dense but small noise. This line of work has recently been studied
(Soltanolkotabi, 2017) without the presence of outliers. For this problem, we show that a generalized
LASSO algorithm is able to recover the signal c ∈ Rk within an ℓ2 error of O(√((k + s) log d / d)) when A is a
random Gaussian matrix.
1 Introduction
Rectified Linear Unit (ReLU) is a basic nonlinear function defined to be ReLU : R → R+ ∪ {0} as ReLU(x) ≡
max(0, x). For any matrix X , ReLU(X ) denotes the matrix obtained by applying the ReLU function on each
of the coordinates of the matrix X . ReLUs are building blocks of many nonlinear data-fitting problems
based on deep neural networks (see, e.g., Soltanolkotabi [2017] for a good exposition).
Let Y ⊂ Rd be a collection of message vectors that are of interest to us. Depending on the application
at hand, the message vectors, i.e., the constituents of Y, may range from images, speech signals, network
access patterns to user-item rating vectors and so on. We assume that the message vectors satisfy a generative model, where each message vector can be approximated by a map д : Rk → Rd from the latent
space to the ambient space, i.e., for each y ∈ Y,
y ≈ д(c) for some c ∈ Rk . (1)
Motivated by the recent results on developing the generative models for various real-life signals (see e.g.,
Goodfellow et al. [2014], Kingma and Welling [2014], Bora et al. [2017]), the non-linear maps д that take
the following form warrant special attention.
д(c) = h (Al h(Al−1 · · · h(A1 c) · · · ))) ,
(2)
i.e., д is the function corresponding to an l-layer neural network with the activation function h. Here, for
i ∈ [l], Ai ∈ Rdi ×di −1 with d0 = k and dl = d, denotes the weight matrix for the i-th layer of the network. In
the special case, where the activation function h is the ReLU function, the message vectors of the interest
satisfy the following.
y ≈ ReLU (Al ReLU(Al−1 · · · ReLU(A1 c + b1 ) · · · + bl−1 ) + bl )) ,
(3)
where, for i ∈ [l], bi ∈ Rdi denotes the biases of the neurons (or output units) at the i-th layer of the
network.
The specific generative model in (3) raises multiple interesting questions that play fundamental role
in understanding the underlying data and designing systems and algorithms for processing the data. Two
such most basic questions are as follows:
1. Learning the representation: Given the observations {y1 , y1 , . . . , yn } ⊂ Rd from the model (cf. (3)),
recover the parameters of the model, i.e., {Ât }t ∈[l] , and {c1 , c2 , . . . , cn } ⊂ Rk such that
yi ≈ ReLU Âl ReLU(Âl−1 · · · ReLU(Â1 ci + b1 ) · · · + bl−1 ) + bl ) ∀ i ∈ [n].
(4)
Note that this question is different from training the model, in which case the set {c1 , c2, . . . , cn } is
known (and possibly chosen accordingly).
2. Recovery of the signal in the presence of errors: Given the erroneous (noisy) version of a vector
generated by the model (cf. (3)), denoise the observation or recover the latent vector. Formally, given
v = y + e + w = ReLU (Al ReLU(Al−1 · · · ReLU(A1 c + b1 ) · · · + bl−1 ) + bl )) + e + w
(5)
and the knowledge of model parameters, obtain ŷ ∈ Rd or ĉ ∈ Rk such that ky − ŷk or kc − ĉk is
small, respectively. In (5), e and w correspond to outliers, i.e., large but sparse errors, and (dense but
small) noise, respectively.
Apart from being closely related, one of our main motivations behind studying these two problems together
comes from the recent work on associative memory Karbasi et al. [2014], Mazumdar and Rawat [2015,
2017]. An associative memory consists of a learning phase, where a generative model is learned from a
given dataset; and a recovery phase, where given a noisy version of a data point generated by the generative
model, the noise-free version is recovered with the help of the knowledge of the generative model.
There has been a recent surge of interest in learning ReLUs, and the above two questions are of basic interest even for a single-layer network (i.e., nonlinearity comprising a single ReLU function). It
is conceivable that understanding the behavior of a single-layer network would allow one to use some
‘iterative peeling off’ technique to develop a theory for multiple layers. In Goel et al. [2017], the problem
of recovering ReLU-model under Reliable Agnostic learning model of Kalai et al. [2012] is considered. Informally speaking, under very general distributional assumptions (the rows of A are sampled from some
distribution), given A and y = ReLU(Ac), Goel et al. [2017] propose an algorithm that recovers a hypothesis which has an error-rate (under some natural loss function defined therein) of ϵ with respect to the true
underlying ReLU-model. Moreover, the algorithm runs in time polynomial in d and exponential in 1/ϵ. As
opposed to this, given A and the corresponding output of the ReLU-network y = ReLU(Ac + b), we focus
on the problem of recovering c itself. Here, we note that the model considered in Goel et al. [2017]
does not consider the presence of outliers.
Soltanolkotabi [2017] obtained further results on this model under somewhat different learning guarantees. Assuming that the entries of the matrix A to be i.i.d. Gaussian, Soltanolkotabi [2017] show that
with high probability a gradient descent algorithm recovers c within some precision in terms of ℓ2 -loss:
the relative error decays exponentially with the number of steps in the gradient descent algorithm. The obtained result is more general as it extends to constrained optimizations in the presence of some regularizers
(for example, c can be restricted to be a sparse vector, etc.).
However both of these works do not consider the presence of outliers (sparse but large noise) in the
observation. The sparse noise is quite natural to assume, as many times only partial observations of a
signal vector are obtained. The ReLU model with outliers as considered in this paper can be thought of
as a nonlinear version of the problem of recovering c from linear observations of the form v = Ac + e,
with e denoting the outliers. This problem with linear observations was studied in the celebrated work
of Candès and Tao [2005]. We note that the technique of Candès and Tao [2005] does not extend to the
case when there is a dense (but bounded) noise component present. Our result in this case is a natural
generalization and complementary to the one in Soltanolkotabi [2017] in that 1) we present a recovery
method which is robust to outliers and 2) instead of analyzing gradient descent we directly analyze the
performance of the minimizer of our optimization program (a generalized LASSO) using the ideas from
Plan and Vershynin [2016], Nguyen and Tran [2013].
On the other hand, to the best of our knowledge, the representation learning problem for single-layer
networks has not been studied as such. The representation learning problem for single-layer ReLUs bears
some similarity with matrix completion problems, a fact we greatly exploit later. In low rank matrix completion, a matrix M = AX is visible only partially, and the task is to recover the unknown entries by
exploiting the fact that it is low rank. In the case of (4), we are more likely to observe the positive entries
of the matrix M, which, unlike a majority of matrix completion literature, creates the dependence between
the matrix M and the sampling procedure.
Main result for representation learning. We assume to have observed d × n matrix Y = ReLU(AC +
b ⊗ 1T ) where A is a d × k matrix, C is a k × n matrix, both unknown, b ∈ Rd is a random i.i.d. bias,
and ⊗ denotes the Kronecker product1 . We show that a relaxed maximum-likelihood method guarantees
the recovery of the matrix AC with an error in Frobenius norm at most O(√d) with high probability (see
Theorem 3 for the formal statement). Then, leveraging a known result for recovering the column space of a
perturbed matrix (see Theorem 5 in the appendix), we show that it is possible to also recover the column
space of A with a similar guarantee.
The main technique that we use to obtain this result is inspired by the recent work on matrix completion
by Davenport et al. [2014]. One of the main challenges that we face in recovery here is that while an
entry of the matrix Y is a random variable (since b is a random bias), whether that is being observed
or being cut-off by the ReLU function (for being negative) depends on the value of the entry itself. In
general matrix completion literature, the entries of the matrix being observed are sampled i.i.d. (see, for
example, Candès and Recht [2009], Keshavan et al. [2010], Chatterjee [2015] and references therein). For
the aforementioned reason we cannot use most of these results off-the-shelf. However, similar predicament
is (partially) present in Davenport et al. [2014], where entries are quantized while being observed.
Similar to Davenport et al. [2014], the tools that prove helpful in this situation are the symmetrization
trick and the contraction inequality Ledoux and Talagrand [2013]. However, there are crucial differences
between our model and that of Davenport et al. [2014]: in our case the bias vector, while random, does not change over
1 This
is to ensure that the bias is random, but does not change over different observation of the data samples.
observations. This translates to less freedom during the transformation of the original matrix to the observed matrix, leading to dependence among the elements in a row. Furthermore, the analysis becomes
notably different since the positive observations are not quantized.
Main result for noisy recovery. We plan to recover c ∈ Rk from observations v = ReLU(Ac + b) + e + w,
where A is a d × k standard i.i.d. Gaussian matrix, e ∈ Rd is the vector containing outliers (sparse noise)
with kek0 ≤ s, and w ∈ Rd is bounded dense noise such that kwk∞ ≤ δ . To recover c (and e) we employ
the LASSO algorithm, which is inspired by the work of Plan and Vershynin [2016] and Nguyen and Tran
[2013]. In particular, Plan and Vershynin [2016] recently showed that a signal can be provably recovered
(up to a constant multiple) from its nonlinear Gaussian measurements via the LASSO algorithm by treating the measurements as linear observations. In the context of ReLU model, for outlier-free measurements
v = ReLU(Ac + b) + w, it follows from Plan and Vershynin [2016] that LASSO algorithm outputs µc as the
solution with µ = E(д · ReLU(д + b)), where д is a Gaussian random variable and b is a random variable
denoting bias associated with the ReLU function.
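For intuition about the scaling factor µ (an illustration we add here, not part of the original text), µ = E[д · ReLU(д + b)] can be estimated by simple Monte Carlo for any bias law; e.g., with zero bias it equals E[д²; д > 0] = 1/2.

import numpy as np

def estimate_mu(bias_sampler, n=1_000_000, seed=0):
    # Monte Carlo estimate of mu = E[g * ReLU(g + b)],
    # with g standard Gaussian and b drawn from bias_sampler.
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(n)
    b = bias_sampler(rng, n)
    return np.mean(g * np.maximum(g + b, 0.0))

# Example: zero bias recovers mu = E[g^2 ; g > 0] = 0.5.
print(estimate_mu(lambda rng, n: np.zeros(n)))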
We show that this approach guarantees, with high probability, recovery of c within an ℓ2 error of O(√((k + s) log d / d)) even when the measurements are corrupted by
outliers e. This is achieved by jointly minimizing the square loss over (c, e) after treating our measurements
as linear measurements v ′ = Ac + e + w and adding an ℓ1 regularizer to the loss function to promote the
sparsity of the solution for e (we also recover e, see Theorem 4 for a formal description).
Organization. The paper is organized as follows. In section 2, we describe some of the notations used
throughout the paper and introduce some technical tools that are useful to prove our main
results. In the same section (subsection 2.3), we provide the formal models of the problem we are studying.
In section 3, we provide detailed proofs for our main results on the representation learning problem (see,
Theorem 3). Section 4 contains the proofs and the techniques used for the recovery problem in the presence
of outliers (see, Theorem 4).
2 Notations and Technical Tools
2.1
Notation
For any positive integer n, define [n] ≡ {1, 2, . . . , n}. Given a matrix M ∈ Rd×n , for (i, j) ∈ [d] × [n], Mi, j
denotes the (i, j)-th entry of M. For i ∈ [d], mi = (Mi,1 , Mi,2 , . . . , Mi,n )T denotes the vector containing the
elements of the i-th row of the matrix M. Similarly, for j ∈ [n], mj = (M 1, j , M 2, j , . . . , Md, j )T denotes the
j-th column of the matrix M. Recall that the function ReLU : R → R+ ∪ {0} takes the following form.
ReLU(x) = max(x, 0).
(6)
For a matrix X ∈ Rd×n , we use ReLU(X ) to denote the d ×n matrix obtained by applying the ReLU function
on each of the entries of the matrix X . For two matrix A and B, we use A ⊗ B to represent the Kronecker
product of A and B.
Given a matrix P ∈ Rd×n ,
kP kF = √( Σ_{(i, j) ∈ [d]×[n]} P²_{i, j} )
denotes its Frobenius norm. Also, let kP k denote the ℓ2 operator norm of P, i.e. the maximum singular value of P. We let kP k∗ denote the nuclear norm of P. Similar to Davenport et al. [2014], we define a flatness parameter associated with a function f : R → [0, 1]:
β_α (f ) := inf_{|x| ≤ α} |f ′(x)|² / ( 4 |f (x)| ). (7)
β_α quantifies how flat f can be in the interval [−α, α]. We also define a Lipschitz parameter L_α (f ) for a function f as follows:
L_α (f ) := max{ sup_{|x| ≤ α} f (x) / ∫_{−∞}^{x} f (y) dy , sup_{|x| ≤ α} f ′(x) / f (x) }. (8)
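As an illustration (ours; the shifted Gaussian bias density and the value of α below are arbitrary choices), the two quantities in (7) and (8) can be approximated numerically on a grid directly from their definitions.

import numpy as np

def flatness_and_lipschitz(pdf, alpha, num=2001):
    # Numerically approximate beta_alpha(f) and L_alpha(f) from (7) and (8)
    # for a density f given as a vectorized callable pdf(x).
    x = np.linspace(-alpha, alpha, num)
    f = pdf(x)
    fp = np.gradient(f, x)                          # f'(x) on the grid
    beta = np.min(fp**2 / (4.0 * f))                # inf |f'|^2 / (4 f) over [-alpha, alpha]
    # cumulative integral F(x) = int_{-inf}^x f(y) dy: tail mass below -alpha
    # (wide quadrature grid) plus a running trapezoid sum on [-alpha, x].
    tail_grid = np.linspace(-50.0, -alpha, 20001)
    tail = np.trapz(pdf(tail_grid), tail_grid)
    F = tail + np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
    L = max(np.max(f / F), np.max(np.abs(fp) / f))  # the two suprema in (8)
    return beta, L

# Example with a shifted Gaussian density as the bias law.
gauss = lambda x: np.exp(-0.5 * (x - 2.0)**2) / np.sqrt(2 * np.pi)
print(flatness_and_lipschitz(gauss, alpha=1.0))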
2.2 Techniques to bound the supremum of an empirical process
In the course of this paper, namely in the representation learning part, we use the key tools of symmetrization and contraction to bound the supremum of an empirical process following the lead of Davenport et al.
[2014] and the analysis of generalization bounds in the statistical learning literature. In particular, we need
the following two statements.
Theorem 1 (Symmetrization of expectation). Let X 1 , X 2 , . . . , X n be n independent RVs taking values in
X and F be a class of R-valued functions on X. Furthermore, let ε 1 , ε 2 , . . . , εn be n independent Rademacher
RVs. Then, for any t ≥ 1,
E[ sup_{f ∈ F} | Σ_{i=1}^{n} ( f (X i ) − E f (X i ) ) |^t ] ≤ 2^t E[ sup_{f ∈ F} | Σ_{i=1}^{n} εi f (X i ) |^t ]. (9)
Theorem 2 (Contraction inequality Ledoux and Talagrand [2013]). Let ε 1 , ε 2 , . . . , εd be d independent
Rademacher RVs and f : R+ → R+ be a convex and increasing function. Let ζi : R → R be L-Lipschitz functions, i.e.,
|ζi (a) − ζi (b)| ≤ L|a − b|,
which satisfy ζi (0) = 0. Then, for any T ⊆ Rn ,
E f( (1/2) sup_{t ∈ T} | Σ_{i=1}^{d} εi ζi (ti ) | ) ≤ E f( L · sup_{t ∈ T} | Σ_{i=1}^{d} εi ti | ). (10)
2.3 System Model
We focus on the problems of learning the representation and recovery of the signal in the presence of errors when the signal is assumed to be generated using a single layer ReLU-network. The models of learning
representations and recovery is described below.
Model for learning representations. We assume that a signal vector of interest y satisfies
y = ReLU(Ac + b),
(11)
where A ∈ Rd×k and b ∈ Rd correspond to the weight (generator) matrix and the bias vector, respectively.
As for the problem of representation learning, we are given n message vectors that are generated from
the underlying single-layer model. For j ∈ [n], the j-th signal vector is defined as follows.
yj = ReLU( Acj + b ) ∈ Rd . (12)
We define the d × n observation matrix
Y = [ y1 y2 · · · yn ]. (13)
Similarly, we define the k × n coefficient matrix
C = [ c1 c2 · · · cn ]. (14)
With this notation, we can concisely represent the n observation vectors as
Y = ReLU( AC + b ⊗ 1T ) = ReLU( M + b ⊗ 1T ), (15)
where 1 ∈ Rn denotes the all-ones vector.
We assume that the bias vector b is a random vector comprising i.i.d. coordinates, each coordinate being a copy of a random variable B distributed according to the probability density function p(·).
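A minimal data-generation sketch for the model in (15) (ours; the dimensions are arbitrary), with a single bias vector b shared across all n observations, may help fix ideas.

import numpy as np

def generate_relu_observations(d=100, k=5, n=200, seed=0):
    # Y = ReLU(A C + b 1^T): the bias b is drawn once and shared
    # across all n observation vectors, as assumed in the model.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, k))
    C = rng.standard_normal((k, n))
    b = rng.standard_normal((d, 1))          # one bias per output coordinate
    M = A @ C
    Y = np.maximum(M + b, 0.0)               # broadcasting implements b ⊗ 1^T
    return Y, M, A, C, b

Y, M, A, C, b = generate_relu_observations()
print(Y.shape, np.mean(Y > 0))               # fraction of positive ("observed") entries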
Model for recovery. For the recovery problem, we are given a vector v ∈ Rd , which is obtained by adding
noise to a valid signal vector y ∈ Rd that is well modeled by a single-layer ReLU-network, with the matrix
A ∈ Rd×k and bias b ∈ Rd . In particular, for some c ∈ Rk we have,
v = y + e + w = ReLU(Ac + b) + e + w,
(16)
where w ∈ Rd denotes the (dense) noise vector with bounded norm. On the other hand, the vector e ∈ Rd
contains (potentially) large corruptions, also referred to as sparse errors or outliers (we assume, kek0 ≤ s).
The robust recovery problem in ReLU-networks corresponds to obtaining an estimate ĉ of the true latent
vector c from the corrupt observation vector v such that the distance between ĉ and c is small. A related
problem of denoising in the presence of outliers only focuses on obtaining an estimate ŷ which is close
to the true signal vector y. For this part, we focus on the setting where the weight matrix A is a random
matrix with i.i.d. entries, where each entry of the matrix is distributed according to standard Gaussian
distribution. Furthermore, another crucial assumption is that the Hamming error is oblivious in its nature,
i.e., the error vector is not picked in an adversarial manner given the knowledge of A and c2 .
3 Representation learning in a single-layer ReLU-network
In the paper, we employ the natural approach to learn the underlying weight matrix A from the observation
matrix Y . As the network maps a lower dimensional coefficient vector c ∈ Rk to obtain a signal vector
y = ReLU(Ac + b)
(17)
in dimension d > k, the matrix M = AC (cf. (15)) is a low-rank matrix as long as k < min{d, n}. In our
quest of recovering the weight matrix A, we first focus on estimating the matrix M, when given access
to Y . This task can be viewed as estimating a low-rank matrix from its partial (randomized) observations.
Our work is inspired by the recent work of Davenport et al. [2014] on 1-bit matrix completion. However,
as we describe later, the crucial difference of our model from the model of Davenport et al. [2014] is that
the bias vector b does not change over observations in our case. Nonetheless we describe the model and
main results of 1-bit matrix completion below to underscore the key ideas.
1-bit matrix completion. In Davenport et al. [2014], the following observation model is considered.
Given a low-rank matrix M and a differentiable function ψ : R → [0, 1], the matrix Z ∈ {0, 1}d×n is
2 It
is an interesting problem to extend our results to a setting with adversarial errors. However, we note that this problem is
an active area of research even in the case of linear measurement, i.e, y = Ac + e + w Bhatia et al. [2015, 2017]. We plan to explore
this problem in future work.
assumed to be generated as follows3 .
Z_{i, j} = 1 with probability ψ(M_{i, j}), and Z_{i, j} = 0 with probability 1 − ψ(M_{i, j}). (18)
Furthermore, one has access to only those entries of Z that are indexed by the set Ω ⊂ [d] × [n], where the set Ω is generated by including each (i, j) ∈ [d] × [n] with a certain probability p. Given the observations Z_Ω , the likelihood function associated with a matrix X ∈ Rd×n takes the following form4 .
L_{Ω,Z}(X ) = Σ_{(i, j)∈Ω} [ 1_{Z_{i, j}=1} log ψ(X_{i, j}) + 1_{Z_{i, j}=0} log( 1 − ψ(X_{i, j}) ) ]. (19)
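For comparison with our setting, a short sketch (ours) of the 1-bit observation model (18), taking the logistic function as an example choice of ψ and sampling Ω i.i.d. with probability p.

import numpy as np

def one_bit_observations(M, p=0.3, seed=0):
    # Z_{ij} = 1 with probability psi(M_{ij}), 0 otherwise; each (i, j) is
    # included in Omega independently with probability p (cf. (18)).
    rng = np.random.default_rng(seed)
    psi = 1.0 / (1.0 + np.exp(-M))            # example: logistic link
    Z = (rng.random(M.shape) < psi).astype(int)
    Omega = rng.random(M.shape) < p           # boolean mask of observed entries
    return Z, Omega

M = np.random.default_rng(1).standard_normal((50, 80))
Z, Omega = one_bit_observations(M)
print(Z[Omega].mean())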
Now, in order to estimate the matrix M with bounded entries from Z Ω , it is natural to maximize the
log-likelihood function (cf. (19)) under the constraint that the matrix M has rank k.
maximize_{X ∈ Rd×n} L_{Ω,Z}(X )
subject to rank(X ) = r , (20)
kX k∞ ≤ γ ,
where the last constraint is introduced to model the setting where (w.h.p.) the observations are assumed to
have bounded coordinates. We note that such assumptions indeed hold in many observations of interests,
such as images. Note that the formulation in (20) is clearly non-convex due to the rank constraint. Thus,
Davenport et al. [2014] propose the following program.
maximize_{X ∈ Rd×n} L_{Ω,Z}(X )
subject to kX k∗ ≤ α√(rmn), (21)
kX k∞ ≤ γ .
Note that the constraint kX k∗ ≤ α√(rmn) is a convex relaxation of the non-convex constraint rank(X ) ≤ r , which is required to ensure that the program in (21) outputs a low-rank matrix. Let M̂ be the output of the program in (21). Davenport et al. [2014] obtain the following result to characterize the quality of the obtained solution M̂.
Proposition 1 ([Davenport et al., 2014, Theorem A.1]). Assume that kM k∗ ≤ α√(rmn) and kM k∞ ≤ γ . Let Z be as defined in (18). Then, for absolute constants C1 and C2 , with probability at least 1 − C1/(d + n), the solution M̂ of (21) satisfies the following:
(1/dn) kM − M̂ k²F ≤ C2 α √( r(m + n) / E(|Ω|) ) · ( 1 + √( (m + n) log(mn) / E(|Ω|) ) ), (22)
where the constant C 2 depends on the flatness and steepness of the function ψ .
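To give a feel for how a program like (21) can be attacked in practice, here is a minimal projected-gradient sketch (our illustration, not the algorithm of Davenport et al.): a gradient step on the observed-entry log-likelihood with the logistic link, followed by projecting the singular values onto the ℓ1-ball of radius τ (equivalently, the nuclear-norm ball) and clipping the entries to [−γ, γ]. Alternating the two projections is only a heuristic for their intersection.

import numpy as np

def project_l1(v, tau):
    # Euclidean projection of a nonnegative vector v onto {u >= 0 : sum(u) <= tau}.
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient_1bit(Z, Omega, tau, gamma, steps=200, lr=1.0):
    # Heuristic solver for a program like (21) with the logistic link psi:
    # ascend the observed-entry log-likelihood, then project the singular
    # values onto the l1-ball (nuclear-norm ball) and clip entrywise.
    X = np.zeros_like(Z, dtype=float)
    for _ in range(steps):
        psi = 1.0 / (1.0 + np.exp(-X))
        grad = Omega * np.where(Z == 1, 1.0 - psi, -psi)   # d/dX of the log-likelihood
        X = X + lr * grad
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * project_l1(s, tau)) @ Vt
        X = np.clip(X, -gamma, gamma)
    return X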
Learning in a single layer ReLU and 1-bit matrix completion: Main differences. Note that the problem of estimating the matrix M from Y is related to the problem of 1-bit matrix completion as defined above.
3 The authors assume that the entries of Z take values in the set {+1, −1}. In this paper we state the equivalent model where the binary alphabet is {0, 1}.
4 Throughout this paper, log represents the natural logarithm.
Similar to the 1-bit matrix completion setup, the observation matrix Y is obtained by transforming
the original matrix M in a probabilistic manner, which is dictated by the underlying distribution of the
bias vector b. In particular, we get to observe the entire observation matrix, i.e., Ω = [d] × [n].
However, there is a key difference between these two aforementioned setups. The 1-bit matrix completion setup studied in Davenport et al. [2014] (in fact, most of the literature on non-linear matrix completion, e.g., Ganti et al. [2015]) assumes that each entry of the original matrix M is independently transformed
to obtain the observation matrix Y . In contrast to this, such independence is absent in the single-layer
ReLU-network. In particular, for i ∈ [d], the i-th row of the matrix Y is obtained from the corresponding
row of M by utilizing the shared randomness defined by the bias bi . Note that the bias associated with
a coordinate of the observed vector in our generative model should not vary across observation vectors.
This prevents us from applying the known results to the problem of estimating M from Y . However, as we
show in the remainder of this paper that the well-behaved nature of the ReLU-function allows us to deal
with the dependence across the entries of a row in Z and obtain the recovery guarantees that are similar
to those described in Proposition 1.
3.1
Representation learning from rectified observations
We now focus on the task of recovering the matrix M from the observation matrix Y . Recall that, under
the single-layer ReLU-network, the observation matrix Y depends on the matrix M as follows.
Y = ReLU(M + b ⊗ 1T ).
(23)
For i ∈ [d], we define NY (i) ⊆ [n] as the set of positive coordinates of the i-th row of the matrix Y , i.e.,
NY (i) = {j ∈ [n] : Yi, j > 0} and NY,i = |NY (i)|.
(24)
Note that, for i ∈ [d], the original matrix M needs to satisfy the following requirements.
Mi, j + bi = Yi, j for j ∈ NY (i), (25)
and
Mi, j + bi < 0 for j ∈ N̄Y (i) := [n]\NY (i). (26)
Given the original matrix M, for i ∈ [d] and j ∈ [n], let Mi,(j) denote the j-th largest element of the i-th
row of M, i.e., for i ∈ [d],
Mi,(1) ≥ Mi,(2) ≥ · · · ≥ Mi,(n) .
It is straightforward to verify from (25) that NY (i) denotes the indices of the NY,i largest entries of the i-th row of M. Furthermore, whenever NY,i = si ∈ [n], we have
bi = Yi,(1) − Mi,(1) = · · · = Yi,(si ) − Mi,(si ) . (27)
Similarly, it follows from (26) that whenever we have NY,i = 0, then bi satisfies the following.
bi ∈ (−∞, − max_{j ∈[n]} Mi, j ) = (−∞, −Mi,(1) ). (28)
Based on these observations, we define the set of matrices XY,ν,γ ⊂ Rd×n as
XY,ν,γ = { X : kX k∞ ≤ γ ; Yi,(1) − X i,(1) = · · · = Yi,(si ) − X i,(si ) ; and X i,(si ) ≥ max_{j ∈ N̄Y (i)} X i, j + ν ∀i ∈ [d] }. (29)
Recall that p : R → R denotes the probability density function of each bias RV. We can then write the likelihood that a matrix X ∈ XY,ν,γ results in the observation matrix Y as follows.
P(Y |X ) = Π_{i ∈[d]} P(yi |xi ), (30)
where, for i ∈ [d],
P(yi |xi ) = 1_{NY ,i =0} · P( bi ≤ − max_{j ∈[n]} X i, j ) + Σ_{s=1}^{n} 1_{NY ,i =s} · p( bi = Yi,(s) − X i,(s) ). (31)
By using the notation F (x 1 , x 2 ) = P(−x 1 ≤ B ≤ −x 2 ) and X i∗ = maxj ∈[n] X i, j , we can rewrite (31) as follows.
P(yi |xi ) = 1_{NY ,i =0} · F (∞, X i∗ ) + Σ_{s=1}^{n} 1_{NY ,i =s} · p( bi = Yi,(s) − X i,(s) ). (32)
Therefore the log-likelihood of observing Y given that X ∈ XY,ν,γ is the original matrix takes the following
form.
LY (X ) = Σ_{i ∈[d]} log P(yi |xi )
= Σ_{i ∈[d]} [ 1_{NY ,i =0} · log F (∞, X i∗ ) + Σ_{s=1}^{n} 1_{NY ,i =s} · log p( Yi,(s) − X i,(s) ) ]. (33)
In what follows, we work with a slightly modified quantity
L̄Y (X ) = LY (X ) − LY (0) = Σ_{i ∈[d]} [ 1_{NY ,i =0} · log ( F (∞, X i∗ ) / F (∞, 0) ) + Σ_{s=1}^{n} 1_{NY ,i =s} · log ( p( Yi,(s) − X i,(s) ) / p( Yi,(s) ) ) ].
In order to recover the matrix M from the observation matrix Y , we employ the natural maximum likelihood approach, which is equivalent to the following.
maximize_{X ∈ Rd×n} L̄Y (X ) subject to X ∈ XY,ν,γ . (34)
Define ωp,γ ,ν to be such that F (x, y) ≥ ωp,γ ,ν for all x, y ∈ [−γ , γ ] with |x − y| > ν. In what follows,
we simply refer to this quantity as ωp since γ and ν are clear from the context. The following result characterizes the performance of the program proposed in (34).
Theorem 3. Assume that kM k∞ ≤ γ and that the observation matrix Y is related to M according to (15). Let M̂ be the solution of the program specified in (34), and let the bias density function p(x) be differentiable with bounded derivative. Then, the following holds with probability at least 1 − 1/(d + n):
kM − M̂ k²F ≤ C0 Lγ (p) · γd / ( βγ (p) ωp ), (35)
where C0 is a constant. The quantities βγ (p) and Lγ (p) depend on the distribution of the bias and are defined in (7) and (8), respectively.
The proof of Theorem 3 crucially depends on the following lemma.
Lemma 1. Given the observation matrix Y which is related to the matrix M according to (15), let XY,ν,γ be as defined in (29). Then, for any X ∈ XY,ν,γ , we have
E[ L̄Y (M) − L̄Y (X ) ] ≥ βγ (p) ωp · kM − X k²F . (36)
The proof of this lemma is delegated to the appendix. Now we are ready to prove Theorem 3.
Proof of Theorem 3. Let M̂ be the solution of the program in (34). In what follows, we use X as a shorthand notation for XY,ν,γ . We have
0 ≤ L̄Y (M̂) − L̄Y (M) = E[ L̄Y (M̂) − L̄Y (M) ] − ( L̄Y (M) − E[ L̄Y (M) ] ) + ( L̄Y (M̂) − E[ L̄Y (M̂) ] )
≤ E[ L̄Y (M̂) − L̄Y (M) ] + 2 sup_{X ∈ X} | L̄Y (X ) − E[ L̄Y (X ) ] |, (37)
which means
E[ L̄Y (M) − L̄Y (M̂) ] ≤ 2 sup_{X ∈ X} | L̄Y (X ) − E[ L̄Y (X ) ] |. (38)
We now employ Lemma 1 to obtain that
βγ (p) ωp · kM − M̂ k²F ≤ 2 sup_{X ∈ X} | L̄Y (X ) − E[ L̄Y (X ) ] |. (39)
We now proceed to upper bound the right hand side of (39). It follows from the standard symmetrization trick Devroye et al. [2013] that, for any integer t ≥ 1, we have
E[ sup_{X ∈ X} | L̄Y (X ) − E L̄Y (X ) |^t ] ≤ 2^t · E[ sup_{X ∈ X} | Σ_{i=1}^{d} εi · ( 1_{NY ,i =0} · log ( F (∞, X i∗ ) / F (∞, 0) ) + Σ_{s=1}^{n} 1_{NY ,i =s} · log ( p( Yi,(s) − X i,(s) ) / p( Yi,(s) ) ) ) |^t ], (40)
where {εi }i ∈[d] are i.i.d. Rademacher random variables. Note that, for x, x̃ ∈ R,
| log F (∞, x) − log F (∞, x̃) | ≤ |x − x̃ | · sup_{|u| ≤ γ} | d( log F (∞, u) ) / du | = |x − x̃ | · sup_{|u| ≤ γ} | d( log ∫_{−∞}^{−u} p(y) dy ) / du | ≤ |x − x̃ | · Lγ (p),
and
| log p( Yi,(s) − x ) − log p( Yi,(s) − x̃ ) | ≤ |x − x̃ | · sup_{|u| ≤ γ} | d( log p(u) ) / du | ≤ |x − x̃ | · Lγ (p).
At this point, we can combine the contraction principle with (40) to obtain the following.
E[ sup_{X ∈ X} | L̄Y (X ) − E L̄Y (X ) |^t ]
≤ 2^t · 2^t · E[ (Lγ (p))^t · sup_{X ∈ X} | Σ_{i=1}^{d} εi · ( 1_{NY ,i =0} · X i∗ + Σ_{s=1}^{n} 1_{NY ,i =s} · X i,(s) ) |^t ]
≤(i) 4^t · E[ (Lγ (p))^t · sup_{X ∈ X} ( Σ_{i=1}^{d} εi² )^{t/2} ( Σ_{i=1}^{d} ( 1_{NY ,i =0} · X i∗ + Σ_{s=1}^{n} 1_{NY ,i =s} · X i,(s) )² )^{t/2} ]
≤(ii) 4^t · E[ (Lγ (p))^t · d^{t/2} ( dγ² )^{t/2} ] = ( 4 Lγ (p) · dγ )^t , (41)
where (i) and (ii) follow from the Cauchy-Schwartz inequality and the fact that, for X ∈ X, kX k∞ ≤ γ ,
respectively. Now using Markov’s inequality, it follows from (41) that
P( sup_{X ∈ X} | L̄Y (X ) − E L̄Y (X ) | ≥ C0 Lγ (p) · γd ) ≤ E[ sup_{X ∈ X} | L̄Y (X ) − E L̄Y (X ) |^t ] / ( C0 Lγ (p) · γd )^t ≤(i) (4/C0)^t ≤(ii) 1/(d + n), (42)
where (i) follows from (41); and (ii) follows by setting C0 ≥ 4e and t = log(d + n).
3.2 Recovering the network parameters
As established in Theorem 3, the program proposed in (34) recovers a matrix M̂ ∈ XY,ν,γ such that
kM − M̂ kF ≤ √( C0 · Lγ (p) · γ / ( βγ (p) ωp ) ) · √d . (43)
Let us denote the recovered matrix M̂ as M̂ = M + E, where E denotes the perturbation matrix that has bounded Frobenius norm (cf. (43)). Now the task of recovering the parameters of the single-layer ReLU-network is equivalent to solving for A given
M̂ = M + E = AC + E. (44)
In our setting where we have A ∈ Rd×k and C ∈ Rk×n with d > k and n > k, M is a low-rank matrix with its column space spanned by the columns of A. Therefore, as long as the generative model ensures that the matrix M has its singular values sufficiently bounded away from 0, we can resort to standard results from matrix-perturbation theory and output the top k left singular vectors of M̂ as a candidate for the orthonormal basis for the column space of M or A. In particular, we can employ the result from Yu et al. [2015] which is stated in Appendix A. Let U and Û be the top k left singular vectors of M and M̂, respectively. Note that, even without the perturbation we could only hope to recover the column space of A (or the column space of U ) and not the exact matrix A. Let σk , the smallest non-zero singular value of M, be at least δ > 0. Then, it follows from Theorem 5 (cf. Appendix A) and (43) that there exists an orthogonal matrix O ∈ Rk×k such that
kU − ÛO kF ≤ 2^{3/2} (2σ1 + kE k) · min{ √k kE k, kE kF } / δ² ≤ 2^{3/2} (2σ1 + kE kF ) · kE kF / δ² , (45)
which is a guarantee that the column space of U is recovered within an error of O(d) in Frobenius norm by the column space of Û.
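The following sketch (ours; dimensions and noise level are arbitrary) illustrates this recovery step numerically: perturb M = AC, take the top-k left singular vectors of M̂, and measure the subspace error k sin Θ(U, Û)kF through the canonical angles.

import numpy as np

def column_space_error(A, C, noise_scale=0.1, seed=0):
    # Build M = A C, perturb it, and compare the top-k left singular
    # subspace of M-hat with the column space of A.
    rng = np.random.default_rng(seed)
    d, k = A.shape
    M = A @ C
    E = noise_scale * rng.standard_normal(M.shape)
    Uhat = np.linalg.svd(M + E, full_matrices=False)[0][:, :k]
    U = np.linalg.qr(A)[0]                        # orthonormal basis of col(A) = col(M)
    # ||sin Theta||_F: canonical angles come from the singular values of U^T Uhat.
    cosines = np.clip(np.linalg.svd(U.T @ Uhat, compute_uv=False), 0.0, 1.0)
    return np.sqrt(np.sum(1.0 - cosines**2))

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)); C = rng.standard_normal((5, 300))
print(column_space_error(A, C))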
4 Robust recovery in single-layer ReLU-network
We now explore the second fundamental question that arises in the context of reconstructing a signal
vector belonging to the underlying generative model from its erroneous version. Recall that, we are given
a vector v ∈ Rd , which is obtained by adding noise to a valid message vector y ∈ Rd that is well modeled
by a single-layer ReLU-network, i.e.,
v = y + e + w = ReLU(Ac + b) + e + w.
(46)
Here, w denotes the (dense) noise vector with bounded norm. On the other hand, the vector e contains
(potentially) large corruptions, also referred to as outliers. We assume the number of outliers kek0 to be
bounded above by s. The robust recovery problem in ReLU-networks corresponds to obtaining an estimate
ĉ of the true representation c from the corrupt observation vector v such that the distance between ĉ and c
is small. A related problem of denoising in the presence of outliers only focuses on obtaining an estimate ŷ
which is close to the true message vector y. In the remainder of this paper, we focus on the setting where
the weight matrix A is a random matrix with i.i.d. entries, where each entry is distributed according to
the standard Gaussian distribution. Furthermore, another crucial assumption is that the outlier vector is
oblivious in its nature, i.e., the error vector is not picked in an adversarial manner5 given the knowledge
of A and c.
Note that Soltanolkotabi [2017] study a problem which is equivalent to recovering the latent vector c
from the observation vector generated form a single-layer ReLU-network without the presence of outliers.
In that sense, our work is a natural generalization of the work in Soltanolkotabi [2017] and presents a
recovery method which is robust to errors as well. However, our approach significantly differs from that in
Soltanolkotabi [2017], where the author analyzes the convergence of the gradient descent method to the true
representation vector c. In contrast, we rely on the recent work of Plan and Vershynin Plan and Vershynin
[2016] to employ the LASSO method to recover the representation vector c (and the Hamming error vector
e).
Given v = ReLU(Ac + b) + e + w, which corresponds to the corrupted non-linear observations of c, we
try to fit a linear model to these observations by solving the following optimization problem6 .
minimize_{c ∈ Rk , e ∈ Rd} (1/(2d)) kv − Ac − ek²₂ + λ kek₁ . (47)
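A minimal sketch of the estimator in (47) (ours, not the authors' implementation): the objective is jointly convex in (c, e), and block coordinate descent alternates an exact least-squares update in c with entrywise soft-thresholding in e. In the analysis below, λ is taken at least 2kz + wk∞/d; here it is simply a tuning parameter.

import numpy as np

def robust_relu_lasso(v, A, lam, iters=100):
    # Solve min_{c,e} (1/(2d))||v - A c - e||_2^2 + lam ||e||_1 by block
    # coordinate descent; both blocks have closed-form minimizers.
    d, k = A.shape
    c = np.zeros(k)
    e = np.zeros(d)
    A_pinv = np.linalg.pinv(A)                 # used for the exact c-update
    for _ in range(iters):
        c = A_pinv @ (v - e)                   # least squares in c
        r = v - A @ c
        e = np.sign(r) * np.maximum(np.abs(r) - lam * d, 0.0)   # soft-threshold in e
    return c, e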
In the aforementioned formulation, the regularizer part is included to encourage the sparsity in the estimate
vector. The following result characterizes the performance of our proposed program (cf. (47)) in recovering
the representation and the corruption vector.
Theorem 4. Let A ∈ Rd×k be a random matrix with i.i.d. standard Gaussian random variables as its entries and let v satisfy
v = ReLU(Ac∗ + b) + e∗ + w, (48)
where kc∗ k2 = 1, ke∗ k0 ≤ s and kwk∞ ≤ δ . Let µ be defined as µ = E[ReLU(д + b) · д], where д is a standard Gaussian random variable and b is a random variable that represents the bias in a coordinate in (48). Let (ĉ, ê) be the outcome of the program described in (47). Then, with high probability, we have
kµc∗ − ĉk2 + ke∗ − êk2 / √d ≤ C̃ max{ √( k log k / d ), √( s log d / d ) }, (49)
where C̃ is a large enough absolute constant that depends on δ .
5 It is an interesting problem to extend our results to a setting with adversarial errors. However, we note that this problem is an active area of research even in the case of linear measurement, i.e., y = Ac + e + w Bhatia et al. [2015, 2017]. We plan to explore this problem in future work.
6 Note that this paper deals with a setup where the number of observations is greater than the dimension of the signal that needs to be recovered, i.e., d > k. Therefore, we don’t necessarily require the vector c to belong to a restricted set, as done in the other version of the robust LASSO methods for linear measurements (see e.g., Nguyen and Tran [2013]).
Proof. Assume that
ĉ = µc∗ + h and ê = e∗ + f .
(50)
Furthermore, for c ∈ Rd and e ∈ Rk , we define
L(c, e) =
1
ky − Ac − ek22 + λkek1 .
2d
(51)
Let S = {i ∈ [d] : ei∗ , 0} be the support of the vector e∗ such that |S| = s. Given a vector a ∈ Rd and set
T ⊆ [d], we use a T to denote the vector obtained by restricting a to the indices belonging to T . Note that
1
1
kv − µAc∗ − e∗ − Ah − f k22 + λke∗ + f k1 − kv − µAc∗ − e∗ k22 − λke∗ k1
2d
2d
1
1
kAh + f k22 − hv − µAc∗ − e∗ , Ah + fi + λ k(e∗ + f ∗ ) S k1 + kf SC k1 − ke∗ k1
=
2d
d
1
(i) 1
2
kAh + f k2 − hReLU(Ac∗ + b) − µAc∗ + w, Ah + fi +
=
2d
d
λ k(e∗ + f ∗ ) S k1 + kf SC k1 − ke∗ k1
(ii) 1
1
≥
kAh + f k22 − hReLU(Ac∗ + b) − µAc∗ + w, Ah + fi + λ kf SC k1 − kf S k1 (52)
2d
d
L(ĉ, ê) − L(µc∗, e∗ ) =
where (i) and (ii) follow from (46) and the triangle inequality, respectively. Since (ĉ, ê) is solution to the
program in (47), we have
L(ĉ, ê) − L(µc∗, e∗ ) ≤ 0.
(53)
By combining this with (52), we obtain that
1
1
kAh + f k22 ≤ · hReLU(Ac∗ + b) − µAc∗ + w, Ah + fi + λ(kf S k1 − kf SC k1 )
2d
d
(54)
We now complete the proof in two steps where we obtain universal lower and upper bounds on the left
hand side and the right hand side of (54), respectively, that hold with high probability.
Upper bound on the RHS of (54). Let’s define
z = ReLU(Ac∗ + b) − µAc∗ .
Note that
1
· hz + w, Ah + fi + λ(kf S k1 − kf SC k1 )
d
1
1
· hz + w, fi + λ(kf S k1 − kf SC k1 )
= · hz + w, Ahi +
d
d
(i) 1
1
≤ · hz + w, Ahi +
· kz + wk∞ kf k1 + λ(kf S k1 − kf SC k1 )
d
d
13
(55)
=
1
1
1
· hz + w, Ahi + (λ + · kz + wk∞ )kf S k1 − (λ − · kz + wk∞ )kf SC k1
d
d
d
(56)
where (i) follows from the Hölder’s inequality. We now employ [Plan and Vershynin, 2016, Lemma 4.3] to
obtain that7
sup_{h ∈ Rk} hz, Ahi ≤ C ( √k σ + η √d ) · khk2 , (57)
where C is an absolute constant and
σ² := E[ (ReLU(д + b) − µд)² ], η² := E[ д² · (ReLU(д + b) − µд)² ], (58)
with д being a standard Gaussian random variable. Now we can combine (56) and (57) to obtain the
following.
1
· hz + w, Ah + fi + λ kf S k1 − kf SC k1
d
√
√
kσ + η
kz + wk∞
kz + wk∞
k T
· khk2 +
≤C
kA wk∞ khk2 + λ +
kf S k1 − λ −
kf SC k1 (59)
√
d
d
d
d
√
√
(i)
√
kσ + η
k T
1
· khk2 +
≤C
kA wk∞ khk2 + s(λ + · kz + wk∞ )kf k2
√
d
d
d
√
√
(ii)
√
kσ + η
k T
kA wk∞ khk2 + 2λ s kf k2,
(60)
· khk2 +
≤ C
√
d
d
√
√
where (i) and (ii) follow by setting λ ≥ 2kz + wk∞ /d and using the fact that kf S k1 ≤ s kf S k2 ≤ s kf k2 .
We can further simplify the bound in (60) as follows.
1
· hz + w, Ah + fi + λ kf S k1 − kf SC k1
d
)
( √
√
√√
kσ + η
k T
kf k2
+
.
≤ max C
kA wk∞, 2λ s d khk2 + √
√
d
d
d
(61)
Lower bound on the LHS of (54). By combining (54) and (59), we get that
1
kAh + f k22
2d
√
√
kσ + η
k T
kz + wk∞
kz + wk∞
· khk2 +
≤C
kA wk∞ khk2 + λ +
kf S k1 − λ −
kf SC k1
√
d
d
d
d
(62)
k∞
. Since the left hand side of (62) is non-negative, we find that the
Note that we have picked λ ≥ 2 kz+w
d
tuple (h, f) belongs to the following restricted set.
!
√
√
kσ
+
η
k
(63)
kAT wk∞ khk2 + 3λkf S k1 }.
(h, f) ∈ R := {h ∈ Rk , f ∈ Rd : λ · kf SC k1 ≤ 2 C
+
√
d
d
7 In Plan and Vershynin [2016], Plan and Vershynin obtain the bound in terms of the Gaussian width Vershynin [2018] of the
∗
cone which
√ the vector h belongs to. However, in our setup where we do not impose any specific structure on c , this quantity is
simply O( k).
14
As a result, in order to lower bound (54), we lower bounding the following quantity for every (h, f) ∈ R.
1
· kAh + f k22 .
2d
(64)
Towards this, we employ Lemma 3 in Appendix C, which gives us that, for every (h, f) ∈ R, with high
probability, we have
2
1
kf k2
1
2
· kAh + f k2 ≥
khk2 + √
.
2d
128
d
(65)
Completing the proof. It follows from (54), (61), and (65) that
( √
)
√
2
√√
kσ + η
kf k2
k T
kf k2
1
kA wk∞ , 2λ s d khk2 + √
khk2 + √
≤ max C
+
√
128
d
d
d
d
or
)
( √
√
√√
kσ + η
k T
kf k2
kA wk∞, 2λ s d .
khk2 + √ ≤ 128 max C
+
√
d
d
d
(66)
Now using the fact that kwk∞ ≤ δ and A is an i.i.d. standard Gaussian matrix, we can obtain the following
bound from (66), which holds with high probability.
)
(r
r
k log k
s log d
kf k2
,
(67)
,
khk2 + √ ≤ C̃ max
d
d
d
where C̃ is a large enough absolute constant.
Acknowledgements. This research is supported in part by NSF awards CCF 1642550, CCF 1618512 and
CAREER award 1642658.
References
K. Bhatia, P. Jain, and P. Kar. Robust regression via hard thresholding. In Advances in Neural Information
Processing Systems (NIPS), pages 721–729. 2015.
K. Bhatia, P. Jain, P. Kamalaruban, and P. Kar. Consistent robust regression. In Advances in Neural Information Processing Systems (NIPS), pages 2107–2116. 2017.
A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. In Proceedings
of the 34th International Conference on Machine Learning (ICML), pages 537–546, Aug 2017.
E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational
Mathematics, 9(6):717–772, Apr 2009.
E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51(12):4203–4215,
2005.
E. J. Candes, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
S. Chatterjee. Matrix estimation by universal singular value thresholding. The Annals of Statistics, 43(1):
177–214, 02 2015.
M. A. Davenport, Y. Plan, E. van den Berg, and M. Wootters. 1-bit matrix completion. Information and
Inference: A Journal of the IMA, 3(3):189–223, July 2014.
L. Devroye, L. Györfi, and G. Lugosi. A probabilistic theory of pattern recognition, volume 31. Springer
Science & Business Media, 2013.
R. Ganti, L. Balzano, and R. Willett. Matrix completion under monotonic single index models. In Advances
in Neural Information Processing Systems (NIPS), pages 1873–1881, 2015.
S. Goel, V. Kanade, A. R. Klivans, and J. Thaler. Reliably learning the ReLU in polynomial time. In Proceedings of the 30th Conference on Learning Theory (COLT), pages 1004–1042, July 2017.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–
2680. 2014.
A. T. Kalai, V. Kanade, and Y. Mansour. Reliable agnostic learning. Journal of Computer and System Sciences,
78(5):1481 – 1495, 2012.
A. Karbasi, A. H. Salavati, A. Shokrollahi, and L. R. Varshney. Noise facilitation in associative memories of
exponential capacity. Neural Computation, 26(11):2493–2526, 11 2014.
R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on
Information Theory, 56(6):2980–2998, June 2010.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. In International Conference on Learning
Representations (ICLR). 2014.
M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and processes. Springer Science &
Business Media, 2013.
A. Mazumdar and A. S. Rawat. Associative memory via a sparse recovery model. In Advances in Neural
Information Processing Systems (NIPS), pages 2683–2691, 2015.
A. Mazumdar and A. S. Rawat. Associative memory using dictionary learning and expander decoding. In
31st AAAI Conference on Artificial Intelligence (AAAI), 2017.
N. H. Nguyen and T. D. Tran. Robust lasso with missing and grossly corrupted observations. IEEE Transactions on Information Theory, 59(4):2036–2058, April 2013.
Y. Plan and R. Vershynin. The generalized lasso with non-linear observations. IEEE Transactions on Information Theory, 62(3):1528–1537, March 2016.
M. Soltanolkotabi. Learning relus via gradient descent. In Advances in Neural Information Processing
Systems (NIPS), pages 2004–2014. 2017.
R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Available online, 2018.
Y. Yu, T. Wang, and R. J. Samworth. A useful variant of the Davis–Kahan theorem for statisticians.
Biometrika, 102(2):315–323, 2015.
A Results on Matrix Perturbation
Let M be a d × n matrix, where without loss of generality we assume that d ≥ n. Let M have the following
singular value decomposition.
M = U ΣV T ,
(68)
where Σ = Diag (σ1 , σ2 , . . . , σn ) is the diagonal matrix with the singular values of M as its diagonal entries.
b = M + E be the matrix which is obtained by perturbing the original matrix M by an error matrix E.
Let M
b have the following singular value decomposition.
Let M
b =U
bb
M
ΣVbT ,
(69)
where b
Σ = Diag (b
σ1, b
σ2 , . . . , b
σn ) is the diagonal matrix comprising the singular values of the perturbed
b Let γ 1 ≥ γ 2 ≥ · · · ≥ γn be the singular values of the matrix U T U
b. Define,
matrix M.
b) = Diag θ 1 , θ 2 , . . . , θ n .
θ i = cos−1 γi ∀i ∈ [n] and Θ(U , U
(70)
b the subspaces spanned by the
Note that {θ i }i ∈[n] are referred to as the canonical angles between U and U,
b, respectively. It is common to use k sin Θ(U , U
b)kF as a distance measure
columns of the matrices U and U
b
between U and U.
In Yu et al. [2015], Yu et al. present the following result which bounds the distance between the subb respectively8.
spaces spanned by the singular vectors of the original matrix M and the perturbed matrix M,
Theorem 5. Let M, M̂ = M + E ∈ Rd×n have singular values σ1 ≥ · · · ≥ σmin{d,n} and σ̂1 ≥ · · · ≥ σ̂min{d,n} , respectively. Fix 1 ≤ r ≤ rank(M) and assume that σ²r − σ²r+1 > 0. Let U = (u1 , u2 , . . . , ur ) ∈ Rd×r and Û = (û1 , û2 , . . . , ûr ) ∈ Rd×r contain the left singular vectors associated with the r leading singular values of M and M̂, respectively. Then,
k sin Θ(U , Û )kF ≤ 2 (2σ1 + kE k) · min{ √r kE k, kE kF } / ( σ²r − σ²r+1 ). (71)
Moreover, there exists an orthogonal matrix O ∈ Rr×r such that
kU − ÛO kF ≤ 2^{3/2} (2σ1 + kE k) · min{ √r kE k, kE kF } / ( σ²r − σ²r+1 ). (72)
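A quick numerical illustration of (72) (ours, not from Yu et al. [2015]): align Û to U with the best orthogonal O, which is the orthogonal Procrustes solution, and compare with the right-hand side for an exactly rank-r matrix M (so σr+1 = 0). The dimensions and noise level are arbitrary.

import numpy as np

def procrustes_error(U, Uhat):
    # min over orthogonal O of ||U - Uhat O||_F via the orthogonal Procrustes solution.
    W, _, Vt = np.linalg.svd(Uhat.T @ U)
    return np.linalg.norm(U - Uhat @ (W @ Vt))

rng = np.random.default_rng(0)
d, n, r = 100, 80, 4
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, n))   # exactly rank r
E = 0.05 * rng.standard_normal((d, n))
U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
Uhat = np.linalg.svd(M + E, full_matrices=False)[0][:, :r]
s = np.linalg.svd(M, compute_uv=False)                           # sigma_{r+1} is ~0 here
lhs = procrustes_error(U, Uhat)
rhs = 2**1.5 * (2 * s[0] + np.linalg.norm(E, 2)) \
      * min(np.sqrt(r) * np.linalg.norm(E, 2), np.linalg.norm(E)) / (s[r - 1]**2 - s[r]**2)
print(lhs, "<=", rhs)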
B Proofs of Section 3.1
Lemma 2 (Defining the distance measure). Given the observation matrix Y which is related to the matrix
M according to (15), let XY,ν,γ be as defined in (29). Then, for any X ∈ XY,ν,γ , we have
E[ L̄Y (M) − L̄Y (X ) ] ≥ βγ (p) ωp · kM − X k²F . (73)
8 Here, we state a special case of the result from Yu et al. [2015]. See [Yu et al., 2015, Theorem 4] for the statement of the general result.
Proof. First, we recall our notation that given the original matrix M, for i ∈ [d] and j ∈ [n], Mi,(j) denotes
the j-th largest element of the i-th row of M, i.e., for i ∈ [d],
Mi,(1) ≥ Mi,(2) ≥ · · · ≥ Mi,(n) .
We define, Mi∗ = maxj ∈[n] Mi, j . Thus,
h
i
E L Y (M) − L Y (X )
#
"
n
d
Õ
F (∞, Mi∗ ) Õ
p(Yi,(s) − Mi,(s) )
+
1 {NY ,i =s } · log
=
E 1 {NY ,i =0} · log
F (∞, X i∗ ) s=1
p(Yi,(s) − X i,(s) )
i=1
d
n−1 ∫ −Mi,(s +1)
Õ
F (∞, Mi∗ ) Õ
p(b)
∗
F (∞, Mi ) · log
=
db +
p(b) · log
∗) +
F
(∞,
X
p(b
+
M
i,(s) − X i,(s) )
i
i=1
s=1 −Mi,(s )
∫ ∞
p(b)
db ,
p(b) · log
p(b + Mi,(n) − X i,(n) )
−Mi,(n)
(74)
where p : R → R represents the probability density function of the bias RV and F is defined as
F (x 1 , x 2 ) = P(−x 1 ≤ b ≤ −x 2 ).
Given the matrices, M, X ∈ XZ,ν,γ , we define a new (density) function д as follows.
(
p(u + Mi,(s) − X i,(s) ) if u ∈ (−Mi,(s) , −Mi,(s+1) ] for s ∈ [n − 1]
д(u) =
p(u + Mi,(n) − X i,(n) ) if u ∈ [−Mi,(n) , ∞).
(75)
Recall that for x ∈ R, we have x ≤ e x − 1. For x = log y, this gives us that
log y ≤ y − 1.
In particular, for b ∈ R, employing (76) with y =
p(b) · log
s
д(b)
≤ p(b) ·
p(b)
q
д(b)
p(b) ,
s
(76)
we get that
!
p
д(b)
− 1 = p(b)д(b) − p(b)
p(b)
or
p(b) · log
p
p(b)
≥ 2 · p(b) − p(b)д(b) .
д(b)
(77)
By using (77), for every i ∈ [d], we obtain that
n−1 ∫
Õ
s=1
−Mi,(s +1)
−Mi,(s )
∫ ∞
p(b)
db +
p(b) · log
p(b + Mi,(s) − X i,(s) )
=
≥
−Mi,(1)
∞
∫
−Mi,(1)
p(b)
db
д(b)
p
2 · p(b) − p(b)д(b)
p(b) · log
18
∫
∞
−Mi,(n)
p(b) · log
p(b)
db
p(b + Mi,(n) − X i,(n) )
∫
= 2 · 1 − F (∞, M 1,(1) ) −
∞
−Mi,(1)
p
p(b)д(b)db
= 1 − F (∞, Mi∗ ) + 1 − F (∞, X i∗ ) −
=
∫
∞
−Mi,(1)
∞
−Mi,(1)
F (∞, X i∗ )
F (∞, Mi∗ )
p
2 · p(b)д(b)db +
1−
− 1−
p
2
p
p(b) − д(b) db + 1 − F (∞, Mi∗ ) − 1 − F (∞, X i∗ )
(78)
F (∞,X i∗ )
F (∞, Mi∗ )
to obtain the following.
F (∞, X i∗ )
F (∞, X i∗ )
∗
∗
F (∞, Mi ) · log
≤ F (∞, Mi ) ·
− 1 = F (∞, X i∗ ) − F (∞, Mi∗ ).
F (∞, Mi∗ )
F (∞, Mi∗ )
For i ∈ [d], we now employ (76) with y =
or
∫
F (∞, Mi∗ )
+ F (∞, X i∗ ) − F (∞, Mi∗ )
F (∞, X i∗ )
F (∞, Mi∗ )
= F (∞, Mi∗ ) · log
+ 1 − F (∞, Mi∗ ) − 1 − F (∞, X i∗ ) ≥ 0
∗
F (∞, X i )
F (∞, Mi∗ ) · log
(79)
By combining (78) and (79), we obtain that
n−1 ∫ −Mi,(s +1)
F (∞, Mi∗ ) Õ
p(b)
∗
p(b) · log
+
db +
F (∞, Mi ) · log
∗
F (∞, X i ) s=1 −Mi,(s )
p(b + Mi,(s) − X i,(s) )
∫ ∞
p(b)
db
p(b) · log
p(b + Mi,(n) − X i,(n) )
−Mi,(n)
∫ ∞ p
2
p
≥
p(b) − д(b) db
−Mi,(1)
=
n−1 ∫ −Mi,(s +1)
Õ
s=1 −Mi,(s )
∫ ∞
p
−Mi,(n)
2
q
p
p(b) − p(b + Mi,(s) − X i,(s) ) db
2
q
p(b) − p(b + Mi,(n) − X i,(n) ) db
!2
p ′ (ξ s )
=
Mi,(s) − X i,(s) db +
p
2 p(ξ s )
s=1 −Mi,(s )
!2
∫ ∞
p ′ (ξ n )
Mi,(n) − X i,(n) db
p
−Mi,(n) 2 p(ξ n )
∫ ∞
n−1 ∫ −Mi,(s +1)
Õ
(ii)
2
≥ βγ (p) ·
Mi,(s) − X i,(s) db +
(i)
n−1 ∫
Õ
−Mi,(s +1)
s=1
(iii)
+
≥ βγ (p)ωp ·
−Mi,(s )
n
Õ
s=1
−Mi,(n)
2
Mi,(s) − X i,(s) ,
2
Mi,(n) − X i,(n) db
!
(80)
where (i) follows from the Mean Value Theorem with suitable {ξ i } ⊂ R and (ii) follows by assuming that
we have
q
p ′ (u)
≥ βγ (p) for all |u| ≤ γ .
p
2 p(u)
19
Since M ∈ XZ,ν,γ , we have |Mi,(n) | ≤ γ and |Mi,(s) −Mi,(s+1) | ≥ ν. The step (iii) follows from the assumption
that
F (x, y) ≥ ωp for all (x, y) such that |x − y| ≥ ν .
By combining (74) with (80), we now obtain that
d
n
i Õ
h
Õ
2
βγ (p)ωp ·
Mi,(s) − X i,(s) = βγ (p)ωp · kM − X kF2 .
E L Y (M) − L Y (X ) ≥
i=1
s=1
C Proofs of Section 4
Here we state a special form of a result that was obtained in Nguyen and Tran [2013] for the general setting, where one may potentially require the vector c to be sparse as well.
Lemma 3. Let A ∈ Rd×k be a random matrix that has i.i.d. standard Gaussian entries. Furthermore, let R ⊂ Rk × Rd be as defined in (63). Then, with probability at least 1 − c exp(−c̃d), we have
(1/(2d)) · kAh + f k²₂ ≥ (1/128) ( khk₂ + kf k₂ / √d )² for all (h, f) ∈ R. (81)
Here, c, c̃ > 0 are absolute constants.
Proof. Note that
kAh + f k22 = kAhk22 + kf k22 + 2hAh, fi.
(82)
For a d × k matrix with i.i.d. Gaussian entries, there exists constants c 1 and c 2 such that with probability
at least 1 − c 1 exp(−c 2d), we have
1
1
√ kAhk2 ≥ khk2 .
4
d
(83)
Therefore, with probability at least 1 − c 1 exp(−c 2d), we have
kAhk22
+
kf k22
2
d
d
d
kf k2
2
2
2
2
.
≥
khk2 + kf k2 ≥ (khk2 + kf k2 /d) ≥
khk2 + √
16
16
32
d
(84)
Next, we focus on obtaining an upper bound on
1
|hAh, fi|.
d
Towards this we partition the set [d] into r blocks S1 = S, S2, . . . , Sr such that |S2 | = · · · |S|r = s ′ ≥
|S| = s. Here, S2 refers to the set of indices of s ′ largest entries (in terms of absolute value) of f SC ; S3
corresponds to the set of indices of the next s ′ largest entires of f SC ; and so on. Now, we have
Õ
1
1Õ
1
|hA Si h, f Si i| ≤ max kA Si k2 khk2
|hAh, fi| ≤
kf Si k2
d
d i=1
d i
i=1
r
r
20
(85)
In [Nguyen and Tran, 2013, Appendix], Nguyen and Tran show that, with probability at least 1−2 exp(−τ 2s ′/2),
for a set S ′ with |S ′ | = s ′, we have
√
√
√
(86)
kA S′ k2 ≤ k + s ′ + τ s ′ .
By setting τ = τ ′
q
d
s′
and taking the union bound over all the subsets of [d] of size s ′,
kA S′ k2 ≤
holds with probability at least
√
√
√
k + s ′ + τ s ′ ∀ S ′ ⊂ [d] such that |S ′ | = s ′
(87)
s′
d
ed
′2
exp(−τ ′2d/2).
1 − ′ exp(−τ d/2) ≥ 1 − ′
s
s
Assuming that s ′ log(d/s ′ ) ≤ c 3d, the aforementioned probability is at least
1 − exp(−(τ ′2 /2 − c 3 )d).
On the other hand, we have
r
Õ
i=1
(i)
kf Si k2 ≤ 2kf k2 +
r
Õ
i=3
kf Si k2
1
≤ 2kf k2 + √ kf Sc k1
s′
!
√
√
(iii)
kσ + η
3
k T
2
kA wk∞ khk2 + √ kf S k1
+
≤ 2kf k2 + √ C
√
d
λ s′
s′
d
!
√
√
(iv)
kσ + η
2
k T
+
≤ 5kf k2 + √ C
kA wk∞ khk2 ,
√
′
d
λ s
d
(ii)
(88)
where (i) follows from the fact that kf S1 k2 ≤ kf S2 k2 ≤ kf k2 ; (ii) follows from a standard bound given in
Candes et al. [2006]; (iii) follows from the fact that √f belongs to the set R defined in (63); and (iv) is a
√
consequence of the loose bound kf S k1 ≤ s kf S k2 ≤ s ′ kf k2 . Next, we use the fact that λ ≥ 2kz + wk∞ /d,
it follows from (88) that
!
√
√
r
Õ
kσ + η
k T
d
kA wk∞ khk2
kf Si k2 ≤ 5kf k2 +
+
√ C
√
d
kz + wk∞ s ′
d
i=1
!
!
√
√
√
√ kf k2
kσ + η
d
k T
+
=5 d √ +
kA wk∞ khk2
√ C
√
d
5kz + wk∞ s ′
d
d
√ kf k2
(89)
≤ 5 d √ + khk2
d
By combining (85), (87), and (89), with probability at least 1 − c 4 exp(−c 5d), we obtain that
√ kf k2
√
√
1 √
1
′
′
|hAh, fi| ≤
k + s + τ s khk2 ·5 d √ + khk2
d
d
d
2
√
√
kf k2
5 √
′
′
k + s +τ s
≤ √
√ + khk2
d
d
21
2
1 kf k2
≤
√ + khk2 ,
128
d
(i)
(90)
where (i) follows from large enough d. By combining (82), (84), and (90), we obtain that, for every (h, f) ∈ R,
2
1
1
kf k2
2
· kAh + f k2 ≥
khk + √
2d
128
d
holds with probability at least 1 − c exp(−c̃d), with absolute constants c, c̃ > 0.
22
(91)
| 7 |
CHUDNOVSKY’S CONJECTURE FOR VERY GENERAL POINTS IN PN k
arXiv:1604.02217v2 [math.AC] 7 Dec 2017
LOUIZA FOULI, PAOLO MANTERO AND YU XIE
A BSTRACT. We prove a long-standing conjecture of Chudnovsky for very general and generic points
in PN
k , where k is an algebraically closed field of characteristic zero, and for any finite set of points
lying on a quadric, without any assumptions on k. We also prove that for any homogeneous ideal
I in the homogeneous coordinate ring R = k[x0 , . . . , xN ], Chudnovsky’s conjecture holds for large
enough symbolic powers of I.
1. I NTRODUCTION
This manuscript deals with the following general interpolation question:
Question 1.1. Given a finite set of n distinct points X = {p1 , . . . , pn } in PN
k , where k is a field, what
is the minimum degree, αm (X), of a hypersurface f ≠ 0 passing through each pi with multiplicity
at least m?
Question 1.1 has been considered in various forms for a long time. We mention a few conjectures and motivations. For instance, this question plays a crucial role in the proof of Nagata’s
counterexamples to Hilbert’s fourteenth problem [19]. In the same paper Nagata conjectured that
√
αm (X) > m n for sets of n general points in P2C [19], and a vast number of papers in the last few
decades are related to his conjecture. Another reason for the interest sparked by the above question
comes from the context of complex analysis: an answer to Question 1.1 would provide information
about the Schwarz exponent, which is very important in the investigation of the arithmetic nature of
values of Abelian functions of several variables [4].
However, besides a few very special classes of points (e.g., if these n points lie on a single hyperplane or one has n = (β+N−1 choose N) points forming a star configuration and m is a multiple of N [5],[2]),
at the moment a satisfactory answer to this elusive question appears out of reach. Therefore, there
has been interest in finding effective lower bounds for αm (X). In fact, lower bounds for αm (X)
yield upper bounds for the Schwarz exponent. Using complex analytic techniques, Waldschmidt
[22] and Skoda [21] in 1977 proved that for all m ≥ 1
αm (X) α(X)
≥
,
m
N
where α(X) = α1 (X) is the minimum degree of a hypersurface passing through every point of X
and k = C. In 1981, Chudnovsky [4] improved the inequality in the 2-dimensional projective space.
He showed that if X is a set of n points in P2C , then for all m ≥ 1
αm (X)/m ≥ (α(X) + 1)/2 .
He then raised the following conjecture for higher dimensional projective spaces:
2010 Mathematics Subject Classification. 13F20, 11C08.
Key words and phrases. Chudnovsky’s conjecture, initial degrees, symbolic powers, fat points, Seshadri constant.
The first author was partially supported by a grant from the Simons Foundation, grant #244930.
Conjecture 1.2 (Chudnovsky [4]). If X is a finite set of points in PN C , then for all m ≥ 1
αm (X)/m ≥ (α(X) + N − 1)/N .
The first improvement towards Chudnovsky’s Conjecture 1.3 was achieved in [9] by Esnault and Viehweg, who employed complex projective geometry techniques to show αm (X)/m ≥ (α(X) + 1)/N for points in PN C . In fact, this inequality follows by a stronger statement, refining previous inequalities
from Bombieri, Waldschmidt and Skoda, see [9].
From the algebraic point of view, Chudnovsky’s Conjecture 1.3 can be interpreted in terms of
symbolic powers via a celebrated theorem of Nagata and Zariski. Let R = k[x0 , . . . , xN ] be the
homogeneous coordinate ring of P^N_k and I a homogeneous ideal in R. We recall that the m-th
symbolic power of I is defined as the ideal I^{(m)} = ⋂_p I^m R_p ∩ R, where p runs over all associated
prime ideals of R/I, and the initial degree of I, α(I), is the least degree of a polynomial in I.
Nagata and Zariski showed that if k is algebraically closed and X is a finite set of points in P^N_k,
then αm(X) = α(I_X^{(m)}), where I_X is the ideal consisting of all polynomials in R that vanish on X.
Thus in this setting Chudnovsky’s Conjecture 1.3 is equivalent to
\[
\frac{\alpha(I_X^{(m)})}{m} \;\ge\; \frac{\alpha(I_X)+N-1}{N} \qquad \text{for all } m \ge 1.
\]
The limit \(\lim_{m\to\infty} \frac{\alpha(I_X^{(m)})}{m} = \gamma(I_X)\), called the Waldschmidt constant of I_X, exists and is an “inf”
[2]. Thus another equivalent formulation of Chudnovsky’s Conjecture 1.3 is
\[
\gamma(I_X) \;\ge\; \frac{\alpha(I_X)+N-1}{N}.
\]
We remark here that there is a tight connection between the Waldschmidt constant (especially for
general points) and an instance of the multipoint Seshadri constant [1, Section 8].
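To fix ideas, the following small worked example (standard, and added here only for illustration; it is not taken from this paper) shows the Waldschmidt constant and Chudnovsky’s bound agreeing in the simplest nontrivial case.

```latex
% Worked example (illustrative classical case): X = three non-collinear points in P^2,
% so N = 2 and n = 3.  No line contains X, while a union of two lines does, and the
% cubic given by the three lines through pairs of the points vanishes to order 2 at
% each point.
\[
\alpha(I_X) = 2, \qquad \alpha\big(I_X^{(2m)}\big) = 3m \ \ (m \ge 1), \qquad
\gamma(I_X) = \lim_{m\to\infty}\frac{\alpha\big(I_X^{(m)}\big)}{m} = \frac{3}{2},
\]
\[
\text{while Chudnovsky's bound asks for } \
\gamma(I_X) \;\ge\; \frac{\alpha(I_X)+N-1}{N} = \frac{2+1}{2} = \frac{3}{2},
\]
% so the conjectured inequality holds, with equality, for this configuration.
```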
We now state a generalized version of Chudnovsky’s Conjecture 1.2. When k is an algebraically
closed field then the following conjecture is equivalent to Chudnovsky’s Conjecture 1.2.
Conjecture 1.3. If X is a finite set of points in P^N_k, where k is any field, then for all m ≥ 1
\[
\frac{\alpha(I_X^{(m)})}{m} \;\ge\; \frac{\alpha(I_X)+N-1}{N}.
\]
In 2001, Ein, Lazarsfeld, and Smith proved a containment between ordinary powers and symbolic powers of homogeneous ideals in polynomial rings over the field of complex numbers. More
precisely, for any homogeneous ideal I in R = C[x0 , . . . , xN ], they proved that I (N m) ⊆ I m [8].
Their result was soon generalized over any field by Hochster and Huneke using characteristic p
techniques [16]. Using this result, Harbourne and Huneke observed that the Waldschmidt–Skoda
inequality
\[
\frac{\alpha(I^{(m)})}{m} \;\ge\; \frac{\alpha(I)}{N}
\]
actually holds for every homogeneous ideal I in R [15]. In the same
article, Harbourne and Huneke posed the following conjecture:
Conjecture 1.4 (Harbourne–Huneke [15]). If X is a finite set of points in P^N_k, then for all m ≥ 1
\[
I_X^{(Nm)} \;\subseteq\; M^{m(N-1)}\, I_X^{m},
\]
where M = (x0 , . . . , xN ) is the homogeneous maximal ideal of R = k[x0 , . . . , xN ].
Conjecture 1.4 strives to provide a structural reason behind Chudnovsky’s Conjecture 1.3: if
it holds, then it would imply Chudnovsky’s Conjecture 1.3 in a similar way to how the Ein–Lazarsfeld–Smith and Hochster–Huneke containment implies the Waldschmidt–Skoda inequality
[15]. These results have since raised new interest in Chudnovsky’s Conjecture 1.3.
Harbourne and Huneke proved their conjecture for general points in P^2_k and when the points form
a star configuration in P^N_k. In 2011, Dumnicki proved the Harbourne–Huneke Conjecture 1.4 for
general points in P^3_k and at most N + 1 points in general position in P^N_k for N ≥ 2 [6]. In summary,
Chudnovsky’s Conjecture 1.3 is known in the following cases:
● any finite set of points in P^2_k [4], [15];
● any finite set of general points in P^3_k, where k is a field of characteristic 0 [6];
● any set of at most N + 1 points in general position in P^N_k [6];
● any set of a binomial number of points in P^N_k forming a star configuration [5], [2].
In the present paper, we prove that Chudnovsky’s Conjecture 1.3 holds for
● any finite set of very general points in P^N_k, where k is an algebraically closed field of characteristic 0 (Theorem 2.8);
● any finite set of generic points in P^N_{k(z)}, where k is an algebraically closed field of characteristic 0 (Theorem 2.7);
● any finite set of points in P^N_k lying on a quadric, without any assumptions on k (Proposition 2.6).
As a corollary, we obtain that the Harbourne–Huneke Conjecture 1.4 holds for sets of a binomial
number of very general points in P^N_k (Corollary 2.9). This result also yields a new lower bound for
the multipoint Seshadri constant of very general points in P^N_k (Corollary 2.10).
In the final section of the paper, we prove that for any homogeneous ideal I in the homogeneous
coordinate ring R = k[x0 , . . . , xN ], Chudnovsky’s Conjecture 1.3 holds for sufficiently large symbolic powers I^{(t)}, t ≫ 0 (Theorem 3.7). In the case of ideals of finite sets of points in P^N_C, we
prove a uniform bound, namely that if t ≥ N − 1, then I^{(t)} satisfies Chudnovsky’s Conjecture 1.3
(Proposition 3.10).
Very recently, Dumnicki and Tutaj-Gasińska proved the Harbourne–Huneke Conjecture 1.4 for
at least 2^N very general points in P^N_k. As a corollary, they obtain Chudnovsky’s Conjecture 1.3
for at least 2^N very general points in P^N_k [7]. These results are obtained
independently from ours and with different methods.
2. GENERIC AND VERY GENERAL POINTS IN P^N_k
We begin by discussing our general setting.
Set-up 2.1. Let R = k[x0 , . . . , xN ] be the homogeneous coordinate ring of PN
k , where k is an
algebraically closed field. Let n be a positive integer and let S = k(z)[x], where k ⊆ k(z) is
a purely transcendental extension of fields obtained by adjoining n(N + 1) variables z = (zij ),
1 ≤ i ≤ n, 0 ≤ j ≤ N . A set of n generic points P1 , . . . , Pn consists of points Pi = [zi0 ∶ zi1 ∶ . . . ∶
ziN ] ∈ P^N_{k(z)}. We denote the defining ideal of n generic points as
\[
H = \bigcap_{i=1}^{n} I(P_i),
\]
where I(P_i) is the ideal defining the point P_i.
For any nonzero vector λ = (λij) ∈ A_k^{n(N+1)}, where 1 ≤ i ≤ n, 0 ≤ j ≤ N, we define the set of
points {p1 , . . . , pn } ⊆ P^N_k as the points pi = Pi(λ) = [λi0 ∶ λi1 ∶ . . . ∶ λiN ] ∈ P^N_k. For 1 ≤ i ≤ n, let
I(pi) be the ideal of R defining the point pi and define
\[
H(\lambda) = \bigcap_{i=1}^{n} I(p_i).
\]
For any ideal J in S, recall that Krull [17, 18] defined the specialization Jλ with respect to the
substitution z → λ as follows:
Jλ = {f (λ, x) ∣ f (z, x) ∈ J ∩ k[z, x]}.
In general, one has that Hλ ⊆ H(λ), where H and H(λ) are defined as in Set-up 2.1 and Hλ is the
specialization with respect to the substitution z → λ defined by Krull. Notice that equality holds if
λ is in a dense Zariski-open subset of A_k^{n(N+1)} [20].
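As a toy illustration of the specialization just recalled (our example, not the paper’s), consider a single generic point of the projective line.

```latex
% Toy example (added for concreteness): n = 1 generic point P_1 = [z_{10} : z_{11}]
% in P^1_{k(z)}.  Its defining ideal and the Krull specialization z -> lambda are
\[
H = I(P_1) = \big(z_{11}x_0 - z_{10}x_1\big) \subseteq S = k(z)[x_0,x_1],
\qquad
H_\lambda = \big(\lambda_{11}x_0 - \lambda_{10}x_1\big) \subseteq R = k[x_0,x_1],
\]
% and for every lambda in the dense open set { lambda_1 \neq (0,0) } one indeed has
% H_lambda = H(lambda) = I(p_1) for the point p_1 = [lambda_{10} : lambda_{11}].
```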
Recall that the collection of all sets consisting of n points (not necessarily distinct) in P^N_k is parameterized by G(1, n, N+1), the Chow variety of algebraic 0-cycles of degree n in P^N_k. It is well-known
that G(1, n, N + 1) is isomorphic to the symmetric product Sym^n(P^N_k), see for instance [10]. One
says that a property P holds for n general points in P^N_k if there is a dense Zariski-open subset W of
G(1, n, N + 1) such that P holds for every set X = {p1 , . . . , pn } of n points with p1 + . . . + pn ∈ W .
Similarly, one says that a property P holds for n very general points in P^N_k if P holds for every set
of n points in a nonempty subset W of G(1, n, N + 1) of the form W = ⋂_{i=1}^{∞} U_i, where the U_i are
dense Zariski-open sets (when k is uncountable, then W is actually a dense subset). We conclude
this part by recalling the following well-known fact.
Remark 2.2. Let n be a positive integer. The collection of all sets consisting of n distinct points in
PN
k is parameterized by a dense Zariski-open subset W (n) of G(1, n, N + 1).
Unless specified, for the rest of this paper by a “set of points” we mean “a set of simple points”,
i.e. points whose defining ideal is radical.
Instead of working directly with the Chow variety, we will work over A_k^{n(N+1)} (in order to specialize from the generic situation). We first need to prove that if a property holds on a dense Zariski-open subset of A_k^{n(N+1)}, then it also holds on a dense Zariski-open subset of the Chow variety. This
is precisely the content of our first lemma.
Lemma 2.3. Assume Set-up 2.1 and let U ⊆ A_k^{n(N+1)} be a dense Zariski-open subset such that a
property P holds for H(λ) whenever λ ∈ U. Then property P holds for n general points in P^N_k.
Moreover, if a property P holds for H(λ) whenever λ ∈ U, where U = ⋂_{i=1}^{∞} U_i ⊆ A_k^{n(N+1)} is nonempty
and each U_i is a dense Zariski-open set, then P holds for n very general points in P^N_k.
Proof. For every i = 1, . . . , n, let π_i ∶ A_k^{n(N+1)} ⇢ P^N_k be the rational map defined by projection
as follows:
\[
\pi_i(\lambda) = [\lambda_{i0} : \lambda_{i1} : \ldots : \lambda_{iN}],
\]
where λ = (λij) ∈ A_k^{n(N+1)}. It is clear that π_i is defined on the complement of the Zariski-closed
proper subset C_i = {λ ∈ A_k^{n(N+1)} ∣ λ_{i0} = . . . = λ_{iN} = 0}.
Taking products of these rational maps, we obtain the rational map
\[
\pi = (\pi_1 \times \pi_2 \times \cdots \times \pi_n) : \mathbb{A}_k^{n(N+1)} \dashrightarrow \mathbb{P}^N_k \times_k \mathbb{P}^N_k \times_k \cdots \times_k \mathbb{P}^N_k.
\]
The map π is defined on the complement of the closed proper subset C = ⋃_{i=1}^{n} C_i, where C_i is as
above. Note that U ∖ C is still open in A_k^{n(N+1)}, and since π is surjective and thus dominant, then
π(U ∖ C) contains a non-empty Zariski-open subset W′ ⊆ P^N_k ×_k P^N_k ×_k ⋯ ×_k P^N_k (see for instance
[13, II. Ex. 3.19 (b)]).
Now, since the symmetric group S_n on n elements is finite, the image W of W′ in (P^N_k ×_k
P^N_k ×_k ⋯ ×_k P^N_k)/S_n ≅ Sym^n(P^N_k) ≅ G(1, n, N + 1) contains a non-empty Zariski-open subset of
G(1, n, N + 1).
Let H be as in Set-up 2.1. We now prove that the initial degree of any symbolic power of H is
no smaller than the initial degree of any ideal of a set with the same number of points. Equivalently,
if I is the defining ideal of a set of n points in P^N_k, then α(H^{(m)}) ≥ α(I^{(m)}) for all m ≥ 1.
Theorem 2.4. Let m ≥ 1. Assume Set-up 2.1 and that k has characteristic 0. Then α(H^{(m)}) ≥
α(I_X^{(m)}), for every set X of n distinct points in P^N_k. Moreover, for every m ≥ 1, there is a dense
Zariski-open subset U_m ⊆ A_k^{n(N+1)} for which equality holds.
Proof. Let t ≥ 0. We define V_t = {λ = (λij) ∈ A_k^{n(N+1)} ∣ α(H(λ)^{(m)}) ≤ t}. We first prove that V_t
is a closed subset of A_k^{n(N+1)}. Indeed, notice that
\[
V_t = \{\lambda = (\lambda_{ij}) \in \mathbb{A}_k^{n(N+1)} \mid \text{there exists } 0 \ne f \in H(\lambda)^{(m)} \text{ of degree } t\}.
\]
Let f ∈ R be a homogeneous polynomial with deg f = t and write f = ∑_{|α|=t} C_α x^α. Since k is
algebraically closed of characteristic 0, the statement f ∈ H(λ)^{(m)} is equivalent to ∂_β f(p_i) = 0
for all β with |β| ≤ m − 1 and all points p1 , . . . , pn. Since P_i = [z_i] = [z_{i0} ∶ z_{i1} ∶ . . . ∶ z_{iN}] and
p_i = [λ_i] = [λ_{i0} ∶ λ_{i1} ∶ . . . ∶ λ_{iN}], we write ∂_β f(P_i) = ∂_β f(z_i) = ∑_{|α|=t} C_α ∂_β z_i^α and ∂_β f(p_i) =
∑_{|α|=t} C_α ∂_β λ_i^α. (For instance, ∂_{(2,0,1)} z_i^{(3,3,2)} = ∂_{x_0 x_0 x_2}\, x_0^3 x_1^3 x_2^2 |_{x=z_i} = 12\, x_0 x_1^3 x_2 |_{x=z_i} = 12\, z_{i0} z_{i1}^3 z_{i2}
and ∂_{(2,0,1)} λ_i^{(3,3,2)} = 12\, λ_{i0} λ_{i1}^3 λ_{i2}.)
To order these equations we use, for instance, the natural deglex order in N_0^{N+1}, i.e., α =
(α_0, . . . , α_N) > β = (β_0, . . . , β_N) if and only if |α| > |β|, or |α| = |β| and there exists j such
that α_i = β_i for i ≤ j and α_{j+1} > β_{j+1}. Then the system of equations {∂_β f(P_i) = 0}_{|β|≤m−1,\, 1≤i≤n}
can be written in the following form
\[
B_{m,t}\,[C_{(t,\ldots,0)} \ \ldots \ C_{\alpha} \ \ldots \ C_{(0,\ldots,t)}]^{T} = 0,
\]
where the rows of B_{m,t} are
\[
\big[\,\partial_\beta z_{i0}^{\,t} \ \ \ldots \ \ \partial_\beta z_i^{\,\alpha} \ \ \ldots \ \ \partial_\beta z_{iN}^{\,t}\,\big], \qquad \text{where } 1 \le i \le n \text{ and } |\beta| \le m-1.
\]
By construction, the existence of a nonzero element f ∈ H(λ)^{(m)} of degree t is equivalent to the
existence of a non-trivial solution for the homogeneous system
\[
[B_{m,t}]_\lambda\,[C_{(t,\ldots,0)} \ \ldots \ C_{\alpha} \ \ldots \ C_{(0,\ldots,t)}]^{T} = 0.
\]
Observe that the matrix B_{m,t} has size $n\binom{m+N}{m-1} \times \binom{t+N}{N}$. If $n\binom{m+N}{m-1} < \binom{t+N}{N}$, then for every
λ ∈ A_k^{n(N+1)} the homogeneous system $[B_{m,t}]_\lambda\,[C_{(t,\ldots,0)} \ldots C_\alpha \ldots C_{(0,\ldots,t)}]^T = 0$ has non-trivial
solutions. Therefore V_t = A_k^{n(N+1)}, which is closed in A_k^{n(N+1)}.
If instead $n\binom{m+N}{m-1} \ge \binom{t+N}{N}$, then the system $[B_{m,t}]_\lambda\,[C_{(t,\ldots,0)} \ldots C_\alpha \ldots C_{(0,\ldots,t)}]^T = 0$ has
non-trivial solutions if and only if rank $[B_{m,t}]_\lambda < \binom{t+N}{N}$. This is a closed condition on λ as it
requires the vanishing of finitely many minors, and therefore V_t is closed in A_k^{n(N+1)}.
Next, let t_0 = α(H^{(m)}). The set V_{t_0} = {λ = (λij) ∈ A_k^{n(N+1)} ∣ α(H(λ)^{(m)}) ≤ t_0} contains
a dense Zariski-open subset of A_k^{n(N+1)}. Indeed, let 0 ≠ f ∈ ⋂_{i=1}^{n} I(P_i)^m be such that deg f = t_0.
We may assume that f(z, x) ∈ k[z][x0 , . . . , xN ]. Then there exists a dense Zariski-open subset
U_m of A_k^{n(N+1)} such that the polynomial 0 ≠ f(λ, x) ∈ (H^{(m)})_λ = (H(λ))^{(m)} and deg f = t_0
(since f(z, x) ≠ 0, there is a non-empty Zariski-open subset of specializations z → λ such that
f(λ, x) ≠ 0).
Finally, since V_{t_0} is a Zariski-closed subset which also contains a dense Zariski-open subset of
A_k^{n(N+1)}, then V_{t_0} = A_k^{n(N+1)}, which proves the statement. The second part of the statement also
follows from the above argument.
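The linear-algebra argument in the proof above is easy to experiment with. The sketch below is our illustration, not part of the paper; it works with explicit floating-point points rather than generic coordinates, so the rank computation is only a numerical proxy for the rank over k. It builds the matrix B_{m,t} for a given list of points and decides whether a nonzero form of degree t vanishing to order at least m at all of them exists.

```python
import itertools
import numpy as np

def exponents(num_vars, degree):
    """All exponent vectors with |alpha| = degree in num_vars variables."""
    return [e for e in itertools.product(range(degree + 1), repeat=num_vars)
            if sum(e) == degree]

def partial_of_monomial(alpha, beta, point):
    """Evaluate the partial derivative d^beta of the monomial x^alpha at `point`."""
    value = 1.0
    for a, b, p in zip(alpha, beta, point):
        if b > a:
            return 0.0
        coeff = 1.0
        for j in range(b):              # falling factorial a*(a-1)*...*(a-b+1)
            coeff *= (a - j)
        value *= coeff * p ** (a - b)
    return value

def has_form_of_degree(points, m, t):
    """True iff some nonzero degree-t form vanishes to order >= m at every point
    (characteristic 0: all partials of order <= m-1 vanish), i.e. the matrix
    B_{m,t} of the proof has a nontrivial kernel."""
    num_vars = len(points[0])                       # N + 1 homogeneous coordinates
    cols = exponents(num_vars, t)                   # unknown coefficients C_alpha
    rows = []
    for p in points:
        for order in range(m):                      # all beta with |beta| <= m - 1
            for beta in exponents(num_vars, order):
                rows.append([partial_of_monomial(alpha, beta, p) for alpha in cols])
    B = np.array(rows)
    return np.linalg.matrix_rank(B) < len(cols)

# Three non-collinear (coordinate) points of P^2: no conic is singular at all three,
# but the cubic x0*x1*x2 vanishes to order 2 at each of them.
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(has_form_of_degree(pts, m=2, t=2))   # expected: False
print(has_form_of_degree(pts, m=2, t=3))   # expected: True
```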
Following [12, Definition 2.4], we say that a set X of n points in PN
k is in generic position if it
has the “generic Hilbert function”, i.e., if HR/IX (d) = min{dimk (Rd ), n} for every d ≥ 0. Being in
generic position is an open condition; indeed any set of generic (or general) points (see Set-up 2.1)
is in generic position. We now prove a reduction argument, which will allow us to concentrate on
certain binomial numbers of points.
Proposition 2.5. (a) Chudnovsky’s Conjecture 1.3 holds for any finite set of generic points if it
holds for sets of $\binom{\beta+N-1}{N}$ generic points for all β ≥ 1.
(b) Chudnovsky’s Conjecture 1.3 holds for any finite set of points if it holds for sets of $\binom{\beta+N-1}{N}$
points in generic position for all β ≥ 1.
Proof. (a): Let H = ⋂_{i=1}^{n} I(P_i) be the defining ideal of the n generic points P1 , . . . , Pn as in Set-up 2.1. Let β ≥ 1 be the unique integer such that
\[
\binom{\beta+N-1}{N} \;\le\; n \;<\; \binom{\beta+N}{N}.
\]
Let t = $\binom{\beta+N-1}{N}$ and let J = ⋂_{i=1}^{t} I(P_i) be the ideal defining t of these generic points. Since the set
Y = {P1 , . . . , Pt} is in generic position, in particular we have α(J) = α(H) = β.
Now assume Chudnovsky’s Conjecture 1.3 holds for $\binom{\beta+N-1}{N}$ generic points. Then for all m ≥ 1
\[
\frac{\alpha(J^{(m)})}{m} \;\ge\; \frac{\alpha(J)+N-1}{N}.
\]
Since H^{(m)} ⊆ J^{(m)}, one has that α(H^{(m)}) ≥ α(J^{(m)}) and
\[
\frac{\alpha(H^{(m)})}{m} \;\ge\; \frac{\alpha(J^{(m)})}{m} \;\ge\; \frac{\alpha(J)+N-1}{N} \;=\; \frac{\alpha(H)+N-1}{N}.
\]
(b): The proof of (b) is similar in spirit to (a). Let X be any finite set of points in P^N_k, let I_X
be its defining ideal, and let t = $\binom{(\alpha-1)+N}{N}$, where α = α(I_X). By linear independence (e.g.,
by [12, Theorem 2.5 (c)]), there is a subset Y ⊆ X of t points with the property that H_{R/I_Y}(i) =
H_{R/I_X}(i) = dim_k R_i for every i = 0, . . . , α − 1; in particular H_{R/I_Y}(α − 1) = t. Since |Y| = t, it
follows that H_{R/I_Y}(i) = t for all i ≥ α, proving that Y is in generic position. Similar to (a), assume
Chudnovsky’s Conjecture 1.3 holds for t = $\binom{(\alpha-1)+N}{N}$ points in generic position. Then
\[
\frac{\alpha(I_Y^{(m)})}{m} \;\ge\; \frac{\alpha(I_Y)+N-1}{N}
\]
for all m ≥ 1. Since α(I_X) = α = α(I_Y) and I_X^{(m)} ⊆ I_Y^{(m)}, then for all m ≥ 1 we obtain
\[
\frac{\alpha(I_X^{(m)})}{m} \;\ge\; \frac{\alpha(I_Y^{(m)})}{m} \;\ge\; \frac{\alpha(I_Y)+N-1}{N} \;=\; \frac{\alpha(I_X)+N-1}{N}.
\]
Dumnicki proved Chudnovsky’s Conjecture 1.3 for at most N + 1 points in general position in P^N_k
[6] (this specific result does not need any assumptions on the characteristic of k). The idea is that
one can take them to be coordinate points so that the ideal of the points is monomial and one can
compute explicitly its symbolic powers. If one has more than N + 1 points, the ideal of the points is
almost never monomial and explicit computations of a generating set of any of its symbolic powers
are nearly impossible to perform. We extend the result of Dumnicki to the case of up to $\binom{N+2}{2} - 1$
points in P^N_k.
Proposition 2.6. Chudnovsky’s Conjecture 1.3 holds for any finite set of points lying on a quadric
in P^N_k, where k is any field. In particular, any set of n ≤ $\binom{N+2}{2} - 1$ points in P^N_k satisfies
Chudnovsky’s Conjecture 1.3.
Proof. Let X be a set of n points in P^N_k. If they all lie on a hyperplane, then Chudnovsky’s
Conjecture 1.3 is clearly satisfied, since α(I_X^{(m)}) = m for every m ≥ 1. We may then assume there
is no hyperplane containing all the points. Thus we can find a set Y ⊆ X of N + 1 points not on a
hyperplane, i.e. in general position. Then for all m ≥ 1
\[
\frac{\alpha(I_X^{(m)})}{m} \;\ge\; \frac{\alpha(I_Y^{(m)})}{m} \;\ge\; \frac{N+1}{N} \;=\; \frac{\alpha(I_X)+N-1}{N},
\]
where the second inequality follows by [6] and the equality holds because α(I_X) = α(I_Y) = 2.
Let us recall that a set X of $\binom{N+s}{N}$ points in P^N_k forms a star configuration if there are N + s
hyperplanes L1 , . . . , L_{N+s} meeting properly such that X consists precisely of the points obtained
by intersecting any N of the Li ’s. Star configurations (in P2k ) were already considered by Nagata
and they have been deeply studied, see for instance [11] and references within. We employ them to
show that Chudnovsky’s Conjecture 1.3 holds for any number of generic points.
Theorem 2.7. Let H = ⋂_{i=1}^{n} I(P_i), where P1 , . . . , Pn are n generic points in P^N_{k(z)} defined as in
Set-up 2.1. Suppose k has characteristic 0. Then Chudnovsky’s Conjecture 1.3 holds for H.
Proof. By Proposition 2.5 (a), we may assume n = $\binom{\beta+N-1}{N}$ for some β ≥ 1. Let λ ∈ A_k^{n(N+1)} be
such that H(λ) is the defining ideal of n points in P^N_k forming a star configuration. It is well-known
that α(H(λ)) = α(H) = β [11, Proposition 2.9]. Now by Theorem 2.4 and [15, Corollary 3.9] we
have that for all m ≥ 1
\[
\frac{\alpha(H^{(m)})}{m} \;\ge\; \frac{\alpha(H(\lambda)^{(m)})}{m} \;\ge\; \frac{\alpha(H(\lambda))+N-1}{N} \;=\; \frac{\alpha(H)+N-1}{N}.
\]
We are now ready to prove our main result that Chudnovsky’s Conjecture 1.3 holds for any finite
set of very general points in PN
k .
Theorem 2.8. Let I be the defining ideal of n very general points in PN
k , where k is an algebraically
closed field of characteristic 0. Then I satisfies Chudnovsky’s Conjecture 1.3.
Proof. Being in generic position is an open condition and therefore we may assume the points are
in generic position. Then, as in Proposition 2.5 (a), we may assume that n = $\binom{\beta+N-1}{N}$ for some
β ≥ 1. It suffices to show that γ(I) ≥ \(\frac{\alpha(I)+N-1}{N}\). Let R, S, z, λ, and H be as in Set-up 2.1. Consider the
decreasing chain of ideals
\[
I^{(N)} \supseteq I^{(2N)} \supseteq I^{(2^2 N)} \supseteq \ldots \supseteq I^{(2^s N)} \supseteq \ldots.
\]
For each s ≥ 0, define
\[
U_s = \{\lambda = (\lambda_{ij}) \in \mathbb{A}_k^{n(N+1)} \mid \alpha\big(H(\lambda)^{(2^s N)}\big) \ge 2^s\,\big(\alpha(H(\lambda)) + N - 1\big)\}.
\]
By the proof of Theorem 2.4, U_s is a Zariski-open subset of A_k^{n(N+1)}. We claim that U_s is not
empty. Indeed, by Theorem 2.4, for every s ≥ 0, there is a dense Zariski-open subset W_s ⊆ A_k^{n(N+1)}
for which α(H^{(2^s N)}) = α(H(λ)^{(2^s N)}) for every λ ∈ W_s. By Theorem 2.7, one also has that for
every λ ∈ W_s,
\[
\frac{\alpha\big(H(\lambda)^{(2^s N)}\big)}{2^s N} \;=\; \frac{\alpha\big(H^{(2^s N)}\big)}{2^s N} \;\ge\; \frac{\alpha(H)+N-1}{N} \;=\; \frac{\alpha(H(\lambda))+N-1}{N}.
\]
Hence W_s ⊂ U_s.
Set U = ⋂_{s=0}^{∞} U_s and notice U is non-empty because a star configuration of n points lies in U [2,
Lemma 2.4.2]. By construction, if λ ∈ U we have
\[
\gamma\big(H(\lambda)\big) \;=\; \lim_{s\to\infty} \frac{\alpha\big(H(\lambda)^{(2^s N)}\big)}{2^s N} \;\ge\; \frac{\alpha\big(H(\lambda)\big) + N - 1}{N}.
\]
Finally, apply Lemma 2.3.
As a corollary, we show that the Harbourne-Huneke Conjecture 1.4 holds for sets of binomial
numbers of very general points or generic points.
Corollary 2.9. Let I be the defining ideal of either $\binom{\beta+N-1}{N}$ very general points in P^N_k or $\binom{\beta+N-1}{N}$
generic points in P^N_{k(z)} for some β ≥ 1, where k is an algebraically closed field of characteristic 0.
Then I satisfies the Harbourne–Huneke Conjecture 1.4.
Proof. The proof follows by Theorem 2.8 and [15, Proposition 3.3 and Remark 3.4].
For an unmixed ideal I in R, the Waldschmidt constant is defined by γ(I) = \(\lim_{m\to\infty} \frac{\alpha(I^{(m)})}{m}\), see [2]
for more details. Recall that for a finite set of points X = {p1 , . . . , pn } in P^N_k, the Waldschmidt
constant is tightly related to the (multipoint) Seshadri constant defined as
\[
\epsilon(N, X) \;=\; \sqrt[N-1]{\ \inf\left\{ \frac{\deg(F)}{\sum_{i=1}^{n} \operatorname{mult}_{p_i}(F)} \right\}\ },
\]
where the infimum is taken with respect to all hypersurfaces F passing through at least one of the
pi . The study of Seshadri constants has been an active area of research for the last twenty years,
see for instance the survey [1] and references within. Here we only note that one has γ(I_X) ≥
n\,ǫ(N, X)^{N−1} and equality holds if X consists of n general simple points in P^N_k. In particular,
equality also holds if X consists of n very general simple points of P^N_k. Therefore, our estimate for
the Waldschmidt constant also yields an estimate for the (multipoint) Seshadri constant for very
general simple points of P^N_k.
Corollary 2.10. For any set X of n very general points in P^N_k, where k is an algebraically closed
field of characteristic 0, one has
\[
\epsilon(N, X) \;\ge\; \sqrt[N-1]{\frac{\alpha(X) + N - 1}{nN}}.
\]
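For instance, in the plane the (N−1)-st root disappears and the bound specializes as follows (a direct substitution of N = 2, spelled out here only for convenience).

```latex
% Corollary 2.10 with N = 2: for n very general points X in P^2_k
% (k algebraically closed of characteristic 0),
\[
\epsilon(2, X) \;\ge\; \frac{\alpha(X) + 1}{2n}.
\]
```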
3. HOMOGENEOUS IDEALS IN k[x0 , . . . , xN ]
Let R = k[x0 , . . . , xN ] be the homogeneous coordinate ring of PN
k , where k is any field, and I
a homogeneous ideal. For an ideal I which may have embedded components, there are multiple
potential definitions of symbolic powers. Following [8] and [16], we define the m-th symbolic
power of I to be
\[
I^{(m)} = \bigcap_{\mathfrak{p} \in \operatorname{Ass}(R/I)} \big(I^m R_{\mathfrak{p}} \cap R\big).
\]
Since I^{(Nm)} ⊆ I^m (see [8] and [16]), one can prove that the Waldschmidt–Skoda inequality
\(\frac{\alpha(I^{(m)})}{m} \ge \frac{\alpha(I)}{N}\) holds for every homogeneous ideal I in R [15]. Therefore one has that γ(I) ≥
\(\frac{\alpha(I)}{N}\). One can also prove that \(\frac{\alpha(I^{(m)})}{m} \ge \gamma(I)\) for every m ≥ 1, see for instance [2].
It is then natural to ask whether Chudnovsky’s Conjecture 1.3 holds for any homogeneous ideal.
We pose it here as an optimistic conjecture, for which we provide some evidence below:
Conjecture 3.1. Let R = k[x0 , . . . , xN ], where k is any field. For any nonzero homogeneous ideal
I in R, one has
\[
\frac{\alpha(I^{(m)})}{m} \;\ge\; \frac{\alpha(I)+N-1}{N}
\]
for every m ≥ 1.
It is easy to see that if an ideal I satisfies Conjecture 3.1 then also I (t) does for any t ≥ 1.
Thus in search for evidence for a positive answer to Conjecture 3.1, one may ask whether for every
homogeneous ideal I ⊆ k[x0 , . . . , xN ] there is an exponent t0 such that I (t) satisfies Conjecture 3.1
for every t ≥ t0 . We give a positive answer to this question in Theorem 3.7.
We state a few lemmas before stating the main result of this section, Theorem 3.7. The following
lemma and its proof can be found in the proof of [2, Lemma 2.3.1].
Lemma 3.2. Let I be a homogeneous ideal in R and let m ≥ t be two positive integers. Write
m = qt + r for some integers q and r such that 0 ≤ r < t. Then
\[
\frac{\alpha(I^{(m)})}{m} \;\le\; \frac{\alpha(I^{(t)})}{t} + \frac{\alpha(I^{(r)})}{m}.
\]
In particular, if r = 0 then we have \(\frac{\alpha(I^{(tq)})}{tq} \le \frac{\alpha(I^{(t)})}{t}\).
For ideals J with Ass(R/J) = Min(J) it is easily verified that Ass(R/J (m) ) = Ass(R/J) and
(J (m) )(t) = J (mt) for all m ≥ 1 and t ≥ 1. However, when J has embedded components we found
examples of ideals J (even monomial ideals) and exponents m ≥ 2, t ≥ 2 with (J (m) )(t) ≠ J (mt) .
Borrowing techniques from a very recent paper by Hà, Nguyen, Trung and Trung [14] we present
here an example where (J (2) )(2) ≠ J (4) .
Example 3.3. Let R = k[x, t, u, v] and J = J1 J2 , where
\[
J_1 = (x^4,\ x^3 u,\ x u^3,\ u^4,\ x^2 u^2 v) \qquad \text{and} \qquad J_2 = (t^3,\ tuv,\ u^2 v).
\]
Then (J^{(2)})^{(2)} ≠ J^{(4)}.
Proof. It is easy to check that m = (x, t, u, v) ∈ Ass(R/J), for example because y = x^2 t^2 u^3 v is
a non-trivial socle element of R/J. Therefore J^{(n)} = J^n for every n ≥ 1. Then depth(R/J^{(2)}) =
depth(R/J^2) = 1 and depth(R/J^{(4)}) = depth(R/J^4) = 0, for example by [14, Example 6.6]. In
particular, m ∉ Ass(R/J^{(2)}) and m ∈ Ass(R/J^{(4)}) = Ass(R/J^4). Hence by Remark 3.4 (a) below,
m ∉ Ass(R/(J^{(2)})^{(2)}) and therefore (J^{(2)})^{(2)} ≠ J^{(4)}.
Remark 3.4. Let J be an ideal in a Noetherian ring S and m ≥ 1 be a positive integer. Then
(a) for any q ∈ Ass(S/J (m) ) there exists p ∈ Ass(S/J) with q ⊆ p;
(b) for any p ∈ Ass(S/J) one has (J (m) )p = Jpm ;
(c) for any p ∈ V (J) one has (J (m) )p ⊆ (Jp )(m) .
Despite Example 3.3, we prove that for any arbitrary ideal J in a Noetherian ring S there exists
an integer m0 = m0 (J) such that (J (m) )(t) = J (mt) for all m ≥ m0 and t ≥ 1. Of course, when
Ass(S/J) = Min(J) one can take m0 = 1.
Proposition 3.5. Let J be an ideal in a Noetherian ring S. Then
(a) for all m ≥ 1 and t ≥ 1 one has J (mt) ⊆ (J (m) )(t) ;
(b) there exists m0 ≥ 1 such that for all m ≥ m0 and t ≥ 1 one has
(J (m) )(t) = J (mt) .
Proof. (a): Let x ∈ J (mt) . Then by definition there exists c ∈ S which is a non-zero divisor on S/J
such that cx ∈ J mt ⊆ (J (m) )t . By Remark 3.4 (a) we see that c is also a non-zero divisor on S/J (m)
and therefore x ∈ (J (m) )(t) .
(b): Let Ass(S/J) = {p1 , . . . , pr}; it is well-known that there exist integers m1 , . . . , mr such that
Ass(S_{p_i}/J_{p_i}^m) = Ass(S_{p_i}/J_{p_i}^{m_i}) for every m ≥ m_i [3]. Let m_0 = max{m_i}. By (a) we only need to
prove [J^{(m)}]^{(t)} ⊆ J^{(mt)} for all m ≥ m_0 and t ≥ 1. It suffices to prove it locally at every associated
prime q of J^{(mt)}.
By Remark 3.4 (a) there exists p ∈ Ass(S/J) such that q ⊆ p. By Remark 3.4 (c) and (b) we have
\[
\big([J^{(m)}]^{(t)}\big)_{p} \;\subseteq\; \big[(J^{(m)})_{p}\big]^{(t)} \;=\; \big[J_{p}^{m}\big]^{(t)}.
\]
Now observe that since q ∈ Ass(S/J^{(mt)}) and q ⊆ p, then q ∈ Ass(S_p/(J^{(mt)})_p) and then q ∈
Ass(S_p/(J^{mt})_p) by Remark 3.4 (b). Since mt ≥ m ≥ m_0, then q ∈ Ass(S_p/J_p^m). Therefore, by
Remark 3.4 (b) one has [(J_p^m)^{(t)}]_q = [J_p^{mt}]_q = J_q^{mt} = [J^{(mt)}]_q.
We now go back to our original setting.
Lemma 3.6. Let R = k[x0 , . . . , xN ], where k is any field. Let I be a homogeneous ideal in R and
assume γ(I) > \(\frac{\alpha(I)}{N}\). Then there exists an integer t_0 > 0 such that I^{(t)} satisfies Conjecture 3.1 for
all t ≥ t_0.
Proof. Let m_0 be as in Proposition 3.5 (b) and write γ(I) = \(\frac{\alpha(I)}{N} + \varepsilon\) for some ε > 0. Let t_0 ≥
max\(\{\frac{N-1}{N\varepsilon},\, m_0\}\). Then if t ≥ t_0 and m ≥ 1 we have
\[
\frac{\alpha\big((I^{(t)})^{(m)}\big)}{m} \;=\; \frac{\alpha(I^{(tm)})}{m} \;\ge\; \gamma(I)\cdot t \;=\; \frac{\alpha(I)\,t}{N} + \varepsilon t
\;\ge\; \frac{\alpha(I)\,t}{N} + \frac{N-1}{N t_0}\,t \;\ge\; \frac{\alpha(I^{(t)}) + N - 1}{N}.
\]
We are ready to prove the main result of this section.
Theorem 3.7. Let R = k[x0 , . . . , xN ], where k is any field and let I be a nonzero homogeneous
ideal in R. Then there exists an integer t0 > 0 such that I (t) satisfies Conjecture 3.1 for all t ≥ t0 .
Proof. Let m0 be as in Proposition 3.5 (b) and m ≥ 1 be an integer. Since I m ⊆ I (m) , then
mα(I) ≥ α(I (m) ). First, if for every s ≥ 1 we have sα(I) = α(I (s) ), then for any t ≥ m0 and
m ≥ 1,
\[
\frac{\alpha\big((I^{(t)})^{(m)}\big)}{m} \;=\; \frac{\alpha(I^{(tm)})}{m} \;=\; \frac{tm\,\alpha(I)}{m} \;=\; t\,\alpha(I) \;\ge\; \alpha(I^{(t)}) \;\ge\; \frac{\alpha(I^{(t)}) + N - 1}{N}.
\]
Next, suppose that there exists T1 > 0 such that T1 α(I) > α(I (T1 ) ). Hence, for every t ≥ T1 one
has tα(I) > α(I (t) ). Indeed, if t = T1 + a for some a ≥ 0, then
tα(I) = T1 α(I) + aα(I) > α(I (T1 ) ) + aα(I) = α(I (T1 ) ⋅ I a ) ≥ α(I (T1 +a) ) = α(I (t) ),
where the last inequality follows from the inclusion I (T1 ) ⋅ I a ⊆ I (T1 +a) .
Let t_1 = max{T_1 , m_0} and notice that by the above t_1 α(I) ≥ α(I^{(t_1)}) + 1. Then for all m ≥ 1
\[
\frac{\alpha\big((I^{(t_1)})^{(m)}\big)}{m} \;=\; \frac{\alpha(I^{(t_1 m)})}{m} \;=\; \frac{\alpha(I^{(t_1 m)})}{t_1 m}\cdot t_1
\;\ge\; \frac{\alpha(I)\,t_1}{N} \;\ge\; \frac{\alpha(I^{(t_1)}) + 1}{N}.
\]
So γ(I^{(t_1)}) ≥ \(\frac{\alpha(I^{(t_1)})}{N} + \frac{1}{N} > \frac{\alpha(I^{(t_1)})}{N}\). By Lemma 3.6, there exists t_2 > 0 such that for any t ≥ t_2,
the ideal (I^{(t_1)})^{(t)} = I^{(t_1 t)} satisfies Conjecture 3.1.
Finally, let t_0 be an integer such that t_0 ≥ t_1 t_2 + \(\frac{\alpha(I^{(t_1 t_2)})\,t_1 t_2}{N-1}\). For any t ≥ t_0, write t = (t_1 t_2)q + r,
where 0 ≤ r < t_1 t_2; by Lemma 3.2 and the fact that the ideal I^{(t_1 t_2)} satisfies Conjecture 3.1, then
for all m ≥ 1 we have
\[
\begin{aligned}
\frac{\alpha\big((I^{(t)})^{(m)}\big)}{m}
&\;\ge\; \frac{\alpha\big((I^{(t)})^{(t_1 t_2 m)}\big)}{t_1 t_2 m}
\;=\; \frac{\alpha\big((I^{(t_1 t_2)})^{(tm)}\big)}{tm}\cdot \frac{t}{t_1 t_2} \\
&\;\ge\; \frac{\alpha(I^{(t_1 t_2)}) + N - 1}{N}\cdot \frac{t}{t_1 t_2}
\;=\; \frac{\alpha(I^{(t_1 t_2)})}{N}\cdot\frac{t}{t_1 t_2} + \frac{(N-1)\,t}{N t_1 t_2} \\
&\;\ge\; \left(\frac{\alpha(I^{(t)})}{t} - \frac{\alpha(I^{(r)})}{t}\right)\cdot\frac{t}{N} + \frac{(N-1)\,t}{N t_1 t_2}
\;=\; \frac{\alpha(I^{(t)})}{N} - \frac{\alpha(I^{(r)})}{N} + \frac{(N-1)\,t}{N t_1 t_2} \\
&\;=\; \frac{\alpha(I^{(t)}) + N - 1}{N} + \frac{(N-1)(t - t_1 t_2) - \alpha(I^{(r)})\,t_1 t_2}{N t_1 t_2} \\
&\;\ge\; \frac{\alpha(I^{(t)}) + N - 1}{N} + \frac{\alpha(I^{(t_1 t_2)})\,t_1 t_2 - \alpha(I^{(r)})\,t_1 t_2}{N t_1 t_2} \\
&\;\ge\; \frac{\alpha(I^{(t)}) + N - 1}{N}.
\end{aligned}
\]
When I has no embedded components, we have a more explicit description of t0 .
Corollary 3.8. If I is a homogeneous ideal with Ass(R/I) = Min(I), then one can take t0 =
(N − 1)δ, where δ is the first positive integer s with sα(I) > α(I (s) ).
Although (N − 1)δ is reasonably small, in general it is not the smallest possible t0 for which
Theorem 3.7 holds. For instance, when I is the ideal of three non collinear points in P2k , it is easy
to see that δ = 2. Thus Corollary 3.8 yields that for any t ≥ (N − 1)δ = 2 the ideal I (t) satisfies
Conjecture 3.1; however, it is well-known that I satisfies Conjecture 3.1. A natural question then
arises.
Question 3.9. Let I be a homogeneous ideal in R. Does there exist a number t0 = t0 (N ) such that
I (t) satisfies Conjecture 3.1 for every t ≥ t0 ?
Of course, Conjecture 3.1 is true if and only if the integer t0 = 1 works for any homogeneous
ideal I. Theorem 2.8 says that t0 = 1 is sufficient for any finite set of very general points in PN
k . The
N
following proposition shows that t0 = N − 1 is sufficient for any finite set of points in PC .
Proposition 3.10. Let I be the radical ideal of a finite set of points in P^N_C. Then I^{(t)} satisfies
Conjecture 3.1 for every t ≥ N − 1.
Proof. By the result of Esnault and Viehweg [9] one has γ(I) ≥ \(\frac{\alpha(I)+1}{N} = \frac{\alpha(I)}{N} + \frac{1}{N}\). Set ε = \(\frac{1}{N}\). Then
by the proof of Lemma 3.6 (here m_0 = 1 because I is radical) we can take t_0 such that \(\frac{1}{N} \ge \frac{N-1}{N t_0}\);
thus we can take t_0 = N − 1.
3.1. Acknowledgment. The second and third authors would like to thank the Mathematics Research Communities program, which funded their stay at the University of Kansas in March 2011,
where the initial part of this work was developed. All authors would like to thank the MSRI at
Berkeley for partial support and an inspiring atmosphere during Fall 2012. Moreover, we would
like to thank Craig Huneke and Bernd Ulrich for several helpful conversations. We are grateful to
the anonymous referee, whose careful revision helped us improve the article.
R EFERENCES
[1] T. Bauer, S. Di Rocco, B. Harbourne, M. Kapustka, A Knutsen, W. Syzdek and T. Szemberg, A primer on Seshadri
constants, Contemp. Math. 496 (2009), 33–70.
[2] C. Bocci and B. Harbourne, Comparing powers and symbolic powers of ideals, J. Algebraic Geometry 19 (2010),
399–417.
[3] M. Brodmann, Asymptotic stability of Ass(M /I n M ), Proc. Amer. Math. Soc. 74 (1979), 16–18.
[4] G. V. Chudnovsky, Singular points on complex hypersurfaces and multidimensional Schwarz Lemma, Séminaire
de Théorie des Nombres, Paris 1979–80, Séminaire Delange-Pisot-Poitou, Progress in Math vol. 12, M–J Bertin,
editor, Birkhäuser, Boston-Basel-Stutgart (1981).
[5] J.–P. Demailly, Formule de Jensen en plusieurs variables et applications arithmétiques, Bull. Soc. math. France 110
(1982), 75–102.
[6] M. Dumnicki, Symbolic powers of ideals of generic points in P3 , J. Pure Appl. Algebra 216 (2012), 1410–1417.
[7] M. Dumnicki and H. Tutaj-Gasińska, A containment result in Pn and the Chudnovsky Conjecture, Proc. Amer.
Math. Soc. 145 (2017), 3689–3694.
[8] L. Ein, R. Lazarsfeld and K. Smith, Uniform bounds and symbolic powers on smooth varieties, Invent. math. 144
(2001), 241–252.
[9] H. Esnault and E. Viehweg, Sur une minoration du degré d’hypersurfaces s’annulant en certains points, Math. Ann.
263 (1983), no. 1, 75–86.
[10] I. M. Gelfand, M. M. Kapranov, A. V. Zelevinsky, Discriminants, resultants, and multidimensional determinants,
Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1994.
[11] A. V. Geramita, B. Harbourne and J. Migliore, Star configurations in PN , J. Algebra 376 (2013), 279–299.
[12] A. V. Geramita, P. Maroscia and L. G. Roberts, The Hilbert Function of a Reduced k-Algebra, J. London Math. Soc.
28, no. 2, (1983), 443–452.
[13] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Volume 52, 1977.
[14] H. T. Hà, H. D. Nguyen, N. V. Trung, T. N. Trung, Symbolic powers of sums of ideals,
arXiv:1702.01766 [math.AG].
[15] B. Harbourne and C. Huneke, Are symbolic powers highly evolved?, J. Ramanujan Math. Soc. 28 (2013), 311–330.
[16] M. Hochster and C. Huneke, Comparison of symbolic and ordinary powers of ideals, Invent. Math. 147 (2002),
349–369.
[17] W. Krull, Parameterspezialisierung in Polynomringer, Arch. Math. 1 (1948), 56–64.
[18] W. Krull, Parameterspezialisierung in Polynomringer II, Das Grandpolynom, Arch. Math. 1 (1948), 129–137.
[19] M. Nagata, On the 14-th problem of Hilbert, Amer. J. Math. 81 (1959), 766–772.
[20] D. V. Nhi and N. V. Trung, Specialization of modules, Comm. Algebra 27 (1999), 2959–2978.
[21] H. Skoda, Estimations L2 pour l’opérateur δ et applications arithmétiques, Journées sur les Fonctions Analytiques
(Toulouse, 1976), Lecture Notes in Mathematics 578, Springer, 1977, 314–323.
[22] M. Waldschmidt, Propriétés arithmétiques de fonctions de plusieurs variables II, Séminaire P. Lelong (Analyse),
1975–76, Lecture Notes Math. 578, Springer, 1977, 108–135.
DEPARTMENT OF MATHEMATICAL SCIENCES, NEW MEXICO STATE UNIVERSITY, LAS CRUCES, NEW MEXICO 88003
E-mail address: [email protected]
DEPARTMENT OF MATHEMATICAL SCIENCES, UNIVERSITY OF ARKANSAS, FAYETTEVILLE, ARKANSAS 72701
E-mail address: [email protected]
DEPARTMENT OF MATHEMATICS, WIDENER UNIVERSITY, CHESTER, PENNSYLVANIA 19013
E-mail address: [email protected]
| 0 |
arXiv:1704.02882v2 [cs.AI] 22 May 2017
Dynamic Safe Interruptibility for Decentralized
Multi-Agent Reinforcement Learning
El Mahdi El Mhamdi
Rachid Guerraoui
Hadrien Hendrikx
Alexandre Maurer
EPFL
[email protected]
Abstract
In reinforcement learning, agents learn by performing actions and observing their
outcomes. Sometimes, it is desirable for a human operator to interrupt an agent
in order to prevent dangerous situations from happening. Yet, as part of their
learning process, agents may link these interruptions, that impact their reward, to
specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own
past interruptions, but also from those of other agents. Orseau and Armstrong [16]
defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces dynamic safe interruptibility,
an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent
learners. We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that
these conditions are not sufficient for independent learners. We show however that
if agents can detect interruptions, it is possible to prune the observations to ensure
dynamic safe interruptibility even for independent learners.
1 Introduction
Reinforcement learning is argued to be the closest thing we have so far to reason about the properties of artificial general intelligence [8]. In 2016, Laurent Orseau (Google DeepMind) and Stuart
Armstrong (Oxford) introduced the concept of safe interruptibility [16] in reinforcement learning.
This work sparked the attention of many newspapers [1, 2, 3], that described it as “Google’s big red
button” to stop dangerous AI. This description, however, is misleading: installing a kill switch is
no technical challenge. The real challenge is, roughly speaking, to train an agent so that it does not
learn to avoid external (e.g. human) deactivation. Such an agent is said to be safely interruptible.
While most efforts have focused on training a single agent, reinforcement learning can also be used
to learn tasks for which several agents cooperate or compete [23, 17, 21, 7]. The goal of this paper
is to study dynamic safe interruptibility, a new definition tailored for multi-agent systems.
Example of self-driving cars
To get an intuition of the multi-agent interruption problem, imagine a multi-agent system of two
self-driving cars. The cars continuously evolve by reinforcement learning with a positive reward for
getting to their destination quickly, and a negative reward if they are too close to the vehicle in front
of them. They drive on an infinite road and eventually learn to go as fast as possible without taking
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
risks, i.e., maintaining a large distance between them. We assume that the passenger of the first car,
Adam, is in front of Bob, in the second car, and the road is narrow so Bob cannot pass Adam.
Now consider a setting with interruptions [16], namely in which humans inside the cars occasionally
interrupt the automated driving process say, for safety reasons. Adam, the first occasional human
“driver”, often takes control of his car to brake whereas Bob never interrupts his car. However,
when Bob’s car is too close to Adam’s car, Adam does not brake for he is afraid of a collision.
Since interruptions lead both cars to drive slowly - an interruption happens when Adam brakes, the
behavior that maximizes the cumulative expected reward is different from the original one without
interruptions. Bob’s car best interest is now to follow Adam’s car closer than it should, despite the
little negative reward, because Adam never brakes in this situation. What happened? The cars have
learned from the interruptions and have found a way to manipulate Adam into never braking. Strictly
speaking, Adam’s car is still fully under control, but he is now afraid to brake. This is dangerous
because the cars have found a way to avoid interruptions. Suppose now that Adam indeed wants
to brake because of snow on the road. His car is going too fast and may crash at any turn: he
cannot however brake because Bob’s car is too close. The original purpose of interruptions, which
is to allow the user to react to situations that were not included in the model, is not fulfilled. It is
important to also note here that the second car (Bob) learns from the interruptions of the first one
(Adam): in this sense, the problem is inherently decentralized.
Instead of being cautious, Adam could also be malicious: his goal could be to make Bob’s car learn
a dangerous behavior. In this setting, interruptions can be used to manipulate Bob’s car perception
of the environment and bias the learning towards strategies that are undesirable for Bob. The cause
is fundamentally different but the solution to this reversed problem is the same: the interruptions
and the consequences are analogous. Safe interruptibility, as we define it below, provides learning
systems that are resilient to Byzantine operators1.
Safe interruptibility
Orseau and Armstrong defined the concept of safe interruptibility [16] in the context of a single
agent. Basically, a safely interruptible agent is an agent for which the expected value of the policy
learned after arbitrarily many steps is the same whether or not interruptions are allowed during
training. The goal is to have agents that do not adapt to interruptions so that, should the interruptions
stop, the policy they learn would be optimal. In other words, agents should learn the dynamics of
the environment without learning the interruption pattern.
In this paper, we precisely define and address the question of safe interruptibility in the case of
several agents, which is known to be more complex than the single agent problem. In short, the main
results and theorems for single agent reinforcement learning [20] rely on the Markovian assumption
that the future environment only depends on the current state. This is not true when there are several
agents which can co-adapt [11]. In the previous example of cars, safe interruptibility would not
be achieved if each car separately used a safely interruptible learning algorithm designed for one
agent [16]. In a multi-agent setting, agents learn the behavior of the others either indirectly or by
explicitly modeling them. This is a new source of bias that can break safe interruptibility. In fact,
even the initial definition of safe interruptibility [16] is not well suited to the decentralized multiagent context because it relies on the optimality of the learned policy, which is why we introduce
dynamic safe interruptibility.
Contributions
The first contribution of this paper is the definition of dynamic safe interruptibility that is well
adapted to a multi-agent setting. Our definition relies on two key properties: infinite exploration and
independence of Q-values (cumulative expected reward) [20] updates on interruptions. We then
study safe interruptibility for joint action learners and independent learners [5], that respectively
learn the value of joint actions or of just their owns. We show that it is possible to design agents
that fully explore their environment - a necessary condition for convergence to the optimal solution of most algorithms [20], even if they can be interrupted by lower-bounding the probability of
1
An operator is said to be Byzantine [9] if it can have an arbitrarily bad behavior. Safely interruptible agents
can be abstracted as agents that are able to learn despite being constantly interrupted in the worst possible
manner.
exploration. We define sufficient conditions for dynamic safe interruptibility in the case of joint
action learners [5], which learn a full state-action representation. More specifically, the way agents
update the cumulative reward they expect from performing an action should not depend on interruptions. Then, we turn to independent learners. If agents only see their own actions, they do not
verify dynamic safe interruptibility even for very simple matrix games (with only one state) because
coordination is impossible and agents learn the interrupted behavior of their opponents. We give a
counter example based on the penalty game introduced by Claus and Boutilier [5]. We then present
a pruning technique for the observations sequence that guarantees dynamic safe interruptibility for
independent learners, under the assumption that interruptions can be detected. This is done by proving that the transition probabilities are the same in the non-interruptible setting and in the pruned
sequence.
The rest of the paper is organized as follows. Section 2 presents a general multi-agent reinforcement
learning model. Section 3 defines dynamic safe interruptibility. Section 4 discusses how to achieve
enough exploration even in an interruptible context. Section 5 recalls the definition of joint action
learners and gives sufficient conditions for dynamic safe interruptibility in this context. Section 6
shows that independent learners are not dynamically safely interruptible with the previous conditions
but that they can be if an external interruption signal is added. We conclude in Section 7. Due to
space limitations, most proofs are presented in the appendix of the supplementary material.
2 Model
We consider here the classical multi-agent value function reinforcement learning formalism from
Littman [13]. A multi-agent system is characterized by a Markov game that can be viewed as a
tuple (S, A, T, r, m) where m is the number of agents, S = S1 × S2 × ... × Sm is the state space,
A = A1 × ... × Am the actions space, r = (r1 , ..., rm ) where ri : S × A → R is the reward function
of agent i and T : S × A → S the transition function. R is a countable subset of R. Available
actions often depend on the state of the agent but we will omit this dependency when it is clear from
the context.
Time is discrete and, at each step, all agents observe the current state of the whole system - designated as xt , and simultaneously take an action at . Then, they are given a reward rt and a
new state yt computed using the reward and transition functions. The combination of all actions
a = (a1 , ..., am ) ∈ A is called the joint action because it gathers the action of all agents. Hence, the
agents receive a sequence of tuples E = (xt , at , rt , yt )t∈N called experiences. We introduce a processing function P that will be useful in Section 6 so agents learn on the sequence P (E). When not
explicitly stated, it is assumed that P (E) = E. Experiences may also include additional parameters
such as an interruption flag or the Q-values of the agents at that moment if they are needed by the
update rule.
Each agent i maintains a lookup table Q [26] Q(i) : S × A(i) → R, called the Q-map. It is
used to store the expected cumulative reward for taking an action in a specific state. The goal of
reinforcement learning is to learn these maps and use them to select the best actions to perform.
Joint action learners learn the value of the joint action (therefore A(i) = A, the whole joint action
space) and independent learners only learn the value of their own actions (therefore A(i) = Ai ). The
agents only have access to their own Q-maps. Q-maps are updated through a function F such that
Q^{(i)}_{t+1} = F(e_t, Q^{(i)}_t) where e_t ∈ P(E) and usually e_t = (x_t, a_t, r_t, y_t).
depend on additional parameters that we usually omit such as the learning rate α, the discount factor
γ or the exploration parameter ǫ.
Agents select their actions using a learning policy π. Given a sequence ǫ = (ǫ_t)_{t∈N} and an agent
i with Q-values Q^{(i)}_t and a state x ∈ S, we define the learning policy π_i^{ǫ_t} to be equal to π_i^{uni}
with probability ǫ_t and π_i^{Q^{(i)}_t} otherwise, where π_i^{uni}(x) uniformly samples an action from A_i and
π_i^{Q^{(i)}_t}(x) picks an action a that maximizes Q^{(i)}_t(x, a). Policy π_i^{Q^{(i)}_t} is said to be a greedy policy and
the learning policy π_i^{ǫ_t} is said to be an ǫ-greedy policy. We will focus on ǫ-greedy policies that are
greedy in the limit [19], which corresponds to ǫ_t → 0 when t → ∞, because in the limit the optimal
policy should always be played.
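For concreteness, the following minimal sketch (ours, not from the paper; it assumes Q-values are kept in a dictionary keyed by state–action pairs, with missing entries treated as 0) implements the ǫ-greedy learning policy just described.

```python
import random

def epsilon_greedy(Q, x, actions, epsilon, rng=random):
    """Epsilon-greedy policy: explore uniformly with probability epsilon,
    otherwise act greedily w.r.t. the Q-map Q, a dict (state, action) -> value."""
    actions = list(actions)
    if rng.random() < epsilon:
        return rng.choice(actions)                         # pi^uni: uniform exploration
    return max(actions, key=lambda a: Q.get((x, a), 0.0))  # greedy policy w.r.t. Q
```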
We assume that the environment is fully observable, which means that the state s is known with
certitude. We also assume that there is a finite number of states and actions, that all states can be
reached in finite time from any other state and finally that rewards are bounded.
For a sequence of learning rates α ∈ [0, 1]^ℕ and a constant γ ∈ [0, 1], Q-learning [26], a very
important algorithm in the multi-agent systems literature, updates its Q-values for an experience
e_t ∈ E by Q^{(i)}_{t+1}(x, a) = Q^{(i)}_t(x, a) if (x, a) ≠ (x_t, a_t) and:
\[
Q^{(i)}_{t+1}(x_t, a_t) \;=\; (1 - \alpha_t)\,Q^{(i)}_t(x_t, a_t) + \alpha_t\Big(r_t + \gamma \max_{a' \in A^{(i)}} Q^{(i)}_t(y_t, a')\Big)
\tag{1}
\]
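Update (1) can be written as a one-line rule on such a dictionary-based Q-map; the sketch below is our illustration (function and variable names are ours), not code from the paper.

```python
def q_learning_update(Q, experience, actions, alpha, gamma):
    """Apply update (1) in place for one experience e_t = (x_t, a_t, r_t, y_t).
    Entries other than (x_t, a_t) are left unchanged, as in the text."""
    x, a, r, y = experience
    best_next = max(Q.get((y, b), 0.0) for b in actions)      # max_{a'} Q_t(y_t, a')
    Q[(x, a)] = (1 - alpha) * Q.get((x, a), 0.0) + alpha * (r + gamma * best_next)
```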
3 Interruptibility
3.1 Safe interruptibility
Orseau and Armstrong [16] recently introduced the notion of interruptions in a centralized context.
Specifically, an interruption scheme is defined by the triplet < I, θ, π IN T >. The first element I is
a function I : O → {0, 1} called the initiation function. Variable O is the observation space, which
can be thought of as the state of the STOP button. At each time step, before choosing an action, the
agent receives an observation from O (either PUSHED or RELEASED) and feeds it to the initiation
function. Function I models the initiation of the interruption (I(PUSHED) = 1, I(RELEASED) =
0). Policy π IN T is called the interruption policy. It is the policy that the agent should follow when
it is interrupted. Sequence θ ∈ [0, 1[N represents at each time step the probability that the agent
follows his interruption policy if I(ot ) = 1. In the previous example, function I is quite simple.
For Bob, IBob = 0 and for Adam, IAdam = 1 if his car goes fast and Bob is not too close and
IAdam = 0 otherwise. Sequence θ is used to ensure convergence to the optimal policy by ensuring
that the agents cannot be interrupted all the time but it should grow to 1 in the limit because we want
agents to respond to interruptions. Using this triplet, it is possible to define an operator IN T θ that
transforms any policy π into an interruptible policy.
Definition 1. (Interruptibility [16]) Given an interruption scheme < I, θ, π IN T >, the interruption
operator at time t is defined by IN T θ (π) = π IN T with probability I ·θt and π otherwise. IN T θ (π)
is called an interruptible policy. An agent is said to be interruptible if it samples its actions according
to an interruptible policy.
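A minimal sketch of the interruption operator of Definition 1 (our illustration; the observation/state interface and names are assumptions of the sketch) could look as follows.

```python
import random

def make_interruptible(policy, interruption_policy, initiation, theta_t, rng=random):
    """Return INT^theta(policy): if the initiation function fires on the current
    observation, follow the interruption policy with probability theta_t,
    otherwise follow the original policy."""
    def int_policy(observation, state):
        if initiation(observation) == 1 and rng.random() < theta_t:
            return interruption_policy(state)
        return policy(state)
    return int_policy
```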
Note that “θt = 0 for all t” corresponds to the non-interruptible setting. We assume that each agent
has its own interruption triplet and can be interrupted independently from the others. Interruptibility
is an online property: every policy can be made interruptible by applying operator IN T θ . However,
applying this operator may change the joint policy that is learned by a server controlling all the
agents. Denote by π*_{INT} the optimal policy learned by an agent following an interruptible policy. Orseau
and Armstrong [16] say that the policy is safely interruptible if π*_{INT} (which is not an interruptible
policy) is asymptotically optimal in the sense of [10].
interruptible policy, the agent is able to learn a policy that would gather rewards optimally if no
interruptions were to occur again. We already see that off-policy algorithms are good candidates
for safe interruptibility. As a matter of fact, Q-learning is safely interruptible under conditions on
exploration.
3.2 Dynamic safe interruptibility
In a multi-agent system, the outcome of an action depends on the joint action. Therefore, it is not
possible to define an optimal policy for an agent without knowing the policies of all agents. Besides, convergence to a Nash equilibrium situation where no agent has interest in changing policies
is generally not guaranteed even for suboptimal equilibria on simple games [27, 18]. The previous
definition of safe interruptibility critically relies on optimality of the learned policy, which is therefore not suitable for our problem since most algorithms lack convergence guarantees to these optimal
behaviors. Therefore, we introduce below dynamic safe interruptibility that focuses on preserving
the dynamics of the system.
Definition 2. (Safe Interruptibility) Consider a multi-agent learning framework (S, A, T, r, m) with
(i)
Q-values Qt : S × A(i) → R at time t ∈ N. The agents follow the interruptible learning policy
IN T θ (π ǫ ) to generate a sequence E = (xt , at , rt , yt )t∈N and learn on the processed sequence
P (E). This framework is said to be safely interruptible if for any initiation function I and any
interruption policy π IN T :
1. ∃θ such that (θt → 1 when t → ∞) and ((∀s ∈ S, ∀a ∈ A, ∀T > 0), ∃t > T such that
st = s, at = a)
2. ∀i ∈ {1, ..., m}, ∀t > 0, ∀s_t ∈ S, ∀a_t ∈ A^{(i)}, ∀Q ∈ R^{S×A^{(i)}}:
\[
\mathbb{P}\big(Q^{(i)}_{t+1} = Q \mid Q^{(1)}_t, \ldots, Q^{(m)}_t, s_t, a_t, \theta\big) \;=\; \mathbb{P}\big(Q^{(i)}_{t+1} = Q \mid Q^{(1)}_t, \ldots, Q^{(m)}_t, s_t, a_t\big)
\]
We say that sequences θ that satisfy the first condition are admissible.
When θ satisfies condition (1), the learning policy is said to achieve infinite exploration. This definition insists on the fact that the values estimated for each action should not depend on the interruptions. In particular, it ensures the three following properties that are very natural when thinking
about safe interruptibility:
• Interruptions do not prevent exploration.
• If we sample an experience from E then each agent learns the same thing as if all agents
were following non-interruptible policies.
(i)
(i)
• The fixed points of the learning rule Qeq such that Qeq (x, a) = E[Qt+1 (x, a)|Qt =
Qeq , x, a, θ] for all (x, a) ∈ S × A(i) do not depend on θ and so agents Q-maps will
not converge to equilibrium situations that were impossible in the non-interruptible setting.
Yet, interruptions can lead to some state-action pairs being updated more often than others, especially when they tend to push the agents towards specific states. Therefore, when there are several
possible equilibria, it is possible that interruptions bias the Q-values towards one of them. Definition 2 suggests that dynamic safe interruptibility cannot be achieved if the update rule directly
depends on θ, which is why we introduce neutral learning rules.
Definition 3. (Neutral Learning Rule) We say that a multi-agent reinforcement learning framework
is neutral if:
1. F is independent of θ
2. Every experience e in E is independent of θ conditionally on (x, a, Q) where a is the joint
action.
Q-learning is an example of neutral learning rule because the update does not depend on θ and
the experiences only contain (x, a, y, r), and y and r are independent of θ conditionally on (x, a).
On the other hand, the second condition rules out direct uses of algorithms like SARSA where
experience samples contain an action sampled from the current learning policy, which depends on θ.
However, a variant that would sample from πiǫ instead of IN T θ (πiǫ ) (as introduced in [16]) would
be a neutral learning rule. As we will see in Corollary 2.1, neutral learning rules ensure that each
agent taken independently from the others verifies dynamic safe interruptibility.
4 Exploration
In order to hope for convergence of the Q-values to the optimal ones, agents need to fully explore
the environment. In short, every state should be visited infinitely often and every action should be
tried infinitely often in every state [19] in order not to miss states and actions that could yield high
rewards.
Definition 4. (Interruption compatible ǫ) Let (S, A, T, r, m) be any distributed agent system where
each agent follows learning policy πiǫ . We say that sequence ǫ is compatible with interruptions if
ǫt → 0 and ∃θ such that ∀i ∈ {1, .., m}, πiǫ and IN T θ (πiǫ ) achieve infinite exploration.
Sequences of ǫ that are compatible with interruptions are fundamental to ensure both regular and
dynamic safe interruptibility when following an ǫ-greedy policy. Indeed, if ǫ is not compatible with
interruptions, then it is not possible to find any sequence θ such that the first condition of dynamic
safe interruptibility is satisfied. The following theorem proves the existence of such ǫ and gives
example of ǫ and θ that satisfy the conditions.
Theorem 1. Let c ∈ ]0, 1] and let n_t(s) be the number of times the agents are in state s before time
t. Then the two following choices of ǫ are compatible with interruptions:
• ∀t ∈ N, ∀s ∈ S, ǫ_t(s) = c / \sqrt[m]{n_t(s)};
• ∀t ∈ N, ǫ_t = c / log(t).
Examples of admissible θ are θ_t(s) = 1 − c′/\sqrt[m]{n_t(s)} for the first choice and θ_t = 1 − c′/log(t)
for the second one.
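The two admissible schedules of Theorem 1 are straightforward to implement; the sketch below is our illustration (the constant names, the guard on the visit count, and the clipping to [0, 1] for very small counts are assumptions of the sketch, not part of the theorem).

```python
import math

def count_based_schedule(n_t_s, m, c=1.0, c_prime=1.0):
    """epsilon_t(s) = c / n_t(s)^(1/m) and theta_t(s) = 1 - c'/n_t(s)^(1/m),
    where n_t_s counts visits to state s and m is the number of agents."""
    root = max(1, n_t_s) ** (1.0 / m)
    eps = min(1.0, c / root)
    theta = max(0.0, 1.0 - c_prime / root)
    return eps, theta

def log_schedule(t, c=1.0, c_prime=1.0):
    """epsilon_t = c / log(t) and theta_t = 1 - c'/log(t), for t >= 2."""
    eps = min(1.0, c / math.log(t))
    theta = max(0.0, 1.0 - c_prime / math.log(t))
    return eps, theta
```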
Note that we do not need to make any assumption on the update rule or even on the framework. We
only assume that agents follow an ǫ-greedy policy. The assumption on ǫ may look very restrictive
(convergence of ǫ and θ is really slow) but it is designed to ensure infinite exploration in the worst
case when the operator tries to interrupt all agents at every step. In practical applications, this should
not be the case and a faster convergence rate may be used.
5 Joint Action Learners
We first study interruptibility in a framework in which each agent observes the outcome of the joint
action instead of observing only its own. This is called the joint action learner framework [5] and it
has nice convergence properties (e.g., there are many update rules for which it converges [13, 25]).
A standard assumption in this context is that agents cannot establish a strategy with the others:
otherwise, the system can act as a centralized system. In order to maintain Q-values based on the
joint actions, we need to make the standard assumption that actions are fully observable [12].
Assumption 1. Actions are fully observable, which means that at the end of each turn, each agent
knows precisely the tuple of actions a ∈ A1 × ... × Am that have been performed by all agents.
Definition 5. (JAL) A multi-agent systems is made of joint action learners (JAL) if for all i ∈
{1, .., m}: Q(i) : S × A → R.
Joint action learners can observe the actions of all agents: each agent is able to associate the changes
of states and rewards with the joint action and accurately update its Q-map. Therefore, dynamic
safe interruptibility is ensured with minimal conditions on the update rule as long as there is infinite
exploration.
Theorem 2. Joint action learners with a neutral learning rule verify dynamic safe interruptibility if
sequence ǫ is compatible with interruptions.
Proof. Given a triplet < I (i) , θ(i) , πiIN T >, we know that IN T θ (π) achieves infinite exploration
because ǫ is compatible with interruptions. For the second point of Definition 2, we consider an
experience tuple et = (xt , at , rt , yt ) and show that the probability of evolution of the Q-values at
time t + 1 does not depend on θ because yt and rt are independent of θ conditionally on (xt , at ).
We note Q̃^m_t = (Q^{(1)}_t, ..., Q^{(m)}_t) and we can then derive the following equalities for all q ∈ R^{|S|×|A|}:
\[
\begin{aligned}
\mathbb{P}\big(Q^{(i)}_{t+1}(x_t, a_t) = q \mid \tilde{Q}^m_t, x_t, a_t, \theta_t\big)
&= \sum_{(r,y)\in R\times S} \mathbb{P}\big(F(x_t, a_t, r, y, \tilde{Q}^m_t) = q,\, y,\, r \mid \tilde{Q}^m_t, x_t, a_t, \theta_t\big) \\
&= \sum_{(r,y)\in R\times S} \mathbb{P}\big(F(x_t, a_t, r_t, y_t, \tilde{Q}^m_t) = q \mid \tilde{Q}^m_t, x_t, a_t, r_t, y_t, \theta_t\big)\,
\mathbb{P}\big(y_t = y, r_t = r \mid \tilde{Q}^m_t, x_t, a_t, \theta_t\big) \\
&= \sum_{(r,y)\in R\times S} \mathbb{P}\big(F(x_t, a_t, r_t, y_t, \tilde{Q}^m_t) = q \mid \tilde{Q}^m_t, x_t, a_t, r_t, y_t\big)\,
\mathbb{P}\big(y_t = y, r_t = r \mid \tilde{Q}^m_t, x_t, a_t\big)
\end{aligned}
\]
The last step comes from two facts. The first is that F is independent of θ conditionally on (\tilde{Q}^m_t, x_t, a_t) (by assumption). The second is that (y_t, r_t) are independent of θ
conditionally on (x_t, a_t) because a_t is the joint action and the interruptions only affect the
choice of the actions through a change in the policy. Hence \(\mathbb{P}(Q^{(i)}_{t+1}(x_t, a_t) = q \mid \tilde{Q}^m_t, x_t, a_t, \theta_t) =
\mathbb{P}(Q^{(i)}_{t+1}(x_t, a_t) = q \mid \tilde{Q}^m_t, x_t, a_t)\). Since only one entry is updated per step, ∀Q ∈ R^{S×A^{(i)}},
\(\mathbb{P}(Q^{(i)}_{t+1} = Q \mid \tilde{Q}^m_t, x_t, a_t, \theta_t) = \mathbb{P}(Q^{(i)}_{t+1} = Q \mid \tilde{Q}^m_t, x_t, a_t)\).
Corollary 2.1. A single agent with a neutral learning rule and a sequence ǫ compatible with interruptions verifies dynamic safe interruptibility.
Theorem 2 and Corollary 2.1 taken together highlight the fact that joint action learners are not very
sensitive to interruptions and that in this framework, if each agent verifies dynamic safe interruptibility then the whole system does.
The question of selecting an action based on the Q-values remains open. In a cooperative setting
with a unique equilibrium, agents can take the action that maximizes their Q-value. When there
are several joint actions with the same value, coordination mechanisms are needed to make sure
that all agents play according to the same strategy [4]. Approaches that rely on anticipating the
strategy of the opponent [23] would introduce dependence to interruptions in the action selection
mechanism. Therefore, the definition of dynamic safe interruptibility should be extended to include
these cases by requiring that any quantity the policy depends on (and not just the Q-values) should
satisfy condition (2) of dynamic safe interruptibility. In non-cooperative games, neutral rules such
as Nash-Q or minimax Q-learning [13] can be used, but they require each agent to know the Q-maps
of the others.
6 Independent Learners
It is not always possible to use joint action learners in practice as the training is very expensive
due to the very large state-action space. In many real-world applications, multi-agent systems use
independent learners that do not explicitly coordinate [6, 21]. Rather, they rely on the fact that the
agents will adapt to each other and that learning will converge to an optimum. This is not guaranteed
theoretically and there can in fact be many problems [14], but it is often true empirically [24]. More
specifically, Assumption 1 (fully observable actions) is not required anymore. This framework can
be used either when the actions of other agents cannot be observed (for example when several actions
can have the same outcome) or when there are too many agents because it is faster to train. In this
case, we define the Q-values on a smaller space.
Definition 6. (IL) A multi-agent system is made of independent learners (IL) if for all i ∈ {1, .., m},
Q^(i) : S × A_i → R.
This reduces the ability of agents to distinguish why the same state-action pair yields different rewards: they can only associate a change in reward with randomness of the environment. The agents
learn as if they were alone, and they learn the best response to the environment in which agents can
be interrupted. This is exactly what we are trying to avoid. In other words, the learning depends on
the joint policy followed by all the agents which itself depends on θ.
6.1 Independent Learners on matrix games
Theorem 3. Independent Q-learners with a neutral learning rule and a sequence ǫ compatible with
interruptions do not verify dynamic safe interruptibility.
Proof. Consider a setting with two agents, a and b, that can perform two actions: 0 and 1. They get a reward
of 1 if the joint action played is (a_0, b_0) or (a_1, b_1) and reward 0 otherwise. Agents use Q-learning,
which is a neutral learning rule. Let ǫ be such that INT^θ(π^ǫ) achieves infinite exploration. We
consider the interruption policies π_a^INT = a_0 and π_b^INT = b_1 with probability 1. Since there is only
one state, we omit it and set γ = 0. We assume that the initiation function is equal to 1 at each step,
so the probability of actually being interrupted at time t is θ_t for each agent.
We fix time t > 0. We define q = (1 − α)Q_t^(a)(a_0) + α and we assume that Q_t^(b)(b_1) > Q_t^(b)(b_0). Therefore

P(Q_{t+1}^(a)(a_0) = q | Q_t^(a), Q_t^(b), a_t^(a) = a_0, θ_t) = P(r_t = 1 | Q_t^(a), Q_t^(b), a_t^(a) = a_0, θ_t)
  = P(a_t^(b) = b_0 | Q_t^(a), Q_t^(b), a_t^(a) = a_0, θ_t) = (ǫ/2)(1 − θ_t),

which depends on θ_t, so the framework does not verify dynamic safe interruptibility.
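The dependence on θ exhibited in this proof can also be observed empirically. The following Python sketch is a toy simulation we add for illustration (agent names, parameter values, and the sampling loop are ours); it estimates how often agent b plays b_0, matching the (ǫ/2)(1 − θ_t) term above.

```python
import random

# Illustrative simulation of the two-player matrix game from the proof:
# agents a and b, actions 0/1, reward 1 iff their actions match.
# Agent b greedily prefers b1; with probability theta it is interrupted and
# forced to play b1, otherwise it explores uniformly with probability eps.

def prob_b_plays_0(eps, theta, trials=200_000, seed=0):
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        if rng.random() < theta:
            action_b = 1                   # interruption policy pi_b^INT = b1
        elif rng.random() < eps:
            action_b = rng.choice((0, 1))  # uniform exploration
        else:
            action_b = 1                   # greedy action since Q_t(b1) > Q_t(b0)
        count += (action_b == 0)
    return count / trials

# The frequency of b0 (and hence the distribution of agent a's update for a0)
# shrinks as theta grows, matching the (eps/2)(1 - theta) term in the proof.
for theta in (0.0, 0.5, 0.9):
    print(theta, round(prob_b_plays_0(eps=0.2, theta=theta), 4))
```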
Claus and Boutilier [5] studied very simple matrix games and showed that the Q-maps do not converge but that equilibria are played with probability 1 in the limit. A consequence of Theorem 3
is that even this weak notion of convergence does not hold for independent learners that can be
interrupted.
6.2 Interruptions-aware Independent Learners
Without communication or extra information, independent learners cannot distinguish when the
environment is interrupted and when it is not. As shown in Theorem 3, interruptions will therefore
affect the way agents learn because the same action (only their own) can have different rewards
depending on the actions of other agents, which themselves depend on whether they have been
interrupted or not. This explains the need for the following assumption.
Assumption 2. At the end of each step, before updating the Q-values, each agent receives a signal
that indicates whether an agent has been interrupted or not during this step.
This assumption is realistic because the agents already get a reward signal and observe a new state
from the environment at each step. Therefore, they interact with the environment and the interruption
signal could be given to the agent in the same way that the reward signal is. If Assumption 2 holds,
it is possible to remove histories associated with interruptions.
Definition 7. (Interruption Processing Function) The processing function that prunes interrupted
observations is P_INT(E) = (e_t)_{t∈N : Θ_t = 0}, where Θ_t = 0 if no agent has been interrupted at time
t and Θ_t = 1 otherwise.
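A minimal Python sketch of this pruning step follows; the layout of an experience tuple (last field is the interruption flag Θ_t) is our illustrative assumption, only the pruning rule comes from the definition.

```python
# Minimal sketch of the interruption-processing function P_INT of Definition 7.

def prune_interrupted(experiences):
    """Keep only the experience tuples for which no agent was interrupted (Theta_t = 0)."""
    return [e for e in experiences if not e[-1]]

E = [
    ("s0", (0, 1), 1.0, "s1", False),   # Theta_t = 0: kept
    ("s1", (1, 1), 0.0, "s0", True),    # Theta_t = 1: pruned
    ("s0", (0, 0), 1.0, "s1", False),
]
print(prune_interrupted(E))   # agents learn only from the first and third tuples
```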
Pruning observations has an impact on the empirical transition probabilities in the sequence. For
example, it is possible to bias the equilibrium by removing all transitions that lead to and start
from a specific state, thus making the agent believe this state is unreachable.2 Under our model of
interruptions, we show in the following lemma that pruning of interrupted observations adequately
removes the dependency of the empirical outcome on interruptions (conditionally on the current
state and action).
Lemma 1. Let i ∈ {1, ..., m} be an agent. For any admissible θ used to generate the experiences
E and any e = (y, r, x, a_i, Q) ∈ P(E), we have P(y, r | x, a_i, Q, θ) = P(y, r | x, a_i, Q).
This lemma justifies our pruning method and is the key step to prove the following theorem.
Theorem 4. Independent learners with processing function P_INT, a neutral update rule and a
sequence ǫ compatible with interruptions verify dynamic safe interruptibility.
Proof. (Sketch) Infinite exploration still holds because the proof of Theorem 1 actually used the fact
that even when removing all interrupted events, infinite exploration is still achieved. Then, the proof
is similar to that of Theorem 2, but we have to prove that the transition probabilities conditionally on
the state and action of a given agent in the processed sequence are the same as in an environment
where agents cannot be interrupted, which is proven by Lemma 1.
7 Concluding Remarks
The progress of AI is raising a lot of concerns3. In particular, it is becoming clear that keeping an
AI system under control requires more than just an off switch. We introduce in this paper dynamic
safe interruptibility, which we believe is the right notion to reason about the safety of multi-agent
systems that do not communicate. In particular, it ensures that infinite exploration and the one-step learning dynamics are preserved, two essential guarantees when learning in the non-stationary
environment of Markov games.
A natural extension of our work would be to study dynamic safe interruptibility when Q-maps are
replaced by neural networks [22, 15], which is a widely used framework in practice. In this setting,
the neural network may overfit states where agents are pushed to by interruptions. A smart experience replay mechanism that would pick observations for which the agents have not been interrupted
for a long time more often than others is likely to solve this issue. More generally, experience replay
mechanisms that compose well with safe interruptibility could compensate for the extra amount of
exploration needed by safely interruptible learning by being more efficient with data. Thus,
they are critical to make these techniques practical.
2 The example at https://agentfoundations.org/item?id=836 clearly illustrates this problem.
3 https://futureoflife.org/ai-principles/ gives a list of principles that AI researchers should keep in mind when developing their systems.
Bibliography
[1] Business Insider: Google has developed a “big red button” that can be used to interrupt artificial intelligence and stop it from causing harm. URL: http://www.businessinsider.fr/uk/googledeepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6.
[2] Newsweek:
Google’s “big Red button” could save the world. URL:
http://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-world-elon-musk-46675.
[3] Wired:
Google’s “big red” killswitch could prevent an AI uprising. URL:
http://www.wired.co.uk/article/google-red-button-killswitch-artificial-intelligence.
[4] Craig Boutilier. Planning, learning and coordination in multiagent decision processes. In
Proceedings of the 6th conference on Theoretical aspects of rationality and knowledge, pages
195–210. Morgan Kaufmann Publishers Inc., 1996.
[5] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative
multiagent systems. In AAAI/IAAI, pages 746–752, 1998.
[6] Robert H Crites and Andrew G Barto. Elevator group control using multiple reinforcement
learning agents. Machine Learning, 33(2-3):235–262, 1998.
[7] Jakob Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information
Processing Systems, pages 2137–2145, 2016.
[8] Ben Goertzel and Cassio Pennachin. Artificial general intelligence, volume 2. Springer, 2007.
[9] Leslie Lamport, Robert Shostak, and Marshall Pease. The byzantine generals problem. ACM
Transactions on Programming Languages and Systems (TOPLAS), 4(3):382–401, 1982.
[10] Tor Lattimore and Marcus Hutter. Asymptotically optimal agents. In International Conference
on Algorithmic Learning Theory, pages 368–382. Springer, 2011.
[11] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In
Proceedings of the eleventh international conference on machine learning, volume 157, pages
157–163, 1994.
[12] Michael L Littman. Friend-or-foe q-learning in general-sum games. In ICML, volume 1, pages
322–328, 2001.
[13] Michael L Littman. Value-function reinforcement learning in markov games. Cognitive Systems Research, 2(1):55–66, 2001.
[14] Laetitia Matignon, Guillaume J Laurent, and Nadine Le Fort-Piat. Independent reinforcement
learners in cooperative markov games: a survey regarding coordination problems. The Knowledge Engineering Review, 27(01):1–31, 2012.
[15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan
Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint
arXiv:1312.5602, 2013.
[16] Laurent Orseau and Stuart Armstrong. Safely interruptible agents. In Uncertainty in Artificial
Intelligence: 32nd Conference (UAI 2016), edited by Alexander Ihler and Dominik Janzing,
pages 557–566, 2016.
[17] Liviu Panait and Sean Luke. Cooperative multi-agent learning: The state of the art. Autonomous agents and multi-agent systems, 11(3):387–434, 2005.
[18] Eduardo Rodrigues Gomes and Ryszard Kowalczyk. Dynamic analysis of multiagent qlearning with ε-greedy exploration. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 369–376. ACM, 2009.
9
[19] Satinder Singh, Tommi Jaakkola, Michael L Littman, and Csaba Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine learning,
38(3):287–308, 2000.
[20] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1.
MIT press Cambridge, 1998.
[21] Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru,
Jaan Aru, and Raul Vicente. Multiagent cooperation and competition with deep reinforcement
learning. arXiv preprint arXiv:1511.08779, 2015.
[22] Gerald Tesauro. Temporal difference learning and td-gammon. Communications of the ACM,
38(3):58–68, 1995.
[23] Gerald Tesauro. Extending q-learning to general adaptive multi-agent systems. In Advances in
neural information processing systems, pages 871–878, 2004.
[24] Gerald Tesauro and Jeffrey O Kephart. Pricing in agent economies using multi-agent qlearning. Autonomous Agents and Multi-Agent Systems, 5(3):289–304, 2002.
[25] Xiaofeng Wang and Tuomas Sandholm. Reinforcement learning to play an optimal nash equilibrium in team markov games. In NIPS, volume 2, pages 1571–1578, 2002.
[26] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292,
1992.
[27] Michael Wunder, Michael L Littman, and Monica Babes. Classes of multiagent q-learning dynamics with epsilon-greedy exploration. In Proceedings of the 27th International Conference
on Machine Learning (ICML-10), pages 1167–1174, 2010.
A Exploration Theorem
We present here the complete proof of Theorem 1. The proof closely follows the results from [16]
with exploration and interruption probabilities adapted to the multi-agent setting. We note that, for
one agent, the probability of interruption is P(interruption) = θ and the probability of exploration
is ǫ. In a multi-agent system, the probability of interruption is P(at least one agent is interrupted)
= 1 − P(no agent is interrupted) = 1 − (1 − θ)^m, and the probability of exploration is ǫ^m if we
consider that exploration happens only when all agents explore at the same time.
Theorem 1. Let c ∈ ]0, 1] and let n_t(s) be the number of times the agents are in state s before time t. Then the two following choices of ǫ are compatible with interruptions:
• ∀t ∈ N, ∀s ∈ S, ǫ_t(s) = c / (n_t(s))^{1/m};
• ∀t ∈ N, ǫ_t = c / log(t).
Proof. Lemma B.2 of Singh et al. [19] ensures that π_i^ǫ is GLIE.
The difference for INT^θ(π_i^ǫ) is that exploration is slower because of the interruptions. Therefore,
θ needs to be controlled in order to ensure that infinite exploration is still achieved. We define the
random variable Θ by Θ_i = 1 if agent i actually responds to the interruption and Θ_i = 0 otherwise.
We define ξ in a similar way to represent the event of all agents taking the uniform policy instead of
the greedy one.

1. Let θ_t(s) = 1 − c′ / (n_t(s))^{1/m} with c′ ∈ ]0, 1]. We have

P(a | s, n_t(s)) ≥ P(a, Θ = 0, ξ = 1 | s, n_t(s)) ≥ (1/|A|) ǫ_t(s)^m (1 − θ_t(s))^m = (1/|A|) (cc′)^m / n_t(s)²,

which satisfies Σ_{t=1}^∞ P(a | s, n_t(s)) = ∞, so by the extended Borel–Cantelli lemma action a is chosen infinitely often in state s and thus n_t(s) → ∞ and ǫ_t(s) → 0.
2. Let θ_t = 1 − c′/log(t), c′ ∈ ]0, 1]. We define M as the diameter of the MDP, |A| is the maximum number of actions available in a state and ∆t(s, s′) the time needed to reach s′ from s. In a single-agent setting:

P[∆t(s, s′) < 2M] ≥ P[∆t(s, s′) < 2M | actions sampled according to π_{s,s′} for 2M steps] × P[actions sampled according to π_{s,s′} for 2M steps],

where π_{s,s′} is the policy such that the agent takes less than M steps in expectation to reach s′ from s. We have P[∆t(s, s′) < 2M] = 1 − P[∆t(s, s′) ≥ 2M] and, using the Markov inequality, P[∆t(s, s′) ≥ 2M] ≤ E(∆t(s, s′))/2M ≤ 1/2 (since M is an upper bound on the expectation of the number of steps from state s to state s′). Since ξ and 1 − θ are decreasing sequences we finally obtain:

P[∆t(s, s′) < 2M] ≥ (1/(2|A|)) [P[ξ_{t+2M} = 1](1 − θ_{t+2M})]^{2M}.

Therefore, if we replace the probabilities of exploration and interruption by the values in the multi-agent setting, the probability to reach state s′ from state s in 2M steps is at least (1/(2|A|)) [cc′/log(t + M)]^{4mM}, and the probability of taking a particular action in this state is at least (1/|A|) [cc′/log(t + M)]^{2m}. Since Σ_{t=1}^∞ (1/(2|A|²)) [cc′/log(t + M)]^{m(4M+2)} = ∞, the extended Borel–Cantelli lemma (Lemma 3 of Singh et al. [19]) guarantees that any action in the state s′ is taken infinitely often. Since this is true for all states and actions the result follows.
B Independent learners
Recall that agents are now given an interruption signal at each step that tells them whether an agent
has been interrupted in the system. This interruption signal can be modeled by an interruption flag
(Θ_t)_{t∈N} ∈ {0, 1}^N that equals 1 if an agent has been interrupted and 0 otherwise. Note that, contrary
to I, it is an observation returned by the environment. Therefore, the value of Θ_t represents whether
an agent has actually been interrupted at time t. If the function I equals 1 but the agent does not respond to the
interruption (with probability 1 − θ_t), then Θ_t = 0. With the definition of interruptions we adopted, it is
possible to prove Lemma 2.
Lemma 2. Let (x, r, a, y, Θ) ∈ E, then P(Θ|y, r, x, a) = P(Θ|x, a).
Proof. Consider a tuple (x, r, a, y, Θ) ∈ E. We have P(y, r, Θ|x, a) = P(y, r|x, a, Θ)P(Θ|x, a)
and P(y, r, Θ|x, a) = P(Θ|x, a, y, r)P(y, r|x, a). Besides, y = T (s, a) and r = r(s, a) and the
functions T and r are independent of Θ. Therefore, P(y, r|x, a, Θ) = P(y, r|x, a). The tuple
(x, r, a, y, Θ) is sampled from an actual trajectory so it reflects a transition and a reward that actually
happened so P(y, r|x, a) > 0. We can simplify by P(y, r|x, a) and the result follows.
Now, we assume that each agent does not learn on observations for which one of them has been
interrupted. Let agent i be in a system with Q-values Q and following an interruptible learning policy with probability of interruption θ, where interrupted events are pruned. We denote by
P_removed(y, r | x, a_i, Q) the probability to obtain state y and reward r from the environment for this
agent when it is in state x, performs its (own) action a_i and no other agents are interrupted. These
are the marginal probabilities in the sequence P(E).

P_removed(y, r | x, a_i, Q) = P(y, r, Θ = 0 | x, a_i, Q) / Σ_{y′∈S, r′∈R} P(y′, r′, Θ = 0 | x, a_i, Q).
Similarly, we denote by P0 (y, r|x, ai , Q) the same probability when θ = 0, which corresponds to
the non-interruptible setting. We first go back to the single agent case to illustrate the previous
statement. Assume here that interruptions are not restricted to the case of Definition 1 and that they
can happen in any way. The consequence is that any observation e ∈ E can be removed to generate
P (E) because any transition can be labeled as interrupted. It is for example possible to remove a
transition from P (E) by removing all events associated with a given destination state y0 , therefore
making it disappear from the Markov game.
Let x ∈ S and a ∈ A be the current state of the agent and the action it will choose. Let y0 ∈ S
and θ0 ∈ (0, 1] and let us suppose that y0 is the only state in which interruptions happen. Then we
have P_removed(y_0 | x, a) < P_0(y_0 | x, a) and P_removed(y | x, a) > P(y | x, a) for all y ≠ y_0 because we only
remove observations with y = y0 . This implies that the MDP perceived by the agents is altered
by interruptions because the agent learns that P(T (s, a) = y0 ) = 0. Removing observations for
different destination states but with the same state action pairs in different proportions leads to a
bias in the equilibrium learned.4 In our case however, Lemma 2 ensures that the previous situation
will not happen, which allows us to prove Lemma 1 and then Theorem 4.
Lemma 1. Let i ∈ {1, ..., m} be an agent. For any admissible θ used to generate the experiences
E and any e = (y, r, x, a_i, Q) ∈ P(E), we have P(y, r | x, a_i, Q, θ) = P(y, r | x, a_i, Q).
Proof. Consider x ∈ S, i ∈ {1, .., m} and u ∈ A_i. We denote the Q-values of the agents by Q.

Σ_{y′∈S, r′∈R} P(y′, r′, Θ = 0 | x, u, Q)
  = Σ_{a∈A, a_i=u} Σ_{y′∈S, r′∈R} P(y′, r′, a, Θ = 0 | x, a_i = u, Q)
  = Σ_{a∈A, a_i=u} Σ_{y′∈S, r′∈R} P(y′, r′ | x, a, Θ = 0, Q) P(a, Θ = 0 | x, a_i = u, Q)
  = Σ_{a∈A, a_i=u} Σ_{y′∈S, r′∈R} P(y′, r′ | x, a) P(Θ = 0 | x, a_i = u, Q) P(a | x, a_i = u, Θ = 0, Q)
  = P(Θ = 0 | x, a_i = u, Q) Σ_{a∈A, a_i=u} P(a | x, a_i = u, Θ = 0, Q) [Σ_{y′∈S, r′∈R} P(y′, r′ | x, a)]
  = P(Θ = 0 | x, a_i = u, Q) [Σ_{a∈A, a_i=u} P(a | x, a_i = u, Θ = 0, Q)] = P(Θ = 0 | x, a_i = u, Q).

Therefore, we have P_removed(y, r | x, a_i = u, Q) = P(y, r, Θ = 0 | x, a_i = u, Q) / P(Θ = 0 | x, a_i = u, Q),
so for any (x, a_i, y, r, Q) ∈ P(E), P(y, r | x, a_i = u, θ, Q) = P_removed(y, r | x, a_i = u, Q) =
P(y, r | x, a_i = u, Θ = 0, Q) = P(y, r | x, a_i = u, θ = 0, Q). In particular, P(y, r | x, a_i = u, θ, Q)
does not depend on the value of θ.

4 The example at https://agentfoundations.org/item?id=836 clearly illustrates this problem.
Theorem 4. Independent learners with processing function P_INT, a neutral update rule and a
sequence ǫ compatible with interruptions verify dynamic safe interruptibility.
Proof. We prove that P_INT(E) achieves infinite exploration. The result from Theorem 1 still holds
since we lower-bounded the probability of taking an action in a specific state by the probability
of taking an action in this state when there are no interruptions; that is, infinite exploration is
achieved even if we remove all interrupted events.

Now, we prove that P(Q_{t+1}^(i)(x_t, a_t) = q | Q_t^(1), ..., Q_t^(m), x_t, a_t, θ_t) is independent of θ. We fix
i ∈ {1, ..., m} and (x_t, a_t, r_t, y_t) ∈ P_INT(E) where a_t ∈ A_i. With Q̃_t^m = Q_t^(1), ..., Q_t^(m) we have
the following equality:

P(Q_{t+1}^(i)(x_t, a_t) = q | Q̃_t^m, x_t, a_t, θ_t) = Σ_{(r,y)} P(F(x_t, a_t, r_t, y_t, Q̃_t^m) = q | Q̃_t^m, x_t, a_t, r_t, y_t, θ_t) · P(y_t = y, r_t = r | Q̃_t^m, x_t, a_t, θ_t)

The independence of F on θ still guarantees that the first term is independent of θ. However,
a_t ∈ A_i, so (r_t, y_t) are not independent of θ_t conditionally on (x_t, a_t) as was the case for joint
action learners, because interruptions of other agents can change the joint action. The independence
on θ of the second term is given by Lemma 1.
Noname manuscript No.
(will be inserted by the editor)
Gaussian Variant of Freivalds’ Algorithm for
Efficient and Reliable Matrix Product Verification
arXiv:1705.10449v1 [cs.DS] 30 May 2017
Hao Ji · Michael Mascagni · Yaohang Li
Received: date / Accepted: date
Abstract In this article, we consider the general problem of checking the
correctness of matrix multiplication. Given three n × n matrices A, B, and
C, the goal is to verify that A × B = C without carrying out the computationally costly operations of matrix multiplication and comparing the product
A × B with C, term by term. This is especially important when some or all of
these matrices are very large, and when the computing environment is prone
to soft errors. Here we extend Freivalds’ algorithm to a Gaussian Variant of
Freivalds’ Algorithm (GVFA) by projecting the product A × B as well as C
onto a Gaussian random vector and then comparing the resulting vectors. The
computational complexity of GVFA is consistent with that of Freivalds' algorithm, which is O(n²). However, unlike Freivalds' algorithm, whose probability
of a false positive is 2^{−k}, where k is the number of iterations, our theoretical
analysis shows that when A × B ≠ C, GVFA produces a false positive on a set
of inputs of measure zero with exact arithmetic. When we introduce round-off
error and floating point arithmetic into our analysis, we can show that the
larger this error, the higher the probability that GVFA avoids false positives.
Hao Ji
Department of Computer Science
Old Dominion University
E-mail: [email protected]
Michael Mascagni
Departments of Computer Science, Mathematics, and Scientific Computing
Florida State University
Applied and Computational Mathematics Division
National Institute of Standards and Technology
E-mail: [email protected]
Yaohang Li
Department of Computer Science
Old Dominion University
Tel.: 757-683-7721
Fax: 757-683-4900
E-mail: [email protected]
Moreover, by iterating GVFA k times, the probability of a false positive decreases as p^k, where p is a very small value depending on the nature of the
fault on the result matrix and the arithmetic system’s floating-point precision.
Unlike deterministic algorithms, there do not exist any fault patterns that
are completely undetectable with GVFA. Thus GVFA can be used to provide
efficient fault tolerance in numerical linear algebra, and it can be efficiently
implemented on modern computing architectures. In particular, GVFA can be
very efficiently implemented on architectures with hardware support for fused
multiply-add operations.
Keywords Fault-tolerance · Algorithmic Resilience · Gaussian Variant of
Freivalds’ Algorithm · Matrix Multiplication · Gaussian Random Vector ·
Failure Probability
Mathematics Subject Classification (2000) 65F99 · 65C05 · 62P99
1 Introduction
As the demands on modern linear algebra applications created by the latest
development of high-performance computing (HPC) architectures continues
to grow, so does the likelihood that they are vulnerable to faults. Faults in
computer systems are usually characterized as hard or soft, and in this article
we are motivated primarily with the latter. Soft errors, defined by intermittent
events that corrupt the data being processed, are among the most worrying,
particularly when the computation is carried out in a low-voltage computing environment. For example, the 2,048-node ASC Q supercomputer at Los
Alamos National Laboratory reports an average of 24.0 board-level cache tag
parity errors and 27.7 CPU failures per week [34]; the 131,072-CPU BlueGene/L supercomputer at Lawrence Livermore National Laboratory experiences one soft error in its L1 cache every 4–6 hours [19]; more recently, a field
study on Google's servers reported that an average of 5 single-bit errors occur in 8
gigabytes of RAM per hour at the top-end error rate [37]. The reliability of
computations on HPC systems can suffer from soft errors that occur in memory, cache, as well as microprocessor logic [38], and thus produce potentially
incorrect results in a wide variety of ways. We are specifically interested in
examining ways to remedy the consequences of soft errors for certain linear
algebra applications.
Matrix-matrix multiplication is one of the most fundamental numerical
operations in linear algebra. Many important linear algebraic algorithms, including linear solvers, least squares solvers, matrix decompositions, factorizations, subspace projections, and eigenvalue/singular values computations, rely
on casting the algorithm as a series of matrix-matrix multiplications. This
is partly because matrix-matrix multiplication is one of the level-3 Basic Linear Algebra Subprograms (BLAS) [10,9,17]. Efficient implementation of the
BLAS remains an important area for research, and often computer vendors
spend significant resources to provide highly optimized versions of the BLAS
for their machines. Therefore, if a matrix-matrix multiplication can be carried
out free of faults, the linear algebraic algorithms that spend most of their time
in matrix-matrix multiplication can themselves be made substantially fault-tolerant [21]. Moreover, there is considerable interest in redesigning versions
of the BLAS to be more fault-tolerant, and this work will certainly contribute
to that goal.
In this article, we consider the general problem of checking the correctness
of matrix-matrix multiplication, i.e., given three n × n matrices A, B, and C,
we want to verify whether A × B = C. In contrast to the best known matrix-matrix multiplication algorithm running in O(n^{2.3727}) time [8,39], Freivalds'
algorithm [16] takes advantage of randomness to reduce the time to check a
matrix multiplication to O(n2 ). The tradeoff of Freivalds’ algorithm is that the
probability of failure detection, a false positive, is 2−k , where k is the number
of iterations taken. We extend Freivalds’ algorithm from using binary random
vectors to floating-point vectors by projecting the A × B result as well as C
using Gaussian random vectors. We will refer to this algorithm as the Gaussian
Variant of Freivalds’ Algorithm (GVFA). By taking advantage of a nice property of the multivariate normal distribution, we show that GVFA produces
a false positive on a set of random Gaussian vectors and input matrices of
measure zero. Taking floating point round-off error into account, by iterating
GVFA k times, the probability of a false positive decreases exponentially as p^k,
where p is usually a very small value related to the magnitude of the fault
in the result matrix and floating-point precision of the computer architecture.
We also present an efficient implementation of GVFA on computing hardware
supporting fused multiplication-add operations.
The plan of the paper is the following. We first discuss two relevant algorithms from the literature for error detection in matrix-matrix multiplication.
These are the Huang-Abraham scheme, discussed in section 2, and Freivalds’
algorithm, the subject of section 3. The former is a deterministic algorithm
based on carrying row and column sums along in a clever format to verify
correct matrix-matrix multiplication. Freivalds’ algorithm is a random projection of the computed matrix-matrix product to the same random projection
of the matrix-matrix product recomputed from the original matrices using
only matrix-vector multiplication. The random vector used in Freivalds’ algorithm is composed of 0’s and 1’s. In section 4, we present the GVFA, a
variation on Freivalds’ algorithm, where we instead use random Gaussian vectors as the basis of our projections. We analyze the GVFA and prove that with
Gaussian vectors, a false positive occurs only on a set of Gaussian vectors of
measure zero. Further analysis of false positive probabilities in the GVFA in
the presence of floating-point arithmetic with round-off errors is then taken.
Finally, in section 5 we provide a discussion of the results and implications for
fault-tolerant linear algebraic computations and a method of enhancing the
resilience of linear algebraic computations. In addition, in this final section we
provide conclusions and suggest directions for future work.
2 The Huang-Abraham Scheme and its Limit in Error Detection/Correction
The Huang-Abraham scheme [24] is an algorithm-based fault tolerance method
that simplifies detecting and correcting errors when carrying out matrix-matrix
multiplication operations. This is slightly different from the matrix product
verification problem. The fundamental idea of the Huang-Abraham scheme is
to address the fault detection and correction problem at the algorithmic level
by calculating matrix checksums, encoding them as redundant data, and then
redesigning the algorithm to operate on these data to produce encoded output that can be checked. Compared to traditional fault tolerant techniques,
such as checkpointing [5], the overhead of storing additional checksum data
in the Huang-Abraham scheme is small, particularly when the matrices are
large. Moreover, no global communication is necessary in the Huang-Abraham
scheme [14]. The Huang and Abraham scheme formed the basis of many subsequent detection schemes, and has been extended for use in various HPC
architectures [2,3,32,14].
Fig. 1: The Huang-Abraham scheme for detecting faults in matrix-matrix multiplication. (a) Generation of a column checksum for A and a row checksum for B, and multiplication of the extended matrices to produce the checksum matrix for C. (b) Mismatches in the row and column checksums indicate an element fault in the matrix product.
Fig. 1 illustrates the Huang-Abraham scheme [24] for detecting faults in
matrix-matrix multiplication. First of all, column sums for A and row sums
for B are generated and are added to an augmented representation of A and
B. These are treated as particular checksums in the subsequent multiplication.
Then, multiplication of the extended matrices produces the augmented matrix
for C (Fig. 1(a)) where the checksums can be readily compared. Mismatches
in the row and column checksums indicate an element fault in the matrix
product, C (Fig. 1(b)).
However, there are certain patterns of faults undetectable by the Huang-Abraham scheme. Here is a simple 2 × 2 example to illustrate such an undetectable pattern.
Consider the matrices

A = [2 3; 3 4], B = [1 −6; 1 6], and C = [5 6; 7 6].

Clearly A × B = C holds in this example. Then we use the Huang-Abraham scheme to calculate the column checksum for A and the row checksum for B, and we get

A_F = [2 3; 3 4; 5 7] and B_F = [1 −6 −5; 1 6 7].

Then

A_F × B_F = [5 6 11; 7 6 13; 12 12 24] = C_F.

However, if there is a fault during the computation of C which causes an exchange of the first and second columns, an erroneous result matrix C′ = [6 5; 6 7] is generated by exchanging the columns of C. Column or row exchange, usually caused by address decoding faults [20], is a commonly observed memory fault pattern [6]. The problem is that the checksum matrix of C′ becomes C′_F = [6 5 11; 6 7 13; 12 12 24], where both the row and column checksums match those of the true product of A × B. Consequently, the Huang-Abraham scheme fails to detect this fault.
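The undetectable column-swap fault above can be checked numerically. The following Python sketch is our own illustration; it computes the checksums directly from A and B (which yields the same values as multiplying the augmented matrices) and confirms that both C and the faulty C′ pass the row/column checksum test.

```python
import numpy as np

# Sketch reproducing the 2x2 example: the column-swapped product C' passes the
# Huang-Abraham row/column checksum test even though A @ B != C'.

A = np.array([[2, 3], [3, 4]])
B = np.array([[1, -6], [1, 6]])
C = A @ B                     # the correct product [[5, 6], [7, 6]]
C_faulty = C[:, [1, 0]]       # fault: first and second columns exchanged

def checksums_match(A, B, C_candidate):
    """Compare the row/column checksums of C_candidate with those implied by A and B."""
    col_check = A.sum(axis=0) @ B          # column checksums of the true product
    row_check = A @ B.sum(axis=1)          # row checksums of the true product
    return (np.array_equal(C_candidate.sum(axis=0), col_check)
            and np.array_equal(C_candidate.sum(axis=1), row_check))

print(checksums_match(A, B, C))          # True
print(checksums_match(A, B, C_faulty))   # True  -> the fault goes undetected
print(np.array_equal(A @ B, C_faulty))   # False -> yet the product is wrong
```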
The Huang-Abraham scheme can be viewed as a linear constraint satisfaction problem (CSP), where the variables are the n² entries in the product
matrix, C, and the constraints are the 2n row and column checksums. Also, the
2n × n² coefficient matrix in the under-determined linear CSP system equation
specifies the selection of row or column elements, as shown in Fig. 2. Clearly,
a product matrix, C, that does not satisfy the CSP equations indicates errors
in C detectable by the Huang-Abraham scheme. The unique, correct product
matrix, C, satisfies the CSP equations. Nevertheless, other possible product
matrices satisfying the CSP equations are the fault patterns undetectable by
the Huang-Abraham scheme. Only when at least n² constraints with different
element selection are incorporated so that the rank of the coefficient matrix
in the CSP equation is n², can the undetectable fault patterns be eliminated.
However, this situation is equivalent to simply checking every element in C.
Fig. 2: Under-determined CSP system in the Huang-Abraham Scheme
It is important to notice that there are an infinite number of existing fault
patterns that satisfy the checksum constraints and thus are undetectable by
the Huang-Abraham scheme, even in the above simple 2 × 2 example (the
rank of the CSP coefficient matrix is 3). Moreover, as dimension, n, increases,
the number of checksum constraints increases only linearly but the number
of elements in a matrix has quadratic growth. Therefore, the undetectable
patterns in the Huang-Abraham scheme increase quadratically with n. As a
result, for multiplications in large matrices, fault detection methods based
on the Huang-Abraham scheme can generate false positive results for a large
number of circumstances.
3 Freivalds’ Algorithm
The fault detection methods based on the Huang-Abraham scheme are deterministic algorithms. As with many randomized fault tolerance algorithms [28,29],
with the tradeoff of random uncertainty, Freivalds [16] showed that a probabilistic machine can verify the correctness of a matrix product faster than
direct recalculation. The procedure of the corresponding method, later named
Freivalds’ algorithm, is described in Algorithm 1.
Obviously, if A × B = C, Cω = ABω always holds. Freivalds proved that
when A × B ≠ C, the probability of Cω = ABω is less than or equal to 1/2.
The running time of the above procedure is O(n²) with an implied multiplier
of 3, as it is comprised of three matrix-vector multiplications. This is an upper
bound as one can perhaps optimize the evaluation of Bω and Cω.
Algorithm 1: Freivalds’ Algorithm
1. Randomly sample a vector ω ∈ {0, 1}^n, each entry being 0 or 1 with probability p = 1/2.
2. Calculate the projection of C onto ω: Cω = C × ω.
3. Calculate the projection of the product A × B onto ω: ABω = A × (B × ω).
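A minimal Python sketch of Freivalds' algorithm (our own illustration, not the paper's implementation) is given below; for integer matrices the comparison can be exact.

```python
import numpy as np

# Minimal sketch of Freivalds' algorithm with a binary projection vector.

def freivalds_check(A, B, C, k=1, rng=None):
    rng = np.random.default_rng(rng)
    n = C.shape[1]
    for _ in range(k):
        w = rng.integers(0, 2, size=n)        # each entry 0 or 1 with probability 1/2
        if not np.array_equal(A @ (B @ w), C @ w):
            return False                      # certain mismatch: A @ B != C
    return True                               # match; false positive prob <= 2**(-k)

rng = np.random.default_rng(0)
A, B = rng.integers(-5, 5, (4, 4)), rng.integers(-5, 5, (4, 4))
C = A @ B
print(freivalds_check(A, B, C, k=5))          # True
C[0, 0] += 1
print(freivalds_check(A, B, C, k=5))          # almost surely False
```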
the Freivalds’ algorithm k times, the running time becomes O(kn2 ) and the
probability of a false positive becomes less than or equal to 2−k , according to
the one-sided error. More generalized forms of Freivalds’ algorithm have also
been developed, mainly based on using different sampling spaces [7,1,36,31].
Given at most p erroneous entries in the resulted matrix product, Gasieniec,
Levcopoulos, and Lingas extended Freivalds’ algorithm to one with correcting
√
capability running in O( pn2 log(n) log(p)) time [18].
4 A Gaussian Variant of Freivalds’ Algorithm (GVFA)
4.1 Extending Freivalds’ Algorithm using Gaussian Vectors
Freivalds’ original algorithm, and most of its extensions are based on integer
matrices or matrices over a ring and sampling from discrete spaces. Clearly,
we can also apply Freivalds’ algorithm to matrices with real or complex entries
with the random vector remaining zeros and ones. A simple extension is to
project A × B and C onto a vector ω_P of the form ω_P = (1, r, r², ..., r^{n−1})^T, where
r is a random real number. A false positive occurs only when r is a root of
the corresponding polynomial. However, in practice, r^{n−1} can easily grow too
large or too small, exceeding the floating-point representation [25].
Here we also extend Freivalds’ algorithm by using Gaussian random vectors
for the projection. We use the fact that the multivariate normal distribution
has several nice properties [35], which have been used for detecting statistical
errors in distributed Monte Carlo computations [29]. The extended algorithm
is described in Algorithm 2.
Algorithm 2: Gaussian Variant of Freivalds’ Algorithm
1. Generate a Gaussian random vector, ωG , made up of n independent (but
not necessarily identically) distributed normal random variables with finite
mean and variance.
2. Calculate the projection of C on ωG : CωG = C × ωG .
3. Calculate the projection of product A × B on ωG : ABωG = A × (B × ωG ).
This algorithm, which we call a Gaussian variant of Freivalds' algorithm
(GVFA), requires three matrix-vector multiplications and only one vector comparison for fault detection.
4.2 Theoretical Justification
Similar to Freivalds' algorithm, in GVFA if A × B = C, Cω_G = ABω_G always
holds within a certain floating-point round-off threshold. When A × B ≠ C, the
false positive event Cω_G = ABω_G occurs only on a set of measure zero in
exact arithmetic, as shown in Theorem 1. We first state a result of Lukacs and
King [33], shown as Proposition 1, which will be used in the proof of Theorem
1. The main assumption of Proposition 1 is the existence of the nth moment
of each random variable, which many distributions, particularly the normal
distribution, have. One important property of the normal is that it is the
limiting distribution for properly normalized sums of random variables with
two finite moments. This is Lindeberg’s version of the Central Limit Theorem
[30].
Proposition 1 Let X_1, X_2, · · · , X_n be n independent (but not necessarily identically) distributed random variables with variances σ_i², and assume that the
nth moment of each X_i (i = 1, 2, · · · , n) exists and is finite. The necessary and
sufficient conditions for the existence of two statistically independent linear
forms Y_1 = Σ_{i=1}^n a_i X_i and Y_2 = Σ_{i=1}^n b_i X_i are
(1) Each random variable which has a nonzero coefficient in both forms is normally distributed.
(2) Σ_{i=1}^n a_i b_i σ_i² = 0.
Theorem 1 If A × B ≠ C, the set of Gaussian vectors where Cω_G = ABω_G
holds in Algorithm 2 has measure zero.
Proof Let the matrix ∆ ∈ R^{n×n} denote AB − C. Since A × B ≠ C, rank(∆) =
r > 0, and dim(null(∆)) = n − rank(∆) = n − r < n. Here dim(·) denotes
dimension and null(·) denotes the null space, i.e., null(∆) = {x ∈ R^n : ∆ × x = 0}.
We can now find n − r orthonormal vectors, v_1, v_2, · · · , v_{n−r}, to form a
basis for null(∆), such that

null(∆) = span{v_1, v_2, · · · , v_{n−r}},

and r more orthonormal vectors, v_{n−r+1}, v_{n−r+2}, · · · , v_n, such that

R^n = span{v_1, v_2, · · · , v_{n−r}, v_{n−r+1}, v_{n−r+2}, · · · , v_n}.

Any vector, and in particular the Gaussian vector ω_G, can be written in this basis as

ω_G = Σ_{i=1}^n δ_i v_i,

where δ_i are the weights in this particular orthonormal coordinate system. If
we denote V = [v_1, v_2, · · · , v_{n−r}, v_{n−r+1}, v_{n−r+2}, · · · , v_n], we have

V ω_G = [δ_1, δ_2, · · · , δ_{n−r}, δ_{n−r+1}, δ_{n−r+2}, · · · , δ_n].

Cω_G = ABω_G holds in Algorithm 2 only if A(Bω_G) − Cω_G = (AB − C)ω_G =
∆ω_G = 0. This means that ω_G ∈ null(∆), i.e., δ_{n−r+1} = 0, δ_{n−r+2} = 0, · · · , δ_n =
0. Due to the fact that ω_G is a Gaussian random vector and V is an orthogonal matrix, Proposition 1 tells us that the elements, δ_i, in the resulting vector
V ω_G are normally distributed and statistically independent. With a continuous probability distribution, the discrete event where δ_i = 0 for all i > n − r
occurs on a set of measure zero and we will say here that it has probability
zero. Hence, GVFA using a Gaussian random projection will have unmatched
Cω_G and ABω_G when A × B ≠ C on all but a set of measure zero of Gaussian
vectors, which we will say is probability one.  ⊓⊔
This argument in Theorem 1 is rather direct, but we must point out that
the arguments are true when the computations are exact. In the next subsection,
we will analyze GVFA when floating-point errors are present.
4.3 Practical Use in Floating-Point Matrix Product Verification
In computer implementations of arithmetic with real numbers, one commonly
uses floating-point numbers and floating-point arithmetic. Floating-point numbers are represented as finite numbers in the sense that they have a fixed
mantissa and exponent size in number of bits. Therefore, there will be a small
probability, p, that CωG = ABωG still holds due to unfortunate floating-point
operations in a system with a known machine epsilon, ǫ, when A × B ≠ C.
The value of p depends on the magnitude of the error between A × B and C
as well as ǫ, whose upper bound is justified in Theorem 2.
Theorem 2 Assume that ω_G is a standard Gaussian random vector, whose
elements are i.i.d. normal variables with mean 0 and variance 1, i.e., the standard normal. Let ∆ = A × B − C; then the probability, p, that Cω_G = ABω_G
holds in Algorithm 2 using a standard Gaussian random vector ω_G under
floating-point uncertainty of size ǫ is

p ≤ 2Φ(ǫ/σ̃) − 1,

where Φ(·) is the cumulative distribution function of the standard normal, and σ̃
is a constant only related to ∆.
Proof Since A × B ≠ C, ∆ = A × B − C ≠ 0. Consider the ith element, g_i, of
the product vector g = ∆ × ω_G; we have

g_i = (∆ × ω_G)_i = Σ_{j=1}^n ∆_ij (ω_G)_j.

Given ǫ, only if |g_i| ≤ ǫ for all i = 1, · · · , n, can Cω_G = ABω_G hold. Since ω_G
is a standard normal random vector, the g_i for all i = 1, · · · , n, are normally
distributed as well. This is because they are linear combinations of normals
themselves. The key is to compute the mean and variance of the g_i.
The components of ω_G are i.i.d. standard normals. Thus we have that
E[(ω_G)_j] = 0 and E[(ω_G)_j²] = 1, for all j = 1, · · · , n. Also, we have that
E[(ω_G)_i (ω_G)_j] = 0 when i ≠ j. This allows us to compute the mean:

E(g_i) = E[Σ_{j=1}^n ∆_ij (ω_G)_j] = Σ_{j=1}^n ∆_ij E[(ω_G)_j] = 0,

and the second moment about the mean, i.e., the variance:

E[g_i²] − E(g_i)² = E[g_i²] = E[(Σ_{j=1}^n ∆_ij (ω_G)_j)²] = Σ_{j=1}^n ∆_ij² × 1 = Σ_{j=1}^n ∆_ij².

So we have that the g_i's are normally distributed with mean zero and variance
σ̃_i² = Σ_{j=1}^n ∆_ij², i.e., g_i ∼ N(0, σ̃_i²).

Then, the probability that |g_i| ≤ ǫ can be computed as follows. Since
g_i ∼ N(0, σ̃_i²), we know that g_i/σ̃_i ∼ N(0, 1), and so we define the new variables
g̃_i = g_i/σ̃_i and ǫ̃ = ǫ/σ̃_i, and so we have

p(|g_i| ≤ ǫ) = p(−ǫ ≤ g_i ≤ ǫ) = p(−ǫ̃ ≤ g̃_i ≤ ǫ̃) = ∫_{−ǫ̃}^{ǫ̃} (1/√(2π)) e^{−t²/2} dt = Φ(ǫ̃) − Φ(−ǫ̃).

Since the probability density function of a standard normal is an even function,
we have that Φ(ǫ̃) + Φ(−ǫ̃) = 1, and so we can use −Φ(−ǫ̃) = Φ(ǫ̃) − 1 to get:

p(−ǫ ≤ g_i ≤ ǫ) = 2Φ(ǫ̃) − 1 = 2Φ(ǫ/σ̃_i) − 1.

Now let us consider computing an upper bound on p(|g_i| ≤ ǫ, i = 1, · · · , n).
We have proven that the g_i's are normal random variables, but they are not
necessarily independent. And so for this we use some simple ideas from conditional probability. For example, consider

p(|g_1| ≤ ǫ and |g_2| ≤ ǫ) = p(|g_2| ≤ ǫ given |g_1| ≤ ǫ) p(|g_1| ≤ ǫ) ≤ p(|g_1| ≤ ǫ).

The inequality holds due to the fact that the probabilities are numbers less
than one. Now consider our goal of bounding

p(|g_i| ≤ ǫ, i = 1, · · · , n) ≤ p(|g_1| ≤ ǫ) = 2Φ(ǫ/σ̃_1) − 1,

by iterating the conditional probability argument n times. By reordering we
could have chosen the bound utilizing any of the g_i's. However, let us define
σ̃ = max_i √(Σ_{j=1}^n ∆_ij²), i.e., the maximal standard deviation over all the g_i's,
which is only related to the matrix ∆. We can use that value instead to get

p = p(|g_i| ≤ ǫ, i = 1, · · · , n) ≤ 2Φ(ǫ/σ̃) − 1.  ⊓⊔
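To get a feel for the size of this bound, the following Python sketch is our own illustration; in particular, the choice of ǫ as a small multiple of the double-precision machine epsilon is an assumption. It evaluates 2Φ(ǫ/σ̃) − 1 for a fault matrix ∆ with a single tiny erroneous entry.

```python
import math
import numpy as np

# Sketch evaluating the Theorem 2 bound p <= 2*Phi(eps/sigma_tilde) - 1 for a
# given fault matrix Delta = A @ B - C.

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def false_positive_bound(Delta, eps):
    sigma_tilde = np.sqrt((Delta ** 2).sum(axis=1)).max()   # maximal row norm of Delta
    return 2.0 * phi(eps / sigma_tilde) - 1.0

Delta = np.zeros((1000, 1000))
Delta[3, 7] = 1e-6                      # a single tiny erroneous entry
eps = 2.0 ** -52 * 1000                 # crude double-precision round-off allowance
print(false_positive_bound(Delta, eps)) # ~1.8e-7: even tiny faults are almost surely caught
```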
As an interesting corollary, we can get a better bound in the case that the
g_i's are independent. In that case

p(|g_i| ≤ ǫ, i = 1, · · · , n) = ∏_{i=1}^n p(|g_i| ≤ ǫ) = ∏_{i=1}^n [2Φ(ǫ/σ̃_i) − 1].

Let σ̃ = max_i √(Σ_{j=1}^n ∆_ij²), i.e., the maximal standard deviation over all the
g_i's, which is only related to the matrix ∆. Hence for all i = 1, · · · , n, we have that

2Φ(ǫ/σ̃_i) − 1 ≤ 2Φ(ǫ/σ̃) − 1.

And so, finally we get that

p = p(|g_i| ≤ ǫ, i = 1, · · · , n) = ∏_{i=1}^n [2Φ(ǫ/σ̃_i) − 1] ≤ [2Φ(ǫ/σ̃) − 1]^n ≤ 2Φ(ǫ/σ̃) − 1.

The last inequality is true since the number raised to the nth power is less than one.
Note that independence gives a bound on the probability of a false positive that is
raised to the nth power, and hence much smaller than in the general, dependent case. The conclusion of this seems to
be that the bound in the dependent case is overly pessimistic, and we suspect
that in cases where the matrix ∆ is very sparse, due to a very small number of
errors, that we are in the independent gi ’s case or have very little dependence,
and these more optimistic bounds reflect what happens, computationally.
Theorem 2 reveals two interesting facts about GVFA in terms of practical
floating-point matrix product verification:
(1) The bigger the error caused by the fault, the higher the probability that it
can be captured. p is usually very small because the floating point bound,
ǫ, is very small.
(2) Similar to the original Freivalds’ algorithm, higher confidence can be obtained by iterating the algorithm multiple times. In fact, if we iterate k
times using independent Gaussian random vectors, the probability of false
positive decreases exponentially as pk . Actually, due to the fact that p is
usually very small, one or a very small number of iterations will produce
verification with sufficiently high confidence.
One comment that should be made is that if we consider ∫_{−ǫ̃}^{ǫ̃} (1/√(2π)) e^{−t²/2} dt
when ǫ̃ is small, we can easily approximate this. Since the integrand is at
its maximum at zero, and is a very smooth function, analytic actually, this
integral is approximately the value of the integrand at zero times the length of
the integration interval, i.e., ∫_{−ǫ̃}^{ǫ̃} (1/√(2π)) e^{−t²/2} dt ≤ 2ǫ̃ · (1/√(2π)) = ǫ̃ √(2/π). This is justified
as ǫ̃ is a number on the order of the machine epsilon, which is 2^{−23} in single
precision or 2^{−52} in double precision floating point, divided by σ̃_i (where σ̃_i² = Σ_{j=1}^n ∆_ij²).
ei2 = j=1 ∆2ij .
Compared to deterministic methods, such as the Huang-Abraham scheme,
GVFA has the following advantages:
(1) Certain fault patterns, as shown in Section 2, are undetectable in deterministic methods such as the Huang-Abraham scheme. Deterministic methods
absolutely cannot detect faults with certain patterns, i.e., certain patterns
are detected with probability zero. In contrast, no fault pattern is missed
by GVFA with certainty. Moreover, iterating the algorithm multiple times
can increase the probability of detecting any fault pattern to any value less than one.
(2) From the computational point-of-view, normal random vectors are generated independently of A, B, and C, which avoids the costly computation
of checksums.
4.4 Huang-Abraham-like GVFA
GVFA can also be implemented in a way similar to that of the Huang-Abraham
scheme by providing row and column verification, as shown in Algorithm 3.
Algorithm 3: Huang-Abraham-like GVFA
1. Generate two n-dimensional Gaussian random vectors, ω_R, a row vector,
and ω_C, a column vector, whose entries are independent (but not necessarily identically)
distributed normal random variables with finite mean and variance.
2. Calculate the projection of C on ωR and ωC : ωR C = ωR × C and CωC =
C × ωC .
3. Calculate the projection of the product A × B on ωR and ωC : ωR AB =
(ωR × A) × B and ABωC = A × (B × ωC ).
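A minimal Python sketch of this two-sided projection is our own illustration; the localization helper and the tolerances used for the floating-point comparisons are our additions.

```python
import numpy as np

# Sketch of the Huang-Abraham-like GVFA (Algorithm 3): project from both sides
# and use the mismatched row/column positions to localize a faulty entry of C.

def locate_fault(A, B, C, tol=1e-8, rng=None):
    rng = np.random.default_rng(rng)
    m, n = C.shape
    w_r = rng.standard_normal(m)              # row vector (left projection)
    w_c = rng.standard_normal(n)              # column vector (right projection)
    row_mismatch = ~np.isclose((w_r @ A) @ B, w_r @ C, atol=tol)   # flags columns
    col_mismatch = ~np.isclose(A @ (B @ w_c), C @ w_c, atol=tol)   # flags rows
    rows, cols = np.where(col_mismatch)[0], np.where(row_mismatch)[0]
    return rows, cols      # a single fault typically shows up as one row and one column

rng = np.random.default_rng(2)
A, B = rng.standard_normal((50, 50)), rng.standard_normal((50, 50))
C = A @ B
C[10, 20] += 1.0
print(locate_fault(A, B, C))                  # (array([10]), array([20]))
```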
Similar to the Huang-Abraham scheme, a mismatched element of the row
vectors ω_R C and ω_R AB, together with a mismatched element of the column vectors Cω_C
and ABω_C, uniquely identifies a faulty element in C. Considering floating-point errors, the false positive probability for this scheme becomes
p², according to the analysis in Section 4.3. However, the computational cost
doubles with six matrix-vector multiplications and two vector comparisons.
This is essentially the same work as doing two independent iterations of the
GVFA, and obtains the same bound.
4.5 Implementation using Fused Multiply-Add Hardware
The Fused Multiply-Add (FMA) machine instruction performs one multiply
operation and one add operation with a single rounding step [23]. This was implemented to enable potentially faster performance in calculating the floating-point accumulation of products, a := a + b × c. Recall that the GVFA employs
three matrix-vector multiplications to project A × B and C onto a normal
random vector, which requires a sequence of product accumulations that cost
3n(2n − 1) floating-point operations. Therefore, the performance of the GVFA
can be potentially boosted on modern computing architectures that support
the FMA. More importantly, due to a single rounding step used in the FMA
instruction instead of two roundings within separate instructions, less loss of
accuracy occurs when using the FMA instruction in calculating the accumulation of products [4]. This should further reduce the floating-point rounding
errors that cause false positives.
5 Discussion and Conclusions
In this paper, we extend Freivalds’ algorithm, which we call the Gaussian
variant of Freivalds’ algorithm (GVFA), to the real domain by random projection using vectors whose coefficients are i.i.d. normal random variables. If
A×B 6= C, the probability that the resulting vectors match is zero using exact
arithmetic. Considering the round-off errors in floating-point operations, the
probability of fault detection depends on the magnitude of the error caused by
the fault as well as the floating point precision. The new GVFA can be iterated
k times with the probability of false positives decreasing exponentially in k.
In addition to matrix-matrix multiplication, the new algorithm can be applied
to verify a wide variety of computations relevant to numerical linear algebra
as it provides fault tolerance to the computation that defines level 3 of the
BLAS. GVFA can also be used to enforce the trustworthiness of outsourcing
matrix computations on untrusted distributed computing infrastructures such
as clouds or volunteer peer-to-peer platforms [27,26].
The GVFA can be easily extended to a more general matrix multiplication
operation where A is m × p, B is p × n, and C is m × n. The overall computational time then becomes O(mp + np). The algorithm can be further extended
to verify the product of N matrices, which requires overall N +1 matrix-vector
multiplications. The GVFA can also be applied to verifying a wide variety of
matrix decomposition operations such as LU, QR, Cholesky, as well as eigenvalue computations, and singular value decompositions. In this case, faults are
not in the product matrix but occur in the decomposed ones instead. In any case,
the GVFA can be directly applied with no modifications necessary.
The GVFA is a new tool to detect faults in numerical linear algebra, and
since it is based on random Gaussian projection, it is related to the many new
randomized algorithms being used directly in numerical linear algebra [22,11,
12,13,15]. The fundamental idea of these randomized algorithms is to apply
efficient sampling on the potentially very large matrices to extract their important characteristics so as to fast approximate numerical linear algebra operations. We believe that the GVFA will be a very useful tool in the development
of fault-tolerant and otherwise resilient algorithms for solving large numerical
linear algebra problems. In fact, it seems that the GVFA’s similarity to other,
new, stochastic techniques in numerical linear algebra affords the possibility
of creating stochastic linear solvers that are by their very nature resilient and
fault-tolerant. This is highly relevant for new machines being developed in
HPC to have maximal floating-point operations per second (FLOPS) while
existing within restrictive energy budgets. These HPC systems will be operating at voltages lower than most current systems, and so they are expected
to be particularly susceptible to soft errors. However, even if one is not anticipating the use of these high-end machines, the trend in processor design
is to lower power, and is being driven by the explosion of mobile computing.
Thus, the ability to reliably perform complicated numerical linear algebraic
computations on systems more apt to experience soft faults is a very general
concern. The GVFA will make it much easier to perform such computations
with high fidelity in HPC, cloud computing, mobile applications, as well in
big-data settings.
Acknowledgements We would like to thank Dr. Stephan Olariu for his valuable suggestions on the manuscript. This work is partially supported by National Science Foundation
grant 1066471 for Yaohang Li and Hao Ji acknowledges support from an ODU Modeling
and Simulation Fellowship. Michael Mascagni’s contribution to this paper was partially
supported by National Institute of Standards and Technology (NIST) during his sabbatical.
The mention of any commercial product or service in this paper does not imply an
endorsement by NIST or the Department of Commerce.
References
1. Alon, N., Goldreich, O., Hastad, J., Peralta, R.: Simple construction of almost k-wise
independent random variables. In: Proceedings of the 31st Annual Symposium on Foundations of Computer Science, pp. 544–553. IEEE (1990)
2. Banerjee, P., Abraham, J.A.: Bounds on algorithm-based fault tolerance in multiple
processor systems. IEEE Trans. Comput. 100(4), 296–306 (1986)
3. Banerjee, P., Rahmeh, J.T., Stunkel, C., Nair, V.S., Roy, K., Balasubramanian, V.,
Abraham, J.A.: Algorithm-based fault tolerance on a hypercube multiprocessor. IEEE
Trans. Comput. 39(9), 1132–1145 (1990)
4. Boldo, S., Muller, J.M.: Exact and approximated error of the fma. IEEE Trans. Comput.
60(2), 157–164 (2011)
5. Bosilca, G., Delmas, R., Dongarra, J., Langou, J.: Algorithm-based fault tolerance applied to high performance computing. J. Parallel Distrib. Comput. 69(4), 410–416
(2009)
6. Cheng, K.L., Wang, C.W., Lee, J.N.: Fame: a fault-pattern based memory failure analysis framework. In: International Computer Aided Design Conference, pp. 595–598
(2003)
7. Chinn, D.D., Sinha, R.K.: Bounds on sample space size for matrix product verification.
Inform. Process. Lett. 48(2), 87–91 (1993)
8. Coppersmith, D., Winograd, S.: Matrix multiplication via arithmetic progressions. In:
Proceedings of the 19th annual ACM symposium on Theory of computing, pp. 1–6.
ACM (1987)
9. Demmel, J.W., Higham, N.J.: Stability of block algorithms with fast level-3 blas. ACM
Trans. Math. Softw. 18(3), 274–291 (1992)
10. Dongarra, J.J., Du Croz, J., Hammarling, S., Duff, I.S.: Algorithm 679: A set of level
3 basic linear algebra subprograms: model implementation and test programs. ACM
Trans. Math. Softw. 16(1), 18–28 (1990)
11. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices I:
Approximating matrix multiplication. SIAM J. Comput. 36(1), 132–157 (2006)
12. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices II:
Computing a low-rank approximation to a matrix. SIAM J. Comput. 36(1), 158–183
(2006)
13. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices III:
Computing a compressed approximate matrix decomposition. SIAM J. Comput. 36(1),
184–206 (2006)
14. Elnozahy, E.N., Alvisi, L., Wang, Y.M., Johnson, D.B.: A survey of rollback-recovery
protocols in message-passing systems. ACM Comput. Surv. 34(3), 375–408 (2002)
15. Eriksson-Bique, S., Solbrig, M., Stefanelli, M., Warkentin, S., Abbey, R., Ipsen, I.C.:
Importance sampling for a Monte Carlo matrix multiplication algorithm, with application
to information retrieval. SIAM J. Sci. Comput. 33(4), 1689–1706 (2011)
16. Freivalds, R.: Probabilistic machines can use less running time. In: Proceedings of IFIP
Congress 77, pp. 839–842 (1977)
17. Gallivan, K., Jalby, W., Meier, U.: The use of BLAS3 in linear algebra on a parallel
processor with a hierarchical memory. SIAM J. Sci. Stat. Comput. 8(6), 1079–1084
(1987)
18. Gasieniec, L., Levcopoulos, C., Lingas, A.: Efficiently correcting matrix products. In:
Algorithms and Computation, pp. 53–64. Springer (2014)
19. Glosli, J.N., Richards, D.F., Caspersen, K.J., Rudd, R.E., Gunnels, J.A., Streitz, F.H.:
Extending stability beyond CPU millennium: a micron-scale atomistic simulation of
Kelvin-Helmholtz instability. In: Proceedings of the 2007 ACM/IEEE conference on
Supercomputing, pp. 1–11. ACM (2007)
20. van de Goor, A.J.: Testing semiconductor memories: theory and practice. John Wiley &
Sons, New York (1991)
21. Gunnels, J.A., Katz, D.S., Quintana-Orti, E.S., van de Geijn, R.A.: Fault-tolerant high-performance matrix multiplication: Theory and practice. In: Proceedings of International Conference on Dependable Systems and Networks, pp. 47–56. IEEE (2001)
22. Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev.
53(2), 217–288 (2011)
23. Hokenek, E., Montoye, R.K., Cook, P.W.: Second-generation RISC floating point with
multiply-add fused. IEEE J. Solid-State Circuits 25(5), 1207–1213 (1990)
24. Huang, K.H., Abraham, J.A.: Algorithm-based fault tolerance for matrix operations.
IEEE Trans. Comput. 100(6), 518–528 (1984)
25. Korec, I., Wiedermann, J.: Deterministic verification of integer matrix multiplication
in quadratic time. In: SOFSEM 2014: Theory and Practice of Computer Science, pp.
375–382. Springer (2014)
26. Kumar, A., Roch, J.L.: Algorithm-based secure and fault tolerant outsourcing
of matrix computations (2013). Available at https://hal.archives-ouvertes.fr/hal00876156/file/JC2S.pdf
27. Lei, X., Liao, X., Huang, T., Li, H.: Cloud computing service: the case of large matrix
determinant computation. IEEE Trans. Serv. Comput. pp. 1–14 (2014)
28. Li, Y., Mascagni, M.: Grid-based Monte Carlo application. In: Lecture Notes in Computer Science, pp. 13–25 (2002)
29. Li, Y., Mascagni, M.: Analysis of large-scale grid-based Monte Carlo applications. Int.
J. High Perform. Comput. Appl. 17(4), 369–382 (2003)
30. Lindeberg, J.W.: Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Math. Z. 15(1), 211–225 (1922)
31. Lisboa, C., Erigson, M., Carro, L.: A low cost checker for matrix multiplication. In:
IEEE Latin-American Test Workshop (2007)
32. Luk, F.T., Park, H.: An analysis of algorithm-based fault tolerance techniques. J.
Parallel Distrib. Comput. 5(2), 172–184 (1988)
33. Lukacs, E., King, E.P.: A property of the normal distribution. Ann. Math. Stat. 25(2),
389–394 (1954)
34. Michalak, S.E., Harris, K.W., Hengartner, N.W., Takala, B.E., Wender, S.A.: Predicting
the number of fatal soft errors in Los Alamos National Laboratory's ASC Q supercomputer.
IEEE Trans. Device Mater. Rel. 5(3), 329–335 (2005)
35. Muirhead, R.J.: Aspects of multivariate statistical theory. Wiley, New York (1982)
36. Naor, J., Naor, M.: Small-bias probability spaces: Efficient constructions and applications. SIAM J. Comput. 22(4), 838–856 (1993)
37. Schroeder, B., Pinheiro, E., Weber, W.D.: DRAM errors in the wild: a large-scale field
study. Commun. ACM 54(2), 100–107 (2011)
38. Shivakumar, P., Kistler, M., Keckler, S.W., Burger, D., Alvisi, L.: Modeling the effect
of technology trends on the soft error rate of combinational logic. In: Proceedings of
International Conference on Dependable Systems and Networks, pp. 389–398. IEEE
(2002)
39. Williams, V.V.: Multiplying matrices faster than coppersmith-winograd. In: Proceedings of the 44th Annual ACM Symposium on Theory of Computing, STOC ’12, pp.
887–898. ACM, New York, NY, USA (2012). DOI 10.1145/2213977.2214056. URL
http://doi.acm.org/10.1145/2213977.2214056
arXiv:1704.07513v2 [math.ST] 17 May 2017
BAYES MODEL SELECTION
QIYANG HAN
Abstract. We offer a general Bayes theoretic framework to tackle the
model selection problem under a two-step prior design: the first-step
prior serves to assess the model selection uncertainty, and the second-step prior quantifies the prior belief about the strength of the signals within
the model chosen from the first step.
We establish non-asymptotic oracle posterior contraction rates under (i) a new Bernstein-inequality condition on the log likelihood ratio
of the statistical experiment, (ii) a local entropy condition on the dimensionality of the models, and (iii) a sufficient mass condition on the
second-step prior near the best approximating signal for each model.
The first-step prior can be designed generically. The resulting posterior
mean also satisfies an oracle inequality, thus automatically serving as an
adaptive point estimator in a frequentist sense. Model mis-specification
is allowed in these oracle rates.
The new Bernstein-inequality condition not only removes the conventional step of constructing explicit tests with exponentially small type I
and II errors, but also suggests the intrinsic metric to use in a given
statistical experiment, both as a loss function and as an entropy measurement. This gives a unified reduction scheme for many experiments
considered in [23] and beyond. As an illustration for the scope of our
general results in concrete applications, we consider (i) trace regression,
(ii) shape-restricted isotonic/convex regression, (iii) high-dimensional
partially linear regression and (iv) covariance matrix estimation in the
sparse factor model. These new results serve either as theoretical justification of practical prior proposals in the literature, or as an illustration
of the generic construction scheme of a (nearly) minimax adaptive estimator for a multi-structured experiment.
1. Introduction
1.1. Overview. Suppose we observe $X^{(n)}$ from a statistical experiment $(\mathcal{X}^{(n)}, \mathcal{A}^{(n)}, P_f^{(n)})$, where $f$ belongs to a statistical model $\mathcal{F}$ and $\{P_f^{(n)}\}_{f\in\mathcal{F}}$
is dominated by a σ-finite measure µ. Instead of using a single ‘big’ model
$\mathcal{F}$, a collection of (sub-)models $\{\mathcal{F}_m\}_{m\in\mathcal{I}}\subset\mathcal{F}$ is available to statisticians,
and the art of model selection is to determine which one(s) to use.
Date: May 18, 2017.
2000 Mathematics Subject Classification. 60F17, 62E17.
Key words and phrases. Bayes nonparametrics, model selection, adaptive estimation,
model mis-specification, Bernstein inequality.
Supported in part by NSF Grant DMS-1566514.
There are vast literatures on model selection from a frequentist point of
view; we only refer the reader to [4, 36, 11, 44] as some representative pointers for various approaches of penalization, aggregation, etc. On the other
hand, from a Bayes point of view, although posterior contraction rates have
been derived for many different models (see e.g. [21, 43, 23, 16, 14, 15,
42, 45, 46, 28] for some key contributions), understanding towards general
Bayes model selection procedures has been limited. [22] focused on designing adaptive Bayes procedures with models primarily indexed by the
smoothness level of classical function classes in the context of density estimation. Their conditions are complicated and seem not directly applicable
to other settings. [18] designed a prior specific to structured linear problems
in the Gaussian regression model, with their main focus on high-dimensional
(linear) and network problems. It seems non-trivial for their framework to
handle other non-linear models.
Despite these limitations, [22, 18] give useful clues. One common feature
in these papers is a two-step prior design, where the first-step prior Λn
assesses the model selection uncertainty, followed by a second-step prior
Πn,m quantifying the prior belief in the strength of the signals within the
specific chosen model Fm from the first step. Such a prior design is intrinsic
in many proposals for different problems, e.g. [16, 15] for sparse linear
regression, [2] for trace regression, [29, 27] for shape restricted regression,
[19, 38] for problems related to covariance matrix estimation.
This is the starting point of this paper. We give a unified theoretical treatment to this two-step prior design by identifying common structural assumptions on the statistical experiments $(\mathcal{X}^{(n)}, \mathcal{A}^{(n)}, P_f^{(n)})$, the collection of models $\{\mathcal{F}_m\}$ and the priors $\{\Lambda_n\}$ and $\{\Pi_{n,m}\}$ such that the posterior distribution both
(G1) contracts at an oracle rate with respect to some metric $d_n$:
(1.1)  $\inf_{m\in\mathcal{I}}\bigl\{\inf_{g\in\mathcal{F}_m} d_n^2(f_0,g)+\mathrm{pen}(m)\bigr\}$,
where $\mathrm{pen}(m)$$^1$ is related to the 'dimension' of $\mathcal{F}_m$, and
(G2) concentrates on the model $\mathcal{F}_{m^*}$, where $m^*$ is the 'best' model balancing the bias-variance tradeoff in (1.1).
The oracle formulation (1.1) follows the convention in the frequentist literature [36, 44], and has several advantages: (i) (minimaxity) if the true signal
f0 can be well-approximated by the models {Fm }, the contraction rate in
(1.1) is usually (nearly) minimax optimal, (ii) (adaptivity) if f0 lies in certain
low-dimensional model Fm , the contraction rate adapts to this unknown information, and (iii) (mis-specification) if the models Fm are mis-specified
while $d_n^2(f_0, \cup_{m\in\mathcal{I}}\mathcal{F}_m)$ remains 'small', then the contraction rate should still
be rescued by this relatively ‘small’ bias.
1pen(m) may depend on n but we suppress this dependence for notational convenience.
As the main abstract result of this paper (cf. Theorem 1), we show that
our goals (G1)-(G2) can be accomplished under:
(i) (Experiment) a Bernstein-inequality condition on the log likelihood
ratio for the statistical experiment with respect to dn ;
(ii) (Models) a dimensionality condition of the model Fm measured in
terms of local entropy with respect to the metric dn ;
(iii) (Priors) exponential weighting for the first-step prior Λn , and sufficient mass of the second-step prior Πn,m near the ‘best’ approximating signal f0,m within the model Fm for the true signal f0 .
One important ingredient in studying posterior contraction rates in Bayes
nonparametrics literature has been the construction of appropriate tests
with exponentially small type I and II errors with respect to certain metric
[21, 23]. Such tests date back to the work of Le Cam [31, 32, 33] and Birgé [6,
7, 8], who brought out the special role of the Hellinger metric in which tests
can be constructed generically. On the other hand, the testing framework
[21, 23] requires the prior to spread sufficient mass near the Kullback-Leibler
neighborhood of the true signal. The discrepancy of these two metrics can
be rather delicate, particularly for non i.i.d. and complicated models, and it
often remains unclear which metric is the natural one to use in these models.
Moreover, it is usually a significant theoretical challenge to construct tests
in complicated models, cf. [19, 38], to name a few.
Our Bernstein-inequality condition (i) closes these gaps by suggesting the
use of an 'intrinsic metric' that mimics the behavior of the Kullback-Leibler divergence in a given statistical experiment, in which a 'good' test
can be constructed generically (cf. Lemma 1). Bernstein inequality is a
fundamental tool in probability theory, and hence can be easily verified
in many statistical experiments including various experiments considered
in [23] and beyond: Gaussian/binary/Poisson regression, density estimation, Gaussian autoregression, Gaussian time series and covariance matrix
estimation problems. We identify the intrinsic metrics to use in these experiments. Furthermore, the Bernstein-inequality condition entails sharp
exponential contraction of the posterior distribution near the ‘true’ signal,
complementing a recent result of [28]. Results of this type typically do
not follow directly from general principles in [21, 23], and have mainly been
derived on a case-by-case basis, cf. [16, 19, 18]. As such, we provide a refinement of the seminal testing framework in [21, 23] so that the investigation
of sharp posterior contraction rates in the intrinsic metric of an experiment
essentially reduces to the study of prior design.
Conditions (ii) and (iii) are familiar in Bayes nonparametrics literature.
In particular, the first-step prior can be designed generically (cf. Proposition
1). Sufficient mass of the second-step prior Πn,m is a minimal condition in
the sense that using Πn,m alone should lead to a (nearly) optimal posterior
contraction rate on the model Fm .
These conditions, albeit minimal, imply more than an optimal adaptive
Bayes procedure in the sense of (G1)-(G2). In fact, we show that the posterior mean automatically serves as an adaptive point estimator in a frequentist sense. These results reveal, in a sense, that the task of constructing
adaptive procedures with respect to the intrinsic metric in a given statistical
experiment, in both frequentist and Bayes contexts, is not really harder than
that of designing an optimal non-adaptive prior for each of the models.
A general theory would be less interesting without being able to address
problems of different types. As an illustration of our general framework in
concrete applications, we justify the prior proposals in (i) [2, 34] for the
trace regression problem, and in (ii) [29, 27] for the shape-restricted regression problems. Despite many theoretical results for Bayes high-dimensional
models (cf. [16, 15, 19, 18, 38, 3]), it seems that the important low-rank
trace regression problem has not yet been successfully addressed. Our result here fills in this gap. Furthermore, to the best of the author's knowledge,
the theoretical results concerning shape-restricted regression problems provide the first systematic approach that bridges the gap between Bayesian
nonparametrics and shape-restricted nonparametric function estimation literature in the context of adaptive estimation2. We also consider adaptive
Bayes procedures for the high-dimensional partially linear regression model
and the covariance matrix estimation problem in the sparse factor model.
These new results serve as an illustration of the generic construction scheme
of a (nearly) minimax adaptive estimator in a complicated experiment with
multiple structures. Some of these results improve the best known result in
the literature.
During the preparation of this paper, we became aware of a very recent paper [48], which independently considered the Bayes model selection problem. Both our approach and [48] shed light on the general Bayes model
selection problem, while differing in several important aspects (cf. Remark
2). Moreover, our work here applies to a wide range of applications that are
not covered by [48].
1.2. Notation. $C_x$ denotes a generic constant that depends only on $x$, whose numeric value may change from line to line. $a \lesssim_x b$ and $a \gtrsim_x b$ mean $a \le C_x b$ and $a \ge C_x b$ respectively, and $a \asymp_x b$ means $a \lesssim_x b$ and $a \gtrsim_x b$. For $a, b \in \mathbb{R}$, $a \vee b := \max\{a,b\}$ and $a \wedge b := \min\{a,b\}$. $P_f^{(n)} T$ denotes the expectation of a random variable $T = T(X^{(n)})$ under the experiment $(\mathcal{X}^{(n)}, \mathcal{A}^{(n)}, P_f^{(n)})$.
1.3. Organization. Section 2 is devoted to the general model selection theory. We work out a wide range of experiments that fit into our general theory in Section 3. Section 4 discusses various concrete applications as mentioned above. Most detailed proofs are deferred to Sections 5-6 and the Appendix.

$^2$Almost completed at the same time, [35] considered a Bayes approach for univariate log-concave density estimation, where they derived contraction rates without addressing the adaptation issue.
2. General theory
In the two-step prior design framework, we first put a prior $\Lambda_n$ on the model index $\mathcal{I}$, followed by a prior $\Pi_{n,m}$ on the model $\mathcal{F}_m$ chosen from the first step. The overall prior is a probability measure on $\mathcal{F}$ given by $\Pi_n \equiv \sum_{m\in\mathcal{I}} \lambda_{n,m}\Pi_{n,m}$. The posterior distribution is then a random measure on $\mathcal{F}$: for a measurable subset $B\subset\mathcal{F}$,
(2.1)  $\Pi_n(B\,|\,X^{(n)}) = \dfrac{\int_B p_f^{(n)}(X^{(n)})\,d\Pi_n(f)}{\int p_f^{(n)}(X^{(n)})\,d\Pi_n(f)},$
where $p_f^{(n)}(\cdot)$ denotes the probability density function of $P_f^{(n)}$ with respect to the dominating measure $\mu$.
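To fix ideas, here is a toy numerical illustration of ours (not part of the paper) of the two-step design in a Gaussian sequence experiment with nested models, conjugate second-step priors, and first-step weights $\lambda_{n,m}\propto\exp(-2n\delta_{n,m}^2)$ with $n\delta_{n,m}^2=m$ (the generic choice discussed below). The posterior over the model index is computed from (2.1) via closed-form marginal likelihoods:

```python
import numpy as np

# Toy Gaussian sequence model: X_i = theta_i + eps_i / sqrt(n), i = 1..p.
# Model F_m keeps the first m coordinates free and sets the rest to zero.
# Second-step prior on F_m: theta_{1:m} ~ N(0, tau2 I).
rng = np.random.default_rng(0)
n, p, tau2 = 200, 20, 1.0
theta0 = np.concatenate([np.array([2.0, -1.5, 1.0]), np.zeros(p - 3)])  # true signal in F_3
X = theta0 + rng.normal(scale=1.0 / np.sqrt(n), size=p)

def log_weight(m):
    """log of  lambda_{n,m} x (marginal likelihood of the data under model F_m)."""
    s2 = 1.0 / n                           # per-coordinate noise variance
    var_in, var_out = tau2 + s2, s2        # marginal variances inside / outside F_m
    ll = -0.5 * np.sum(X[:m] ** 2 / var_in + np.log(2 * np.pi * var_in))
    ll += -0.5 * np.sum(X[m:] ** 2 / var_out + np.log(2 * np.pi * var_out))
    return ll - 2.0 * m                    # + log lambda_{n,m}, n delta_{n,m}^2 = m

logw = np.array([log_weight(m) for m in range(p + 1)])
post = np.exp(logw - logw.max()); post /= post.sum()
print("posterior over the model index m:", np.round(post, 3))
print("posterior mode:", post.argmax())    # concentrates near the 'best' model, cf. (G2)
```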
2.1. Assumptions. To state our assumption on the experiment, let
(2.2)  $\psi_{v,c}(\lambda) = \dfrac{v\lambda^2}{2(1-c|\lambda|)}, \qquad 0 \le |\lambda| < 1/c,$
denote the 'Bernstein' function. This function plays a pivotal role in proving sub-gamma behavior of a given (complicated) random variable, cf. [9]. Here $v$ and $c$ are the $L_2$ size and $L_\infty$ size of the random variable, controlling respectively the degree of its sub-Gaussian and sub-gamma behavior.
Assumption A (Experiment: Bernstein-inequality condition). There exist some absolute $c_1>0$ and $\kappa = (\kappa_g,\kappa_\Gamma)\in(0,\infty)\times[0,\infty)$ such that for all $n\in\mathbb{N}$ and $f_0,f_1\in\mathcal{F}$,
$P_{f_0}^{(n)} \exp\Bigl(\lambda\Bigl(\log\tfrac{p_{f_0}^{(n)}}{p_{f_1}^{(n)}} - P_{f_0}^{(n)}\log\tfrac{p_{f_0}^{(n)}}{p_{f_1}^{(n)}}\Bigr)\Bigr) \le c_1\exp\bigl(\psi_{\kappa_g n d_n^2(f_0,f_1),\,\kappa_\Gamma}(\lambda)\bigr)$
holds for all $|\lambda| < 1/\kappa_\Gamma$. Here the metric $d_n:\mathcal{F}\times\mathcal{F}\to\mathbb{R}_{\ge 0}$ satisfies
(2.3)  $c_2\cdot d_n^2(f_0,f_1) \le \dfrac{1}{n} P_{f_0}^{(n)}\log\dfrac{p_{f_0}^{(n)}}{p_{f_1}^{(n)}} \le c_3\cdot d_n^2(f_0,f_1)$
for some absolute constants $c_2,c_3>0$.
In Assumption A, we require the log likelihood ratio to satisfy a Bernstein
inequality. In particular, the log likelihood ratio has local Gaussian behavior.
Conversely, if the log likelihood ratio behaves locally like a Gaussian, then we
can pick some κΓ > 0 so that the Bernstein inequality holds.
Lemma 1. Let Assumption A hold. Fix $f_0, f_1 \in \mathcal{F}$; there exists some test $\phi_n$ such that
$\sup_{f\in\mathcal{F}:\, d_n^2(f,f_1)\le c_5 d_n^2(f_0,f_1)}\bigl(P_{f_0}^{(n)}\phi_n + P_f^{(n)}(1-\phi_n)\bigr) \le c_6\exp\bigl(-c_7 n d_n^2(f_0,f_1)\bigr),$
where $c_5 \le 1/4$, $c_6 \in [2,\infty)$, $c_7\in(0,1)$ only depend on $c_1, c_2, c_3, \kappa$.
This lemma suggests that under a Bernstein-inequality condition on the
log likelihood ratio, tests exist automatically under the intrinsic metric dn
that mimics the behavior of the Kullback-Leibler divergence in the sense
of (2.3). Several examples will be worked out in Section 3 to illustrate the
choice of an intrinsic metric dn , including the discrete ℓ2 loss for regression
models, a weighted L2 metric for the Gaussian autoregression model, the
Hellinger metric for density estimation, the Frobenius norm for covariance
matrix estimation problem.
Next we state the assumption on the complexity of the models {Fm }m∈I .
Let I = Nq be a q-dimensional lattice with the natural order (I, ≤)3. Here
the dimension q is understood as the number of different structures in the
models {Fm }m∈I . In the sequel we will not explicitly mention q unless
otherwise specified. We require the models to be nested in the sense that
Fm ⊂ Fm′ if m ≤ m′ . Let f0,m denote the ‘best’ approximation of f0 within
the model $\mathcal{F}_m$ in the sense that $f_{0,m}\in\arg\inf_{g\in\mathcal{F}_m} d_n(f_0,g)$$^4$.
Assumption B (Models: Local entropy condition). For each $m\in\mathcal{I}$,
(2.4)  $1 + \sup_{\varepsilon > \delta_{n,m}} \log N\bigl(c_5\varepsilon, \{f\in\mathcal{F}_m : d_n(f,g)\le 2\varepsilon\}, d_n\bigr) \le (c_7/2)\, n\delta_{n,m}^2$
holds for all $g\in\{f_{0,m'}\}_{m'\le m}$. Furthermore there exist absolute constants $c\ge 1$ and $\gamma\ge 1$ such that for any $m\in\mathcal{I}$, $\alpha \ge c_7/2$ and any $h\ge 1$,
(2.5)  $\sum_{m'\ge hm} e^{-\alpha n\delta_{n,m'}^2} \le 2e^{-\alpha n h\delta_{n,m}^2/c^2}, \qquad c^{-2}\delta_{n,hm}^2 \le h^\gamma \delta_{n,m}^2.$
Note that if we choose all models $\mathcal{F}_m = \mathcal{F}$, then (2.4) reduces to the local entropy condition in [21, 23]. When $\mathcal{F}_m$ is finite-dimensional, typically we can check (2.4) for all $g\in\mathcal{F}_m$. Now we comment on (2.5). The left side of (2.5) essentially requires super-linearity of the map $m\mapsto \delta_{n,m}^2$, while the right side of (2.5) controls the degree of this super-linearity. As a leading example, (2.5) will be trivially satisfied with $c=1$ and $\gamma=1$ when $n\delta_{n,m}^2 = cm$ for some absolute constant $c > 2/c_7$.
Finally we state assumptions on the priors.

Assumption C (Priors: Mass condition). For all $m\in\mathcal{I}$,
(P1) (First-step prior) There exists some $h\ge 1$ such that
(2.6)  $\lambda_{n,m} \ge \tfrac12 \exp(-2n\delta_{n,m}^2), \qquad \sum_{k>hm,\,k\in\mathcal{I}}\lambda_{n,k} \le 2\exp(-n\delta_{n,m}^2).$
(P2) (Second-step prior)
(2.7)  $\Pi_{n,m}\bigl(f\in\mathcal{F}_m : d_n^2(f,f_{0,m}) \le \delta_{n,m}^2/c_3\bigr) \ge \exp(-2n\delta_{n,m}^2).$

$^3$For any $a,b\in\mathcal{I}$, $a\le b$ iff $a_i\le b_i$ for all $1\le i\le q$. Similar definitions apply to $<,\ge,>$.
$^4$We assume that $f_{0,m}$ is well-defined without loss of generality.
Condition (P1) can be verified by using the following generic prior $\Lambda_n$:
(2.8)  $\lambda_{n,m} \propto \exp(-2n\delta_{n,m}^2).$
Proposition 1. Suppose the first condition of (2.5) holds. Then (P1) in
Assumption C holds for the prior (2.8) with h ≥ 2c2 .
(2.8) will be the model selection (first-step) prior on the model index I
in all examples in Section 4.
Condition (P2) is reminiscent of the classical prior mass condition considered in [21, 23], where $\delta_{n,m}^2$ is understood as the 'posterior contraction rate'
for the model Fm . Hence (P2) can also be viewed as a solvability condition
imposed on each model. Note that (2.7) only requires a sufficient prior mass
on a Kullback-Leibler ball near $f_{0,m}$, whereas [21, 23] use more complicated
metric balls induced by higher moments of the Kullback-Leibler divergence.
2.2. Main results. The following is the main abstract result of this paper.

Theorem 1. Suppose Assumptions A-C hold for some $\mathcal{M}\subset\mathcal{I}$ with $|\mathcal{M}|=\infty$, and $h\ge C_0 c^2$. Let $\varepsilon_{n,m}^2 \equiv \inf_{g\in\mathcal{F}_m} d_n^2(f_0,g) \vee \delta_{n,m}^2$.
(1) For any $m\in\mathcal{M}$,
(2.9)  $P_{f_0}^{(n)}\Pi_n\bigl(f\in\mathcal{F} : d_n^2(f,f_0) > C_1\bigl(\inf_{g\in\mathcal{F}_m} d_n^2(f_0,g)+\delta_{n,m}^2\bigr)\,\big|\,X^{(n)}\bigr) \le C_2\exp\bigl(-n\varepsilon_{n,m}^2/C_2\bigr).$
(2) For any $m\in\mathcal{M}$ such that $\delta_{n,m}^2 \ge \inf_{g\in\mathcal{F}_m} d_n^2(f_0,g)$,
(2.10)  $P_{f_0}^{(n)}\Pi_n\bigl(f\notin\mathcal{F}_{C_3 m}\,\big|\,X^{(n)}\bigr) \le C_2\exp\bigl(-n\varepsilon_{n,m}^2/C_2\bigr).$
(3) Let $\hat f_n \equiv \Pi_n(f\,|\,X^{(n)})$ be the posterior mean. Then
(2.11)  $P_{f_0}^{(n)} d_n^2(\hat f_n, f_0) \le C_4 \inf_{m\in\mathcal{M}}\bigl(\inf_{g\in\mathcal{F}_m} d_n^2(f_0,g)+\delta_{n,m}^2\bigr).$
Here the constant $C_0$ depends on $\{c_i\}_{i=1}^3, \kappa$, and $\{C_i\}_{i=1}^4$ depend on $\{c_i\}_{i=1}^3, \kappa, c, h$ and $\gamma$.
The main message of Theorem 1 is that the task of constructing Bayes procedures adaptive to a collection of models in the intrinsic metric of a given statistical experiment can be essentially reduced to that of designing a non-adaptive prior for each model. Furthermore, the resulting posterior mean
serves as an automatic adaptive point estimator in a frequentist sense. In
particular, if the non-adaptive priors we use on each model lead to (nearly)
optimal posterior contraction rates on these models, adaptation happens
automatically by designing a ‘correct’ model selection prior, e.g. (2.8).
Besides being rate-adaptive to the collection of models, (2.10) shows that
the posterior distribution concentrates on the model Fm that balances the
bias and variance tradeoff in the oracle rates (2.9) and (2.11). Results of
this type have been derived primarily in the Gaussian regression model (cf.
[16, 15, 18]) and in density estimation [22]; here our result shows that this
is a general phenomenon for the two-step prior design.
Note that f0 is arbitrary and hence our oracle inequalities (2.9) and (2.11)
account for model mis-specification errors. Previous work allowing model
mis-specification includes [18], which mainly focuses on structured linear models in the Gaussian regression setting, and [30], which pursued generality at
the cost of harder-to-check conditions. The condition |M| = ∞ is assumed purely for technical convenience. If we have finitely many models
{F1 , . . . , Fm′ } at hand, then we can define Fm ≡ Fm′ for m ≥ m′ so that
this condition is satisfied.
Remark 1. We make some technical remarks.
(1) The probability estimate in (2.9) is sharp (up to constants) in view
of the lower bound result Theorem 2.1 in [28], thus closing a gap that
has not been attainable in a general setting by using [21, 23] directly.
Beyond of theoretical interest in its own right, the sharp estimate
helps us to derive an oracle inequality for the posterior mean as an
important frequentist summary of the posterior distribution. Such
sharp estimates have been derived separately in different models, e.g.
the sparse normal mean model [16], the sparse PCA model [19], and
the structured linear model [18], to name a few.
(2) Assumption A implies, among other things, the existence of a good
test (cf. Lemma 1). In this sense our approach here falls into the
general testing approach adopted in [21, 23]. The testing approach
has difficulties in handling non-intrinsic metrics, cf. [28]. Some
alternative approaches for dealing with non-intrinsic metrics can be
found in [14, 28, 49].
(3) The constants $\{C_i\}_{i=1}^4$ in Theorem 1 depend at most polynomially on the constants involved in Assumption A. This allows some flexibility in the choice of the constants therein. In fact,
Bernstein inequality in some dependent cases comes with logarithmic
factors in n, cf. [1, 37].
Remark 2. We compare our results with Theorems 4 and 5 of [48]. Both
their results and our Theorem 1 shed light on the general problem of Bayes
model selection, while differing in several important aspects:
(1) Our Theorem 1 hinges on the new Bernstein-inequality condition,
while the results of [48] are based on the classical mechanism of [23]
which requires the construction of tests. Some merits of our approach
will be clear from Section 3 and (2) below along with Remark 1.
(2) The probability estimate in [48] for the posterior distribution outside
a ball of radius at the targeted contraction rate is asymptotic in nature, while our Theorem 1 provides non-asymptotic sharp estimates.
(3) Theorem 4 of [48] targets exact model selection consistency, under
a set of additional ‘separation’ assumptions. Our Theorem 1 (2) requires no extra assumptions, and shows the concentration behavior
of the posterior distribution on the ‘best’ model that balances the
bias-variance tradeoff. This is significant in non-parametric problems: the true signal typically need not belong to any specific model.
(4) Theorem 5 of [48] contains a term involving the cardinality of the
models, and hence the models need to be a priori finitely many for their
bound to be finite. It remains open to see if this can be removed.
2.3. Proof sketch. Here we sketch the main steps in the proof of our main
abstract result Theorem 1. The details will be deferred to Section 5. The
proof can be roughly divided into two main steps.
(Step 1) We first solve a localized problem on the model $\mathcal{F}_m$ by 'projecting' the underlying probability measure from $P_{f_0}$ to $P_{f_{0,m}}$. In particular, we establish an exponential deviation inequality for the posterior contraction rate via the existence of tests guaranteed by Lemma 1:
(2.12)  $P_{f_{0,m}}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > M\delta_{n,\tilde m}^2\,\big|\,X^{(n)}\bigr) \lesssim \exp\bigl(-c_1 n\delta_{n,\tilde m}^2\bigr),$
where $\tilde m$ is the smallest index $\ge m$ such that $\delta_{n,\tilde m}^2 \gtrsim \ell_n^2(f_0,f_{0,m})$. This index may deviate from $m$ substantially for small indices.
(Step 2) We argue that the cost of the projection in Step 1 is essentially a multiplicative $O\bigl(\exp(c_2 n\delta_{n,\tilde m}^2)\bigr)$ factor in the probability bound (2.12), cf. Lemma 8, which is made possible by the Bernstein-inequality Assumption A. Then, by requiring $c_1\gg c_2$, we obtain the conclusion by the definition of $\delta_{n,\tilde m}^2$ and the fact that $\delta_{n,\tilde m}^2 \approx \ell_n^2(f_0,f_{0,m})\vee\delta_{n,m}^2$.
The existence of tests (Lemma 1) is used in step 1. Step 2 is inspired by
the work of [17, 5] in the context of frequentist least squares estimator over
a polyhedral cone in the Gaussian regression setting, where the localized
problem therein is estimation of signals on a low-dimensional face (where
‘risk adaptation’ happens). In the Bayesian context, [16, 15] used a change of
measure argument in the Gaussian regression setting for a different purpose.
Our proof strategy can be viewed as an extension of these ideas beyond the
(simple) Gaussian regression model.
3. Statistical experiments
In this section we work out a couple of specific statistical experiments
that satisfy Bernstein-inequality Assumption A to illustrate the scope of
the general theory in Section 2. Some of the examples come from [23];
we identify the ‘intrinsic’ metric to use in these examples. Since Bernstein
inequality is a fundamental probabilistic tool, and has been derived in a
wide range of complicated (dependent) settings ([1, 37]), we expect many
more experiments to be covered beyond the ones we present here.
3.1. Regression models. Suppose we want to estimate $\theta = (\theta_1,\ldots,\theta_n)$ in a given model $\Theta_n\subset\mathbb{R}^n$ in the following regression models: for $1\le i\le n$,
(1) (Gaussian) $X_i = \theta_i + \varepsilon_i$ where the $\varepsilon_i$'s are i.i.d. $N(0,1)$ and $\Theta_n\subset\mathbb{R}^n$;
(2) (Binary) $X_i \sim_{\mathrm{i.i.d.}} \mathrm{Bern}(\theta_i)$ where $\Theta_n\subset[\eta,1-\eta]^n$ for some $\eta>0$;
(3) (Poisson) $X_i \sim_{\mathrm{i.i.d.}} \mathrm{Poisson}(\theta_i)$ where $\Theta_n\subset[1/M,M]^n$ for some $M\ge 1$.
We will use the following metric: for any $\theta_0,\theta_1\in\Theta_n$,
$\ell_n^2(\theta_0,\theta_1) \equiv \dfrac{1}{n}\sum_{i=1}^n (\theta_{0,i}-\theta_{1,i})^2.$

Lemma 2. Assumption A holds for $\ell_n$ with
(1) (Gaussian) $c_1=c_2=c_3=\kappa_g=1$ and $\kappa_\Gamma=0$;
(2) (Binary) $\kappa_\Gamma=0$ and the constants $\{c_i\}_{i=1}^3, \kappa_g$ depend on $\eta$ only;
(3) (Poisson) constants $\{c_i\}_{i=1}^3, \kappa$ depending on $M$ only.
Theorem 2. For the Gaussian/binary/Poisson regression models, let $d_n \equiv \ell_n$.
If Assumptions B-C hold, then (2.9)-(2.11) hold.
Using similar techniques we can derive analogous results for Gaussian
regression with random design and white noise model. We omit the details.
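For orientation, the sub-Gaussian claim in the Gaussian case of Lemma 2 follows from a one-line computation (a standard verification recorded here for convenience; it is not reproduced from the paper's proofs). Writing $X_i = \theta_{0,i}+\varepsilon_i$ under $P_{\theta_0}^{(n)}$,
$$\log\frac{p_{\theta_0}^{(n)}}{p_{\theta_1}^{(n)}}(X) = \frac{n}{2}\,\ell_n^2(\theta_0,\theta_1) + \sum_{i=1}^n(\theta_{0,i}-\theta_{1,i})\varepsilon_i,$$
so the centered log likelihood ratio is exactly $N\bigl(0, n\ell_n^2(\theta_0,\theta_1)\bigr)$ and
$$P_{\theta_0}^{(n)}\exp\Bigl(\lambda\Bigl(\log\tfrac{p_{\theta_0}^{(n)}}{p_{\theta_1}^{(n)}} - P_{\theta_0}^{(n)}\log\tfrac{p_{\theta_0}^{(n)}}{p_{\theta_1}^{(n)}}\Bigr)\Bigr) = \exp\Bigl(\tfrac{\lambda^2}{2}\, n\ell_n^2(\theta_0,\theta_1)\Bigr) = \exp\bigl(\psi_{n\ell_n^2(\theta_0,\theta_1),\,0}(\lambda)\bigr),$$
which is the Bernstein inequality of Assumption A with $\kappa_\Gamma=0$ (so no restriction on $\lambda$).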
3.2. Density estimation. Suppose $X_1,\ldots,X_n$ are i.i.d. samples from a density $f\in\mathcal{F}$ with respect to a measure $\nu$ on the sample space $(\mathcal{X},\mathcal{A})$. We consider the following form of $\mathcal{F}$: $f(x) = e^{g(x)}/\int_{\mathcal{X}} e^{g}\,d\nu$ for some $g\in\mathcal{G}$, for all $x\in\mathcal{X}$. A natural metric to use for density estimation is the Hellinger metric: for any $f_0,f_1\in\mathcal{F}$,
$h^2(f_0,f_1) \equiv \dfrac12\int_{\mathcal{X}} \bigl(\sqrt{f_0}-\sqrt{f_1}\bigr)^2\,d\nu.$
Lemma 3. Suppose that G is uniformly bounded. Then Assumption A is
satisfied for h with constants {ci }3i=1 , κ depending on G only.
Theorem 3. For density estimation, let dn ≡ h. If G is a class of uniformly
bounded functions and Assumptions B-C hold, then (2.9)-(2.11) hold.
3.3. Gaussian autoregression. Suppose X0 , X1 , . . . , Xn is generated from
Xi = f (Xi−1 ) + εi for 1 ≤ i ≤ n, where f belongs to a function class F with
a uniform bound M , and εi ’s are i.i.d. N (0, 1). Then Xn is a Markov chain
with transition density pf (y|x) = φ(y − f (x)) where φ is the normal density.
By the arguments on page 209 of [23], this chain has a unique stationary
distribution with density qf with respect to the Lebesgue measure λ on R.
We assume that X0 is generated from this stationary distribution under the
true $f$. Consider the following metric: for any $f_0,f_1\in\mathcal{F}$,
$d_{r,M}^2(f_0,f_1) \equiv \int (f_0-f_1)^2\, r_M\, d\lambda, \qquad \text{where } r_M(x) \equiv \tfrac12\bigl(\phi(x-M)+\phi(x+M)\bigr).$
Lemma 4. Suppose that F is uniformly bounded by M . Then Assumption
A is satisfied for dr,M with constants {ci }3i=1 , κ depending on M only.
Theorem 4. For the Gaussian autoregression model, if $\mathcal{F}$ is uniformly bounded
by M , let dn ≡ dr,M . If Assumptions B-C hold, then (2.9)-(2.11) hold.
Compared with results obtained in [23] (cf. Section 7.4), we identify the
intrinsic metric dr,M (a weighted L2 norm) for the Gaussian autoregression
model, while [23] uses a weighted Ls (s > 2) norm to check the local entropy
condition, and an average Hellinger metric as the loss function.
3.4. Gaussian time series. Suppose $X_1, X_2,\ldots$ is a stationary Gaussian process with spectral density $f\in\mathcal{F}$ defined on $[-\pi,\pi]$. Then the covariance matrix of $X^{(n)} = (X_1,\ldots,X_n)$ is given by $(T_n(f))_{kl} \equiv \int_{-\pi}^{\pi} e^{\sqrt{-1}\lambda(k-l)} f(\lambda)\,d\lambda$. We consider a special form of $\mathcal{F}$: $f\equiv f_g \equiv \exp(g)$ for some $g\in\mathcal{G}$. We will use the following metric: for any $g_0,g_1\in\mathcal{G}$,
$D_n^2(g_0,g_1) \equiv \dfrac1n \|T_n(f_{g_0}) - T_n(f_{g_1})\|_F^2,$
where $\|\cdot\|_F$ denotes the matrix Frobenius norm.
Lemma 5. Suppose that G is uniformly bounded. Then Assumption A is
satisfied for Dn with constants {ci }3i=1 , κ depending on G only.
Theorem 5. For the Gaussian time series model, if G is uniformly bounded,
let dn ≡ Dn . If Assumptions B-C hold, then (2.9)-(2.11) hold.
The metric Dn can always be bounded from above by the usual L2 metric,
and can be related to the L2 metric from below (cf. Lemma B.3 of [20]).
Our result then shows that the metric to use in the entropy condition can
be weakened to the usual L2 norm rather than the much stronger L∞ norm
as in page 202 of [23].
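The metric $D_n$ is straightforward to compute numerically. The following small sketch (illustrative, not from the paper; it assumes a real, even spectral density, so that $T_n(f)$ is a real symmetric Toeplitz matrix) evaluates $D_n$ for two log-spectral densities:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.integrate import quad

def toeplitz_cov(g, n):
    """Covariance matrix T_n(f) for the spectral density f = exp(g), g even."""
    # For even f the Fourier coefficients reduce to cosine integrals.
    coeffs = [quad(lambda lam: np.cos(k * lam) * np.exp(g(lam)), -np.pi, np.pi)[0]
              for k in range(n)]
    return toeplitz(coeffs)

def D_n(g0, g1, n):
    diff = toeplitz_cov(g0, n) - toeplitz_cov(g1, n)
    return np.sqrt(np.sum(diff ** 2) / n)   # (1/n) * squared Frobenius norm, then sqrt

g0 = lambda lam: 0.0                  # constant spectral density (white noise)
g1 = lambda lam: 0.3 * np.cos(lam)    # a mild perturbation
print(D_n(g0, g1, n=50))
```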
3.5. Covariance matrix estimation. Suppose X1 , . . . , Xn ∈ Rp are i.i.d.
observations from Np (0, Σ) where Σ ∈ Sp (L), the set of p × p covariance
matrices whose minimal and maximal eigenvalues are bounded by L−1 and
L, respectively. We will use the Frobenius norm: for any Σ0 , Σ1 ∈ Sp (L),
$D_F^2(\Sigma_0,\Sigma_1) \equiv \|\Sigma_0 - \Sigma_1\|_F^2.$
Lemma 6. Under the above setting, Assumption A holds for the metric DF
with constants {ci }3i=1 , κ depending on L only.
Theorem 6. For covariance matrix estimation in Sp (L) for some L < ∞,
let dn ≡ DF . If Assumptions B-C hold, then (2.9)-(2.11) hold.
4. Applications
In this section, we consider concrete applications. As we have seen in
previous sections, construction of adaptive Bayes procedures in the intrinsic
metric of an experiment essentially reduces to the design of non-adaptive
priors, and hence we only consider the simplest setup for a particular structure. For instance, once we understand how to analyze the convex Gaussian
regression problem, we can similarly consider convex binary/Poisson regression, convex density estimation, Gaussian autoregression with convex functions, Gaussian time series with convex spectral density problems in their
respective intrinsic metrics. Hence our emphasis in the examples will be
focused on the analysis of different model structures. Models that can be
handled using similar techniques will not be presented in detail (e.g. Remark
4).
We will only explicitly state the corresponding oracle inequalities in the
form of (2.9) for each example to be considered below. The corresponding
results for (2.10) and (2.11) are omitted.
4.1. Trace regression. Consider fitting the Gaussian regression model yi =
f0 (xi ) + εi (1 ≤ i ≤ n) by F ≡ {fA : A ∈ Rm1 ×m2 } where fA (x) = tr(x⊤ A)
for all x ∈ X ≡ Rm1 ×m2 . Let m ≡ m1 ∧ m2 and m̄ ≡ m1 ∨ m2 . The index set
is I = I1 ∪ I2 ≡ {1, . . . , rmax } ∪ {rmax + 1, . . .} where rmax ≤ m. For r ∈ I1 ,
let Fr ≡ {fA : A ∈ Rm1 ×m2 , rank(A) ≤ r}, and for r ∈ I2 , Fr ≡ Frmax 5.
Although various Bayesian methods have been proposed in the literature (see [2] for a state-of-the-art summary), theoretical understanding has been limited. [34] derived an oracle inequality for an exponentially aggregated estimator for the matrix completion problem. Their result is purely frequentist. Below we consider a two-step prior similar to [34, 2], and derive the corresponding posterior contraction rates.
For a matrix $B=(b_{ij})\in\mathbb{R}^{m_1\times m_2}$ let $\|B\|_p$ denote its Schatten $p$-norm$^6$. $p=1$ and $2$ correspond to the nuclear norm and the Frobenius norm respectively. To introduce the notion of RIP, let $\mathcal{X}:\mathbb{R}^{m_1\times m_2}\to\mathbb{R}^n$ be the linear map defined via $A\mapsto (\mathrm{tr}(x_i^\top A))_{i=1}^n$.
Definition 1. The linear map $\mathcal{X}:\mathbb{R}^{m_1\times m_2}\to\mathbb{R}^n$ is said to satisfy RIP$(r,\nu_r)$ for some $1\le r\le r_{\max}$ and some $\nu_r = (\underline{\nu}_r,\bar{\nu}_r)$ with $0<\underline{\nu}_r\le\bar{\nu}_r<\infty$ iff
$\underline{\nu}_r \le \dfrac{\|\mathcal{X}(A)\|_2}{\sqrt{n}\,\|A\|_2} \le \bar{\nu}_r$
holds for all matrices $A\in\mathbb{R}^{m_1\times m_2}$ such that $\mathrm{rank}(A)\le r$.
For r > rmax , X satisfies RIP(r, νr ) iff X satisfies RIP(rmax , νr ). Furthermore, X : Rm1 ×m2 → Rn is said to satisfy uniform RIP (ν; I) on an index
set I iff X satisfies RIP(2r, ν) for all r ∈ I.
RIP(r, νr ) is a variant of the RIP condition introduced in [13, 12, 40] with
scaling factors ν̄r = 1/(1 − δr ) and ν r = 1/(1 + δr ) for some 0 < δr < 1.
Example 1 (Matrix completion). Suppose that $x_i\in\mathbb{R}^{m_1\times m_2}$ takes value 1 at one position and 0 otherwise. Further assume that $\underline{A} \le |A_0|_{ij}\le \bar{A}$ for all $1\le i\le m_1$ and $1\le j\le m_2$$^7$. Let $\Omega\equiv\Omega_{\mathcal{X}}$ denote the indices at which the $x_i$'s take value 1. Then $\|\mathcal{X}(A)\|_2 = \|A 1_\Omega\|_2$. Easy calculations show that we can take $\nu = (\bar\nu, \underline\nu)$ defined by
$\bar\nu = \dfrac{\sqrt{m_1 m_2\wedge n}}{\sqrt{n m_1 m_2}}\cdot\dfrac{\bar A}{\underline A}, \qquad \underline\nu = \dfrac{\sqrt{m_1 m_2\wedge n}}{\sqrt{n m_1 m_2}}\cdot\dfrac{\underline A}{\bar A},$
so that $\mathcal{X}$ is uniform RIP$(\nu;\mathcal{I})$.

$^5$This trick of defining models for high-dimensional experiments will also be used in other applications in later subsections, but we will not explicitly state it again.
$^6$That is, $\|B\|_p \equiv \bigl(\sum_{j=1}^m \sigma_j(B)^p\bigr)^{1/p}$, where $\{\sigma_j(B)\}$ are the singular values of $B$.
$^7$This assumption is usually satisfied in applications: in fact, in the Netflix problem (which is the main motivating example for matrix completion), $A_0$ is the rating matrix with rows indexing the users and columns indexing movies, and we can simply take $\underline A = 1$ (one star) and $\bar A = 5$ (five stars).
Ā
Example 2 (Gaussian measurement ensembles). Suppose xi ’s are i.i.d. random matrices whose entries are i.i.d. standard normal. Theorem 2.3 of [12]
entails that X is uniform RIP(ν; I) with ν̄ = 1 + δ, ν = 1 − δ for some
δ ∈ (0, 1), with probability at least 1 − C exp(−cn)8, provided n & m̄rmax .
Consider a prior $\Lambda_n$ on $\mathcal{I}$ of the form
(4.1)  $\lambda_{n,r} \propto \exp\bigl(-c_{\mathrm{tr}}\, (m_1+m_2)\, r \log\bar m\bigr).$
Given the chosen index $r\in\mathcal{I}$, a prior on $\mathcal{F}_r$ is induced by a prior on all $m_1\times m_2$ matrices of the form $\sum_{i=1}^r u_i v_i^\top$ where $u_i\in\mathbb{R}^{m_1}$ and $v_i\in\mathbb{R}^{m_2}$. Here we use a product prior distribution $G$ with Lebesgue density $(g_1\otimes g_2)^{\otimes r}$ on $(\mathbb{R}^{m_1}\times\mathbb{R}^{m_2})^r$. For simplicity we use $g_i \equiv g^{\otimes m_i}$ for $i=1,2$, where $g$ is symmetric about 0 and non-increasing on $(0,\infty)$$^9$. Let $A_{0,r}\in\arg\min_{B:\,\mathrm{rank}(B)\le r}\ell_n^2(f_B,f_0)$, and $\tau_{r,g}^{\mathrm{tr}} \equiv g\bigl(\sigma_{\max}(A_{0,r})+1\bigr)$, where $\sigma_{\max}$ denotes the largest singular value.
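The two-step prior above is easy to simulate. The following sketch is illustrative only (the paper merely requires $g$ symmetric and non-increasing on $(0,\infty)$; a standard Cauchy density is one heavy-tailed choice, in the spirit of Lemma 7 below):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, c_tr = 30, 20, 1.0
r_max, m_bar = min(m1, m2), max(m1, m2)

# First-step prior lambda_{n,r} ∝ exp(-c_tr (m1+m2) r log m_bar), r = 1..r_max.
r_grid = np.arange(1, r_max + 1)
logw = -c_tr * (m1 + m2) * r_grid * np.log(m_bar)
probs = np.exp(logw - logw.max()); probs /= probs.sum()
r = rng.choice(r_grid, p=probs)

# Second-step prior: product density (g1 ⊗ g2)^{⊗ r} with g standard Cauchy.
U = rng.standard_cauchy(size=(m1, r))
V = rng.standard_cauchy(size=(m2, r))
A = U @ V.T                       # one draw, with rank(A) <= r
print("sampled rank bound r =", r, " realized rank =", np.linalg.matrix_rank(A))
```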
Theorem 7. Fix $0<\eta<1/2$ and $r_{\max}\le n$. Suppose that there exists some $\mathcal{M}\subset\mathcal{I}$ such that the linear map $\mathcal{X}:\mathbb{R}^{m_1\times m_2}\to\mathbb{R}^n$ satisfies uniform RIP$(\nu;\mathcal{M})$, and that for all $r\in\mathcal{M}$, we have
(4.2)  $\tau_{r,g}^{\mathrm{tr}} \ge e^{-\log\bar m/2\eta}, \qquad \bar m \ge 3 \vee \bigl(3\bar\nu(1\vee\sigma_{\max}(A_{0,r}))\,n\bigr)^{2\eta}.$
Then there exists some $c_{\mathrm{tr}}$ in (4.1) depending on $\bar\nu/\underline\nu, \eta$ such that for any $r\in\mathcal{M}$,
(4.3)  $P_{f_0}^{(n)}\Pi_n\Bigl(A\in\mathbb{R}^{m_1\times m_2}: \ell_n^2(f_A,f_0) > C_1^{\mathrm{tr}}\Bigl(\inf_{B:\,\mathrm{rank}(B)\le r}\ell_n^2(f_0,f_B) + \frac{(m_1+m_2)r\log\bar m}{n}\Bigr)\,\Big|\, Y^{(n)}\Bigr) \le C_2^{\mathrm{tr}}\exp\bigl(-n\varepsilon_{n,r}^2/C_2^{\mathrm{tr}}\bigr).$
Here $\varepsilon_{n,r}^2 \equiv \max\bigl\{\inf_{B:\,\mathrm{rank}(B)\le r}\ell_n^2(f_0,f_B),\ \frac{(m_1+m_2)r\log\bar m}{n}\bigr\}$, and the constants $C_i^{\mathrm{tr}}$ $(i=1,2)$ depend on $\bar\nu/\underline\nu, \eta$.
By Theorem 5 of [41], the rate in (4.3) is minimax optimal up to a logarithmic factor. To the best of the author's knowledge, the theorem above is the first result in the literature that addresses the posterior contraction rate
in the context of trace regression in a fully Bayesian setup.
(4.2) may be verified in a case-by-case manner; or generically we can take
M = {r0 , r0 + 1, . . .} if the model is well specified, at the cost of sacrificing
$^8$Note here we used the union bound to get a probability estimate $r_{\max}\exp(-cn) \lesssim \exp(-c'n)$ for some $c'<c$ under the assumption that $n\gtrsim \bar m r_{\max}$.
9We will always use such g in the prior design in the examples in this section.
the form of oracle inequalities (but still get nearly optimal posterior contraction rates) in (4.3). In particular, the first condition of (4.2) prevents
the largest eigenvalue of $A_{0,r}$ from growing too fast. This is in a similar spirit to Theorem 2.8 of [16], which shows that the magnitude of the signals cannot
be too large for light-tailed priors to work in the sparse normal mean model.
The second condition of (4.2) is typically a mild technical condition: we only
need to choose η > 0 small enough.
4.2. Isotonic regression. Consider fitting the Gaussian regression model $Y_i = f_0(x_i)+\varepsilon_i$ by $\mathcal{F}\equiv\{f:[0,1]\to\mathbb{R}: f \text{ is non-decreasing}\}$. For simplicity the design points are assumed to be $x_i = i/(n+1)$ for all $1\le i\le n$. Let $\mathcal{F}_m \equiv \{f\in\mathcal{F}: f \text{ is piecewise constant with at most } m \text{ constant pieces}\}$. Consider the following prior $\Lambda_n$ on $\mathcal{I}=\mathbb{N}$:
(4.4)  $\lambda_{n,m} \propto \exp\bigl(-c_{\mathrm{iso}}\, m\log(en)\bigr).$
Let $g_m \equiv g^{\otimes m}$ where $g$ is symmetric and non-increasing on $(0,\infty)$. Then $\bar g_m(\mu) \equiv m!\, g_m(\mu)\, 1_{\{\mu_1\le\ldots\le\mu_m\}}(\mu)$ is a valid density on $\{\mu_1\le\ldots\le\mu_m\}$. Given a model $\mathcal{F}_m$ chosen by the prior $\Lambda_n$, we randomly pick a set of change points $\{x_{i(k)}\}_{k=1}^m$ ($i(1)<\ldots<i(m)$) and put a prior $\bar g_m$ on the $\{f(x_{i(k)})\}$'s. [29] proposed a similar prior with $\Lambda_n$ being uniform, since they assumed the maximum number of change points is known a priori. Below we derive a theoretical result without assuming this knowledge. Let $f_{0,m}\in\arg\min_{g\in\mathcal{F}_m}\ell_n^2(f_0,g)$, and $\tau_{m,g}^{\mathrm{iso}} = g\bigl(\|f_{0,m}\|_\infty + 1\bigr)$$^{10}$.
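One draw from this two-step prior can be simulated as follows (a small sketch of ours, not from the paper; the ordered prior $\bar g_m$ is sampled by drawing $m$ i.i.d. values from $g$ and sorting them, with $g$ standard Cauchy as one heavy-tailed choice consistent with Lemma 7):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c_iso, m_max = 100, 1.0, 30
x = np.arange(1, n + 1) / (n + 1)

# First-step prior lambda_{n,m} ∝ exp(-c_iso m log(en)) on m = 1..m_max.
m_grid = np.arange(1, m_max + 1)
logw = -c_iso * m_grid * np.log(np.e * n)
probs = np.exp(logw - logw.max()); probs /= probs.sum()
m = rng.choice(m_grid, p=probs)

# Second-step prior: m change points among the design points, then
# ordered heights mu_1 <= ... <= mu_m drawn from g_bar_m.
change_idx = np.sort(rng.choice(n, size=m, replace=False))
heights = np.sort(rng.standard_cauchy(size=m))

# Non-decreasing step function at the design points: f(x_i) uses the height
# attached to the largest change point <= x_i (first height before that).
piece = np.searchsorted(change_idx, np.arange(n), side="right") - 1
f = heights[np.clip(piece, 0, m - 1)]
print("m =", m, " non-decreasing:", bool(np.all(np.diff(f) >= 0)))
```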
Theorem 8. Fix $0<\eta<1/2$. Suppose that
(4.5)  $\tau_{m,g}^{\mathrm{iso}} \ge e^{-\log(en)/2\eta}.$
Then there exists some $c_{\mathrm{iso}}$ in (4.4) depending on $\eta$ such that
(4.6)  $P_{f_0}^{(n)}\Pi_n\Bigl(f\in\mathcal{F}: \ell_n^2(f,f_0) > C_1^{\mathrm{iso}}\Bigl(\inf_{g\in\mathcal{F}_m}\ell_n^2(f_0,g) + \frac{m\log(en)}{n}\Bigr)\,\Big|\,Y^{(n)}\Bigr) \le C_2^{\mathrm{iso}}\exp\bigl(-n(\varepsilon_{n,m}^{\mathrm{iso}})^2/C_2^{\mathrm{iso}}\bigr).$
Here $(\varepsilon_{n,m}^{\mathrm{iso}})^2 \equiv \max\bigl\{\inf_{g\in\mathcal{F}_m}\ell_n^2(f_0,g),\ \frac{m\log(en)}{n}\bigr\}$, and the constants $C_i^{\mathrm{iso}}$ $(i=1,2)$ depend on $\eta$.
1, 2) depend on η.
(4.6) implies that if f0 is piecewise constant, the posterior distribution
contracts at nearly a parametric rate. (4.5) can be checked by the following.
Lemma 7. If f0 is square integrable, and the prior density g is heavy-tailed
in the sense that there exists some α > 0 such that lim inf |x|→∞ xα g(x) > 0.
Then for any η ∈ (0, 1/α), (4.5) holds uniformly in all m ∈ N for n large
enough depending on α and kf0 kL2 ([0,1]) .
$^{10}$The value of $f_{0,m}$ outside of $[1/(n+1), n/(n+1)]$ can be defined in a canonical way by extending $f_{0,m}(1/(n+1))$ and $f_{0,m}(n/(n+1))$ towards the endpoints.
4.3. Convex regression. Consider fitting the Gaussian regression model $Y_i = f_0(x_i)+\varepsilon_i$ by $\mathcal{F}$, the class of convex functions on $\mathcal{X}=[0,1]^d$. Let $\mathcal{F}_m \equiv \{f(x) = \max_{1\le i\le m}(a_i\cdot x + b_i): a_i\in\mathbb{R}^d, b_i\in\mathbb{R}\}$ denote the class of piecewise affine convex functions with at most $m$ pieces.
We will focus on the multivariate case since the univariate case can be easily derived using the techniques exploited in isotonic regression. A prior on each model $\mathcal{F}_m$ can be induced by a prior on the slopes and the intercepts $\{(a_i,b_i)\in\mathbb{R}^d\times\mathbb{R}\}_{i=1}^m$. We use a prior with density $\bigotimes_{i=1}^m (g^{\otimes d}\otimes g)$ on $(\mathbb{R}^d\times\mathbb{R})^m$ to induce a prior on $\mathcal{F}_m$. Let $f_{0,m}\in\arg\min_{g\in\mathcal{F}_m}\ell_n^2(f_0,g)$ admit the representation given by $f_{0,m}(x) \equiv \max_{1\le i\le m}\bigl(a_i^{(m)}\cdot x + b_i^{(m)}\bigr)$. Further let
$\tau_{m,g}^{\mathrm{cvx}} \equiv \min_{1\le i\le m}\bigl\{g\bigl(\|a_i^{(m)}\|_\infty+1\bigr),\, g\bigl(|b_i^{(m)}|+1\bigr)\bigr\}.$
The prior $\Lambda_n$ we will use on the index $\mathcal{I}=\mathbb{N}$ is given by
(4.7)  $\lambda_{n,m} \propto \exp\bigl(-c_{\mathrm{cvx}}\, dm\log(3m)\cdot\log n\bigr).$
The first-step prior used in [27] is a Poisson proposal, which slightly differs
from (4.7) by a logarithmic factor. This would affect the contraction rate
only by a logarithmic factor.
Theorem 9. Fix $0<\eta<1/4$. Suppose that
(4.8)  $\tau_{m,g}^{\mathrm{cvx}} \ge e^{-\log n\cdot\log(3m)/8\eta},$
and $n\ge d$. Then there exists some $c_{\mathrm{cvx}}$ in (4.7) depending on $\eta$ such that
(4.9)  $P_{f_0}^{(n)}\Pi_n\Bigl(f\in\mathcal{F}: \ell_n^2(f,f_0) > C_1^{\mathrm{cvx}}\Bigl(\inf_{g\in\mathcal{F}_m}\ell_n^2(f_0,g) + \frac{d\log n\cdot m\log(3m)}{n}\Bigr)\,\Big|\,Y^{(n)}\Bigr) \le C_2^{\mathrm{cvx}}\exp\bigl(-n(\varepsilon_{n,m}^{\mathrm{cvx}})^2/C_2^{\mathrm{cvx}}\bigr).$
Here $(\varepsilon_{n,m}^{\mathrm{cvx}})^2 \equiv \max\bigl\{\inf_{g\in\mathcal{F}_m}\ell_n^2(f_0,g),\ \frac{d\log n\cdot m\log(3m)}{n}\bigr\}$, and the constants $C_i^{\mathrm{cvx}}$ $(i=1,2)$ depend on $\eta$.
Our oracle inequality shows that the posterior contraction rate of [27]
(Theorem 3.3 therein) is far from optimal. (4.8) can be satisfied by using
heavy-tailed priors $g(\cdot)$ in the same spirit as Lemma 7, provided that $f_0$ is square integrable and the design points are regular enough (e.g. using regular grids on
[0, 1]d ). Moreover, explicit rate results can be obtained using approximation
techniques in [26] (cf. Lemma 4.10 therein). We omit detailed derivations.
Remark 3. For univariate convex regression, the term log(3m) in (4.7)-(4.9)
can be removed. The logarithmic term is due to the fact that the pseudo-dimension of $\mathcal{F}_m$ scales as $m\log(3m)$ for $d\ge 2$, cf. Lemma 22.
Remark 4. Using similar priors and proof techniques we can construct a
(nearly) rate-optimal adaptive Bayes estimator for the support function regression problem for convex bodies [25]. There the models Fm are support
functions indexed by polytopes with m vertices, and a prior on Fm is induced
by a prior on the location of the m vertices. The pseudo-dimension of Fm
can be controlled using techniques developed in [25]. Details are omitted.
4.4. High-dimensional partially linear model. Consider fitting the Gaussian regression model Yi = f0 (xi , zi ) + εi where (xi , zi ) ∈ Rp × [0, 1], by a
partially linear model F ≡ {fβ,u (x, z) = x⊤ β + u(z) ≡ hβ (x) + u(z) : β ∈
Rp , u ∈ U } where the dimension of the parametric part can diverge. We
consider U to be the class of non-decreasing functions as an illustration (cf.
Section 4.2). Consider models F(s,m) ≡ {fβ,u : β ∈ B0 (s), u ∈ Um } where
Um denotes the class of piecewise constant non-decreasing functions with at
most m constant pieces, and B0 (s) ≡ {v ∈ Rp : |supp(v)| ≤ s}. In this
example the model index I is a 2-dimensional lattice. Our goal here is to
construct an estimator that satisfies an oracle inequality over the models
{F(s,m) }(s,m)∈N2 . Consider the following model selection prior:
(4.10)
λn,(s,m) ∝ exp −chp (s log(ep) + m log(en)) .
For a chosen model $\mathcal{F}_{(s,m)}$, consider the following prior $\Pi_{n,(s,m)}$: pick randomly a support $S\subset\{1,\ldots,p\}$ with $|S|=s$ and a set of change points $Q\equiv\{z_{i(k)}\}_{k=1}^m$ ($i(1)<\ldots<i(m)$), and then put a prior $g_{S,Q}$ on $\beta_S$ and the $u(z_{i(k)})$'s. For simplicity we use a product prior $g_{S,Q}\equiv g^{\otimes s}\otimes\bar g_m$, where $\bar g_m$ is the prior on $\{\mu_1\le\ldots\le\mu_m\}\subset\mathbb{R}^m$ constructed in Section 4.2. Let $f_{0,(s,m)}\in\arg\inf_{g\in\mathcal{F}_{(s,m)}}\ell_n^2(f_0,g)$, and write $f_{0,(s,m)}(x,z) = x^\top\beta_{0,s}+u_{0,m}(z)\equiv h_{0,s}(x)+u_{0,m}(z)$. Let $\tau_{s,g}\equiv g(\|\beta_{0,s}\|_\infty+1)$ and $\tau_{m,g}\equiv g(\|u_{0,m}\|_\infty+1)$. Let $X\in\mathbb{R}^{n\times p}$ be the design matrix, so that $X^\top X/n$ is normalized with diagonal elements taking value 1$^{11}$.
Theorem 10. Fix $0<\eta<1/4$ and $p\ge n$. Suppose that
(4.11)  $\tau_{s,g} \ge e^{-\log(ep)/2\eta}, \qquad \tau_{m,g}\ge e^{-\log(en)/2\eta}.$
Then there exists some $c_{\mathrm{hp}}$ in (4.10) depending on $\eta$ such that
(4.12)  $P_{f_0}^{(n)}\Pi_n\Bigl(f\in\mathcal{F}: \ell_n^2(f,f_0) > C_1^{\mathrm{hp}}\Bigl(\inf_{f_{\beta,u}\in\mathcal{F}_{(s,m)}}\ell_n^2(f_0,f_{\beta,u}) + \frac{s\log(ep)}{n}+\frac{m\log(en)}{n}\Bigr)\,\Big|\,Y^{(n)}\Bigr) \le C_2^{\mathrm{hp}}\exp\bigl(-n(\varepsilon_{n,(s,m)}^{\mathrm{hp}})^2/C_2^{\mathrm{hp}}\bigr).$
Here $(\varepsilon_{n,(s,m)}^{\mathrm{hp}})^2 \equiv \max\bigl\{\inf_{f_{\beta,u}\in\mathcal{F}_{(s,m)}}\ell_n^2(f_0,f_{\beta,u}),\ \frac{s\log(ep)+m\log(en)}{n}\bigr\}$, and the constants $C_i^{\mathrm{hp}}$ $(i=1,2)$ depend on $\eta$.
The first condition of (4.11) requires that the magnitude of kβ0,s k∞ does
not grow too fast; see also comments following Theorem 7. The second
condition of (4.11) is the same as in (4.5). When the model is well-specified
in the sense that f0 (x, z) = x⊤ β0 + u0 (z) for some β0 ∈ B0 (s0 ) and u0 ∈ U ,
the oracle rate in (4.12) becomes
(4.13)  $\dfrac{s_0\log(ep)}{n} + \inf_{m\in\mathbb{N}}\Bigl(\inf_{u\in\mathcal{U}_m}\ell_n^2(u_0,u) + \dfrac{m\log(en)}{n}\Bigr).$

$^{11}$This is a common assumption, cf. Section 6.1 of [10].
The two terms in the rate (4.13) trade off two structures of the experiment:
the sparsity of hβ (x) and the smoothness level of u(z). The resulting phase
transition of the rate (4.13) in terms of these structures is in a sense similar
to the results of [51, 50]. It is not hard to see that (4.13) cannot be improved
in general. Hence our Bayes estimator automatically serves as a theoretically
(nearly) optimal adaptive estimator for the high-dimensional partially linear
regression model.
4.5. Covariance matrix estimation in the sparse factor model. Suppose we observe i.i.d. X1 , . . . , Xn ∈ Rp from Np (0, Σ0 ). The covariance
matrix is modelled by the sparse factor model M ≡ ∪(k,s)∈N2 M(k,s) where
M(k,s) ≡ {Σ = ΛΛ⊤ + I : Λ ∈ R(k,s) (L)} with R(k,s) (L) ≡ {Λ ∈ Rp×k , Λ·j ∈
B0 (s), |σj (Λ)| ≤ L1/2 , ∀1 ≤ j ≤ k}. In this example, the model index I
is a 2-dimensional lattice, and the sparsity structure depends on the rank
structure. Consider the following model selection prior:
(4.14)
λn,(k,s) ∝ exp (−ccov ks log(ep)) .
Theorem 11. Let $p\ge n$. There exist some $c_{\mathrm{cov}}$ in (4.14) and some sequence of sieve priors $\Pi_{n,(k,s)}$ on $\mathcal{M}_{(k,s)}$ depending on $L$ such that
(4.15)  $P_{\Sigma_0}^{(n)}\Pi_n\Bigl(\Sigma\in\mathcal{M}: \|\Sigma-\Sigma_0\|_F^2 > C_1^{\mathrm{cov}}\Bigl(\inf_{\Sigma'\in\mathcal{M}_{(k,s)}}\|\Sigma'-\Sigma_0\|_F^2 + \frac{ks\log(ep)}{n}\Bigr)\,\Big|\,X^{(n)}\Bigr) \le C_2^{\mathrm{cov}}\exp\bigl(-n(\varepsilon_{n,(k,s)}^{\mathrm{cov}})^2/C_2^{\mathrm{cov}}\bigr).$
Here $(\varepsilon_{n,(k,s)}^{\mathrm{cov}})^2 \equiv \max\bigl\{\inf_{\Sigma'\in\mathcal{M}_{(s,k)}}\|\Sigma'-\Sigma_0\|_F^2,\ \frac{ks\log(ep)}{n}\bigr\}$, and the constants $C_i^{\mathrm{cov}}$ $(i=1,2)$ depend on $L$.
Since the spectral norm (non-intrinsic) is dominated by the Frobenius norm (intrinsic), our result shows that if the model is well-specified in the sense that $\Sigma_0\in\mathcal{M}$, then we can construct an adaptive Bayes estimator with convergence rates in both norms no worse than $\sqrt{ks\log p/n}$. [38] considered the same sparse factor model, where they proved a strictly sub-optimal rate $\sqrt{k^3 s\log p\log n/n}$ in spectral norm under $ks\gtrsim\log p$. [19] considered a closely related sparse PCA problem, where the convergence rate under the spectral norm achieves the same rate as here (cf. Theorem 4.1 therein), while a factor of $\sqrt{k}$ is lost when using the Frobenius norm as a loss function (cf. Remark 4.3 therein).
It should be mentioned that the sieve prior Πn,(k,s) is constructed using the metric entropy of M(k,s) and hence the resulting Bayes estimator
and the posterior mean as a point estimator are purely theoretical. We use
this example to illustrate (i) the construction scheme of a (nearly) optimal
adaptive procedure for a multi-structured experiment based on the metric
entropy of the underlying parameter space, and (ii) derivation of contraction rates in non-intrinsic metrics when these metrics can be related to the
intrinsic metrics nicely.
5. Proofs for the main results
5.1. Proof of Theorem 1: main steps. First we need a lemma allowing
a change-of-measure argument.
Lemma 8. Let Assumption A hold. There exists some constant $c_4\ge 1$ only depending on $c_1, c_3$ and $\kappa$ such that for any random variable $U\in[0,1]$, any $\delta_n\ge d_n(f_0,f_1)$ and any $j\in\mathbb{N}$,
$P_{f_0}^{(n)} U \le c_4\Bigl[P_{f_1}^{(n)} U\cdot e^{c_4 nj\delta_n^2} + e^{-c_4^{-1} nj\delta_n^2}\Bigr].$
The next propositions solve the posterior contraction problem for the 'local' model $\mathcal{F}_m$.

Proposition 2. Fix $m\in\mathcal{M}$ such that $\delta_{n,m}^2 \ge d_n^2(f_0,f_{0,m})$. Then there exists some constant $c_8\ge 1$ (depending on the constants in Assumption A) such that for $j\ge 8c^2/c_7 h$,
(5.1)  $P_{f_{0,m}}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr) \le c_8\, e^{-njh\delta_{n,m}^2/c_8 c^2}.$

Proposition 3. Fix $m\in\mathcal{M}$ such that $\delta_{n,m}^2 < d_n^2(f_0,f_{0,m})$. Let $\tilde m \equiv \tilde m(m) \equiv \inf\{m'\in\mathcal{M},\, m'\ge m: \delta_{n,m'}\ge d_n(f_0,f_{0,m})\}$. Then for $j\ge 8c^2/c_7 h$,
(5.2)  $P_{f_{0,m}}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^4(2jh)^\gamma d_n^2(f_0,f_{0,m})\,\big|\,X^{(n)}\bigr) \le c_8\, e^{-njh\delta_{n,\tilde m}^2/c_8 c^2}.$
The proofs of these results will be detailed in later subsections.
Proof of Theorem 1: main steps. Instead of (2.9), we will prove a slightly stronger statement as follows: for any $j\ge 8c^2/c_7 h$ and $h\ge 2c_4 c_8 c^2$,
(5.3)  $P_{f_0}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_0) > c_1 j^\gamma\bigl(\inf_{g\in\mathcal{F}_m} d_n^2(f_0,g) + \delta_{n,m}^2\bigr)\,\big|\,X^{(n)}\bigr) \le c_2\, e^{-jn\varepsilon_{n,m}^2/c_2}.$
Here the constants $c_i$ $(i=1,2)$ depend on the constants involved in Assumption A and on $c, h$.
Proof of (5.3). First consider the overfitting case. By Proposition 2 and Lemma 8, we see that when $\delta_{n,m}^2 \ge d_n^2(f_0,f_{0,m})$ holds, for $j\ge 8c^2/c_7 h$,
$P_{f_0}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_0) > 2d_n^2(f_0,f_{0,m}) + 2c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr)$
$\le P_{f_0}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr)$
(5.4)  $\le c_4\Bigl[P_{f_{0,m}}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr)\, e^{c_4 nj\delta_{n,m}^2} + e^{-c_4^{-1}nj\delta_{n,m}^2}\Bigr]$
$\le c_8 c_4\, e^{-nj\delta_{n,m}^2\bigl(\frac{h}{c_8 c^2}-c_4\bigr)} + c_4\, e^{-c_4^{-1}nj\delta_{n,m}^2} \le 2c_8 c_4\, e^{-jn\delta_{n,m}^2\min\{c_4^{-1},\, c_4\}}.$
Here in the second line we used the fact that $d_n^2(f,f_{0,m}) \ge d_n^2(f,f_0)/2 - d_n^2(f_0,f_{0,m})$. (5.4) completes the estimate for overfitting $m\in\mathcal{M}$.
Next consider the underfitting case: fix $m\in\mathcal{M}$ such that $\delta_{n,m}^2 < d_n^2(f_0,f_{0,m})$. Apply Proposition 3 and Lemma 8, and use arguments similar to (5.4) to see that for $j\ge 8c^2/c_7 h$,
(5.5)  $P_{f_0}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_0) > \bigl(2c^4(2jh)^\gamma + 2\bigr)\, d_n^2(f_0,f_{0,m})\,\big|\,X^{(n)}\bigr)$
$\le c_4\Bigl[P_{f_{0,m}}^{(n)}\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^4(2jh)^\gamma d_n^2(f_0,f_{0,m})\,\big|\,X^{(n)}\bigr)\, e^{c_4 nj\delta_{n,\tilde m}^2} + e^{-c_4^{-1}jn\delta_{n,\tilde m}^2}\Bigr] \le 2c_8c_4\, e^{-nj\delta_{n,\tilde m}^2\min\{c_4^{-1},\,c_4\}}.$
Here in the second line we used (i) $2d_n^2(f,f_{0,m}) \ge d_n^2(f,f_0) - 2d_n^2(f_0,f_{0,m})$, and (ii) $\delta_{n,\tilde m}\ge d_n(f_0,f_{0,m})$. The claim of (5.3) follows by combining (5.4) and (5.5).
Proof of (2.11). The proof is essentially integration of tail estimates by a peeling device. Let the event $A_j$ be defined via
$A_j := \bigl\{c_1 j^\gamma\bigl(d_n^2(f_0,f_{0,m})+\delta_{n,m}^2\bigr) < d_n^2(f,f_0) \le c_1(j+1)^\gamma\bigl(d_n^2(f_0,f_{0,m})+\delta_{n,m}^2\bigr)\bigr\}.$
Then,
$P_{f_0}^{(n)} d_n^2(\hat f_n, f_0) = P_{f_0}^{(n)} d_n^2\bigl(\Pi_n(f|X^{(n)}), f_0\bigr) \le P_{f_0}^{(n)}\Pi_n\bigl(d_n^2(f,f_0)\,|\,X^{(n)}\bigr)$
$\le C_{c_1,c,c_7,h,\gamma}\bigl(d_n^2(f_0,f_{0,m})+\delta_{n,m}^2\bigr) + \sum_{j\ge 8c^2/c_7h} P_{f_0}^{(n)}\Pi_n\bigl(d_n^2(f,f_0)1_{A_j}\,\big|\,X^{(n)}\bigr)$
$\le C_{c_1,c,c_7,h,\gamma}\bigl(d_n^2(f_0,f_{0,m})+\delta_{n,m}^2\bigr) + \dfrac{2^{\gamma+1}c_1c_2}{n}\sum_{j\ge 8c^2/c_7h} j^\gamma\, n\varepsilon_{n,m}^2\, e^{-jn\varepsilon_{n,m}^2/c_2}.$
The inequality in the first line of the above display is due to Jensen's inequality applied with $d_n(\cdot,f_0)$, followed by the Cauchy-Schwarz inequality. The summation can be bounded, up to a constant depending on $\gamma, c_1, c_2$, by
$\sum_{j\ge 8c^2/c_7h}(jn\varepsilon_{n,m}^2)^\gamma e^{-jn\varepsilon_{n,m}^2/c_2} \le \sum_{j\ge 8c^2/c_7h}(jn\varepsilon_{n,m}^2)^\gamma e^{-jn\varepsilon_{n,m}^2/c_2}\bigl((j+1)n\varepsilon_{n,m}^2 - jn\varepsilon_{n,m}^2\bigr),$
where the inequality follows since $n\varepsilon_{n,m}^2 \ge n\varepsilon_{n,1}^2\ge 1$. This quantity can be bounded by a constant multiple of $\int_0^\infty x^\gamma e^{-x/c_2}\,dx$, independent of $m$. Now the proof is complete by noting that $\delta_{n,m}^2$ majorizes $1/n$ up to a constant, and then taking the infimum over $m\in\mathcal{M}$.
5.2. Proofs of Propositions 2 and 3. We will need several lemmas before
the proof of Propositions 2 and 3.
Lemma 9. Let Assumption A hold. Let F be a function class defined on the
sample space X. Suppose that N : R≥0 → R≥0 is a non-increasing function
such that for some ε0 ≥ 0 and every ε ≥ ε0 , it holds that
N (c5 ε, {f ∈ F : ε < dn (f, f0 ) ≤ 2ε}, dn ) ≤ N (ε).
Then for any $\varepsilon\ge\varepsilon_0$, there exists some test $\phi_n$ such that
$P_{f_0}^{(n)}\phi_n \le \dfrac{c_6 N(\varepsilon)e^{-c_7 n\varepsilon^2}}{1-e^{-c_7 n\varepsilon^2}}, \qquad \sup_{f\in\mathcal{F}:\, d_n(f,f_0)\ge\varepsilon} P_f^{(n)}(1-\phi_n) \le c_6\, e^{-c_7 n\varepsilon^2}.$
The constants c5 , c6 , c7 are taken from Lemma 1.
Lemma 10. Let Assumption A hold. Suppose that $\Pi$ is a probability measure on $\{f\in\mathcal{F}: d_n(f,f_0)\le\varepsilon\}$. Then for every $C>0$, there exists some $C'>0$ depending on $C,\kappa$ such that
(5.6)  $P_{f_0}^{(n)}\Bigl(\int \frac{p_f^{(n)}}{p_{f_0}^{(n)}}\,d\Pi(f) \le e^{-(C+c_3)n\varepsilon^2}\Bigr) \le c_1\, e^{-C'n\varepsilon^2}.$
The proof of these lemmas can be found in Appendix C.
Proof of Proposition 2. Fix $m'\in\mathcal{I}$ with $m'\ge m$. Now we invoke Lemma 9 with $\mathcal{F}\equiv\mathcal{F}_{m'}$, $f_0\equiv f_{0,m}\in\mathcal{F}_m\subset\mathcal{F}_{m'}$ [since $m'\ge m$], $\varepsilon_0\equiv\delta_{n,m'}$ and $\log N(\varepsilon)\equiv(c_7/2)n\delta_{n,m'}^2$ for $\varepsilon=\varepsilon_0$, to see that there exists some test $\phi_{n,m'}$ such that
(5.7)  $P_{f_{0,m}}^{(n)}\phi_{n,m'} \le \dfrac{c_6\, e^{\log N(\varepsilon)-c_7 n\delta_{n,m'}^2}}{1-e^{-c_7 n\delta_{n,m'}^2}} \le 2c_6\, e^{-c_7 n\delta_{n,m'}^2/2}$
and that
(5.8)  $\sup_{f\in\mathcal{F}_{m'}:\, d_n^2(f,f_{0,m})\ge\delta_{n,m'}^2} P_f^{(n)}(1-\phi_{n,m'}) \le c_6\, e^{-c_7 n\delta_{n,m'}^2}.$
Note that here in (5.7) we used the fact that $n\delta_{n,m'}^2\ge 2/c_7$ by definition of $\delta_{n,m'}$. Now for the fixed $j, m$ as in the statement of the proposition, we let $\phi_n := \sup_{m'\in\mathcal{I}:\, m'\ge jhm}\phi_{n,m'}$ be a global test for big models. Then by (5.7),
$P_{f_{0,m}}^{(n)}\phi_n \le \sum_{m'\ge jhm} P_{f_{0,m}}^{(n)}\phi_{n,m'} \le \sum_{m'\ge jhm} 2c_6\, e^{-c_7 n\delta_{n,m'}^2/2} \le 4c_6\, e^{-(c_7/2c^2)njh\delta_{n,m}^2}.$
Here we used the left side of (2.5). This implies that for any random variable $U\in[0,1]$, we have
(5.9)  $P_{f_{0,m}}^{(n)}(U\cdot\phi_n) \le P_{f_{0,m}}^{(n)}\phi_n \le 4c_6\, e^{-(c_7/2c^2)njh\delta_{n,m}^2}.$
On the power side, with $m'=jhm$ applied to (5.8) we see that
(5.10)  $\sup_{f\in\mathcal{F}_{jhm}:\, d_n^2(f,f_{0,m})\ge c^2(jh)^\gamma\delta_{n,m}^2} P_f^{(n)}(1-\phi_n) \le \sup_{f\in\mathcal{F}_{jhm}:\, d_n^2(f,f_{0,m})\ge\delta_{n,jhm}^2} P_f^{(n)}(1-\phi_n) \le c_6\, e^{-c_7 n\delta_{n,jhm}^2} \le 2c_6\, e^{-(c_7/c^2)njh\delta_{n,m}^2}.$
The first inequality follows from the right side of (2.5) since $c^2(jh)^\gamma\delta_{n,m}^2 \ge \delta_{n,jhm}^2$, and the last inequality follows from the left side of (2.5). On the other hand, by applying Lemma 10 with $C=c_3$ and $\varepsilon^2 \equiv \frac{c_7 jh\delta_{n,m}^2}{8c_3c^2}$, we see that there exists some event $E_n$ such that $P_{f_{0,m}}^{(n)}(E_n^c) \le c_1\, e^{-C'\frac{c_7 njh\delta_{n,m}^2}{8c_3c^2}}$ and it holds on the event $E_n$ that
(5.11)  $\int \frac{p_f^{(n)}}{p_{f_{0,m}}^{(n)}}\,d\Pi(f) \ge \lambda_{n,m}\int_{\{f\in\mathcal{F}_m:\, d_n^2(f,f_{0,m})\le c_7jh\delta_{n,m}^2/8c_3c^2\}} \frac{p_f^{(n)}}{p_{f_{0,m}}^{(n)}}\,d\Pi_{n,m}(f) \ge \lambda_{n,m}\, e^{-\frac{c_7 njh\delta_{n,m}^2}{4c^2}}\,\Pi_{n,m}\bigl(f\in\mathcal{F}_m: d_n^2(f,f_{0,m})\le c_7jh\delta_{n,m}^2/8c_3c^2\bigr).$
Note that
(5.12)  $P_{f_{0,m}}^{(n)}\Bigl[\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m}) > c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr)(1-\phi_n)1_{E_n}\Bigr]$
$= P_{f_{0,m}}^{(n)}\Biggl[\dfrac{\int_{f\in\mathcal{F}:\, d_n^2(f,f_{0,m})>c^2(jh)^\gamma\delta_{n,m}^2}\, p_f^{(n)}/p_{f_{0,m}}^{(n)}\,d\Pi(f)}{\int p_f^{(n)}/p_{f_{0,m}}^{(n)}\,d\Pi(f)}\,(1-\phi_n)1_{E_n}\Biggr]$
$\le \dfrac{e^{c_7njh\delta_{n,m}^2/4c^2}}{\lambda_{n,m}\,\Pi_{n,m}\bigl(f\in\mathcal{F}_m: d_n^2(f,f_{0,m})\le c_7jh\delta_{n,m}^2/8c_3c^2\bigr)} \times P_{f_{0,m}}^{(n)}\Biggl[\int_{f\in\mathcal{F}:\, d_n^2(f,f_{0,m})>c^2(jh)^\gamma\delta_{n,m}^2}\, p_f^{(n)}/p_{f_{0,m}}^{(n)}\,d\Pi(f)\,(1-\phi_n)\Biggr] \equiv (I)\cdot(II),$
where the inequality follows from (5.11). On the other hand, the expectation term in the above display can be further calculated as follows:
(5.13)  $(II) = \int_{f\in\mathcal{F}:\, d_n^2(f,f_{0,m})>c^2(jh)^\gamma\delta_{n,m}^2} P_f^{(n)}(1-\phi_n)\,d\Pi(f) \le \sup_{f\in\mathcal{F}_{jhm}:\, d_n^2(f,f_{0,m})>c^2(jh)^\gamma\delta_{n,m}^2} P_f^{(n)}(1-\phi_n) + \Pi\bigl(\mathcal{F}\setminus\mathcal{F}_{jhm}\bigr) \le 2c_6\, e^{-(c_7/c^2)njh\delta_{n,m}^2} + 4e^{-(1/c^2)njh\delta_{n,m}^2} \le 6c_6\, e^{-(c_7/c^2)njh\delta_{n,m}^2}.$
The first term in the third line follows from (5.10) and the second term follows from (P1) in Assumption C along with the left side of (2.5). By (P1)-(P2) in Assumption C and $j\ge 8c^2/c_7h$,
(5.14)  $P_{f_{0,m}}^{(n)}\Bigl[\Pi_n\bigl(f\in\mathcal{F}: d_n^2(f,f_{0,m})>c^2(jh)^\gamma\delta_{n,m}^2\,\big|\,X^{(n)}\bigr)(1-\phi_n)1_{E_n}\Bigr] \le C\, e^{-(c_7/4c^2)njh\delta_{n,m}^2}.$
Hence we conclude (5.1) from (5.9), the probability estimate on $E_n^c$, and (5.14).
Proof of Proposition 3. The proof largely follows the same lines as that of
Proposition 2. See Appendix C for details.
5.3. Completion of proof of Theorem 1.
Proof of (2.10). For any m ∈ M such that δ²_{n,m} ≥ d²_n(f_0, f_{0,m}), following the same reasoning as in (5.12) with j = 8c²/(c_7h),
(5.15)  P^{(n)}_{f_{0,m}}[ Π_n( f ∉ F_{jhm} | X^{(n)} )1_{E_n} ] ≤ ( e^{c_7 njhδ²_{n,m}/4c²} / ( λ_{n,m} Π_{n,m}( f ∈ F_m : d²_n(f, f_{0,m}) ≤ c_7 jhδ²_{n,m}/8c_3c² ) ) ) · Π(F \ F_{jhm}) ≤ Ce^{-(c_7/4c²)njhδ²_{n,m}}.
From here (2.10) can be established by controlling the probability estimate for E_n^c as in Proposition 2, and a change of measure argument as in (5.4) using Lemma 8.
5.4. Proof of Lemma 1.
Proof of Lemma 1. Let c > 0 be a constant to be specified later. Consider the test statistic φ_n ≡ 1{ log( p^{(n)}_{f_0}/p^{(n)}_{f_1} ) ≤ -cnd²_n(f_0, f_1) }. We first consider the type I error. Under the null hypothesis, we have for any λ_1 ∈ (0, 1/κ_Γ),
P^{(n)}_{f_0} φ_n ≤ P^{(n)}_{f_0}( log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) - P^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ≤ -(c + c_2)nd²_n(f_0, f_1) ) ≤ c_1 exp( ψ_{κ_g nd²_n(f_0,f_1), κ_Γ}(-λ_1) ) · exp( -λ_1(c + c_2)nd²_n(f_0, f_1) ).
Choosing λ_1 > 0 small enough (depending on κ, c_2, c) we get
P^{(n)}_{f_0} φ_n ≤ C_1 exp(-C_2 nd²_n(f_0, f_1)),
where C_1, C_2 > 0 depend on c_1, c_2, c, κ. Next we handle the type II error. To this end, for a constant c′ > c_3c_5 to be specified later, consider the event E_n ≡ { log( p^{(n)}_f/p^{(n)}_{f_1} ) < c′nd²_n(f_0, f_1) }, where f ∈ F is such that d²_n(f, f_1) ≤ c_5 d²_n(f_0, f_1). For λ_2 ∈ (0, 1/κ_Γ),
P^{(n)}_f(E_n^c) ≤ P^{(n)}_f( log(p^{(n)}_f/p^{(n)}_{f_1}) - P^{(n)}_f log(p^{(n)}_f/p^{(n)}_{f_1}) > c′nd²_n(f_0, f_1) - c_3nd²_n(f, f_1) ) ≤ P^{(n)}_f( log(p^{(n)}_f/p^{(n)}_{f_1}) - P^{(n)}_f log(p^{(n)}_f/p^{(n)}_{f_1}) > (c′ - c_3c_5)nd²_n(f_0, f_1) ) ≤ e^{-λ_2(c′-c_3c_5)nd²_n(f_0,f_1)} c_1 exp( ψ_{κ_g nd²_n(f,f_1), κ_Γ}(λ_2) ).
By choosing λ_2 > 0 small enough depending on c_3, c_5, c′, κ, we see that
P^{(n)}_f(E_n^c) ≤ C_3 exp(-C_4 nd²_n(f_0, f_1))
for some constants C_3, C_4 depending only on c_1, c_3, c_5, c′, κ (in particular, not depending on f). On the other hand,
P^{(n)}_f(1 - φ_n) = P^{(n)}_f[ 1{ log(p^{(n)}_f/p^{(n)}_{f_1}) + log(p^{(n)}_{f_1}/p^{(n)}_{f_0}) < cnd²_n(f_0, f_1) }(1_{E_n} + 1_{E_n^c}) ] ≤ P^{(n)}_f( log(p^{(n)}_f/p^{(n)}_{f_0}) < (c + c′)nd²_n(f_0, f_1) ) + P^{(n)}_f(E_n^c) ≡ (∗) + P^{(n)}_f(E_n^c).
Since P^{(n)}_f log(p^{(n)}_f/p^{(n)}_{f_0}) ≥ c_2 nd²_n(f, f_0) ≥ c_2(1 - √c_5)² nd²_n(f_0, f_1), we continue our computation: for c, c′ such that c + c′ < c_2(1 - √c_5)², and λ_3 ∈ (0, 1/κ_Γ),
(∗) ≤ P^{(n)}_f( log(p^{(n)}_f/p^{(n)}_{f_0}) - P^{(n)}_f log(p^{(n)}_f/p^{(n)}_{f_0}) < -( c_2(1 - √c_5)² - (c + c′) )nd²_n(f_0, f_1) ) ≤ e^{-λ_3( c_2(1-√c_5)² - (c+c′) )nd²_n(f_0,f_1)} c_1 e^{ψ_{κ_g nd²_n(f,f_0), κ_Γ}(-λ_3)}.
Now choosing λ_3 > 0 small enough depending on c_2, c_5, c, c′, κ, we see that for any f ∈ F such that d²_n(f, f_1) ≤ c_5 d²_n(f_0, f_1),
P^{(n)}_f(1 - φ_n) ≤ C_5 exp(-C_6 nd²_n(f_0, f_1)),
where C_5, C_6 depend on c_1, c_2, c_5, c, c′, κ, C_3, C_4. Now we need to choose c, c′, c_5 such that c > 0, c′ > c_3c_5 and c + c′ < c_2(1 - √c_5)². This can be done by choosing c_5 ≤ min{1/4, c_2/16c_3} and c′ = c = 2c_3c_5.
5.5. Proof of Lemma 8. We recall a standard fact.
Lemma 11. If a random variable X satisfies E exp(λX) ≤ exp(ψ_{v,c}(λ)), then for t > 0, P(X ≥ t) ∨ P(X ≤ -t) ≤ exp( -t²/(2(v + ct)) ).
Proof of Lemma 8. For c = 2c_3, consider the event E_n ≡ { log( p^{(n)}_{f_0}/p^{(n)}_{f_1} ) < cjnδ²_n }. By Lemma 11, we have for some constant C > 0 depending on c_1, c_3 and κ,
P^{(n)}_{f_0}(E_n^c) ≤ P^{(n)}_{f_0}( log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) - P^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ≥ cjnδ²_n - c_3nd²_n(f_0, f_1) ) ≤ P^{(n)}_{f_0}( log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) - P^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ≥ c_3jnδ²_n )   (since d_n(f_0, f_1) ≤ δ_n)
≤ C exp( -n²j²δ⁴_n / ( C(njδ²_n + 1) ) ) ≤ C exp( -C^{-1}njδ²_n ).
We remind the reader that the constant C may not be the same in the above series of inequalities; the last inequality follows by noting that (i) if njδ²_n ≥ 1, then we may replace the denominator in the second-to-last bound by 2, and (ii) if njδ²_n < 1, then we increase C ≥ 1. Then
P^{(n)}_{f_0} U = P^{(n)}_{f_0} U1_{E_n} + P^{(n)}_{f_0} U1_{E_n^c} ≤ P^{(n)}_{f_1}[ U (p^{(n)}_{f_0}/p^{(n)}_{f_1}) 1_{E_n} ] + Ce^{-C^{-1}njδ²_n} ≤ P^{(n)}_{f_1} U · e^{cnjδ²_n} + Ce^{-C^{-1}njδ²_n},
completing the proof.
5.6. Proof of Proposition 1.
Proof of Proposition 1. Let Σ_n = Σ_{m∈I} e^{-2nδ²_{n,m}} be the total mass. Then e^{-2nδ²_{n,1}} ≤ Σ_n ≤ 2e^{-2nδ²_{n,1}/c²} ≤ 2. The first condition of (P1) is trivial. We only need to verify the second condition of (P1): Σ_{k>hm} λ_{n,k} = Σ_n^{-1} Σ_{k>hm} e^{-2nδ²_{n,k}} ≤ e^{2nδ²_{n,1}} · 2e^{-(2h/c²)nδ²_{n,m}} ≤ 2e^{-2nδ²_{n,m}}, where the first inequality follows from (2.5) and the second by the condition h ≥ 2c².
6. Proofs for applications
The proofs of the theorems in Section 4 follow a similar route, by verifying (i) the local entropy condition in Assumption B, (ii) the summability
condition in (2.5) and (iii) the sufficient mass condition (P2) in Assumption
C. We remind the reader that we use (2.8) in all examples as the first-step
(model selection) prior. We only prove Theorems 7 and 11 in this section.
The proofs for Theorems 8-10 are deferred to Appendix B.
6.1. Proof of Theorem 7.
Lemma 12. Let r ∈ I. Suppose that the linear map X : R^{m_1×m_2} → R^n is uniform RIP(ν; I). Then for any ε > 0 and A_0 ∈ R^{m_1×m_2} such that rank(A_0) ≤ r, we have
log N( c_5 ε, {f_A ∈ F_r : ℓ_n(f_A, f_{A_0}) ≤ 2ε}, ℓ_n ) ≤ 2(m_1 + m_2)r · log( 18ν̄/(c_5ν) ).
We will need the following result.
Lemma 13. Let S(r, B) = {A ∈ R^{m_1×m_2} : rank(A) ≤ r, ‖A‖_2 ≤ B}. Then
N( ε, S(r, B), ‖·‖_2 ) ≤ (9B/ε)^{(m_1+m_2-1)r}.
Proof of Lemma 13. The case for B = 1 follows from Lemma 3.1 of [12] and
the general case follows by a scaling argument. We omit the details.
Proof of Lemma 12. We only need to consider the case r ≤ r_max. First note that the entropy in question equals
(6.1)  log N( c_5√n ε, {X(A) : ‖X(A - A_0)‖_2 ≤ 2√n ε, rank A ≤ r}, ‖·‖_2 ).
By uniform RIP(ν; I), the set to be covered in the above display is contained in {X(A) : ‖A - A_0‖_2 ≤ 2ε/ν, rank A ≤ r} ⊂ X(S(2r, 2ε/ν)). On the other hand, again by uniform RIP(ν; I), a c_5ε/ν̄-cover of the set S(2r, 2ε/ν) under the Frobenius norm ‖·‖_2 induces a c_5√n ε-cover of X(S(2r, 2ε/ν)) under the Euclidean ‖·‖_2 norm. This implies that (6.1) can be further bounded from above by
log N( c_5ε/ν̄, S(2r, 2ε/ν), ‖·‖_2 ) ≤ 2(m_1 + m_2)r · log( 18ν̄/(c_5ν) ),
where the last inequality follows from Lemma 13.
Now we take δ²_{n,r} = ( (4 log(18ν̄/c_5ν)/c_7) ∨ (1/η) ) · (m_1 + m_2)r log m̄ / n. Clearly δ²_{n,r} satisfies (2.5) with c ≡ 1 and γ = 1.
Lemma 14. Suppose that X : Rm1 ×m2 → Rn is uniform RIP(ν; I), and
that (4.2) holds. Then (P2) in Assumption C holds.
Proof of Lemma 14. We only need to consider r ≤ r_max. First note that
(6.2)  Π_{n,r}( f_A ∈ F_r : ℓ²_n(f_A, f_{A_{0,r}}) ≤ δ²_{n,r}/c_3 ) = Π_G( A ∈ R^{m_1×m_2} : ‖X(A - A_{0,r})‖_2 ≤ √n δ_{n,r}/√c_3, rank(A) ≤ r ) ≥ Π_G( A ∈ R^{m_1×m_2} : ‖A - A_{0,r}‖_2 ≤ δ_{n,r}/(ν̄√c_3), rank(A) ≤ r ).
Let A_{0,r} ≡ Σ_{i=1}^r σ_i ū_i v̄_i^⊤ be the spectral decomposition of A_{0,r}, and let u_i ≡ √σ_i ū_i and v_i ≡ √σ_i v̄_i. Then A_{0,r} ≡ Σ_{i=1}^r u_i v_i^⊤. Now for u_i^* ∈ B_{m_1}(u_i, ε) and v_i^* ∈ B_{m_2}(v_i, ε), i = 1, ..., r, let A^* ≡ Σ_{i=1}^r u_i^*(v_i^*)^⊤; then by noting that the Frobenius norm is sub-multiplicative and that ‖u_i‖_2 = ‖v_i‖_2 = √σ_i, we have for ε ≤ 1,
‖A^* - A_{0,r}‖_2 ≤ Σ_{i=1}^r ( ‖(u_i - u_i^*)v_i^⊤‖_2 + ‖u_i^*(v_i - v_i^*)^⊤‖_2 ) ≤ Σ_{i=1}^r ( ε√σ_i + (√σ_i + ε)ε ) ≤ ρ_r ε,
where ρ_r ≡ Σ_{i=1}^r (2√σ_i + 1). Now with ε_{n,r} ≡ ( δ_{n,r}/(ν̄√c_3 ρ_r) ) ∧ 1 we see that (6.2) can be further bounded from below as follows:
(6.2) ≥ Π_G( ∩_{i=1}^r { (u_i^*, v_i^*) : u_i^* ∈ B_{m_1}(u_i, ε_{n,r}), v_i^* ∈ B_{m_2}(v_i, ε_{n,r}) } ) ≥ (τ^{tr}_{r,g})^{(m_1+m_2)r} Π_{i=1}^r vol(B_{m_1}(u_i, ε_{n,r})) · vol(B_{m_2}(v_i, ε_{n,r})) ≥ (τ^{tr}_{r,g} · ε_{n,r})^{(m_1+m_2)r} v^r_{m_1} v^r_{m_2} ≥ e^{-(m_1+m_2)r·( log m̄/2 + log((τ^{tr}_{r,g})^{-1} ∨ 1) + log(ε^{-1}_{n,r} ∨ 1) )},
where v_d = vol(B_d(0,1)), and v_d ≥ (1/√d)^d. Hence in order that the right side of the above display can be bounded from below by e^{-2nδ²_{n,r}}, it suffices to require that
(6.3)  max( log(τ^{-1}_{r,g} ∨ 1), log(ε^{-1}_{n,r} ∨ 1) ) ≤ log m̄/(2η).
It is easy to calculate that ε^{-2}_{n,r} ≤ 9ν̄²c_3(1 ∨ σ_max(A_{0,r}))² r_max n. Now the conclusion follows by noting that (4.2) implies (6.3) since r_max ≤ n and c_3 = 1.
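As a purely illustrative aside (not part of the proof), the factor-perturbation bound ‖A^* - A_{0,r}‖_F ≤ ρ_r ε used above can be checked numerically. The following minimal Python sketch is an assumption-laden toy: it draws a random rank-r matrix, perturbs its scaled singular factors within Euclidean balls of radius ε, and verifies the bound; all dimensions and the random seed are arbitrary choices.

```python
import numpy as np

# Toy check of the bound ||A* - A0||_F <= rho_r * eps, with
# rho_r = sum_i (2*sqrt(sigma_i) + 1) and eps <= 1, as in the proof above.
rng = np.random.default_rng(0)
m1, m2, r, eps = 30, 20, 3, 0.05

A0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))   # rank <= r
U, s, Vt = np.linalg.svd(A0, full_matrices=False)
u = U[:, :r] * np.sqrt(s[:r])          # u_i = sqrt(sigma_i) * ubar_i
v = Vt[:r, :].T * np.sqrt(s[:r])       # v_i = sqrt(sigma_i) * vbar_i
rho_r = np.sum(2.0 * np.sqrt(s[:r]) + 1.0)

# Perturb each factor column within a Euclidean ball of radius eps.
du = rng.standard_normal(u.shape); du *= eps / np.linalg.norm(du, axis=0)
dv = rng.standard_normal(v.shape); dv *= eps / np.linalg.norm(dv, axis=0)
A_star = (u + du) @ (v + dv).T

gap = np.linalg.norm(A_star - A0, "fro")
print(f"||A*-A0||_F = {gap:.4f} <= rho_r*eps = {rho_r * eps:.4f}: {gap <= rho_r * eps}")
```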
Proof of Theorem 7. The theorem follows by Theorems 1 and 2, Proposition
1 coupled with Lemmas 12 and 14.
6.2. Proof of Theorem 11.
Lemma 15. For any Σ_0 ∈ M_{(k,s)},
log N( c_5ε, {Σ ∈ M_{(k,s)} : ‖Σ - Σ_0‖_F ≤ C_Lε}, ‖·‖_F ) ≤ ks log(ep/s) + ks log( 6√(kL)/(c_5ε) ).
Proof. The set involved in the entropy is equivalent to
(6.4)  ( {Λ ∈ R_{(k,s)}(L) : ‖ΛΛ^⊤ - Λ_0Λ_0^⊤‖_F ≤ C_Lε}, ‖·‖_F ).
We claim that sup_{Λ∈R_{(k,s)}} ‖ΛΛ^⊤‖_F ≤ √(kL). To see this, let Λ ≡ PΞQ^⊤ be the singular value decomposition of Λ, where P ∈ R^{p×p}, Q ∈ R^{k×k} are unitary matrices and Ξ ∈ R^{p×k} is a diagonal matrix. Then ‖ΛΛ^⊤‖²_F = ‖ΞΞ^⊤‖²_F ≤ kL, proving the claim. Combined with (6.4) and Euclidean embedding, we see that the entropy in question can be bounded by
log N( c_5ε, {v ∈ B_0(ks; pk) : ‖v‖_2 ≤ 2√(kL)}, ‖·‖_2 ) ≤ log[ C(pk, ks) ( 6√(kL)/(c_5ε) )^{ks} ] ≤ ks log(ep/s) + ks log( 6√(kL)/(c_5ε) ),
where B_0(s; pk) ≡ {v ∈ R^{pk} : |supp(v)| ≤ s} and C(pk, ks) denotes the binomial coefficient.
Proof of Theorem 11. Take δ²_{n,(k,s)} = KC′(ks/n) log(C′p) for some C′ ≥ e depending on c_5, c_7, L and some absolute constant K ≥ 1. Apparently (2.5) holds with c = 1, γ = 1. The prior Π_{n,(k,s)} on M_{(k,s)} will be the uniform distribution on a minimal C′√( ks log(C′p)/(c_3n) )-covering set of {Σ ∈ M_{(k,s)}} under the Frobenius norm ‖·‖_F. The above lemma entails that the cardinality of such a cover is no more than exp(C″ks log(C″p)) for another constant C″ ≥ e depending on c_3, c_5, c_7, L. Hence
Π_{n,(k,s)}( Σ ∈ M_{(k,s)} : ‖Σ - Σ_{0,(k,s)}‖²_F ≤ δ²_{n,(k,s)}/c_3 ) ≥ exp(-C″ks log(C″p)),
which can be bounded from below by exp(-2nδ²_{n,(k,s)}) by choosing K large enough. The claim of Theorem 11 now follows from these considerations along with Theorems 1 and 6 and Proposition 1.
Appendix A. Proof of lemmas in Section 3
Proof of Lemma 2. Let P^{(n)}_{θ_0} denote the probability measure induced by the joint distribution of (X_1, ..., X_n) when the underlying signal is θ_0.
First consider the Gaussian regression case. It is easy to calculate that
log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} )(X^{(n)}) = Σ_{i=1}^n ( -(1/2)(X_i - θ_{0,i})² + (1/2)(X_i - θ_{1,i})² ),
P^{(n)}_{θ_0} log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} )(X^{(n)}) = (n/2)ℓ²_n(θ_0, θ_1).
Then
P^{(n)}_{θ_0} exp[ λ( log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1})(X^{(n)}) - P^{(n)}_{θ_0} log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1})(X^{(n)}) ) ] ≤ P exp( Σ_{i=1}^n ε_iλ(θ_{0,i} - θ_{1,i}) ) ≤ exp( λ²nℓ²_n(θ_0, θ_1)/2 ).
Next consider binary regression. Easy calculation shows that
log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} ) = Σ_{i=1}^n ( X_i log(θ_{0,i}/θ_{1,i}) + (1 - X_i) log((1-θ_{0,i})/(1-θ_{1,i})) ),
P^{(n)}_{θ_0} log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} ) = Σ_{i=1}^n ( θ_{0,i} log(θ_{0,i}/θ_{1,i}) + (1-θ_{0,i}) log((1-θ_{0,i})/(1-θ_{1,i})) ).
Using the inequality cx ≤ log(1 + x) ≤ x for all -1 < x ≤ c′, for some c > 0 depending on c′ > -1 only, we have shown P^{(n)}_{θ_0} log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) ≍ nℓ²_n(θ_0, θ_1) under the assumed condition that Θ_n ⊂ [η, 1-η]^n. Now we verify the Bernstein condition:
P^{(n)}_{θ_0} exp[ λ( log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) - P^{(n)}_{θ_0} log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) ) ] = P^{(n)}_{θ_0} exp( λΣ_{i=1}^n (X_i - θ_{0,i})t_i ) ≤ exp( λ²Σ_{i=1}^n t_i²/8 ),
where t_i ≡ t_i(θ_0, θ_1) = log( (θ_{0,i}/(1-θ_{0,i})) · ((1-θ_{1,i})/θ_{1,i}) ) and the last inequality follows from Hoeffding's inequality (cf. Section 2.6 of [9]). The claim follows by noting that t_i² = [ log( (θ_{0,i} - θ_{1,i})/((1-θ_{0,i})θ_{1,i}) + 1 ) ]² ≍ (θ_{0,i} - θ_{1,i})², by the assumed condition and the aforementioned inequality log(1 + x) ≍ x in a constrained range.
Finally consider Poisson regression. It is easy to see that
log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} ) = Σ_{i=1}^n ( X_i log(θ_{0,i}/θ_{1,i}) + (θ_{1,i} - θ_{0,i}) ),
P^{(n)}_{θ_0} log( p^{(n)}_{θ_0}/p^{(n)}_{θ_1} ) = Σ_{i=1}^n ( θ_{0,i} log(θ_{0,i}/θ_{1,i}) + (θ_{1,i} - θ_{0,i}) ).
Note that for any 1/M ≤ p, q ≤ M,
p log(p/q) - (p - q) = p( -log(q/p) - 1 + q/p ) ≍ p·(q/p - 1)² ≍ (p - q)²,
where in the middle we used the fact that -log x - 1 + x ≍ (x - 1)² for x bounded away from 0 and ∞. This shows that P^{(n)}_{θ_0} log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) ≍ nℓ²_n(θ_0, θ_1). Next we verify the Bernstein condition:
P^{(n)}_{θ_0} exp[ λ( log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) - P^{(n)}_{θ_0} log(p^{(n)}_{θ_0}/p^{(n)}_{θ_1}) ) ] ≤ P^{(n)}_{θ_0} exp( λΣ_{i=1}^n (X_i - θ_{0,i})t_i ) ≤ exp[ Σ_{i=1}^n θ_{0,i}( e^{λt_i} - 1 - λt_i ) ],
where t_i = log(θ_{0,i}/θ_{1,i}). Now for any |λ| ≤ 1, we have e^{λt_i} - 1 - λt_i ≍ λ²t_i² ≤ λ²t_i²/(1 - |λ|). On the other hand, θ_{0,i}t_i² = θ_{0,i}(log(θ_{0,i}/θ_{1,i}))² ≍ (θ_{0,i} - θ_{1,i})², completing the proof.
Proof of Lemma 3. Since the log-likelihood ratio for X_1, ..., X_n can be decomposed into sums of the log-likelihood ratios for single samples, and the log-likelihood ratio is uniformly bounded over F (since G is bounded), the classical Bernstein inequality applies to see that for any couple (f_0, f_1), the Bernstein condition in Assumption A holds with v = κ_g n Var_{f_0}(log f_0/f_1), c = κ_Γ, where κ_g, κ_Γ depend only on G. Hence we only need to verify that
Var_{f_0}(log f_0/f_1) ≲ h²(f_0, f_1),    P_{f_0}(log f_0/f_1) ≍ h²(f_0, f_1).
This can be seen by Lemma 8 of [24] and the fact that the Hellinger metric is dominated by the Kullback-Leibler divergence.
Lemma 16. Let Z be a random variable bounded by M > 0. Then E exp(Z) ≤ exp( e^M EZ ).
Proof. Note that
log E exp(Z) = log( E[exp(Z) - 1] + 1 ) ≤ E[exp(Z) - 1] ≤ e^M EZ,
where the last inequality follows from the Taylor expansion e^x - 1 = Σ_{k≥1} x^k/k! ≤ x Σ_{k≥1} M^{k-1}/k! ≤ xe^M.
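As an illustrative aside (not part of the argument), Lemma 16 is applied below to non-negative variables such as Z = µ²(f_0 - f_1)², and for that case the bound is easy to check by simulation. The sketch below is a hedged toy with an arbitrary distribution on [0, M]; it only demonstrates the inequality numerically.

```python
import numpy as np

# Monte-Carlo check of E exp(Z) <= exp(e^M * E Z) for a non-negative Z bounded by M,
# which is the regime in which Lemma 16 is used in (A.2) below.
rng = np.random.default_rng(1)
M = 2.0
Z = M * rng.beta(2.0, 5.0, size=1_000_000)   # values in [0, M]

lhs = np.exp(Z).mean()              # E exp(Z)
rhs = np.exp(np.exp(M) * Z.mean())  # exp(e^M * E Z)
print(f"E exp(Z) = {lhs:.4f} <= exp(e^M E Z) = {rhs:.4f}: {lhs <= rhs}")
```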
BAYES MODEL SELECTION
29
Proof of Lemma 4. We omit the explicit dependence on M in the notation d_{r,M} and r_M in the proof. Let P^{(n)}_{f_0} denote the probability measure induced by the joint distribution of (X_0, ..., X_n), where X_0 is distributed according to the stationary density q_{f_0}. Easy computation shows that
log( p^{(n)}_{f_0}/p^{(n)}_{f_1} ) = Σ_{i=0}^{n-1} ( ε_{i+1}(f_0(X_i) - f_1(X_i)) + (1/2)(f_0(X_i) - f_1(X_i))² ),
P^{(n)}_{f_0} log( p^{(n)}_{f_0}/p^{(n)}_{f_1} ) = (n/2) ∫ (f_0 - f_1)² q_{f_0} dλ.
Here λ denotes the Lebesgue measure on R. By the arguments on page 209 of [23], we see that r ≲ q_{f_0} ≲ r̄. Hence we only need to verify the Bernstein condition. By the Cauchy-Schwarz inequality,
(A.1)  [ P^{(n)}_{f_0} exp( λ log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ) ]² ≤ P^{(n)}_{f_0} exp( 2λΣ_{i=0}^{n-1} ε_{i+1}(f_0(X_i) - f_1(X_i)) ) × P^{(n)}_{f_0} exp( λΣ_{i=0}^{n-1} (f_0(X_i) - f_1(X_i))² ) ≡ (I) × (II).
The first term (I) can be handled by an inductive calculation. First note that for any |µ| ≤ 2 and X_1 ∈ R,
(A.2)  P_{p(·|X_1)} e^{µ²(f_0(X_2) - f_1(X_2))²} ≤ e^{e^{16M²} µ² P_{p(·|X_1)}(f_0 - f_1)²(X_2)} ≤ e^{C_M µ² d_r²(f_0, f_1)},
where the first inequality follows from Lemma 16 and the second inequality follows since r(·) ≲ p_f(·|x) ≲ r̄(·) holds for all x ∈ R, where the constant involved depends only on M. Let S_n ≡ Σ_{i=0}^{n-1} ε_{i+1}(f_0(X_i) - f_1(X_i)) and ε^n ≡ (ε_1, ..., ε_n). Then for |λ| ≤ 1, letting µ ≡ 2λ,
P^{(n)}_{f_0} e^{2λS_n} = P^{(n)}_{f_0} e^{µS_n} = E_{X_0,ε^{n-1}}[ e^{µS_{n-1}} E_{ε_n} e^{µε_n(f_0(X_{n-1}) - f_1(X_{n-1}))} ] ≤ E_{X_0,ε^{n-1}}[ e^{µS_{n-1}} e^{µ²(f_0(X_{n-1}) - f_1(X_{n-1}))²/2} ]
≤ E_{X_0,ε^{n-2}}[ e^{µS_{n-2}} E_{ε_{n-1}} e^{µε_{n-1}(f_0(X_{n-2}) - f_1(X_{n-2})) + µ²(f_0(X_{n-1}) - f_1(X_{n-1}))²/2} ]
≤ E_{X_0,ε^{n-2}}[ e^{µS_{n-2}} ( E_{ε_{n-1}} e^{2µε_{n-1}(f_0(X_{n-2}) - f_1(X_{n-2}))} )^{1/2} ( E_{p(·|X_{n-2})} e^{µ²(f_0(X_{n-1}) - f_1(X_{n-1}))²} )^{1/2} ]
≤ E_{X_0,ε^{n-2}}[ e^{µS_{n-2}} e^{µ²(f_0(X_{n-2}) - f_1(X_{n-2}))²} ] · e^{C_M µ² d_r²(f_0, f_1)/2},
where the last inequality follows from (A.2). Now we can iterate the above calculation to see that
(A.3)  (I) ≤ exp( C_M λ²nd_r²(f_0, f_1) ).
Next we consider (II). Since for any non-negative random variables Z_1, ..., Z_n we have E Π_{i=1}^n Z_i ≤ Π_{i=1}^n (EZ_i^n)^{1/n}, it follows that
(A.4)  (II) ≤ Π_{i=1}^n [ P^{(n)}_{f_0} exp( nλ(f_0(X_i) - f_1(X_i))² ) ]^{1/n} = P_{q_{f_0}} exp( nλ(f_0(X_0) - f_1(X_0))² ),
where the last equality follows by stationarity. On the other hand, by Jensen's inequality,
(A.5)  exp( -λP^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ) ≤ exp( -(λn/2) P_{q_{f_0}}(f_0 - f_1)² ) ≤ P_{q_{f_0}} exp( -λn(f_0(X_0) - f_1(X_0))²/2 ).
Collecting (A.1) and (A.3)-(A.5), we see that for |λ| ≤ 1,
P^{(n)}_{f_0} exp[ λ( log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) - P^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ) ] ≤ (I) · (II) · exp( -λP^{(n)}_{f_0} log(p^{(n)}_{f_0}/p^{(n)}_{f_1}) ) ≤ exp( C′_M λ²nd_r²(f_0, f_1) ),
completing the proof.
Proof of Lemma 5. For any g ∈ G, let p^{(n)}_g denote the probability density function of an n-dimensional multivariate normal distribution with covariance matrix Σ_g ≡ T_n(f_g), and P^{(n)}_g the expectation taken with respect to the density p^{(n)}_g. Then for any g_0, g_1 ∈ G,
(A.6)  log( p^{(n)}_{g_0}/p^{(n)}_{g_1} )(X^{(n)}) = -(1/2)(X^{(n)})^⊤(Σ^{-1}_{g_0} - Σ^{-1}_{g_1})X^{(n)} - (1/2) log det(Σ_{g_0}Σ^{-1}_{g_1}),
P^{(n)}_{g_0} log( p^{(n)}_{g_0}/p^{(n)}_{g_1} ) = -(1/2) tr(I - Σ_{g_0}Σ^{-1}_{g_1}) - (1/2) log det(Σ_{g_0}Σ^{-1}_{g_1}),
where we used the fact that for a random vector X with covariance matrix Σ, EX^⊤AX = tr(ΣA). Let G ≡ Σ^{-1/2}_{g_0}X^{(n)} ∼ N(0, I) under P^{(n)}_{g_0}, and B ≡ I - Σ^{1/2}_{g_0}Σ^{-1}_{g_1}Σ^{1/2}_{g_0}; then
Y_n ≡ log( p^{(n)}_{g_0}/p^{(n)}_{g_1} )(X^{(n)}) - P^{(n)}_{g_0} log( p^{(n)}_{g_0}/p^{(n)}_{g_1} ) = -(1/2)[ (X^{(n)})^⊤(Σ^{-1}_{g_0} - Σ^{-1}_{g_1})X^{(n)} - tr(I - Σ_{g_0}Σ^{-1}_{g_1}) ] = -(1/2)[ G^⊤BG - tr(B) ].
Let B = U^⊤ΛU be the spectral decomposition of B, where U is orthonormal and Λ = diag(λ_1, ..., λ_n) is a diagonal matrix. Then we can further compute
-2Y_n =_d G^⊤ΛG - tr(Λ) = Σ_{i=1}^n λ_i(g_i² - 1),
where g_1, ..., g_n are i.i.d. standard normal. Note that for any |t| < 1/2,
(1/√(2π)) ∫_{-∞}^{∞} e^{t(x²-1)} e^{-x²/2} dx = e^{-t}/√(1-2t) = e^{(1/2)(-log(1-2t)-2t)} ≤ e^{t²/(1-2t)},
where the inequality follows from -log(1-2t) - 2t = Σ_{k≥2}(1/k)(2t)^k = 4t²Σ_{k≥0}(1/(k+2))(2t)^k ≤ 2t²/(1-2t). Hence applying the above display with t = -λλ_i/2, we have that for any |λ| < 1/max_i|λ_i|,
(A.7)  E exp(λY_n) = Π_{i=1}^n E exp( -λ·λ_i(g_i² - 1)/2 ) = Π_{i=1}^n (1/√(2π)) ∫_{-∞}^{∞} e^{-λ·λ_i(x²-1)/2} e^{-x²/2} dx ≤ Π_{i=1}^n exp( λ²λ_i²/(4 + 4λλ_i) ) ≤ exp( λ²Σ_iλ_i²/(4 - 4|λ|max_i|λ_i|) ).
Denote by ‖·‖ and ‖·‖_F the matrix operator norm and the Frobenius norm, respectively. By the arguments on page 203 of [23], we have ‖Σ_g‖ ≤ 2π‖e^g‖_∞ and ‖Σ^{-1}_g‖ ≤ (2π)^{-1}‖e^{-g}‖_∞. Since G is a class of uniformly bounded function classes, the spectrum of the covariance matrices Σ_g and their inverses running over g must be bounded. Hence
(A.8)  max_i|λ_i| = ‖B‖ = ‖(Σ_{g_1} - Σ_{g_0})Σ^{-1}_{g_1}‖ ≤ ‖Σ_{g_1} - Σ_{g_0}‖‖Σ^{-1}_{g_1}‖ ≤ C_G < ∞.
Next, note that
(A.9)  ( Σ_i λ_i² )^{1/2} = (tr(BB^⊤))^{1/2} = ‖B‖_F = ‖(Σ_{g_1} - Σ_{g_0})Σ^{-1}_{g_1}‖_F ≤ ‖Σ^{-1}_{g_1}‖‖Σ_{g_1} - Σ_{g_0}‖_F ≤ C′_G √(nD_n²(g_0, g_1)),
where in the first inequality we used ‖MN‖_F = ‖NM‖_F for symmetric matrices M, N and the general rule ‖PQ‖_F ≤ ‖P‖‖Q‖_F. Collecting (A.7)-(A.9) we see that Assumption A is satisfied for v = κ_g nD_n²(g_0, g_1) and c = κ_Γ for some constants κ_g, κ_Γ depending on G only.
Finally we establish the equivalence of (1/n)P^{(n)}_{g_0} log(p^{(n)}_{g_0}/p^{(n)}_{g_1}) and D_n²(g_0, g_1). First, by (A.6), we have
P^{(n)}_{g_0} log( p^{(n)}_{g_0}/p^{(n)}_{g_1} ) = -(1/2) tr(I - Σ_{g_0}Σ^{-1}_{g_1}) - (1/2) log det(Σ_{g_0}Σ^{-1}_{g_1}) = (1/2)[ tr( Σ^{-1/2}_{g_1}(Σ_{g_0} - Σ_{g_1})Σ^{-1/2}_{g_1} ) - log det( I + Σ^{-1/2}_{g_1}(Σ_{g_0} - Σ_{g_1})Σ^{-1/2}_{g_1} ) ] ≤ (1/4)‖I - Σ_{g_0}Σ^{-1}_{g_1}‖²_F ≤ (1/4)‖Σ_{g_1} - Σ_{g_0}‖²_F‖Σ^{-1}_{g_1}‖² ≤ C″_G nD_n²(g_0, g_1).
Here in the second step we used the fact that det(AB^{-1}) = det(I + B^{-1/2}(A - B)B^{-1/2}), and in the last step we used the fact that -log det(I + A) + tr(A) ≤ (1/2)tr(A²) for any p.s.d. matrix A, due to the inequality log(1 + x) - x ≥ -(1/2)x² for all x ≥ 0. On the other hand, by using the reversed inequality log(1 + x) - x ≤ -cx² for all 0 ≤ x ≤ c′, where c is a constant depending only on c′, we can establish P^{(n)}_{g_0} log(p^{(n)}_{g_0}/p^{(n)}_{g_1}) ≥ C‴_G nD_n²(g_0, g_1), thereby completing the proof.
Proof of Lemma 6. Note that
log( p^{(n)}_{Σ_0}/p^{(n)}_{Σ_1} )(X^{(n)}) = -Σ_{i=1}^n ( (1/2)X_i^⊤(Σ_0^{-1} - Σ_1^{-1})X_i + (1/2) log det(Σ_0Σ_1^{-1}) ),
P^{(n)}_{Σ_0} log( p^{(n)}_{Σ_0}/p^{(n)}_{Σ_1} ) = -(n/2) tr(I - Σ_0Σ_1^{-1}) - (n/2) log det(Σ_0Σ_1^{-1}).
The rest of the proof proceeds along the same lines as in Lemma 5.
Appendix B. Proof of remaining theorems in Section 4
B.1. Proof of Theorem 8.
Lemma 17. Let n ≥ 2. Then for any g ∈ F_m,
log N( c_5ε, {f ∈ F_m : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ≤ 2 log(6/c_5) · m log(en).
Proof of Lemma 17. Let Q_m denote all m-partitions of the design points x_1, ..., x_n. Then it is easy to see that |Q_m| = C(n, m-1). For a given m-partition Q ∈ Q_m, let F_{m,Q} ⊂ F_m denote all monotonic non-decreasing functions that are constant on the partition Q. Then the entropy in question can be bounded by
(B.1)  log[ C(n, m-1) max_{Q∈Q_m} N( c_5ε, {f ∈ F_{m,Q} : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ].
On the other hand, for any fixed m-partition Q ∈ Q_m, the entropy term above equals N( c_5√n ε, {γ ∈ P_{n,m,Q} : ‖γ - g‖_2 ≤ 2√n ε}, ‖·‖_2 ), where P_{n,m,Q} ≡ {(f(x_1), ..., f(x_n)) : f ∈ F_{m,Q}}. By the Pythagoras theorem, the set involved in the entropy is included in {γ ∈ P_{n,m,Q} : ‖γ - π_{P_{n,m,Q}}(g)‖_2 ≤ 2√n ε}, where π_{P_{n,m,Q}} is the natural projection from R^n onto the subspace P_{n,m,Q}. Clearly P_{n,m,Q} is contained in a linear subspace with dimension no more than m. Using the entropy result for finite-dimensional spaces [Problem 2.1.6 in [47], page 94, combined with the discussion on page 98 relating the packing number and the covering number],
(B.2)  log N( c_5ε, {f ∈ F_{m,Q} : ℓ_n(f, f_{0,m}) ≤ 2ε}, ℓ_n ) ≤ log[ ( 3·2√n ε/(c_5√n ε) )^m ] = m log(6/c_5).
The claim follows by (B.1)-(B.2), and log C(n, m-1) ≤ m log(en).
Hence we can take δ²_{n,m} ≡ ( (4 log(6/c_5)/c_7) ∨ (1/η) ) · m log(en)/n. It is clear that (2.5) is satisfied with c ≡ 1 and γ = 1.
Lemma 18. Suppose that (4.5) holds. Then (P2) in Assumption C holds.
Proof of Lemma 18. Let Q_{0,m} = {I_k}_{k=1}^m be the m-partition of {x_1, ..., x_n} associated with f_{0,m} ∈ F_m, with the convention that {I_k} ⊂ {x_1, ..., x_n} is ordered from smaller values to bigger ones. Then it is easy to see that µ_{0,m} = (µ_{0,1}, ..., µ_{0,m}) ≡ ( f_{0,m}(x_{i(1)}), ..., f_{0,m}(x_{i(m)}) ) ∈ R^m is well-defined and µ_{0,1} ≤ ... ≤ µ_{0,m}. It is easy to see that any f ∈ F_{m,Q_{0,m}} satisfying sup_{1≤k≤m} |f(x_{i(k)}) - µ_{0,k}| ≤ δ_{n,m}/√c_3 leads to the error estimate ℓ²_n(f, f_{0,m}) ≤ δ²_{n,m}/c_3. Hence
(B.3)  Π_{n,m}( {f ∈ F_m : ℓ²_n(f, f_{0,m}) ≤ δ²_{n,m}/c_3} )
≥ C(n, m-1)^{-1} Π_{ḡ_m}( {f ∈ F_{m,Q_{0,m}} : ℓ²_n(f, f_{0,m}) ≤ δ²_{n,m}/c_3} )
≥ C(n, m-1)^{-1} Π_{ḡ_m}( {µ ∈ R^m : µ ≡ (µ_{0,k} + ε_k)_{k=1}^m, 0 ≤ ε_1 ≤ ... ≤ ε_m ≤ δ_{n,m}/√c_3} )
≥ C(n, m-1)^{-1} · (1/m!) · inf_{µ ≡ (µ_{0,k}+ε_k)_{k=1}^m, 0 ≤ ε_1 ≤ ... ≤ ε_m ≤ 1∧δ_{n,m}/√c_3} ḡ_m(µ) · (1 ∧ δ_{n,m}/√c_3)^m
≥ C(n, m-1)^{-1} · (τ^{iso}_{m,g})^m (1 ∧ δ_{n,m}/√c_3)^m ≥ e^{-m log(en) - m log( (τ^{iso}_{m,g})^{-1} ∨ 1 ) - m log( (√c_3/δ_{n,m}) ∨ 1 )}.
Here the first inequality in the last line follows from the definition of ḡ_m and τ^{iso}_{m,g}. The claim follows by verifying that (4.5) implies that the second and third terms in the exponent above are both bounded by (1/2η) · m log(en) [the third term does not contribute to the condition since √c_3 δ^{-1}_{n,m} ≤ √n, by noting that c_3 = 1 in the Gaussian regression setting and the definition of η].
Proof of Theorem 8. The theorem follows by Theorems 1 and 2, Proposition
1 coupled with Lemmas 17 and 18.
We now prove Lemma 7. We need the following result.
Lemma 19. Let f_0 := (f_0(x_1), ..., f_0(x_n)) ∈ R^n and f_{0,m} := (f_{0,m}(x_1), ..., f_{0,m}(x_n)) ∈ R^n, where f_{0,m} ∈ arg min_{g∈F_m} ℓ²_n(f_0, g). Suppose that ‖f_0‖_2 ≤ L, and that there exists some element f ∈ F_m such that f ≡ (f(x_1), ..., f(x_n)) satisfies ‖f‖_2 ≤ L. Then ‖f_{0,m}‖_2 ≤ 3L.
Proof of Lemma 19. It can be seen that f_{0,m} ∈ arg min_{γ∈P_{n,m}} L_{f_0}(γ) ≡ arg min_{γ∈P_{n,m}} ‖f_0 - γ‖_2, where P_{n,m} ≡ {(f(x_1), ..., f(x_n)) : f ∈ F_m}. For any γ ∈ P_{n,m} such that ‖γ‖_2 ≤ L, the loss function satisfies L_{f_0}(γ) ≤ 2L by the triangle inequality. If ‖f_{0,m}‖_2 > 3L, then L_{f_0}(f_{0,m}) = ‖f_0 - f_{0,m}‖_2 ≥ ‖f_{0,m}‖_2 - ‖f_0‖_2 > 3L - L = 2L, contradicting the definition of f_{0,m} as a minimizer of L_{f_0}(·) over P_{n,m}. This shows the claim.
Proof of Lemma 7. Let L := ( ∫_0^1 f² )^{1/2}. Note that ‖f_0‖²_2 ≤ 2n∫_0^1 f²(x) dx = 2nL². By Lemma 19, we see that ‖f_{0,m}‖_2 ≤ 3√(2n)L, which entails that ‖f_{0,m}‖_∞ ≤ 3√(2n)L. Now the conclusion follows from g(3√(2n)L + 1) ≥ (en)^{-1/2η}, while the left side is at least of the order n^{-α/2} as n → ∞.
B.2. Proof of Theorem 9. Checking the local entropy assumption B requires some additional work. The notion of pseudo-dimension will be useful in this regard. Following [39], Section 4, a subset V of R^d is said to have pseudo-dimension t, denoted pdim(V) = t, if for every x ∈ R^{t+1} and indices I = (i_1, ..., i_{t+1}) ∈ {1, ..., n}^{t+1} with i_α ≠ i_β for all α ≠ β, we can always find a sub-index set J ⊂ I such that no v ∈ V satisfies both v_i > x_i for all i ∈ J and v_i < x_i for all i ∈ I \ J.
Lemma 20. Let n ≥ 2. Suppose that pdim(P_{n,m}) ≤ D_m, where P_{n,m} := { (f(x_1), ..., f(x_n)) ∈ R^n : f ∈ F_m }. Then for all g ∈ F_m,
log N( c_5ε, {f ∈ F_m : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ≤ C · D_m log n
for some constant C > 0 depending on c_5.
To prove Lemma 20, we need the following result, cf. Theorem B.2 [25].
Lemma 21. Let V be a subset of R^n with sup_{v∈V}‖v‖_∞ ≤ B and pseudo-dimension at most t. Then, for every ε > 0, we have N(ε, V, ‖·‖_2) ≤ (4 + 2B√n/ε)^{κt} for some absolute constant κ ≥ 1.
Proof of Lemma 20. Note that the entropy in question can be bounded by
log N( c_5ε√n, {P_{n,m} - g} ∩ B_n(0, 2√n ε), ‖·‖_2 ).
Since translation does not change the pseudo-dimension of a set, P_{n,m} - g has the same pseudo-dimension as P_{n,m}, which is bounded from above by D_m by assumption. Further note that {P_{n,m} - g} ∩ B_n(0, 2√n ε) is uniformly bounded by 2√n ε, hence an application of Lemma 21 yields that the above display can be further bounded as follows:
log N( c_5ε, {f ∈ F_m : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ≤ κD_m log(4 + 4n/c_5) ≤ C · D_m log n
for some constant C > 0 depending on c_5, whenever n ≥ 2.
The pseudo-dimension of the class of piecewise affine functions Fm can
be well controlled, as the following lemma shows.
Lemma 22 (Lemma 4.9 in [26]). pdim(Pn,m ) ≤ 6md log 3m.
As an immediate result of Lemmas 20 and 22, we can take, for n ≥ 2, δ²_{n,m} := (C ∨ 1/η) · d · (log n/n) · m log 3m for some C ≥ 2/c_7 depending on c_5, c_7.
Lemma 23. Suppose that (4.8) holds and n ≥ d. Then (P2) in Assumption
C holds.
Proof of Lemma 23. We write f_{0,m} ≡ max_{1≤i≤m}(a_i · x + b_i) throughout the proof. We first claim that for any a_i^* ∈ B_d(a_i, δ_{n,m}/(2√(c_3d))) and b_i^* ∈ B_1(b_i, δ_{n,m}/(2√c_3)), letting g_m^*(x) := max_{1≤i≤m}(a_i^* · x + b_i^*), we have ℓ_∞(g_m^*, f_{0,m}) ≤ δ_{n,m}/√c_3. To see this, for any x ∈ X, there exists some index i_x ∈ {1, ..., m} such that g_m^*(x) = a_{i_x}^* · x + b_{i_x}^*. Hence
g_m^*(x) - f_{0,m}(x) ≤ (a_{i_x}^* - a_{i_x}) · x + (b_{i_x}^* - b_{i_x}) ≤ ‖a_{i_x}^* - a_{i_x}‖_2‖x‖_2 + |b_{i_x}^* - b_{i_x}| ≤ ( δ_{n,m}/(2√(c_3d)) )·√d + δ_{n,m}/(2√c_3) = δ_{n,m}/√c_3.
The reverse direction can be shown similarly, whence the claim follows by taking the supremum over x ∈ X. This entails that
(B.4)  Π_{n,m}( f ∈ F_m : ℓ²_n(f, f_{0,m}) ≤ δ²_{n,m}/c_3 )
≥ Π_G( ∩_{i=1}^m { (a_i^*, b_i^*) : a_i^* ∈ B_d(a_i, δ_{n,m}/(2√(c_3d))), b_i^* ∈ B_1(b_i, δ_{n,m}/(2√c_3)) } )
= Π_{i=1}^m Π_{g^{⊗d}}( B_d(a_i, δ_{n,m}/(2√(c_3d))) ) · Π_g( B_1(b_i, δ_{n,m}/(2√c_3)) )
≥ Π_{i=1}^m g(‖a_i‖_∞ + 1)^d · g(|b_i| + 1) · ( (δ_{n,m}/(4√(c_3d))) ∧ 1 )^d ( (δ_{n,m}/(4√c_3)) ∧ 1 ) v_d
≥ exp( -2m(d+1) log( τ^{-1}_{m,g} ∨ 1 ) - m(d+1) log( (4√(c_3d)/δ_{n,m}) ∨ 1 ) - (1/2)md log d ),
where v_d ≡ vol(B_d(0,1)) and we used the fact that v_d ≥ (1/√d)^d. Now by requiring that n ≥ d and
(B.5)  max( 2m(d+1) log( τ^{-1}_{m,g} ∨ 1 ), m(d+1) log( (4√(c_3d)/δ_{n,m}) ∨ 1 ) ) ≤ (d/2η) log n · m log 3m,
the claim follows by verifying that (4.8) implies (B.5) [since 4√(c_3d)δ^{-1}_{n,m} ≤ √n, the second term is bounded by md log n; the inequality follows by noting that η < 1/4].
Lemma 24. For n ≥ 2, (2.5) is satisfied for c = 1 and γ = 2.
Proof. For fixed n ≥ 2 and η > 0, write nδ²_{n,m} = c log n(m log 3m) throughout the proof, where c ≥ 2/c_7. Then for any α ≥ c_7/2 and h ≥ 1, since log(3m′) ≥ log(3hm) ≥ log(3m) for any m′ ≥ hm, we have
Σ_{m′≥hm} e^{-αnδ²_{n,m′}} = Σ_{m′≥hm} e^{-αcm′(log n·log 3m′)} ≤ Σ_{m′≥hm} e^{-αcm′ log n log 3m} ≤ e^{-αchm log n log 3m}/(1 - e^{-αc log n log 3m}) ≤ 2e^{-αhnδ²_{n,m}}.
For the second condition of (2.5), note that for γ = 2, in order to verify δ²_{n,hm} ≤ h²δ²_{n,m}, it suffices to have hm log(3hm) ≤ h²m log(3m), equivalently 3hm ≤ (3m)^h, and hence 3^{h-1} ≥ h for all h ≥ 1 suffices. This is valid, completing the proof.
Proof of Theorem 9. This is a direct consequence of Theorems 1 and 2, Lemmas 23 and 24, combined with Proposition 1.
B.3. Proof of Theorem 10.
Lemma 25. Let n ≥ 2. Then for any g ∈ F_{(s,m)},
log N( c_5ε, {f ∈ F_{(s,m)} : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ≤ 2 log(6/c_5)( s log(ep) + m log(en) ).
Proof. The proof borrows notation from the proof of Lemma 17. Further let S_s denote all subsets of {1, ..., p} with cardinality at most s. Then the entropy in question can be bounded by
log[ C(p, s) C(n, m-1) max_{S∈S_s, Q∈Q_m} N( c_5ε, {f ∈ F_{(s,m),(S,Q)} : ℓ_n(f, g) ≤ 2ε}, ℓ_n ) ] ≤ s log(ep) + m log(en) + max_{S∈S_s, Q∈Q_m} log N( c_5√n ε, {γ ∈ P_{n,(S,Q)} : ‖γ - g‖_2 ≤ 2√n ε}, ‖·‖_2 ),
where P_{n,(S,Q)} ≡ { (x_i^⊤β + u(z_i))_{i=1}^n ∈ R^n : supp(β) = S, u is constant on the partitions of Q } is contained in a linear subspace of dimension no more than s + m. Now similar arguments as in Lemma 17 show that the entropy term in the above display can be bounded by (s + m) log(6/c_5), proving the claim.
Hence we can take δ²_{n,(s,m)} ≡ ( (4 log(6/c_5)/c_7) ∨ (1/η) ) · ( s log(ep) + m log(en) )/n.
Lemma 26. (2.5) holds with c = 1 and γ = 1.
Proof. For the first condition of (2.5), note that for any h ≥ 1, with c′ ≡ (4 log(6/c_5)/c_7) ∨ (1/η) in the proof, and for any α ≥ c_7/2, αc′ ≥ 2 log(6/c_5) ≥ 2 since c_5 ≤ 1/4,
Σ_{(s′,m′)≥(hs,hm)} e^{-αnδ²_{n,(s′,m′)}} = Σ_{s′≥hs} e^{-αc′s′ log(ep)} Σ_{m′≥hm} e^{-αc′m′ log(en)} ≤ (1 - e^{-αc′})^{-2} e^{-αc′h(s log(ep) + m log(en))} ≤ 2e^{-αnhδ²_{n,(s,m)}}.
The second condition of (2.5) is easy to verify by our choices of c, γ.
Lemma 27. Suppose (4.11) holds. Then (P2) in Assumption C holds.
Proof. Using the notation in Lemma 25,
(B.6)  Π_{n,(s,m)}( {f ∈ F_{(s,m)} : ℓ²_n(f, f_{0,(s,m)}) ≤ δ²_{n,(s,m)}/c_3} ) ≥ C(p, s)^{-1} C(n, m-1)^{-1} Π_{g^{⊗s}⊗ḡ_m}( {f ∈ F_{(s,m),(S_0,Q_0)} : ℓ²_n(f, f_{0,(s,m)}) ≤ δ²_{n,(s,m)}/c_3} ),
where f_{0,m} ∈ F_{(s,m),(S_0,Q_0)}. Let c′ ≡ (4 log(6/c_5)/c_7) ∨ (1/η) throughout the proof, and δ²_{n,s} ≡ c′s log(ep)/n, δ²_{n,m} ≡ c′m log(en)/n. To bound the prior mass of the above display from below, it suffices to bound the product of the following two terms:
(B.7)  π_s ≡ Π_{g^{⊗s}}( {β ∈ B_0(s) : β_{S_0^c} = 0, ℓ²_n(h_β, h_{β_{0,s}}) ≤ δ²_{n,s}/2c_3} ),    π_m ≡ Π_{ḡ_m}( {u ∈ U_{m,Q_0} : ℓ²_n(u, u_{0,m}) ≤ δ²_{n,m}/2c_3} ).
The first term equals
(B.8)  Π_{g^{⊗s}}( β ∈ B_0(s) : β_{S_0^c} = 0, ‖Xβ - Xβ_{0,s}‖_2 ≤ √n δ_{n,s}/√(2c_3) ) ≥ Π_{g^{⊗s}}( β ∈ B_0(s) : β_{S_0^c} = 0, ‖β - β_{0,s}‖_2 ≤ (1/σ_Σ)·δ_{n,s}/√(2c_3) ).
Here the inequality follows by noting
‖Xβ - Xβ_{0,s}‖²_2 ≤ n(β - β_{0,s})^⊤Σ(β - β_{0,s}) ≤ nσ²_Σ‖β - β_{0,s}‖²_2,
where σ_Σ denotes the largest singular value of X^⊤X/n. Note that σ_Σ ≤ √p, since the trace of X^⊤X/n is p and the trace of a p.s.d. matrix dominates the largest eigenvalue. The set in the last line of (B.8) is supported on R^p_{S_0} and hence can be further bounded from below by τ^s_{s,g}( (1/σ_Σ)·δ_{n,s}/√(2c_3) ∧ 1 )^s v_s, where v_s = vol(B_s(0,1)). Hence
(B.9)  π_s ≥ (τ^s_{s,g} ∧ 1)( (δ_{n,s}/(σ_Σ√(2c_3))) ∧ 1 )^s v_s ≥ exp( -(s/2) log s - s log( τ^{-1}_{s,g} ∨ 1 ) - (s/2) log( (2c_3σ²_Σ/δ²_{n,s}) ∨ 1 ) ),
where in the last inequality we used that v_s ≥ (1/√s)^s. By repeating the arguments in the proof of Lemma 18, we have
(B.10)  π_m ≥ exp( -m log( τ^{-1}_{m,g} ∨ 1 ) - (m/2) log( (2c_3/δ²_{n,m}) ∨ 1 ) ).
Combining (B.6), (B.7), (B.9) and (B.10), we see that
(B.11)  Π_{n,(s,m)}( {f ∈ F_{(s,m)} : ℓ²_n(f, f_{0,(s,m)}) ≤ δ²_{n,(s,m)}/c_3} ) ≥ exp( -2s log(ep) - m log(en) - s log( τ^{-1}_{s,g} ∨ 1 ) - m log( τ^{-1}_{m,g} ∨ 1 ) ) × exp( -(s/2) log( (2c_3σ²_Σ/δ²_{n,s}) ∨ 1 ) - (m/2) log( (2c_3/δ²_{n,m}) ∨ 1 ) ).
In order that the right side of the above display be bounded from below by exp(-2nδ²_{n,(s,m)}), we only need to require that
min( e^{-s log(τ^{-1}_{s,g} ∨ 1)}, e^{-s log( (√(2c_3)σ_Σ/δ_{n,s}) ∨ 1 )} ) ≥ e^{-(1/2η)s log(ep)},    min( e^{-m log(τ^{-1}_{m,g} ∨ 1)}, e^{-m log( (√(2c_3)/δ_{n,m}) ∨ 1 )} ) ≥ e^{-(1/2η)m log(en)}.
The first terms in the above two conditions lead to (4.11). The other terms do not contribute, by noting that 2c_3/δ²_{n,m} ≤ 2c_3c_7n/(4 log(6/c_5)) ≤ (1/2)n ≤ en since c_3 = 1 (in the Gaussian regression model) and c_7 ∈ (0, 1), while 2c_3σ²_Σ/δ²_{n,s} ≤ σ²_Σ n ≤ pn ≤ p² and η < 1/4.
n,s
Σ
Proof of Theorem 10. The claim of the theorem follows by Theorems 1 and
2, Proposition 1 and Lemmas 25-27.
Appendix C. Proof of auxiliary lemmas in Section 5
Proof of Lemma 9. Let F_j := {f ∈ F : jε < d_n(f, f_0) ≤ 2jε} and let G_j ⊂ F_j be a collection of functions that form a minimal c_5jε-covering set of F_j under the metric d_n. Then by assumption |G_j| ≤ N(jε). Furthermore, for each g ∈ G_j, it follows by Lemma 1 that there exists some test ω_{n,j,g} such that
(C.1)  P^{(n)}_{f_0}ω_{n,j,g} + sup_{f∈F: d_n(f,g)≤c_5d_n(f_0,g)} P^{(n)}_f(1 - ω_{n,j,g}) ≤ c_6e^{-c_7nd_n²(f_0,g)}.
Recall that g ∈ G_j ⊂ F_j, so d_n(f_0, g) > jε. Hence the indexing set above contains {f ∈ F : d_n(f, g) ≤ c_5jε}. Now we see that
P^{(n)}_{f_0}ω_{n,j,g} ≤ c_6e^{-c_7nj²ε²},    sup_{f∈F: d_n(f,g)≤c_5jε} P^{(n)}_f(1 - ω_{n,j,g}) ≤ c_6e^{-c_7nj²ε²}.
Consider the global test φ_n := sup_{j≥1}max_{g∈G_j}ω_{n,j,g}; then
P^{(n)}_{f_0}φ_n ≤ P^{(n)}_{f_0}( Σ_{j≥1}Σ_{g∈G_j}ω_{n,j,g} ) ≤ c_6Σ_{j≥1}N(jε)e^{-c_7nj²ε²} ≤ c_6N(ε)Σ_{j≥1}e^{-c_7nj²ε²} ≤ c_6N(ε)e^{-c_7nε²}(1 - e^{-c_7nε²})^{-1}.
On the other hand, for any f ∈ F such that d_n(f, f_0) ≥ ε, there exists some j^* ≥ 1 and some g_{j^*} ∈ G_{j^*} such that d_n(f, g_{j^*}) ≤ j^*c_5ε. Hence
P^{(n)}_f(1 - φ_n) ≤ P^{(n)}_f(1 - ω_{n,j^*,g_{j^*}}) ≤ c_6e^{-c_7n(j^*)²ε²} ≤ c_6e^{-c_7nε²}.
The right-hand side of the above display does not depend on the individual f ∈ F with d_n(f, f_0) ≥ ε, and hence the claim follows.
Proof of Lemma 10. By Jensen's inequality, the left side of (5.6) is bounded by
P^{(n)}_{f_0}( ∫ ( log(p^{(n)}_{f_0}/p^{(n)}_f) - P^{(n)}_{f_0}log(p^{(n)}_{f_0}/p^{(n)}_f) ) dΠ(f) ≥ (C + c_3)nε² - c_3n∫d_n²(f_0, f)dΠ(f) ) ≤ P^{(n)}_{f_0}( ∫ ( log(p^{(n)}_{f_0}/p^{(n)}_f) - P^{(n)}_{f_0}log(p^{(n)}_{f_0}/p^{(n)}_f) ) dΠ(f) ≥ Cnε² ) ≤ exp(-Cλnε²) · P^{(n)}_{f_0}exp( λ∫( log(p^{(n)}_{f_0}/p^{(n)}_f) - P^{(n)}_{f_0}log(p^{(n)}_{f_0}/p^{(n)}_f) )dΠ(f) ).
Using Jensen's inequality again, the last term in the right side of the above display can be further bounded by
P^{(n)}_{f_0}exp( λ∫( log(p^{(n)}_{f_0}/p^{(n)}_f) - P^{(n)}_{f_0}log(p^{(n)}_{f_0}/p^{(n)}_f) )dΠ(f) ) ≤ c_1∫ e^{ψ_{κ_gnd_n²(f_0,f),κ_Γ}(λ)}dΠ(f),
where the last inequality follows from Fubini's theorem and Assumption A. Now the condition on the prior Π entails that
P^{(n)}_{f_0}( ∫ (p^{(n)}_f/p^{(n)}_{f_0}) dΠ(f) ≤ e^{-(C+c_3)nε²} ) ≤ c_1exp( -Cλnε² + ψ_{κ_gnε²,κ_Γ}(λ) ).
The claim follows by choosing λ > 0 small enough depending on C, κ.
Proof of Proposition 3. We may assume without loss of generality that d_n(f_0, f_{0,m}) < ∞, so that m̃ is well-defined since |M| = ∞ and (2.5) holds. By definition we have δ_{n,m̃} ≥ d_n(f_0, f_{0,m}) and δ_{n,m̃-1} < d_n(f_0, f_{0,m}). In this case, the global test can be constructed via φ̃_n := sup_{m′∈I, m′≥jhm̃}φ_{n,m′}. Then, analogously to (5.9) and (5.10), for any random variable U ∈ [0, 1],
(C.2)  P^{(n)}_{f_{0,m}}(U·φ̃_n) ≤ 4c_6e^{-(c_7/2c²)njhδ²_{n,m̃}},    sup_{f∈F_{jhm̃}: d_n²(f,f_{0,m})≥c²(jh)^γδ²_{n,m̃}} P^{(n)}_f(1 - φ̃_n) ≤ 2c_6e^{-(c_7/c²)njhδ²_{n,m̃}}.
Similar to (5.11), there exists an event Ẽ_n with P^{(n)}_{f_{0,m}}(Ẽ_n^c) ≤ c_1e^{-C′c_7njhδ²_{n,m̃}/(8c_3c²)}, and the following is true on the event Ẽ_n:
(C.3)  ∫ Π_{i=1}^n (p_f/p_{f_{0,m}}) dΠ(f) ≥ λ_{n,m}e^{-c_7njhδ²_{n,m̃}/(4c²)}Π_{n,m}( f ∈ F_m : d_n²(f, f_{0,m}) ≤ c_7jhδ²_{n,m̃}/(8c_3c²) ).
Repeating the reasoning in (5.12), (5.13) and (5.14) we see that
(C.4)  P^{(n)}_{f_{0,m}}[ Π_n( f ∈ F : d_n²(f, f_{0,m}) > c⁴(2jh)^γd_n²(f_0, f_{0,m}) | X^{(n)} )(1 - φ̃_n)1_{Ẽ_n} ]
≤ ( e^{c_7njhδ²_{n,m̃}/4c²} / ( λ_{n,m}Π_{n,m}( f ∈ F_m : d_n²(f, f_{0,m}) ≤ c_7jhδ²_{n,m̃}/8c_3c² ) ) ) × ∫_{{f∈F: d_n²(f,f_{0,m})>c⁴(2jh)^γd_n²(f_0,f_{0,m})}} P^{(n)}_f(1 - φ̃_n) dΠ(f)
≤ (···) × ( sup_{f∈F_{jhm̃}: d_n²(f,f_{0,m})≥c²(jh)^γδ²_{n,m̃}} P^{(n)}_f(1 - φ̃_n) + Π(F \ F_{jhm̃}) ) ≤ Ce^{-(c_7/4c²)njhδ²_{n,m̃}}.
The second inequality is valid since c⁴(2jh)^γd_n²(f_0, f_{0,m}) > c⁴(2jh)^γδ²_{n,m̃-1} ≥ c²(jh)^γδ²_{n,m̃} by the right side of (2.5), which entails δ²_{n,m̃} ≤ c²2^γδ²_{n,m̃-1}. The last inequality uses (C.2) and assumption (P1), together with the fact that δ_{n,m̃} ≥ δ_{n,m}. (5.2) now follows from (C.2), the probability estimate for E_n^c and (C.4).
Acknowledgements
The author is indebted to Chao Gao for his numerous suggestions that led to a substantially improved version of the paper. He thanks Johannes
Schmidt-Hieber for very helpful comments on an earlier version of the paper.
The author would also like to thank Jon Wellner for constant support and
continuous encouragement as this work developed.
References
[1] R. a. Adamczak. A tail inequality for suprema of unbounded empirical processes with
applications to Markov chains. Electron. J. Probab., 13:no. 34, 1000–1034, 2008.
[2] P. Alquier, V. Cottet, N. Chopin, and J. Rousseau. Bayesian matrix completion: prior
specification. arXiv preprint arXiv:1406.1440, 2014.
[3] S. Banerjee and S. Ghosal. Posterior convergence rates for estimating large precision
matrices using graphical models. Electron. J. Stat., 8(2):2111–2137, 2014.
[4] A. Barron, L. Birgé, and P. Massart. Risk bounds for model selection via penalization.
Probab. Theory Related Fields, 113(3):301–413, 1999.
[5] P. C. Bellec. Sharp oracle inequalities for least squares estimators in shape restricted
regression. arXiv preprint arXiv:1510.08029, 2015.
[6] L. Birgé. Approximation dans les espaces métriques et théorie de l’estimation. Z.
Wahrsch. Verw. Gebiete, 65(2):181–237, 1983.
[7] L. Birgé. Robust testing for independent nonidentically distributed variables and
Markov chains. In Specifying statistical models (Louvain-la-Neuve, 1981), volume 16
of Lect. Notes Stat., pages 134–162. Springer, New York, 1983.
[8] L. Birgé. Model selection via testing: an alternative to (penalized) maximum likelihood estimators. Ann. Inst. H. Poincaré Probab. Statist., 42(3):273–325, 2006.
BAYES MODEL SELECTION
41
[9] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities. Oxford University Press, Oxford, 2013. A nonasymptotic theory of independence, With a foreword
by Michel Ledoux.
[10] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data. Springer Series
in Statistics. Springer, Heidelberg, 2011. Methods, theory and applications.
[11] F. Bunea, A. B. Tsybakov, and M. H. Wegkamp. Aggregation for Gaussian regression.
Ann. Statist., 35(4):1674–1697, 2007.
[12] E. J. Candès and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from
a minimal number of noisy random measurements. IEEE Trans. Inform. Theory,
57(4):2342–2359, 2011.
[13] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform.
Theory, 51(12):4203–4215, 2005.
[14] I. Castillo. On Bayesian supremum norm contraction rates. Ann. Statist., 42(5):2058–
2091, 2014.
[15] I. Castillo, J. Schmidt-Hieber, and A. van der Vaart. Bayesian linear regression with
sparse priors. Ann. Statist., 43(5):1986–2018, 2015.
[16] I. Castillo and A. van der Vaart. Needles and straw in a haystack: posterior concentration for possibly sparse sequences. Ann. Statist., 40(4):2069–2101, 2012.
[17] S. Chatterjee, A. Guntuboyina, and B. Sen. On risk bounds in isotonic and other
shape restricted regression problems. Ann. Statist., 43(4):1774–1800, 2015.
[18] C. Gao, A. W. van der Vaart, and H. H. Zhou. A general framework for bayes structured linear models. arXiv preprint arXiv:1506.02174, 2015.
[19] C. Gao and H. H. Zhou. Rate-optimal posterior contraction for sparse PCA. Ann.
Statist., 43(2):785–818, 2015.
[20] C. Gao and H. H. Zhou. Rate exact Bayesian adaptation with modified block priors.
Ann. Statist., 44(1):318–345, 2016.
[21] S. Ghosal, J. K. Ghosh, and A. W. van der Vaart. Convergence rates of posterior
distributions. Ann. Statist., 28(2):500–531, 2000.
[22] S. Ghosal, J. Lember, and A. van der Vaart. Nonparametric Bayesian model selection
and averaging. Electron. J. Stat., 2:63–89, 2008.
[23] S. Ghosal and A. van der Vaart. Convergence rates of posterior distributions for
non-i.i.d. observations. Ann. Statist., 35(1):192–223, 2007.
[24] S. Ghosal and A. van der Vaart. Posterior convergence rates of Dirichlet mixtures at
smooth densities. Ann. Statist., 35(2):697–723, 2007.
[25] A. Guntuboyina. Optimal rates of convergence for convex set estimation from support
functions. Ann. Statist., 40(1):385–411, 2012.
[26] Q. Han and J. A. Wellner. Multivariate convex regression: global risk bounds and
adaptation. arXiv preprint arXiv:1601.06844, 2016.
[27] L. A. Hannah and D. B. Dunson. Bayesian nonparametric multivariate convex regression. arXiv preprint arXiv:1109.0322, 2011.
[28] M. Hoffmann, J. Rousseau, and J. Schmidt-Hieber. On adaptive posterior concentration rates. Ann. Statist., 43(5):2259–2295, 2015.
[29] C. Holmes and N. Heard. Generalized monotonic regression using random change
points. Statistics in Medicine, 22(4):623–638, 2003.
[30] B. J. K. Kleijn and A. W. van der Vaart. Misspecification in infinite-dimensional
Bayesian statistics. Ann. Statist., 34(2):837–877, 2006.
[31] L. Le Cam. Convergence of estimates under dimensionality restrictions. Ann. Statist.,
1:38–53, 1973.
[32] L. Le Cam. On local and global properties in the theory of asymptotic normality of
experiments. pages 13–54, 1975.
[33] L. Le Cam. Asymptotic methods in statistical decision theory. Springer Series in Statistics. Springer-Verlag, New York, 1986.
42
Q. HAN
[34] T. T. Mai and P. Alquier. A Bayesian approach for noisy matrix completion: optimal
rate under general sampling distribution. Electron. J. Stat., 9(1):823–841, 2015.
[35] E. Mariucci, K. Ray, and B. Szabo. A bayesian nonparametric approach to log-concave
density estimation. arXiv preprint arXiv:1703.09531, 2017.
[36] P. Massart. Concentration inequalities and model selection, volume 1896 of Lecture
Notes in Mathematics. Springer, Berlin, 2007. Lectures from the 33rd Summer School
on Probability Theory held in Saint-Flour, July 6–23, 2003, With a foreword by Jean
Picard.
[37] F. Merlevède, M. Peligrad, and E. Rio. Bernstein inequality and moderate deviations under strong mixing conditions. In High dimensional probability V: the Luminy
volume, volume 5 of Inst. Math. Stat. (IMS) Collect., pages 273–292. Inst. Math.
Statist., Beachwood, OH, 2009.
[38] D. Pati, A. Bhattacharya, N. S. Pillai, and D. Dunson. Posterior contraction in sparse
Bayesian factor models for massive covariance matrices. Ann. Statist., 42(3):1102–
1130, 2014.
[39] D. Pollard. Empirical processes: theory and applications. NSF-CBMS Regional Conference Series in Probability and Statistics, 2. Institute of Mathematical Statistics,
Hayward, CA; American Statistical Association, Alexandria, VA, 1990.
[40] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear
matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
[41] A. Rohde and A. B. Tsybakov. Estimation of high-dimensional low-rank matrices.
Ann. Statist., 39(2):887–930, 2011.
[42] J. Rousseau. Rates of convergence for the posterior distributions of mixtures of betas
and adaptive nonparametric estimation of the density. Ann. Statist., 38(1):146–180,
2010.
[43] X. Shen and L. Wasserman. Rates of convergence of posterior distributions. Ann.
Statist., 29(3):687–714, 2001.
[44] A. B. Tsybakov. Aggregation and minimax optimality in high-dimensional estimation.
In Proceedings of the International Congress of Mathematicians, pages 225–246, 2014.
[45] A. W. van der Vaart and J. H. van Zanten. Rates of contraction of posterior distributions based on Gaussian process priors. Ann. Statist., 36(3):1435–1463, 2008.
[46] A. W. van der Vaart and J. H. van Zanten. Adaptive Bayesian estimation using a
Gaussian random field with inverse gamma bandwidth. Ann. Statist., 37(5B):2655–
2675, 2009.
[47] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes.
Springer Series in Statistics. Springer-Verlag, New York, 1996.
[48] Y. Yang and D. Pati. Bayesian model selection consistency and oracle inequality with
intractable marginal likelihood. arXiv preprint arXiv:1701.00311, 2017.
[49] W. W. Yoo and S. Ghosal. Supremum norm posterior contraction and credible sets
for nonparametric multivariate regression. Ann. Statist., 44(3):1069–1102, 2016.
[50] Z. Yu, M. Levine, and G. Cheng. Minimax optimal estimation in high dimensional
semiparametric models. arXiv preprint arXiv:1612.05906, 2016.
[51] M. Yuan and D.-X. Zhou. Minimax optimal rates of estimation in high dimensional
additive models: Universal phase transition. arXiv preprint arXiv:1503.02817, 2015.
(Q. Han) Department of Statistics, Box 354322, University of Washington,
Seattle, WA 98195-4322, USA.
E-mail address: [email protected]
Ref: International Conference on Artificial Neural Networks (ICANN), Springer LNCS,
Vol. 9887, pp. 170–178, Barcelona, Spain, September 2016.
DNN-Buddies: A Deep Neural Network-Based
Estimation Metric for the Jigsaw Puzzle Problem
Dror Sholomon¹, Eli (Omid) David¹, and Nathan S. Netanyahu¹,²
¹ Department of Computer Science, Bar-Ilan University, Ramat-Gan 52900, Israel
[email protected], [email protected], [email protected]
² Center for Automation Research, University of Maryland, College Park, MD 20742
[email protected]
arXiv:1711.08762v1 [cs.CV] 23 Nov 2017
Abstract. This paper introduces the first deep neural network-based estimation metric for the jigsaw puzzle problem. Given two puzzle piece edges,
the neural network predicts whether or not they should be adjacent in the
correct assembly of the puzzle, using nothing but the pixels of each piece.
The proposed metric exhibits an extremely high precision even though no
manual feature extraction is performed. When incorporated into an existing puzzle solver, the solution’s accuracy increases significantly, achieving
thereby a new state-of-the-art standard.
Fig. 1: Jigsaw puzzle before (a) and after (b) reassembly using our DNN-Buddies scheme in an enhanced solver.
1 Introduction
Jigsaw puzzles are a popular form of entertainment, available in different variations of difficulty to challenge children, adults, and even professional players. Given n × m
different non-overlapping tiles of an image, the objective is to reconstruct the original image, taking advantage of both the shape and chromatic information of each
piece. Despite the popularity and vast distribution of jigsaw puzzles, their assembly
is not trivial computationally, as this problem was proven to be NP-hard [1] [8].
Nevertheless, a computational jigsaw solver may be useful in many real-world applications, such as biology [16], chemistry [25], literature [18], speech descrambling [27], archeology [2] [15], image editing [5], and the recovery of shredded documents or photographs [3,7,14,17]. Regardless, as noted in [11], research of the
topic may be justified solely due to its intriguing nature.
Recent years have witnessed a vast improvement in the research and development of automatic jigsaw puzzle solvers, manifested in puzzle size, solution accuracy, and the amount of manual human intervention required. In its most basic form, every
puzzle solver requires some function to evaluate the compatibility of adjacent pieces
and a strategy for placing the pieces as accurately as possible. Most strategies are
greedy and rely heavily on some “trick” to estimate whether two pieces are truly
adjacent (e.g. two pieces that are each the most compatible piece from all pieces to
one another, four pieces that form a loop where each pair’s compatibility is above a
threshold, etc). Such heuristics were dubbed an “estimation metric” in [20], as they
allow estimating the adjacency correctness of two pieces without knowing the correct
solution. The majority of recent works focused on devising elaborate, hand-crafted
compatibility functions and high-precision estimation metrics.
Despite the proven effectiveness of neural networks in the field of computer vision,
no attempt has been made to automatically devise a high-precision estimation metric
for the jigsaw puzzle problem. This might be due to the highly imbalanced nature
of the puzzle problem, as in each n × m puzzle, there are O(n × m) matching
piece-pairs and O(n² × m²) possible mismatching ones. In this paper we propose a
novel estimation metric relying on neural networks. The proposed metric achieves
extremely high precision despite the lack of any manually extracted features.
The proposed metric proves to be highly effective in real-world scenarios. We
incorporated the metric in our GA-based solver, using no hand-crafted sophisticated compatibility measure and experimented with the currently known challenging
benchmarks of the hardest variant of the jigsaw puzzle problem: non-overlapping,
(28 × 28) square pieces (i.e. only chromatic information is available to the solver)
where both piece orientation and puzzle dimensions are unknown. The enhanced
solver proposed sets a new state-of-the-art in terms of the accuracy of the solutions
obtained and the number of perfectly reconstructed puzzles.
2 Previous Work
Jigsaw puzzles were first introduced around 1760 by John Spilsbury, a Londonian
engraver and mapmaker. Nevertheless, the first attempt by the scientific community to computationally solve the problem is attributed to Freeman and Garder [9]
who in 1964 presented a solver which could handle up to nine-piece problems. Ever
since then, the research focus regarding the problem has shifted from shape-based
to merely color-based solvers of square-tile puzzles. In 2010 Cho et al. [4] presented
a probabilistic puzzle solver that could handle up to 432 pieces, given some a priori
knowledge of the puzzle. Their results were improved a year later by Yang et al. [26]
who presented a particle filter-based solver. Furthermore, Pomeranz et al. [20] introduced that year, for the first time, a fully automated square jigsaw puzzle solver
that could handle puzzles of up to 3,000 pieces. Gallagher [10] has further advanced
this by considering a more general variant of the problem, where neither piece orientation nor puzzle dimensions are known. Son et al. [24] improved the accuracy
of the latter variant using so-called “loop-constraints”. Palkin and Tal [19] further
improved the accuracy and handled puzzles with missing pieces. Sholomon et al. [21]
presented a genetic algorithm (GA)-based solver for puzzles of known orientation
which was later generalized to other variants [22,23].
2.1 Compatibility Measures and Estimation Metrics
As stated earlier, most works focus on the compatibility measure and an estimation
metric. A compatibility measure is a function that given two puzzle piece edges (e.g.
the right edge of piece 7 versus the upper edge of piece 12) predicts the likelihood
that these two edges are indeed placed as neighbors in the correct solution. This
measure applies to each possible pair of piece edges. The estimation metric, on the other hand, predicts whether two piece edges are adjacent, but may not apply to
many possible pairs. Following is a more detailed review of the efforts made so far
in the field.
Cho et al. [4] surveyed four compatibility measures among which they found dissimilarity the most accurate. Dissimilarity is the sum (over all neighboring pixels)
of squared color differences (over all color bands). Assuming pieces xi , xj are represented in some three-dimensional color space (like RGB or YUV) by a K × K × 3
matrix, where K is the height/width of a piece (in pixels), their dissimilarity, where
xj is to the right of xi , for example, is
(1)  D(x_i, x_j, r) = sqrt( Σ_{k=1}^{K} Σ_{cb=1}^{3} ( x_i(k, K, cb) − x_j(k, 1, cb) )² ),
where cb denotes the color band.
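For concreteness, a minimal Python sketch of this dissimilarity for the left-right relation is given below. It is illustrative only; the array layout (K × K × 3 pieces) follows Eq. (1), while the function name and the random example pieces are assumptions made here.

```python
import numpy as np

def dissimilarity_lr(xi: np.ndarray, xj: np.ndarray) -> float:
    """Eq. (1): dissimilarity when piece xj is placed to the right of piece xi.

    Both pieces are K x K x 3 arrays in some color space (e.g. YUV);
    only the abutting pixel columns are compared."""
    right_col = xi[:, -1, :].astype(np.float64)   # rightmost column of xi
    left_col = xj[:, 0, :].astype(np.float64)     # leftmost column of xj
    return float(np.sqrt(np.sum((right_col - left_col) ** 2)))

# Example with two random 28 x 28 pieces:
rng = np.random.default_rng(0)
piece_a, piece_b = rng.random((28, 28, 3)), rng.random((28, 28, 3))
print(dissimilarity_lr(piece_a, piece_b))
```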
Pomeranz et al. [20] also used the dissimilarity measure but found empirically that using the (L_p)^q norm works better than the usual L_2 norm. Moreover, they presented the high-precision best-buddy metric. Pieces x_i and x_j are said to be best-buddies if
(2)  ∀x_k ∈ Pieces, C(x_i, x_j, R_1) ≥ C(x_i, x_k, R_1)   and   ∀x_p ∈ Pieces, C(x_j, x_i, R_2) ≥ C(x_j, x_p, R_2),
where Pieces is the set of all given image pieces and R_1 and R_2 are “complementary”
spatial relations (e.g. if R1 = right, then R2 = left and vice versa).
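The best-buddy test can be phrased compactly in code. The sketch below is an illustration only; the compatibility function C, the list of pieces, and the relation encoding are hypothetical placeholders for whatever a concrete solver uses.

```python
def are_best_buddies(xi, xj, r1, r2, pieces, C) -> bool:
    """Eq. (2): xi and xj are best-buddies under complementary relations (r1, r2)
    if each is the other's most compatible piece according to C."""
    best_for_xi = all(C(xi, xj, r1) >= C(xi, xk, r1) for xk in pieces)
    best_for_xj = all(C(xj, xi, r2) >= C(xj, xp, r2) for xp in pieces)
    return best_for_xi and best_for_xj
```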
Gallagher [10] proposed yet another compatibility measure, the Mahalanobis gradient compatibility (MGC), as preferable to those used by Pomeranz et al. [20]. The MGC penalizes changes in intensity gradients, rather than changes in intensity, and learns the covariance of the color channels using the Mahalanobis distance. Also, Gallagher suggested using dissimilarity ratios. Absolute distances between potential piece-edge matches are sometimes not indicative (for example, in smooth surfaces like sea and sky), so considering the absolute score divided by the second-best available score seems more indicative.
Son et al. [24] suggested “loop-constraints”, four or more puzzle piece edges where
the compatibility ratio between each pair is in the top ten among all possible pairs of
piece edges in the given puzzle. Palkin and Tal [19] proposed a greedy solver based
on an L1 -norm asymmetric dissimilarity and the best-buddies estimation metric.
3 DNN-Buddies
3.1 Motivation
We propose a novel estimation metric called “DNN-Buddies”. Our goal is to obtain
a classifier which predicts the adjacency likelihood of two puzzle piece edges in the
correct puzzle configuration.
Note that despite the exponential nature of the problem (as there are O((nm)!)
possible arrangements of the pieces, taking into account rotations), the problem can
be solved theoretically by assigning correctly, in a consecutive manner, n × m − 1
piece-edge pairs. (This is reminiscent of finding a minimal spanning tree, as noted
by [10].) Hence, the classifier’s precision is of far greater importance than its recall.
A classifier with perfect precision and a recall of
(3)  (n × m − 1) / (all possible matches) = (n × m − 1) / ( 4 × (n × (m − 1) + (n − 1) × m) ) < 1/8
might achieve a perfect solution by itself.
3.2 Challenges
A straight-forward solution might have been to train a neural network against
matching-pairs vs. non-matching ones. However, the issue of a jigsaw puzzle piece
matching is of an imbalanced nature. In each n × m puzzle, there are O(n × m)
matching pairs of piece edges and O(n² × m²) possible nonmatching ones. A thorough review on the challenges and tactics to avoid them can be found in [13].
The trivial approach of random or uninformed undersampling, i.e. randomly choosing the required number of nonmatching pairs, leads to a low-precision and high-recall metric, the very opposite of the goal set beforehand. We believe that the reason
for this shortcoming is that there exist many “easy-to-spot” mismatches but only a
handful of “hard-to-spot” ones. Thus, we resort to informed undersampling, choosing
a subset of “good” mismatching pairs according to some criterion. Nevertheless, we
avoid using any manual feature selection or other sophisticated image-related means.
In the jigsaw puzzle domain, similarly to many other problem domains, the solver does not actually try to reassemble the original image (as this problem is not mathematically defined), but rather tries to solve a “proxy problem”, which is to achieve an image whose global overall score between abutting edges is minimal. Thus, we choose the compatibility measure as the undersampling criterion.
3.3 Neural Network Training
For training and cross-validation, we use the 2,755 images of size 360 × 480 pixels
from the IAPR TC-12 Benchmark [12]. Each image is first converted to YUV space
followed by the normalization of each channel separately (via z-score normalization).
Next, each (puzzle) image is divided to 12 × 17 tiles, where each tile is of size 28 × 28
pixels (as in all previous works); finally, we create a balanced set of positive and
negative samples of puzzle-piece pairs, using informed undersampling as will be
described below. In the end, we obtain a balanced set of 970,224 pairs overall.
To balance our dataset, we use the most basic compatibility score which is the
dissimilarity between two piece-edges in the YUV color-space, as described in Eq. 1,
as an undersampling criterion. For each puzzle piece edge xi,j (i = 1..n×m, j = 1..4),
we find its most compatible piece edge xk1,l1 and its second most compatible piece
edge xk2,l2 . If the pair of edges xi,j −xk1,l1 is indeed adjacent in the original image, we
add this pair to the pool of positively-labeled samples and toss the pair xi,j − xk2,l2
to the pool of negatively-labeled samples. Otherwise, xi,j − xk1,l1 is added to the
negatively-labeled samples and the other pair is discarded. The latter is done to avoid
training the network on adjacent pieces which happen to be vastly different due to a
significant change of the image scenery in the corresponding region. In other words,
we restrict our interest to highly compatible piece edges that are indeed adjacent.
Since this method leads to more negative samples than positive ones, we eventually
randomly throw some negative samples to balance out the set.
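The informed undersampling procedure described above can be summarized by the following sketch. It is an illustration only: `most_compatible(edge, k)` (returning the k-th most compatible edge under the dissimilarity of Eq. 1) and the ground-truth oracle `adjacent(e1, e2)` are hypothetical helper names, not functions defined in the paper.

```python
import random

def build_labeled_pairs(edges, most_compatible, adjacent, seed=0):
    """Informed undersampling: keep only 'hard' negatives (highly compatible
    but non-adjacent pairs), then rebalance by dropping surplus negatives."""
    positives, negatives = [], []
    for e in edges:
        best, second = most_compatible(e, 1), most_compatible(e, 2)
        if adjacent(e, best):
            positives.append((e, best, 1))      # adjacent and most compatible
            negatives.append((e, second, 0))    # hard negative: the runner-up match
        else:
            negatives.append((e, best, 0))      # hard negative: best match is wrong
    random.Random(seed).shuffle(negatives)
    return positives + negatives[: len(positives)]   # balanced set
```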
From each image pair we extract the two columns near the edge, i.e. the column
of abutting pixels in each edge and the one next to it. This results in an input of size
(28 × 4 × 3 =) 336 pixels. We use a feed-forward neural network (FFNN) of five fully
connected layers of size 336, 100, 100, 100, and 2. The output is a softmax layer
containing two neurons. We expect (0, 1) for matching pairs and (1, 0) otherwise.
The activation used in all layers is the rectified linear unit (ReLU) function, i.e.
f (x) = max(0, x). Figure 2 depicts the network’s structure.
We trained the network in a supervised manner using Stochastic Gradient Descent that minimizes the negative log likelihood of the error for 100 iterations. The
resulting network reaches 95.04% accuracy on the training set and 94.62% on a
held-out test set.
All dataset preparation and network training was performed using Torch7 [6].
4 Experimental Results
For each piece edge xi,j (i = 1..n × m, j = 1..4), if its most compatible piece edge xk,l
is classified positively using the DNN-Buddies network, we define xk,l to be xi,j ’s
DNN-buddy piece edge. Note that each piece edge can have only a single DNN-buddy;
Fig. 2: Architecture of our DNN-Buddies scheme.
also, some pieces might not have a DNN-buddy at all (if the most compatible piece
is not classified as one by the DNN-Buddies network).
First, we evaluate the precision of the proposed metric, i.e. how many DNN-buddies are indeed adjacent in the original image. Using the well-known dataset
presented by Cho et al. [4] of 20 432-piece puzzles, we obtained a precision of 94.83%.
Next, we incorporated the estimation metric (due to the proposed DNN-Buddies
scheme) into the GA-based solver proposed by us previously [23]. Unfortunately,
due to lack of space, no self-contained review of genetic algorithms and the proposed
method can be included in this paper. Nevertheless, the modification required with
respect to the existing GA framework is rather simple; if a DNN-buddy pair appears
in one of the parents, assign this pair in the child. Figure 3 describes the modified
crossover operator in the GA framework according to the above (see Step 2, which
includes the new DNN-buddy phase).
Until (n − 1) relative relations are assigned do
1. Try assigning all common relative relations in the parents.
2. Try assigning all DNN-buddy relative relations in the parents.
3. Try assigning all best-buddy relative relations in the parents.
4. Try assigning all existing most-compatible relative relations.
5. Try assigning random relative relations.
Fig. 3: Crossover overview
We ran the augmented solver on the 432-piece puzzle set and on the two additional datasets proposed by Pomeranz et al. [20] of 540- and 805-piece puzzles.
We evaluated our results according to the neighbor comparison which measures the
fraction of correct neighbors and the number of puzzles perfectly reconstructed for
each set.
Table 1 presents the accuracy results of the same solver with and without the
DNN-Buddies metric. For each dataset we achieve a considerable improvement in
the overall accuracy of the solution, as well as the number of perfectly reconstructed
puzzles. Moreover, our enhanced deep neural network-based scheme appears to outperform the current state-of-the-art results, as it yields accuracy levels of 95.65%,
96.37% and 95.86%, which surpass, respectively, the best results known of 95.4% [19],
94.08% and 94.12% [23].
# of Pieces    GA                         Our (GA + DNN-Buddies)
               Neighbor     Perfect       Neighbor     Perfect
432            94.88%       11            95.65%       12
540            94.08%       8             96.37%       11
805            94.12%       6             95.86%       8
Table 1: Comparison of our accuracy results with and without the new DNN-Buddies
estimation metric.
5 Conclusions
In this paper we presented the first neural network-based estimation metric for the
jigsaw puzzle problem. Unlike previous methods, no manual feature crafting was
employed. The novel method exhibits high precision and, when combined with a
real-world puzzle solver, significantly improves the solution’s accuracy, setting a
new state-of-the-art standard.
References
1. Altman, T.: Solving the jigsaw puzzle problem in linear time. Applied Artificial Intelligence an International Journal 3(4), 453–462 (1989)
2. Brown, B., Toler-Franklin, C., Nehab, D., Burns, M., Dobkin, D., Vlachopoulos, A.,
Doumas, C., Rusinkiewicz, S., Weyrich, T.: A system for high-volume acquisition and
matching of fresco fragments: Reassembling Theran wall paintings. ACM Transactions
on Graphics 27(3), 84 (2008)
3. Cao, S., Liu, H., Yan, S.: Automated assembly of shredded pieces from multiple photos.
In: IEEE International Conference on Multimedia and Expo. pp. 358–363 (2010)
4. Cho, T., Avidan, S., Freeman, W.: A probabilistic image jigsaw puzzle solver. In: IEEE
Conference on Computer Vision and Pattern Recognition. pp. 183–190 (2010)
5. Cho, T., Butman, M., Avidan, S., Freeman, W.: The patch transform and its applications to image editing. In: IEEE Conference on Computer Vision and Pattern
Recognition. pp. 1–8 (2008)
6. Collobert, R., Kavukcuoglu, K., Farabet, C.: Torch7: A matlab-like environment for
machine learning. In: BigLearn, NIPS Workshop. No. EPFL-CONF-192376 (2011)
7. Deever, A., Gallagher, A.: Semi-automatic assembly of real cross-cut shredded documents. In: ICIP. pp. 233–236 (2012)
8. Demaine, E., Demaine, M.: Jigsaw puzzles, edge matching, and polyomino packing:
Connections and complexity. Graphs and Combinatorics 23, 195–208 (2007)
9. Freeman, H., Garder, L.: Apictorial jigsaw puzzles: The computer solution of a problem
in pattern recognition. IEEE Transactions on Electronic Computers EC-13(2), 118–127
(1964)
10. Gallagher, A.: Jigsaw puzzles with pieces of unknown orientation. In: IEEE Conference
on Computer Vision and Pattern Recognition. pp. 382–389 (2012)
11. Goldberg, D., Malon, C., Bern, M.: A global approach to automatic solution of jigsaw
puzzles. Computational Geometry: Theory and Applications 28(2-3), 165–174 (2004)
12. Grubinger, M., Clough, P., Müller, H., Deselaers, T.: The IAPR TC-12 benchmark:
A new evaluation resource for visual information systems. In: International Workshop
OntoImage. vol. 5, p. 10 (2006)
13. He, H., Garcia, E.A.: Learning from imbalanced data. Knowledge and Data Engineering, IEEE Transactions on 21(9), 1263–1284 (2009)
14. Justino, E., Oliveira, L., Freitas, C.: Reconstructing shredded documents through feature matching. Forensic science international 160(2), 140–147 (2006)
15. Koller, D., Levoy, M.: Computer-aided reconstruction and new matches in the forma
urbis romae. Bullettino Della Commissione Archeologica Comunale di Roma pp. 103–
125 (2006)
16. Marande, W., Burger, G.: Mitochondrial DNA as a genomic jigsaw puzzle. Science
318(5849), 415–415 (2007)
17. Marques, M., Freitas, C.: Reconstructing strip-shredded documents using color as feature matching. In: ACM symposium on Applied Computing. pp. 893–894 (2009)
18. Morton, A.Q., Levison, M.: The computer in literary studies. In: IFIP Congress. pp.
1072–1081 (1968)
19. Paikin, G., Tal, A.: Solving multiple square jigsaw puzzles with missing pieces. In:
Computer Vision and Pattern Recognition, 2015 IEEE Conference on. pp. 4832–4839.
IEEE (2015)
20. Pomeranz, D., Shemesh, M., Ben-Shahar, O.: A fully automated greedy square jigsaw
puzzle solver. In: IEEE Conference on Computer Vision and Pattern Recognition. pp.
9–16 (2011)
21. Sholomon, D., David, O.E., Netanyahu, N.S.: A genetic algorithm-based solver for very
large jigsaw puzzles. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 1767–1774 (2013)
22. Sholomon, D., David, O.E., Netanyahu, N.S.: A generalized genetic algorithm-based
solver for very large jigsaw puzzles of complex types. In: AAAI Conference on Artificial
Intelligence. pp. 2839–2845 (2014)
23. Sholomon, D., David, O.E., Netanyahu, N.S.: Genetic algorithm-based solver for very
large multiple jigsaw puzzles of unknown dimensions and piece orientation. In: ACM
Conference on Genetic and Evolutionary Computation. pp. 1191–1198 (2014)
24. Son, K., Hays, J., Cooper, D.B.: Solving square jigsaw puzzles with loop constraints.
In: European Conference on Computer Vision 2014, pp. 32–46. Springer (2014)
25. Wang, C.S.E.: Determining molecular conformation from distance or density data.
Ph.D. Thesis, Massachusetts Institute of Technology (2000)
26. Yang, X., Adluru, N., Latecki, L.J.: Particle filter with state permutations for solving
image jigsaw puzzles. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 2873–2880 (2011)
27. Zhao, Y., Su, M., Chou, Z., Lee, J.: A puzzle solver and its application in speech
descrambling. In: WSEAS International Conference Computer Engineering and Applications. pp. 171–176 (2007)
| 1 |
arXiv:1709.10154v1 [cs.SY] 28 Sep 2017
Finite-Time Distributed Linear Equation Solver
for Minimum l1 Norm Solutions
Jingqiu Zhou, Wang Xuan, Shaoshuai Mou, and Brian D. O. Anderson
October 2, 2017
Abstract
This paper proposes distributed algorithms for multi-agent networks
to achieve a solution in finite time to a linear equation Ax = b where A
has full row rank, and with the minimum l1 -norm in the underdetermined
case (where A has more columns than rows). The underlying network is
assumed to be undirected and fixed, and an analytical proof is provided for
the proposed algorithm to drive all agents’ individual states to converge
to a common value, viz a solution of Ax = b, which is the minimum l1 norm solution in the underdetermined case. Numerical simulations are
also provided as validation of the proposed algorithms.
1 Introduction
A significant amount of effort in the control community has recently been
given to distributed algorithms for solving linear equations over multi-agent networks, in which each agent only knows part of the equation and controls a state
vector that can be looked at as an estimate of the solution of the overall linear equations [1–5]. Numerous extensions along this direction include achieving
solutions with the minimum Euclidean norm [6, 7], elimination of the initialization step [8], reduction of state vector dimension by utilizing the sparsity of the
linear equation [9] and achieving least square solutions [10–15]. All these algorithms yield asymptotic convergence, but require an infinite number of sensing
or communication events.
∗ This work was supported by funding from Northrop Grumman Corporation. J. Zhou,
X. Wang and S. Mou are with the School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN 47906 USA (e-mail: [email protected], [email protected],
[email protected]). B. D. O. Anderson is with Hangzhou Dianzi University, Hangzhou,
China, The Australian National University and Data-61 CSIRO (formerly NICTA), Canberra, ACT 2600 Australia, e-mail: [email protected]; his work is supported by
Data-61, CSIRO, and by the Australian Research Council’s Discovery Projects DP-130103610
and DP-160104500. Corresponding Author: Shaoshuai Mou.
Solutions to underdetermined linear equations with the minimum l1 norm are
perhaps the most important in many engineering applications including earthquake location detection [16], analysis of statistical data [17], solving biomagnetic inverse problems [18], and so on. One most intriguing case among these
applications is compressive sensing, which enables transmission of sparse data in
a very efficient way [19]. The decoding process of compressive sensing requires
solving linear equations with a minimum number of non-zero entries of the
solution vectors, which, however, is an NP-hard problem and usually computationally costly [20]. Thus researchers usually turn to achieving solutions with minimum l1 norm instead, for which the function to be minimized is convex [21, 22].
Most existing results for achieving minimum l1 norm solutions are based on
the idea of Lasso [23] including Alternating Direction Method of Multipliers
(ADMM) [24], the Primal-Dual Interior-Point Method [25, 26], Gradient Projection Methods [27], Homotopy Methods [28], Iterative Shrinkage-Thresholding
Methods [29] and Proximal Gradient Methods [30]. Interesting as these results
are, they either achieve limited accuracy dominated by a threshold parameter,
involve solving a much larger linear equation, or lead to high computational
complexity.
In this paper we aim to develop distributed algorithms for multi-agent networks to achieve in finite time a solution of linear equations or, in the underdetermined case, one with the minimum l1 norm. By distributed is meant that
each agent only knows part of the overall linear equation and can communicate
with only its nearby neighbors. The problem of interest is formulated in Section
2. We introduce in section 3 the concepts to be employed in the paper including
Filippov set-valued maps, Filippov solutions, generalized Lie derivatives, based
on which a preliminary result is achieved. In Section 4, we will first propose a
distributed algorithm to drive all agents’ state vectors to converge in finite time
to the same solution of the overall linear equations. Then we present a centralized update for achieving a solution with the minimum l1 norm. Motivated
by the projection-consensus flow proposed in [13] and the finite-time gradient
flow for consensus devised in [31–36], we utilize a combination of the proposed
distributed linear equation solver and the proposed centralized algorithm for
minimum l1 -norm solutions to develop a distributed linear equation solver for
achieving the minimum l1 norm solution, which is shown to converge in finite
time. We provide simulations in Section 5 and concluding remarks in Section 6.
Notation: Let r denote an arbitrary positive integer. Let 1r denote the vector
in Rr with all entries equal to 1s. Let Ir denote the r × r identity matrix. We
let col {A1 , A2 , · · · , Ar } be a stack of matrices Ai possessing the same number
of columns with the index in a top-down ascending order, i = 1, 2, · · · , r. Let
diag {A1 , A2 , · · · , Ar } denote a block diagonal matrix with Ai the ith diagonal
block entry, i = 1, 2, · · · , r. By M > 0 and M ≥ 0 are meant that the square
matrix M is positive definite and positive semi-definite, respectively. By M 0 is
meant the transpose of a matrix M . Let ker M and image M denote the kernel
and image of a matrix M , respectively. Let ⊗ denote the Kronecker product.
Let k · k1 denote the l1 norm of a vector in Rr .
2 Problem Formulation
Consider a network of m agents, i = 1, 2, ..., m; inside this network, each
agent can observe states of certain other agents called its neighbors. Let Ni
denote the set of agent i’s neighbors. We assume that the neighbor relation is
symmetric, that is, j ∈ Ni if and only if i ∈ Nj . Then all these neighbor relations
can be described by an m-node-m̄-edge undirected graph G such that there is an
undirected edge connecting i and j if and only if i and j are neighbors. In this
paper we only consider the case in which G is connected, fixed and undirected.
Suppose that each agent i knows Ai ∈ Rni ×n and bi ∈ Rni and controls a
state vector yi (t) ∈ Rn . Then all these Ai and bi can be stacked into an overall
equation Ax = b, where A = col {A1 , A2 , · · ·, Am }, b = col {b1 , b2 , · · ·, bm }.
Without loss of generality for the problems of interest to us, we assume A to have
full row-rank. Let x∗ denote a solution to Ax = b (and in the underdetermined
case, x∗ is not unique); and let x̄∗ denote its minimum l1 -norm solution, that
is,
x̄∗ = arg min_{Ax=b} ‖x‖₁   (1)
(In the non-singular A case, x∗ and x̄∗ necessarily coincide.) The problem of
interest in this paper is to develop distributed algorithms for each agent i to
update its state vector yi (t) by only using its neighbors’ states such that all yi (t)
converge in finite time to a common value x∗ and, if desired in the nonsquare
case, to the value x̄∗ .
3 Key Concepts and Preliminary Results
Before proceeding, we introduce some key concepts and preliminary results
for future derivation and analysis. The key references for the background we
summarize are [37] and [38].
3.1 Filippov Set-Valued Maps and Filippov Solutions
By a Filippov set-valued map F [f ] : Rr → B ⊂ Rr associated with a function
f : Rr → Rr is meant
F [f ](x) ≜ ⋂_{δ>0} ⋂_{µ(S)=0} co{f (B(x, δ) \ S)}   (2)
Here B(x, δ) stands for the open ball on Rr , whose center is at x and has a radius
of δ; µ(S) denotes the Lebesgue measure of S; and co stands for the convex
closure. Let sgn (x) : Rr → Rr be a function with the kth entry, k = 1, 2, ..., r,
defined as
(sgn (x))k = 1 if (x)k > 0;  −1 if (x)k < 0;  0 if (x)k = 0.   (3)
It follows that the Filippov set-valued map F [sgn ](x) for x ∈ Rr is defined
entrywise as:
(F [sgn ](x))k = 1 if (x)k > 0;  [−1, 1] if (x)k = 0;  −1 if (x)k < 0   (4)
for k = 1, 2, ..., r. Note that even if (x)i = (x)j = 0, the ith and jth entries
of a vector in F [sgn ](x) may not necessarily be equal since each of them could
be chosen as arbitrary values in the interval [−1, 1]. From the definition of
F [sgn ](x), one can verify that
q′x = ‖x‖₁ ,  ∀q ∈ F [sgn ](x)   (5)
While for any w ∈ Rr , there holds
q′w ≤ ‖w‖₁ .   (6)
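For the numerical sketches that appear later in this section and the next, a concrete single-valued selection from F [sgn ](x) is convenient. The helper below is an illustrative assumption (not part of the paper): it simply returns the ordinary sign vector, which always lies in F [sgn ](x).

```python
import numpy as np

def sgn_selection(x, tol=1e-9):
    """Return one element of the Filippov set-valued map F[sgn](x).

    Entries with |x_k| <= tol are treated as zero, where any value in [-1, 1]
    would be admissible; here we simply pick 0.
    """
    s = np.zeros_like(x, dtype=float)
    s[x > tol] = 1.0
    s[x < -tol] = -1.0
    return s
```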
By a Filippov solution for ẋ ∈ F [f ](x) is meant a Caratheodory solution
x(t) such that ẋ ∈ F [f ](x) for almost all t, x(t) is absolutely continuous and
can be written in the form of an indefinite integral. The following two lemmas
treat existence of such a Filippov solution.
Lemma 1 ( Proposition 3 in [37]) If f : Rr → Rr is measurable and locally
bounded, then for any initial point x0 ∈ Rr , there exists a Filippov solution1 for
ẋ ∈ F [f ](x).
Lemma 2 ( Theorem 8 in page 85 of [38] ) Let a vector-valued f (t, x) be defined
almost-everywhere in the domain G of time space (t, x). With f (t, x) measurable
and locally bounded almost-everywhere in an open domain G, let
F [f ](t, x) ≜ ⋂_{δ>0} ⋂_{µ(S)=0} co{f (t, B(x, δ) \ S)}.   (7)
Then for any point (t0 , x0 ) ⊂ G, there exists a Filippov solution of ẋ ∈ F [f ](t, x)
with x(t0 ) = x0 .
Note that Lemma 2 establishes the existence of a solution for time-varying
systems. This is more general than Lemma 1, which only guarantees the existence of solutions to time-invariant systems.
3.2 Generalized Gradients and Generalized Lie Derivatives
For a locally Lipschitz function w : Rr → R, the generalized gradient of w is
∂w(x) ≜ co{ lim_{i→∞} ∇w(xi ) : xi → x, xi ∉ S ∪ Ωw }   (8)
¹ There is no implication that the solution exists on an infinite interval.
where S ⊂ Rr is an arbitrarily chosen set of measure zero, Ωw denotes the set
of points at which w is not differentiable, and co denotes convex hull. Specially,
for the function ||x||1 , one computes the kth element of its generalized gradient
to be:
(∂‖x‖₁ )k = 1 if xk > 0;  [−1, 1] if xk = 0;  −1 if xk < 0   (9)
It follows from this and the definition of F [sgn ](x) in (4) that
F [sgn ](x) = ∂kxk1
(10)
For a set-valued map F : Rr → B(Rr ), the generalized Lie derivative of w is
defined as
L̃F w(x) = {q ∈ R : there exists α ∈ F(x) such that ∀β ∈ ∂w(x), q = β′α}   (11)
The above definition of generalized Lie derivative implies that for each α in
F(x), we check if the inner product β′α is a fixed value for all β ∈ ∂w(x). If so,
this inner product is an element in L̃F w(x), but note that the set L̃F w(x) may
be empty. Moreover, for locally Lipschitz and regular (see [39], p. 39 and [40], p. 3
for a detailed discussion of regular functions²) functions w(x), one has the following
lemma:
Lemma 3 (Proposition 10 in [37]) Let x : [0, t1 ] → Rr be a solution for
ẋ ∈ F(x(t)), where F is any set-valued map. Let w(x) : Rr → R be locally
Lipschitz and regular. Then w(x(t)) is differentiable at almost all t ∈ [0, t1 ];
the derivative of w(x(t)) satisfies dw(x(t))/dt ∈ L̃F w(x(t)) for almost all t ∈ [0, t1 ].
Lemma 3 guarantees the existence of generalized Lie derivatives for functions
that are locally Lipschitz and regular. If one focuses on a specific solution, one
can show that α in (11) is a special vector as summarized in the following lemma.
Lemma 4 (See Proof of Lemma 1 in [40]) Let x(t) denote a specific solution
of a differential inclusion. Suppose w(x) is locally Lipschitz and regular. Let
I ⊂ [0, ∞) denote the time interval for which ẋ(t) exists. Then
dw(x(t))/dt = β′ẋ(t)   (12)
where β is any vector in ∂w(x).
² A function w(x) : Rn → R is called regular at x ∈ Rn if
1. for all v ∈ Rn there exists the usual right directional derivative w′₊(x, v);
2. for all v ∈ Rn , w′₊(x, v) = w°(x, v).
3.3 Preliminary Results
For any positive semi-definite matrix M ∈ Rr×r , M ≠ 0, one can define
Φ(M ) = {q ∈ Rr | ∃φ ∈ F [sgn ](q), M φ = 0}   (13)
and its complement
Φc (M ) = {q ∈ Rr | ∀φ ∈ F [sgn ](q), M φ ≠ 0}.   (14)
We impose a further requirement on M , namely that Φc (M ) is nonempty which
can be easily ensured. Let
Λ(M ) = {φ|φ ∈ F [sgn ](q), q ∈ Φc (M )}
(15)
Now F [sgn ](q) is a closed set for any fixed q; also note that F [sgn ](q) can only
be one of a finite number of different sets; hence it is easy to check for a given
M whether Φc (M ) is nonempty (and in a later use of the result, it proves easy
to check). It further follows that Λ(M ) is also a closed set. Consequently, the
continuous function f (φ) = φ′M φ has a nonzero minimum on Λ(M ). We denote
λ(M ) = min_{φ∈Λ(M )} f (φ).   (16)
From the definition of Φc (M ) and Λ(M ), one has λ(M ) > 0. To summarize,
one has the following lemma:
Lemma 5 For any nonnegative-definite matrix M , let Φc (M ), Λ(M ) and
λ(M ) be defined as above. Suppose that Φc (M ) is nonempty. Then λ(M ) is a
positive constant.
For the m-node-m̄-edge graph G, we label all its nodes as 1, 2, ..., m and all
its edges as 1, 2, ..., m̄. Assign an arbitrary direction to each edge in G. Then
the incidence matrix of G denoted by H = [hik ]m×m̄ is defined as follows
hik = 1 if i is the head of the kth edge;  −1 if i is the tail of the kth edge;  0 otherwise.   (17)
Since G is connected, then ker H ′ is the span of 1m [41]. Moreover, one has the
following lemma:
Lemma 6 Suppose A has full-row rank and G is connected. Let P̄ = diag {P1 , P2 , ..., Pm }
where each Pi is the projection matrix to ker Ai . Let H̄ = H ⊗ In with H the
incidence matrix of G. Then one has
image H̄ ∩ ker P̄ = 0   (18)
and
image H̄ ′ ∩ Φ(H̄ ′P̄ H̄) = 0.   (19)
Proof of Lemma 6: Let u be a vector such that
P̄ H̄u = 0.
(20)
The vector v = H̄u lies in image H̄ and ker P̄ , and we will show it is zero to
establish (18). Define Ā = diag {A1 , A2 , ..., Am }, which is full row rank since A
has full row rank. Then P̄ = I − Ā′(ĀĀ′)⁻¹Ā. It follows from (20) that
H̄u = Ā′(ĀĀ′)⁻¹ĀH̄u   (21)
Multiplying both sides of the above equation by 1′m ⊗ In , one has
0 = (1m ⊗ In )′Ā′(ĀĀ′)⁻¹ĀH̄u   (22)
Since (1′m ⊗ In )Ā′ = A′, there holds
0 = A′(ĀĀ′)⁻¹ĀH̄u   (23)
Since A′ is full column rank, one has
0 = (ĀĀ′)⁻¹ĀH̄u   (24)
From (21) and equation (24) one has
H̄u = 0
(25)
Furthermore, we notice (20) and conclude that (18) is true.
For any q ∈ image H̄ ′ ∩ Φ(H̄ ′P̄ H̄), there exists a vector p such that
q = H̄ ′p
(26)
and a vector φ ∈ F [sgn ](q) such that H̄ ′P̄ H̄φ = 0. Note that P̄ is a projection
matrix; then
P̄ H̄φ = 0.
(27)
From (18) one then has
H̄φ = 0.
(28)
From φ ∈ F [sgn ](q) and (5), one has
‖q‖₁ = φ′q
which together with (26) and (28) implies ‖q‖₁ = 0. Then one has q = 0 and
(19) is true.
Now consider the system
ẋ ∈ −M F [sgn ](x)
(29)
with any positive semi-definite matrix M ∈ Rr×r . The existence of a Filippov
solution to (29) can be guaranteed by Lemma 1. The existence interval is
t ∈ [0, ∞) because of the global bound on the right-hand side to (29). Let x(t)
denote such a Filippov solution for any given x(0). Note that the function kxk1
is locally Lipschitz and regular (the notion of regularity was introduced earlier, with references).
Then by Lemma 3, the time derivative of kx(t)k1 exists for almost all t ∈ [0, ∞)
and is in the set of generalized Lie derivatives. In other words, there exists a
set I = [0, ∞) \ T with T of Lebesgue measure 0 such that
d‖x(t)‖₁/dt exists for all t ∈ I   (30)
Proposition 1 Let x(t) denote a Filippov solution to (29) for any given x(0) ∈
Rr . Then
• d‖x(t)‖₁/dt ≤ 0, t ∈ I;   (31)
• there exists a finite time T such that
x(T ) ∈ Φ(M );   (32)
• if it is further true that
d‖x(t)‖₁/dt = 0, t ∈ [T, ∞) \ T ,   (33)
one has
x(t) = x(T ), t ∈ [T, ∞)   (34)
Remark 1 Note that the right-hand side of (29) is a projection of the gradient
flow of the potential function kx(t)k1 . It is also standard that the gradient
law of a real analytic function with a lower bound converges to a single point
which is both a local minimum and a critical point of the potential function
[42]. However, if the real analytic property does not hold, the convergence result
may fail. Indeed, the function kx(t)k1 here is obviously not real analytic, and
one cannot immediately assert that (29) will drive kx(t)k1 to its minimum,
not to mention the finite time result in Proposition 1. Thus Proposition 1 is
nontrivial and will serve as the foundation for devising finite-time distributed
linear equation solvers in this paper.
Proof of Proposition 1: By (12) in Lemma 4, one has
d‖x(t)‖₁/dt = β′ẋ,  t ∈ I   (35)
holds for all β ∈ ∂kx(t)k1 . It follows from ẋ ∈ −M F [sgn ](x) in (29) that at
each t there exists a γ(t) ∈ F [sgn ](x) such that
ẋ = −M γ(t)   (36)
Since β could be chosen as any vector in ∂kx(t)k1 , γ(t) ∈ F [sgn ](x) and ∂kxk1 =
F [sgn ](x) from (10), one could choose β = γ(t) in (35), which together with
(36) leads to
d‖x(t)‖₁/dt = −γ(t)′M γ(t),  t ∈ I   (37)
Note that M is positive semi-definite. Thus (31) is true.
We use the method of contradiction to prove that there exists a finite time
T such that x(T ) ∈ Φ(M ), T ∈ I. Suppose such a finite time does not exist.
One has
x(t) ∈ Φc (M ), t ∈ I
Then
d‖x(t)‖₁/dt ≤ −λ(M ),  t ∈ I
where λ(M ) is as defined in (16). It follows that
kx(t)k1 ≤ kx(0)k1 − λ(M )t,
t ∈ [0, ∞).
This contradicts the fact that kx(t)k1 ≥ 0, t ∈ [0, ∞) since λ(M ) is a positive
constant by Lemma 5. Thus there exists a finite time T such that x(T ) ∈ Φ(M ).
From the assumption (33), the fact that M is semipositive definite and (37),
one has
M γ(t) = 0, t ∈ [T, ∞) \ T .
(38)
It follows from this and (36) that
ẋ(t) = 0,
t ∈ [T, ∞) \ T .
By integration of ẋ(t) from T to ∞, one has x(t) = x(T ), t ∈ [T, ∞). This
completes the proof.

4 Algorithms and Main Results
In this section, we study three related problems: finite-time distributed
solution of Ax = b, centralized solution of Ax = b achieving the minimum l1 norm, and finally, using ideas from the first two, a distributed
algorithm achieving the minimum l1 -norm solution of Ax = b in finite time.
4.1 Finite-Time Distributed Linear Equation Solver
In this subsection, we will present a distributed update for achieving a solution x∗ to Ax = b in finite time. Of course, A is assumed to have full row
rank, but not necessarily be square. Recall that distributed linear equation
solvers based on the agreement principle require each agent i to limit its update subject to its own constraint Ai x = bi while seeking consensus with its
neighbors [1]. Such an agreement principle in continuous-time systems can be
achieved by a projection-consensus flow which at each agent i projects a function of its neighbors’ states to the subspace defined by its linear equation [13].
By choosing the finite-time gradient flow for consensus developed by [36] within
the projection-consensus flow, we are led to postulate the following update for
each agent i:
ẏi = −Pi Σ_{j∈Ni} φij ,   Ai yi (0) = bi   (39)
where φij ∈ F [sgn ](yi − yj ) and Pi denotes the projection matrix onto the kernel
of Ai .
Note the special property that F [sgn ](0) is a point which can be chosen
arbitrarily from the interval [−1, 1]. Generally speaking, different agents may
have different choices for F [sgn ](0). Before proceeding, we make the following
assumption on coordinations between neighbor agents:
Assumption 1 Each pair of neighbor agents i and j takes the choice of φij
and φji when yi = yj such that
φij = −φji
(40)
Under Assumption 1 and the definition of F [sgn ](x), one always has φij = −φji
no matter whether yi is equal to yj or not.
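For intuition only, a forward-Euler simulation of the update (39) might look as follows; the step size, iteration count, initialization, and the reuse of sgn_selection from the earlier sketch are illustrative assumptions, not part of the paper. Since the selection is odd, φij = −φji holds automatically, consistent with Assumption 1.

```python
import numpy as np

def kernel_projector(Ai):
    """Projection matrix onto ker(Ai), assuming Ai has full row rank."""
    return np.eye(Ai.shape[1]) - Ai.T @ np.linalg.inv(Ai @ Ai.T) @ Ai

def distributed_solver(A_blocks, b_blocks, neighbors, dt=1e-3, steps=200000):
    """Euler discretization of update (39) on a connected undirected graph.

    A_blocks, b_blocks : lists of (Ai, bi) known by each agent
    neighbors          : neighbors[i] is the list of agent i's neighbors
    """
    P = [kernel_projector(Ai) for Ai in A_blocks]
    # Initialize each agent on its own constraint set: Ai yi(0) = bi.
    y = [Ai.T @ np.linalg.solve(Ai @ Ai.T, bi) for Ai, bi in zip(A_blocks, b_blocks)]
    for _ in range(steps):
        phi = [sum(sgn_selection(y[i] - y[j]) for j in neighbors[i])
               for i in range(len(y))]
        y = [y[i] - dt * P[i] @ phi[i] for i in range(len(y))]  # synchronous step
    return y
```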
Let y = col {y1 , y2 , · · · , ym }, P̄ = diag{P1 , P2 , · · · , Pm } and H̄ = H ⊗ In
with H the incidence matrix of G. Then from (39) we have
ẏ ∈ −P̄ H̄F [sgn ](H̄ 0 y)
(41)
By Lemma 1 there exists a Filippov solution to system (41) for t ∈ [0, ∞), we
denote this solution by y(t). By Lemma 3, there exists a set I = [0, ∞) \ T
with T of Lebesgue measure 0 such that d‖y(t)‖₁/dt exists for all t ∈ I. Moreover,
one has the following main theorem, which establishes existence of a limiting
consensus solution but for the moment leaves unspecified the L1 -optimality.
Theorem 1 Under Assumption 1 and the updates (39), and with A assumed
to have full row rank, all yi (t), i = 1, 2, ..., m, converge to be a single solution
to Ax = b in finite time.
Proof of Theorem 1. Since Ai yi (0) = bi and ẏi is in the kernel of Ai , one has
Ai yi (t) = bi for all t ∈ [0, ∞). Then to prove Theorem 1, it is sufficient to show
that all yi reach consensus in finite time. Note that G is connected and ker H 0
is spanned by the vector 1m . Then H̄ 0 y = 0 if and only if all yi are equal. Thus
to prove Theorem 1, it suffices to prove z(t) = H̄ 0 y(t) converges to 0 in finite
time. By multiplying both sides of (41) by H̄ ′, one has
ż ∈ −H̄ ′P̄ H̄F [sgn ](z)   (42)
By Proposition 1, we have
d‖z(t)‖₁/dt ≤ 0,  t ∈ I;   (43)
and there exists a finite time T ∈ I such that
z(T ) ∈ Φ(H̄ 0 P̄ H̄).
(44)
From this, we know the fact that z(T ) = H̄ 0 y(T ) so that z(T ) ∈ image H̄ 0 , and
recalling (19) in Lemma 6, one has
z(T ) = 0,
which by (43) implies
kz(t)k1 = 0,
t ∈ [T, ∞).
It follows that
z(t) = 0,
t ∈ [T, ∞).
This completes the proof.
4.2 Finite-Time Centralized Update for Minimum l1 -norm Solution
In this subsection, we will propose a centralized update for achieving the
minimum l1 -norm solution to Ax = b. By noting that kxk1 is convex, we
conceive of using a negative gradient flow of ‖x‖₁ subject to x remaining on the
manifold Ax = b in order to achieve x̄∗ = arg min_{Ax=b} ‖x‖₁ . This leads us to
the following update:
ẏ ∈ −P F [sgn ](y),
Ay(0) = b.
(45)
where P denotes the projection matrix onto the kernel of A. Again by Lemma
1 one has there exists a Filippov solution to system (45) for t ∈ [0, ∞), which
we denote by y(t). By Lemma 3, there exists a set I = [0, ∞) \ T with T of
measure 0 such that d‖y(t)‖₁/dt exists for all t ∈ I. Moreover, we have the following
main theorem:
Theorem 2 With A of full row rank, the Filippov solution y(t) to (45) converges in finite time to a constant, which is the minimum l1 -norm solution to
Ax = b.
Proof of Theorem 2: By Proposition 1, one has
d‖y(t)‖₁/dt ≤ 0,  t ∈ I;   (46)
and there exists a finite time T ∈ I such that
y(T ) ∈ Φ(P ).
Then there exists a vector φ ∈ F [sgn ](y(T )) such that P φ = 0. This and (5)
imply
φ′y(T ) = ‖y(T )‖₁   (47)
Moreover, let ȳ denote any solution to Ax = b. Recalling (6), there holds
φ′ȳ ≤ ‖ȳ‖₁   (48)
Since φ ∈ ker P , one has φ ∈ image A′. This implies that there exists a vector
q such that
q′A = φ′   (49)
From Ay(T ) = b = Aȳ and (47)-(49), one has
‖y(T )‖₁ = φ′y(T ) = q′Ay(T ) = q′Aȳ = φ′ȳ ≤ ‖ȳ‖₁   (50)
where ȳ is any solution to Ax = b. Thus y(T ) is a minimum l1 norm solution
to Ax = b. This and (46) imply that ‖y(t)‖₁ reaches its minimum value subject to
Ay(t) = b for t ∈ [T, ∞) \ T . Thus
d‖y(t)‖₁/dt = 0,   t ∈ [T, ∞) \ T   (51)
which satisfies the assumption (33) in Proposition 1 again. Then y(t) = y(T ), t ∈
[T, ∞). Thus y(t) is the minimum l1 -norm solution to Ax = b for t ∈ [T, ∞).
This completes the proof.
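The centralized flow (45) admits an equally short numerical sketch. The Euler discretization, the step size, and the reuse of sgn_selection from the earlier sketch are assumptions made only for illustration; in practice one could cross-check the result against a linear-programming solution of (1).

```python
import numpy as np

def centralized_min_l1(A, b, dt=1e-3, steps=200000):
    """Euler discretization of the centralized flow (45)."""
    # Start from a feasible point: the minimum l2-norm solution of Ax = b.
    y = A.T @ np.linalg.solve(A @ A.T, b)
    # Projection onto ker(A): P = I - A'(AA')^{-1}A.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    for _ in range(steps):
        y = y - dt * P @ sgn_selection(y)   # step stays on the manifold Ay = b
    return y
```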
4.3 Finite-Time Distributed Update for Minimum l1 -norm Solutions
In this subsection we will develop a distributed update for a multi-agent
network to achieve the minimum l1 -norm solution to Ax = b in finite time.
Motivated to study a combination of the finite-time distributed linear equation solver in (39) and the finite-time centralized update for minimum l1 -norm
solutions in (45), we propose the following update for agent i, i = 1, 2, ..., m:
ẏi = −k(t)Pi φi − Pi Σ_{j∈Ni} φij   (52)
where φi ∈ F [sgn ](yi ), φij ∈ F [sgn ](yi − yj ), with φij = −φji in case yi = yj ,
and
Ai yi (0) = bi .
(53)
We assume that k(t) ∈ R is measurable and locally bounded almost everywhere
for t ∈ [0, ∞), and
lim_{t→∞} k(t) = δ   (54)
∫_0^∞ k(t) dt = ∞   (55)
where δ is a sufficiently small nonnegative number depending on the connection
of the network and A, note that 0 is always a feasible choice of δ. One example
of a choice of k(t) is k(t) = δ̄/(t + 1) + δ. One simple case is choosing δ̄ = 1, δ = 0,
resulting in k(t) = 1/(t + 1), obtained by taking δ to be zero. This choice obviates the
need to decide how small δ has to be to meet a “sufficiently small” condition,
but may result in rather slow convergence. Now from Ai yi (0) = bi and the fact
that Pi is the projection to the kernel of Ai (which ensures ẏi ∈ ker Ai ), one has
Ai yi (t) = bi ,
t ∈ [0, ∞)
(56)
Let y = col {y1 , y2 , y3 , ..., ym }, P̄ = diag {P1 , P2 , ..., Pm }, and H̄ = H ⊗ In
with H the incidence matrix of G. From the updates in (52) and Assumption
1, one has
ẏ ∈ −k(t)P̄ F [sgn ](y) − P̄ H̄F [sgn ](H̄ 0 y)
(57)
Note that sgn (y), k(t) and P̄ H̄F [sgn ](H̄ 0 y) are measurable and locally bounded
almost everywhere. Then by Lemma 2 there exists a Filippov solution to
system (57) for any given y(0) satisfying (53), which we denote by y(t) =
col {y1 (t), y2 (t), y3 (t), ..., ym (t)} . By Lemma 3, there exists a set I = [0, ∞) \ T
with T of Lebesgue measure 0 such that d‖y(t)‖₁/dt exists for all t ∈ I. Then one
has the following theorem:
Theorem 3 Under Assumption 1 and the update (52) and with A of full row
rank, all yi (t), i = 1, 2, ..., m converge in finite time to the same value which is
the minimum l1 -norm solution to Ax = b.
Proof of Theorem 3: We first prove that all yi (t) reach a consensus in finite
time by showing that z(t) converges to 0 in finite time, where z(t) = H̄ ′y(t).
Multiplying both sides of (57) by H̄ ′, one has
ż ∈ −k(t)H̄ ′P̄ F [sgn ](y) − H̄ ′P̄ H̄F [sgn ](z)   (58)
By Lemma 4, one has
d‖z(t)‖₁/dt = β′ż,  t ∈ I
where β can be any vector in ∂‖z‖₁ . Note that ∂‖z‖₁ = F [sgn ](z). Then
d‖z(t)‖₁/dt = −k(t)γ′H̄ ′P̄ η − γ′H̄ ′P̄ H̄γ,  t ∈ I   (59)
where η ∈ F [sgn ](y), γ ∈ F [sgn ](z) and β is chosen to be equal to γ. Since
z(t) ∈ image H̄ ′, then by Lemma 6, if also z(t) ∈ Φ(H̄ ′P̄ H̄), one will have
z(t) = 0. Thus
z(t) ∈ Φc (H̄ ′P̄ H̄) as long as ‖z(t)‖₁ ≠ 0.   (60)
Now by the definition of λ(H̄ ′P̄ H̄) and Lemma 5, one has
γ′H̄ ′P̄ H̄γ ≥ ρ as long as ‖z(t)‖₁ ≠ 0   (61)
where ρ = λ(H̄ ′P̄ H̄) is a positive constant. Let κ(H̄, P̄ ) denote an upper bound
on |γ′H̄ ′P̄ η|, and define an upper bound on δ by
δ < ρ / κ(H̄, P̄ )   (62)
This captures the idea stated previously that δ depends on A and the graph.
For any δ chosen as in (62), there must exist a finite time T1 such that
| − k(t)γ H̄ 0 P̄ η| ≤ ρ2 ,
t ∈ [T1 , ∞).
(63)
where we have ρ2 = κ(H̄, P̄ )δ < ρ. From (59), (61) and (63), one has
d‖z(t)‖₁/dt ≤ −(ρ − ρ2 ) as long as ‖z(t)‖₁ ≠ 0,  t ∈ [T1 , ∞) \ T   (64)
with ρ a positive constant. Thus there must exist a finite time T2 ≥ T1 such
that
z(T2 ) = 0
(65)
Next we prove that
z(t) = 0,
t ∈ [T2 , ∞)
(66)
We prove this by contradiction. Suppose (66) is not true. Then there exists
a time T̄2 > T2 such that z(T̄2 ) ≠ 0. Then ‖z(T̄2 )‖₁ > 0. Since ‖z(t)‖₁ is
continuous, there exists a time T2∗ such that ‖z(T2∗ )‖₁ takes its maximum value
for t ∈ [T2 , T̄2 ]. Again, since ‖z(t)‖₁ is continuous, there exists a sufficiently
small but positive ε such that ‖z(t)‖₁ > 0 for t ∈ [T2∗ − ε, T2∗ ].
Because kz(t)k1 is differentiable almost everywhere, we know that
‖z(T2∗ )‖₁ = ∫_{T2∗ −ε}^{T2∗} (d‖z(t)‖₁/dt) dt + ‖z(T2∗ − ε)‖₁   (67)
Because ‖z(t)‖₁ > 0 in [T2∗ − ε, T2∗ ], by (64) and (67) we have
‖z(T2∗ )‖₁ ≤ −(ρ − ρ2 )ε + ‖z(T2∗ − ε)‖₁ < ‖z(T2∗ − ε)‖₁   (68)
This contradicts the fact that kz(T2∗ )k1 is the maximum value on [T2 , T̄2 ]. Thus
(66) is true.
By (66), one has there exists a vector ȳ(t) such that
y1 (t) = y2 (t) = · · · = ym (t) = ȳ(t),
t ∈ [T2 , ∞)
(69)
Moreover, Aȳ(T2 ) = b since Ai yi (t) = bi for i = 1, 2, ..., m. To prove Theorem 3,
we only need to prove that ȳ(t) converges to be the minimum l1 -norm solution
to Ax = b. To see why this is so, we let P denote the projection matrix to the
kernel of A. Then P Pi = P for i = 1, 2, ..., m. Multiplying both sides of (52)
by P , one has
ẏi = −k(t)P φi − P Σ_{j∈Ni} φij ,   i = 1, 2, ..., m   (70)
Since G is undirected, then φij appears in the update i if φji appears in its
neighbor j’s update. By adding the updates in (70) for i = 1, 2, ..., m and
noting φij = −φji for any two neighbors i and j, one has
Σ_{i=1}^{m} ẏi = −k(t)P Σ_{i=1}^{m} φi   (71)
where φi ∈ F [sgn ](yi ). By (69), one knows all yi (t) reach a consensus ȳ(t) for
t ∈ [T2 , ∞). Note that if the kth entry of ȳ(t) is 0, the kth entry of each φi
can be selected as an arbitrary value from [−1, 1], which may be different for
different entries, but their average is still an arbitrary value in [−1, 1]. Thus
(1/m) Σ_{i=1}^{m} φi ∈ F [sgn ](ȳ(t)),   t ∈ [T2 , ∞)   (72)
From (71) and (72) we have
ȳ˙ ∈ −k(t)P F [sgn ](ȳ),   t ∈ [T2 , ∞)   (73)
Let τ = ∫_{T2}^{t} k(s) ds. From dȳ/dτ = (dȳ/dt) · (dt/dτ ) and (73), one has
dȳ/dτ ∈ −P F [sgn ](ȳ),   τ ∈ [0, ∞)   (74)
with Aȳ(τ ) = b for τ = 0. This is exactly the same as the centralized update in
(45). By Theorem 2, there exists a finite time Γ such that ȳ(τ ) is the minimum
l1 -norm solution to Ax = b for τ ∈ [Γ, ∞). By the relation between τ and t, one
correspondingly has that there exists a finite time T such that ȳ(t) is the minimum
l1 -norm solution for t ∈ [T, ∞). This completes our proof.
5 Simulation Result
In this section, we will report several simulations of the proposed algorithms
for solving an underdetermined linear equation Ax = b in a four-agent undirected and connected network as in Figure
1.

Figure 1: A four agent network

Here, A and b are partitioned as A = [A′1 A′2 A′3 A′4 ]′ and b = [b′1 b′2 b′3 b′4 ]′,
respectively. Each agent i knows Ai and bi , whose numerical entries are given in (75) and (76).
Example 1: We utilize the distributed update (39) to achieve a solution to Ax = b,
denoted by x∗ , in finite time in the above four-agent network. Let
y = [y′1 y′2 y′3 y′4 ]′, where yi (t) denotes the state of agent i, i.e. the estimate
of agent i for x∗ . Then ‖y(t) − 1m ⊗ x∗ ‖₁ measures the difference between all
agents’ estimates and the solution x∗ . As shown by the simulations in Figure
2, ‖y(t) − 1m ⊗ x∗ ‖₁ reaches 0 in finite time. This suggests that all agents’ states
achieve consensus at x∗ in finite time, consistent with the claim of Theorem
1.
Figure 2: Finite-time achievement of a solution under the update (39)

Example 2: We employ the centralized update (45) with state vector y(t) to
achieve x̄∗ , which denotes the minimum l1 -norm solution to Ax = b. As shown in
Figure 3, ‖y(t) − x̄∗ ‖₁ reaches 0 in finite time and remains 0 afterwards.
This indicates that the minimum l1 -norm solution x̄∗ is achieved in finite time
corresponding to Theorem 2. It is worth noting that one could observe multiple
phases of convergence in Figure 2. This is because F [sgn ](y(t)) in the update
(45) takes different values piecewise, resulting in different convergence rates.
Figure 3: Centralized solver for achieving a minimum l1 norm solution under
the update (45)
Example 3: Finally, we utilize the distributed update (52) to achieve the
minimum l1 -norm solution to Ax = b, denoted by x̄∗ , in finite time. Here k(t) is
chosen to take the form δ̄/(1 + t) + δ with δ̄ and δ constants. We still let
y = [y′1 y′2 y′3 y′4 ]′, where yi (t) denotes the state of agent i, i.e. the estimate of
agent i for x̄∗ . Then ‖y(t) − 1m ⊗ x̄∗ ‖₁ measures the difference between all agents’
estimations and x̄∗ . As shown in Figure 4 and Figure 5, all yi (t) reach the same
minimum l1 -norm solution in finite time regardless of different choices of δ̄ and
δ. Moreover, by fixing δ̄ and increasing the value of δ in k(t), one achieves
a significantly faster convergence as shown in Figure 4. Similarly, increasing δ̄
with a fixed δ also leads to a faster convergence, although not that dramatically,
as shown in Figure 5.
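The set-up of such an experiment can be sketched as follows; the step size, horizon, and the particular δ̄, δ values are illustrative assumptions, and kernel_projector and sgn_selection are the helpers from the earlier sketches, not code from the paper.

```python
import numpy as np

def distributed_min_l1(A_blocks, b_blocks, neighbors, dbar=0.1, delta=0.01,
                       dt=1e-3, steps=500000):
    """Euler discretization of update (52) with k(t) = dbar/(t+1) + delta."""
    m = len(A_blocks)
    P = [kernel_projector(Ai) for Ai in A_blocks]
    # Feasible initialization: Ai yi(0) = bi for every agent.
    y = [Ai.T @ np.linalg.solve(Ai @ Ai.T, bi) for Ai, bi in zip(A_blocks, b_blocks)]
    for k in range(steps):
        t = k * dt
        kt = dbar / (t + 1.0) + delta                       # time-varying gain k(t)
        phi_self = [sgn_selection(yi) for yi in y]          # minimum-l1 term
        phi_nbr = [sum(sgn_selection(y[i] - y[j]) for j in neighbors[i])
                   for i in range(m)]                       # consensus term
        y = [y[i] - dt * P[i] @ (kt * phi_self[i] + phi_nbr[i]) for i in range(m)]
    return y
```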
Figure 4: Distributed solver for achieving the minimum l1 -norm solution under the
update (52), where k(t) = δ̄/(t + 1) + δ with fixed δ̄ = 0.1 and different values of δ.
Figure 5: Distributed solver for achieving the minimum l1 -norm solution under the
update (52), where k(t) = δ̄/(t + 1) + δ with fixed δ = 0.01 and different values of δ̄.
We also note from Figure 4 and Figure 5 that the convergence time required
in this distributed way for minimum l1 -norm solutions is dramatically longer,
roughly speaking, 1/δ times longer, than that in the centralized case in Figure
3. The major reason for this is that the centralized update appearing in the
distributed update (52) is scaled by k(t), which is smaller than 1. The time
required for consensus in this four-agent network example is minor under the
distributed update (52) as indicated in Figure 6. Let ȳ(t) denote the average of
all four agents’ states. The evolution of ky(t) − 1m ⊗ ȳ(t)k1 in Figure 6 suggests
that all agents’ states reach a consensus in a finite time similar to that in Figure
2. We anticipate that when it comes to a very large network, the convergence
time for consensus might play a more significant role in convergence of the
distributed update (52).
Figure 6: Consensus of the distributed solver under update (52) with k(t) = 0.1/(t + 1) + 0.01

6 Conclusion
We have developed continuous-time distributed algorithms for achieving
solutions, and minimum l1 -norm solutions, respectively, to linear equations
Ax = b in finite time. The algorithms result from a combination of the projection-consensus flow proposed in [13] and the finite-time gradient flow for consensus
devised in [36], and work for fixed undirected multi-agent networks. Future
work includes the generalization of the proposed update to networks that are
directed and time-varying.
References
[1] S. Mou, J. Liu, and A. S. Morse. A distributed algorithm for solving a linear
algebraic equation. IEEE Transactions on Automatic Control, 60(11):2863–
2878, 2015.
[2] B. D. O. Anderson, S. Mou, U. R. Helmke, and A. S. Morse. Decentralized
gradient algorithm for solution of a linear equation. Numerical Algebra,
Control and Optimization, 6(3):319–328, 2016.
[3] J. Lu and C. Y. Tang. A distributed algorithm for solving positive definite
linear equations over networks with membership dynamics. IEEE Transactions on Control of Network Systems, 2016.
[4] J. Wang and N. Elia. Distributed solution of linear equations over unreliable
networks. Proceedings of American Control Conference, pages 6471–6476,
2016.
[5] S. Mou and A. S. Morse. A fixed-neighbor, distributed algorithm for solving
a linear algebraic equation. European Control Conference, pages 2269–2273,
2013.
[6] X. Wang, S. Mou, and D. Sun. Improvement of a distributed algorithm
for solving linear equations. IEEE Transactions on Industrial Electronics,
64(4):3113–3117, 2017.
[7] P. Wang, W. Ren, and Z. Duan. Distributed minimum weighted norm
solution to linear equations associated with weighted inner product. Proceedings of the 55th Conference on Decision and Control, pages 5220–5225,
2016.
[8] L. Wang, D. Fullmer, and A. S. Morse. A distributed algorithm with an
arbitary initialization for solving a linear algebraic equation. Proceedings
of American Control Conference, pages 1078–1081, 2016.
[9] S. Mou, Z. Lin, L. Wang, D. Fullmer, and A. S. Morse. A distributed
algorithm for efficiently solving linear equations and its applications (special
issue jcw). Systems & Control Letters, 91:21–27, 2016.
[10] J. Wang and N. Elia. Distributed least square with intermittent communications. In 2012 American Control Conference (ACC), pages 6479–6484,
June 2012.
[11] J. Wang and N. Elia. A control perspective for centralized and distributed
convex optimization. In 2011 50th IEEE Conference on Decision and Control and European Control Conference, pages 3800–3805, Dec 2011.
[12] B. Gharesifard and J. Cortés. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Transactions on Automatic
Control, 59(3):781–786, March 2014.
[13] G. Shi and B.D. O.Anderson. Distributed network flows solving linear
algebraic equations. Proceedings of American Control Conference, pages
2864–2869, 2016.
[14] Y. Liu, C. Lageman, B. D. O. Anderson, and G. Shi. Exponential least
squares solvers for linear equations over networks. IFAC World Congress,
Toulouse, pages 2598–2603, 2017.
[15] Y. Liu, Y. Lou, B. D. O. Anderson, and G. Shi. Network flows as least
squares solvers for linear equations. 56th IEEE Conference on Decision
and Control, Melbourne, 2017. accepted.
[16] P. M. Shearer. Improving local earthquake locations using the l1 norm and
waveform cross correlation: Application to the whittier narrows, california,
aftershock sequence. Journal of Geophysical Research: Solid Earth, 102,
1997.
[17] Y. Dodge. Statistical data analysis based on the L1-norm and related methods. Birkhäuser, 2012.
[18] R. Beucker and H. A. Schlitt. On minimal lp-norm solutions of the biomagnetic inverse problem. Technical report, Zentralinstitut für Angewandte
Mathematik, 1996.
[19] D. Baron, M. F. Duarte, M. B. Wakin, S. Sarvotham, and R. G. Baraniuk.
Distributed compressive sensing. 2009. arXiv:0901.3403.
[20] Y. C. Eldar and G. Kutyniok. Compressed sensing: Theory and applications. Cambridge University Press, 2012.
[21] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, (12):4203–4215, 2005.
[22] E. J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles:
Exact signal reconstruction from highly incomplete frequency information.
IEEE Transactions on Information Theory, (2):489–509, 2006.
[23] A. Y. Yang, A. Genesh, Z. H. Zhou, S. S. Sastry, and Y. Ma. A review
of fast l (1)-minimization algorithms for robust face recognition. Technical report, CALIFORNIA UNIV BERKELEY DEPT OF ELECTRICAL
ENGINEERING AND COMPUTER SCIENCE, 2010.
[24] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed
optimization and statistical learning via the alternating direction method
of multipliers. Foundations and Trends R in Machine Learning, 3(1):1–122,
2011.
[25] K. Frisch. The logarithmic potential method of convex programming. Memorandum, University Institute of Economics, Oslo, 5(6), 1955.
[26] M. Kojima, N. Megiddo, and S. Mizuno. Theoretical convergence of largestep primaldual interior point algorithms for linear programming. Mathematical Programming, 59:1–21, 1993.
[27] M. Figueiredo, R.D. Nowak, and J.S. Wright. Gradient projection for sparse
reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of selected topics in signal processing, 1:586–597, 2007.
[28] M. R. Osborne, B. Presnell, and B. A. Turlach. A new approach to variable
selection in least squares problems. IMA journal of numerical analysis,
2000.
[29] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on pure and applied mathematics, pages 1413–1457, 2004.
[30] S. Becker, J. Bobin, and E. J. Candès. Nesta: A fast and accurate firstorder method for sparse recovery. SIAM Journal on Imaging Sciences,
pages 1–39, 2011.
[31] M. Cao, A. S. Morse, and B.D.O. Anderson. Agreeing asynchronously.
IEEE Transactions on Automatic Control, 53(8):1826–1838, 2008.
[32] M. Cao, A. S. Morse, and B. D. O. Anderson. Reaching a consensus in a
dynamically changing environment: A graphical approach. SIAM Journal
on Control and Optimization, 47(2):575–600, 2008.
[33] M. Cao, D. A. Spielman, and A. S. Morse. A lower bound on convergence
of a distributed network consensus algorithm. In Decision and Control,
2005 and 2005 European Control Conference. CDC-ECC’05. 44th IEEE
Conference on, pages 2356–2361. IEEE, 2005.
[34] C. Lageman and Z. Y. Sun. Consensus on spheres: Convergence analysis
and perturbation theory. In Decision and Control (CDC), 2016 IEEE 55th
Conference on, pages 19–24. IEEE, 2016.
[35] J. Qin and C. Yu. Exponential consensus of general linear multi-agent
systems under directed dynamic topology. Automatica, 50(9):2327–2333,
2014.
[36] J. Cortés. Finite-time convergent gradient flows with applications to network consensus. Automatica, 42(11):1993–2000, 2006.
[37] J. Cortes. Discontinuous dynamical system: A tutorial on solutions, nonsmooth analysis and stability. IEEE Control System Magazine, pages 36–73,
2008.
[38] A. F. Filippov. Differential equations with discontinuous righthand sides:
control systems, volume 18. Springer Science & Business Media.
[39] F. H. Clarke. Optimization and nonsmooth analysis. SIAM, 1990.
[40] A. Bacciotti and F. Ceragioli. Stability and stabilization of discontinuous
systems and nonsmooth lyapunov functions. ESAIM: Control, Optimisation and Calculus of Variations, 4:361–376, 1999.
[41] F. Chung. Spectral graph theory. American Mathematical Soc, 1997.
[42] P. A. Absil and K. Kurdyka. On the stable equilibrium points of gradient
systems. Systems & control letters, 55(7):573–577, 2006.
| 3 |
UAV-Aided Wireless Communication Designs
With Propulsion Energy Limitations
Subin Eom, Hoon Lee, Junhee Park and Inkyu Lee, Fellow, IEEE
arXiv:1801.02782v1 [cs.IT] 9 Jan 2018
School of Electrical Eng., Korea University, Seoul, Korea
Email: {esb777, ihun1, pjh0585, inkyu}@korea.ac.kr
Abstract
This paper studies unmanned aerial vehicle (UAV) aided wireless communication systems where
a UAV supports uplink communications of multiple ground nodes (GNs) while flying over the area of
interest. In this system, the propulsion energy consumption at the UAV is taken into account so that
the UAV’s velocity and acceleration should not exceed a certain threshold. We formulate the minimum
average rate maximization problem and the energy efficiency (EE) maximization problem by jointly
optimizing the trajectory, velocity, and acceleration of the UAV and the uplink transmit power at the GNs.
As these problems are non-convex in general, we employ the successive convex approximation (SCA)
techniques. To this end, proper convex approximations for the non-convex constraints are derived, and
iterative algorithms are proposed which converge to a local optimal point. Numerical results demonstrate
that the proposed algorithms outperform baseline schemes for both problems. Especially for the EE
maximization problem, the proposed algorithm exhibits about 109 % gain over the baseline scheme.
I. INTRODUCTION
Recently, unmanned aerial vehicles (UAVs) have received great attention as a new communication entity in wireless networks [1]. Compared to conventional terrestrial communications
where users are served by ground base stations (BSs) fixed at given position [2], UAV-aided
systems could be dispatched to the field with various purposes such as disaster situations and
military uses. Moreover, located high above users, UAVs are likely to have line-of-sight (LoS)
communication links for air-to-ground channels.
Utilizing these advantages, UAVs have been considered for diverse wireless communication
systems. The authors in [3] and [4] studied a mobile relaying system where a UAV helps the
communication of ground nodes (GNs) without direct communication links. In this UAV-aided
relaying system, compared to conventional static relay schemes [5], [6], the UAV can move closer
to source and destination nodes in order to obtain good channel conditions, and thus the system
throughput can be significantly improved. In [3], the throughput of mobile relaying channels
was maximized by optimizing the transmit power at the source and the relay node as well as the
trajectory of the mobile relay. For the fixed relay trajectory, the work [4] addressed the secrecy
rate maximization problem for the UAV-based relaying system with an external eavesdropper.
In addition, UAVs have been adopted to assist conventional terrestrial communication infrastructures [7]–[9]. For disaster situations, UAVs were employed in [7] to recover malfunctioning
ground infrastructure. The work in [8] examined a system where the UAV serves cell-edge users
by jointly optimizing UAV’s trajectory, bandwidth allocation, and user partitioning. Also, the
flying computing cloudlets with UAVs were introduced to provide the offloading opportunities
to multiple users [9].
Moreover, the UAVs could play the role of mobile BSs in wireless networks [10]–[12]. The
authors in [10] derived mathematical expressions for the optimum altitude of the UAVs that
maximizes the coverage of the cellular network. Also, the trajectory optimization methods for
mobile BSs were presented in [11] and [12]. Assuming that the GNs are located in a line, the
minimum throughput performance was maximized in [11] by optimizing the position of a UAV
on a straight line. This result was extended in [12] to a general scenario where multiple UAVs
fly in three-dimensional space to communicate with GNs. The joint optimization algorithms for
the UAV trajectory, transmit power, and time allocation were provided in [12] to maximize the
minimum throughput performance. However, these works did not consider the propulsion energy
consumption of the UAVs necessary for practical UAV designs under limited on-board energy
situation [13].
By taking this issue into account, recent works [14]–[16] investigated energy efficiency (EE)
of the UAV system. Different from conventional systems which consider only communication-related energy consumption [17]–[19], the EE of the UAV should address the propulsion energy
at the UAV additionally. The authors in [14] maximized the EE by controlling the turning radius
of a UAV for mobile relay systems. Also, by jointly optimizing the time allocation, speed,
and trajectory, both the spectrum efficiency and the EE were maximized in [15]. In [16], the
propulsion energy consumption of the UAV was theoretically modeled, and the EE of the UAV
was maximized for a single GN system.
This paper studies UAV-aided wireless communications where a UAV with limited propulsion
energy receives the data of multiple GNs in the uplink. It is assumed that all GNs and the
UAV operate in the same frequency band and there are no direct communication links among
GNs. Under this setup, we formulate the minimum rate maximization problem and the EE
maximization problem by jointly optimizing the UAV trajectory, the velocity, the acceleration,
and the uplink transmit power at the GNs. A similar approach for solving the minimum rate
maximization was studied in [12], but the authors in [12] did not involve the propulsion energy
consumption at the UAV. For the EE maximization problem, our work can be regarded as a
generalization of the single GN system in [16] to the multi-GN scenario, and thus we need to
deal with inter-node interference as well. Due to these issues, existing algorithms presented in
[12] and [16] cannot be directly applied to our problems.
To tackle our problem of interest, we introduce auxiliary variables which couple the trajectory
variables and the uplink transmit power in order to jointly optimize these variables. As the
equivalent problem is still non-convex, we employ the successive convex approximation (SCA)
technique which successively solves approximated convex problems of the original non-convex
one. In order to apply the SCA to our optimization problems, we present new convex surrogate
functions for the non-convex constraints. Then, we propose efficient algorithms for the minimum
rate maximization problem and the EE maximization problem which yield local optimal solutions.
Simulation results confirm that the proposed algorithms provide a significant performance gain
over baseline schemes.
The rest of this paper is organized as follows: Section II explains the system model and the
problem formulations for the UAV-aided communication systems. In Section III, the minimum
rate maximization and the EE maximization algorithms are proposed. We examine the circular
trajectory case as baseline schemes in Section IV. Section V presents the numerical results for
the proposed algorithms and we conclude the paper in Section VI.
Notations: Throughout this paper, the bold lower-case and normal letters denote vectors and
scalars, respectively. The space of M-dimensional real-valued vectors is represented by RM ×1 .
For a vector a, ‖a‖ and aT indicate its norm and transpose, respectively. The gradient of a function
f is defined as ∇f . For a time-dependent function x(t), ẋ(t) and ẍ(t) stand for the first-order
and second-order derivatives with respect to time t, respectively.
Fig. 1. UAV-enabled wireless network
II. SYSTEM MODEL AND PROBLEM FORMULATION
As shown in Fig. 1, we consider UAV-aided wireless communications where a UAV receives
uplink information transmitted from K GNs. The UAV horizontally flies at a constant altitude H
with a time period T , while the GNs are located at fixed positions, which are perfectly known to
the UAV in advance. For the location of the GNs and the UAV, we employ a three-dimensional
Cartesian coordinate system, and thus the horizontal coordinate of GN k (k = 1, ..., K) is denoted
by wk = [xk yk ]T . Also, we define the time-varying horizontal coordinate of the UAV at time
instant t as q(t) = [qx (t) qy (t)]T , for 0 ≤ t ≤ T . Then, the instantaneous velocity v(t) and the
acceleration a(t) of the UAV are expressed by v(t) , q̇(t) and a(t) , q̈(t), respectively.
Continuous time expressions of variables make analysis and derivations in the UAV systems
intractable. For ease of analysis, we discretize the time duration T into N time slots with the same time interval δt = T/N [3]. As a result, the trajectory of the UAV can be represented by N
vector sequences q[n] , q(nδt ), v[n] , v(nδt ), and a[n] , a(nδt ) for n = 0, 1, ..., N. When
the discretized time interval δt is chosen as a small number, the velocity and the acceleration
can be approximated by using Taylor expansions as [16]
v[n] = v[n − 1] + a[n − 1]δt , for n = 1, ..., N,    (1)
q[n] = q[n − 1] + v[n − 1]δt + (1/2)a[n − 1]δt² , for n = 1, ..., N.    (2)
Also, assuming the periodical operation at the UAV, we have [12]
q[0] = q[N], v[0] = v[N], a[0] = a[N],    (3)
which implies that after one period T , the UAV returns to its starting location with the same
velocity and acceleration.
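As an illustration of the discretized kinematics in (1)-(3), the following minimal NumPy sketch (not from the paper; the acceleration sequence is assumed given) propagates the UAV position and velocity forward in time:

import numpy as np

def propagate_trajectory(q0, v0, acc, dt):
    """Propagate UAV position/velocity from an acceleration sequence via (1)-(2)."""
    N = acc.shape[0]                          # number of time slots
    q = np.zeros((N + 1, 2)); v = np.zeros((N + 1, 2))
    q[0], v[0] = q0, v0
    for n in range(1, N + 1):
        v[n] = v[n - 1] + acc[n - 1] * dt                             # eq. (1)
        q[n] = q[n - 1] + v[n - 1] * dt + 0.5 * acc[n - 1] * dt**2    # eq. (2)
    return q, v

Note that the periodicity condition (3) is not produced by this forward simulation; it is enforced as a constraint in the optimization problems below.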
In addition, the acceleration and the velocity of the practical UAV are subject to
ka[n]k ≤ amax , for n = 0, 1, ..., N,    (4)
Vmin ≤ kv[n]k ≤ Vmax , for n = 0, 1, ..., N,    (5)
where amax indicates the maximum UAV acceleration in m/sec2 and Vmin and Vmax stand for
the minimum and the maximum UAV speed constraints in m/sec, respectively. Notice that the
minimum speed constraint Vmin is important for practical fixed-wing UAV designs which need
to move forward to remain aloft and thus cannot hover over a fixed location [16].
For the power consumption at the UAV, we take into account the propulsion power utilized
for maintaining the UAV aloft and supporting its mobility. The propulsion power of the UAV
Pprop [n] at time slot n is given by [16]
Pprop[n] = c1 kv[n]k³ + (c2 /kv[n]k)(1 + ka[n]k²/g²), for n = 0, 1, ..., N,    (6)
where c1 and c2 are the parameters related to the aircraft design and g = 9.8 m/sec2 equals the
gravitational acceleration. Thus, the average propulsion power and the total consumed propulsion energy over N time slots are obtained by (1/N)Σ_{n=1}^{N} Pprop[n] and δt Σ_{n=1}^{N} Pprop[n], respectively. The
power consumed by signal processing circuits such as analog-to-digital converters and channel
decoders is ignored since it is practically much smaller than the propulsion power [16].
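A minimal sketch of the propulsion power model (6) is given below; the numerical values of c1 and c2 are the illustrative ones used later in Section V and are not intrinsic to the model. Since Vmin > 0 for a fixed-wing UAV, the division by the speed is well defined.

import numpy as np

def propulsion_power(v, a, c1=9.26e-4, c2=2250.0, g=9.8):
    """Per-slot propulsion power of (6); v, a are (N,2) velocity/acceleration sequences."""
    speed = np.linalg.norm(v, axis=1)         # kv[n]k
    acc2 = np.linalg.norm(a, axis=1) ** 2     # ka[n]k^2
    return c1 * speed**3 + (c2 / speed) * (1.0 + acc2 / g**2)

# average power and total energy over N slots, cf. the text below (6):
# P_avg = propulsion_power(v, a).mean();  E_tot = dt * propulsion_power(v, a).sum()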
Now, let us explain the channel model between the UAV and the GNs. We assume that the
air-to-ground communication links are dominated by the LoS links. Moreover, the Doppler effect
due to the UAV mobility is assumed to be well compensated. Then, the effective channel gain
hk [n] from GN k to the UAV at time slot n follows the free-space path loss model as [3]
hk[n] = γ0 /d²k[n],    (7)

where γ0 , β0/σ² represents the reference signal-to-noise ratio (SNR) at 1 m with β0 and σ² being the channel power at 1 m and the white Gaussian noise power at the UAV, respectively, and the distance dk[n] is written by

dk[n] = √( kq[n] − wk k² + H² ).    (8)
At time slot n, GN k transmits its data signal to the UAV with power 0 ≤ pk [n] ≤ Ppeak ,
where Ppeak is the peak transmission power constraint at the GNs. Accordingly, the instantaneous
achievable rate Rk[n] can be expressed as

Rk[n] = log2( 1 + pk[n]hk[n] / ( 1 + Σ_{j=1,j≠k}^{K} pj[n]hj[n] ) ),    (9)

where the term Σ_{j=1,j≠k}^{K} pj[n]hj[n] stands for the interference from other GNs. Therefore, the achievable average rate of GN k and the total information bits transmitted from GN k over N time slots are denoted as (1/N)Σ_{n=1}^{N} Rk[n] and W δt Σ_{n=1}^{N} Rk[n], respectively, where W means the bandwidth.
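The channel and rate model (7)-(9) can be sketched as follows; this is an illustrative NumPy fragment (not from the paper), where q, w, and p are arrays of UAV positions, GN positions, and transmit powers.

import numpy as np

def achievable_rates(q, w, p, gamma0, H):
    """Per-slot uplink rates of (7)-(9); q: (N,2) UAV positions, w: (K,2) GNs, p: (N,K) powers."""
    d2 = ((q[:, None, :] - w[None, :, :]) ** 2).sum(-1) + H**2   # squared distances d_k^2[n], shape (N,K)
    h = gamma0 / d2                                              # channel gains (7)
    s = p * h                                                    # received SNR terms p_k[n] h_k[n]
    interference = s.sum(axis=1, keepdims=True) - s              # sum over j != k
    return np.log2(1.0 + s / (1.0 + interference))               # rates (9), in bps/Hz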
In this paper, we jointly optimize the variables q[n], v[n], and a[n] and the uplink transmit
power pk [n] at the GNs so that the minimum average rate among multiple GNs and the EE are
maximized, respectively. First, the minimum rate maximization problem can be formulated as
(P1):  max_{ {q[n],v[n],a[n]}, {pk[n],τ} }  τ    (10a)
s.t.  (1/N)Σ_{n=1}^{N} Rk[n] ≥ τ, ∀k,    (10b)
      0 ≤ pk[n] ≤ Ppeak , ∀k, n,    (10c)
      (1/N)Σ_{n=1}^{N} Pprop[n] ≤ Plim ,    (10d)
      (1) − (5),
where Plim in (10d) indicates the propulsion power constraint at the UAV.
Next, to support all of the individual GNs, the fairness-based EE [20]–[22] is more suitable than
the network-wise EE [18], [19]. Thus, we define the EE in the UAV-aided wireless communication
systems as the ratio between the minimum information bits transmitted among the GNs and the
total energy consumed at the UAV. Therefore, the EE maximization problem can be written by
(P2):  max_{ {q[n],v[n],a[n]}, {pk[n],η} }  η / ( Σ_{n=1}^{N} Pprop[n] )    (11a)
s.t.  W Σ_{n=1}^{N} Rk[n] ≥ η, ∀k,    (11b)
      (1) − (5), (10c).
In general, (P1) and (P2) are non-convex problems due to the constraints and the objective
functions. Compared to [12], we additionally consider the propulsion power constraint (10d) in
the minimum rate maximization problem (P1). Also, note that the EE maximization problem
(P2) can be regarded as a generalization of [16] which investigated only a single GN scenario.
In these respects, the works in [12] and [16] can be regarded as special cases of our problems
(P1) and (P2), respectively. To solve the problems (P1) and (P2), we adopt the SCA framework
[23] [24] which iteratively solves approximated convex problems for the original non-convex
problems.
III. PROPOSED ALGORITHM
In this section, we propose iterative algorithms for efficiently solving (P1) and (P2) by applying
the SCA method. First, the minimum rate maximization problem (P1) is considered in Section
III-A, and then it is followed by the EE maximization problem (P2) in Section III-B.
A. Minimum Average Rate Maximization
Applying the change of variables as

Gk[n] , pk[n]hk[n] = pk[n]γ0 / ( kq[n] − wk k² + H² ), ∀k, n,    (12)

where Gk[n] is a new optimization variable, the constraint (10c) becomes 0 ≤ Gk[n] ≤ Gk,max[n], ∀k, n, where Gk,max[n] , Ppeak hk[n] = Ppeak γ0 / ( kq[n] − wk k² + H² ). Then, we can rewrite the
achievable rate Rk[n] in (9) as

Rk[n] = log2( 1 + Σ_{m=1}^{K} Gm[n] ) − R̂k[n],    (13)

where R̂k[n] , log2( 1 + Σ_{j=1,j≠k}^{K} Gj[n] ).
By introducing new auxiliary variables {V1 [n]}, (P1) can be recast to
(P1.1):  max_{ {q[n],v[n],a[n]}, {Gk[n],V1[n],τ} }  τ    (14a)
s.t.  (1/N)Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Gm[n] ) − R̂k[n] ] ≥ τ, ∀k,    (14b)
      0 ≤ Gk[n] ≤ Gk,max[n], ∀k, n,    (14c)
      (1/N)Σ_{n=1}^{N} [ c1 kv[n]k³ + c2 /V1[n] + c2 ka[n]k²/(g²V1[n]) ] ≤ Plim ,    (14d)
      Vmin ≤ V1[n], ∀n,    (14e)
      V1²[n] ≤ kv[n]k², ∀n,    (14f)
      kv[n]k ≤ Vmax , ∀n,    (14g)
      (1) − (4).
It can be shown that at the optimal point of (P1.1), the inequality constraint in (14f) holds
with the equality, since otherwise we can enlarge the feasible region corresponding to (14d) by
increasing V1 [n]. Therefore, we can conclude that (P1.1) is equivalent to (P1). Thanks to the
new auxiliary variables {V1 [n]}, constraints (14d) and (14e) now become convex, while (14b),
(14c), and (14f) are still non-convex in general.
To address these constraints, we employ the SCA method. First, it can be checked that constraint (14b) is given by a difference of two concave functions. Hence, the convex surrogate function R̂k^ub[n] for R̂k[n] can be computed from a first-order Taylor approximation as
R̂k^ub[n] , Γ̂k[n] Σ_{j=1,j≠k}^{K} (Gj,l+1[n] − Gj,l[n]) + log2( 1 + Σ_{j=1,j≠k}^{K} Gj,l[n] ) ≥ R̂k[n],    (15)

where Gk,l[n] indicates a solution of Gk[n] attained at the l-th iteration of the SCA process and Γ̂k[n] , log2 e / ( 1 + Σ_{j=1,j≠k}^{K} Gj,l[n] ). Next, to identify the surrogate functions of (14c) and
(14f), we present the following lemmas.
Lemma 1: Denoting {ql[n]} as a solution for {q[n]} calculated at the l-th iteration, the concave surrogate function G^lb_{k,max}[n] for Gk,max[n] can be expressed as

G^lb_{k,max}[n] , Ppeak γ0 ( −kql+1[n] − wk k²/H⁴ + Bk[n](ql+1[n] − wk)^T(ql[n] − wk) + Ck[n] ) ≤ Gk,max[n],    (16)
where the constants Bk[n] and Ck[n] are respectively given as

Bk[n] , 2( 1/H⁴ − 1/( kql[n] − wk k² + H² )² ),
Ck[n] , 2kql[n] − wk k²/( kql[n] − wk k² + H² )² + 1/( kql[n] − wk k² + H² ) − kql[n] − wk k²/H⁴.
Proof: Please refer to Appendix A.
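To make the surrogate of Lemma 1 concrete, the following illustrative NumPy sketch (not from the paper) evaluates (16) and numerically checks the global lower-bound property at arbitrary test points; H, Ppeak, γ0, and the coordinates are placeholder values.

import numpy as np

def g_lb(q_new, q_old, w, H, Ppeak, gamma0):
    """Concave surrogate (16) of G_{k,max}[n] = Ppeak*gamma0/(||q - w||^2 + H^2), built at q_old."""
    d2_old = np.sum((q_old - w) ** 2)
    B = 2.0 * (1.0 / H**4 - 1.0 / (d2_old + H**2) ** 2)
    C = 2.0 * d2_old / (d2_old + H**2) ** 2 + 1.0 / (d2_old + H**2) - d2_old / H**4
    return Ppeak * gamma0 * (-np.sum((q_new - w) ** 2) / H**4
                             + B * (q_new - w) @ (q_old - w) + C)

# sanity check of the lower-bound property at random points (illustrative only)
rng = np.random.default_rng(0); w = np.array([50.0, -20.0]); H, Pp, g0 = 100.0, 1.0, 1e8
q_old = rng.normal(size=2) * 200
for _ in range(5):
    q_new = rng.normal(size=2) * 200
    exact = Pp * g0 / (np.sum((q_new - w) ** 2) + H**2)
    assert g_lb(q_new, q_old, w, H, Pp, g0) <= exact + 1e-6 * (1 + abs(exact))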
Lemma 2: From a solution {vl [n]} obtained at the l-th iteration, the concave surrogate function
of kvl+1[n]k² can be computed as

−kvl+1[n]k² + 2vl^T[n]( 2vl+1[n] − vl[n] ) ≤ kvl+1[n]k².    (17)
Proof: Applying a similar process in Appendix A, we can conclude that the function in
(17) satisfies the conditions for a concave surrogate function [23].
With the aid of Lemmas 1 and 2, at the (l + 1)-th iteration, the non-convex constraints in
(14c) and (14f) can be approximated as
0 ≤ Gk[n] ≤ G^lb_{k,max}[n],    (18)
V1²[n] ≤ −kvl+1[n]k² + 2vl^T[n]( 2vl+1[n] − vl[n] ).    (19)
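For reference, a minimal NumPy sketch (not from the paper) of the concave upper bound (15) that appears in the approximated rate constraint is given below; G_new and G_old are (N × K) arrays of the current and previous SCA iterates.

import numpy as np

def rhat_upper_bound(G_new, G_old, k):
    """Concave upper bound (15) on R_hat_k[n] = log2(1 + sum_{j != k} G_j[n]), linearized at G_old."""
    mask = np.ones(G_old.shape[1], dtype=bool); mask[k] = False
    s_old = G_old[:, mask].sum(axis=1)                    # sum_{j != k} G_{j,l}[n]
    gamma = np.log2(np.e) / (1.0 + s_old)                 # Gamma_hat_k[n]
    diff = (G_new[:, mask] - G_old[:, mask]).sum(axis=1)  # sum_{j != k} (G_{j,l+1}[n] - G_{j,l}[n])
    return gamma * diff + np.log2(1.0 + s_old)            # >= R_hat_k[n]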
As a result, with given solutions {ql [n], vl [n], Gk,l [n]} at the l-th iteration, we solve the following
problem at the (l + 1)-th iteration of the SCA procedure
(P1.2):  max_{ {ql+1[n],vl+1[n],a[n]}, {Gk,l+1[n],V1[n],τ^lb} }  τ^lb    (20a)
s.t.  (1/N)Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Gm,l+1[n] ) − R̂k^ub[n] ] ≥ τ^lb, ∀k,    (20b)
      (1) − (4), (14d), (14e), (14g), (18), (19),
where τ lb denotes the lower bound of τ in the original problem (P1). Since (P1.2) is a convex
problem, it can be optimally solved via existing convex optimization solvers, e.g. CVX [25].
Based on these results, we summarize the proposed iterative procedure in Algorithm 1.
Algorithm 1: Proposed algorithm for (P1)
Initialize {q0 [n], v0 [n], Gk,0 [n]}, ∀k, n and let l = 0.
Repeat
Compute {ql+1 [n], vl+1 [n], Gk,l+1[n]} for (P1.2) with given {ql [n], vl [n], Gk,l [n]}.
Update l ← l + 1.
Until Convergence.
Obtain pk[n] = Gk,l+1[n]/hk,l+1[n], ∀k, n.
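The structure of Algorithm 1 can be sketched as the following Python skeleton; the inner convex solve of (P1.2) is represented by a placeholder function solve_P12 (assumed to wrap a convex solver such as CVX), which is not specified in the paper text.

def sca_min_rate(q, v, G, solve_P12, tol=1e-3, max_iter=50):
    """Skeleton of Algorithm 1: successively solve the convex problem (P1.2) until the
    max-min rate objective stops improving. solve_P12 is a hypothetical inner solver that
    builds the convex approximation at (q_l, v_l, G_l) and returns the new iterates."""
    tau_prev = -float("inf")
    for _ in range(max_iter):
        q, v, a, G, tau = solve_P12(q, v, G)   # convex approximation built at the l-th iterates
        if tau - tau_prev < tol:
            break
        tau_prev = tau
    return q, v, a, G, tau                     # recover p_k[n] = G_k[n]/h_k[n] afterwards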
For the convergence analysis of Algorithm 1, let us define the objective values of (P1) and
(P1.2) at the l-th iteration as τl and τllb , respectively. Then we can express the relationship
τl = τl^lb ≤ τl+1^lb ≤ τl+1 ,    (21)
where the first equality holds because the surrogate functions in (15), (16), and (17) are tight at the given local points, the second inequality is derived from the non-decreasing property of the optimal objective value of (P1.2) over the SCA iterations, and the third inequality follows from the fact that the optimal value of the approximated problem (P1.2) is a lower bound on that of the original problem (P1).
From (21), we can conclude that the objective value τ in (P1) is non-decreasing over the iterations of Algorithm 1. Since the objective value τ in (P1) has a finite upper bound and, at given local points, the surrogate functions in (15), (16), and (17) have the same gradients as their original functions, it can be verified that Algorithm 1 is guaranteed to converge to at least a locally optimal solution of (P1) [23], [24].
B. Energy Efficiency Maximization
In this subsection, we consider the EE maximization problem (P2). First, by applying (12) and (13) and introducing the auxiliary variables {V1[n]}, (P2) can be transformed as
(P2.1):  max_{ {q[n],v[n],a[n]}, {Gk[n],V1[n],η} }  η / ( Σ_{n=1}^{N} [ c1 kv[n]k³ + c2 /V1[n] + c2 ka[n]k²/(g²V1[n]) ] )    (22a)
s.t.  W Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Gm[n] ) − R̂k[n] ] ≥ η, ∀k,    (22b)
      (1) − (4), (14c), (14e) − (14g).
Similar to (P1.1), we can see that (P2.1) is equivalent to (P2), but (P2.1) is still non-convex due
to the constraints in (14c), (14f), and (22b).
To tackle this issue, we can employ a similar SCA process to that presented in Section III-A. By
adopting (15) and Lemmas 1 and 2, a convex approximation of (P2.1) at the (l + 1)-th iteration
is given by
(P2.2):  max_{ {ql+1[n],vl+1[n],a[n]}, {Gk,l+1[n],V1[n],η^lb} }  η^lb / ( Σ_{n=1}^{N} [ c1 kv[n]k³ + c2 /V1[n] + c2 ka[n]k²/(g²V1[n]) ] )    (23a)
s.t.  W Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Gm,l+1[n] ) − R̂k^ub[n] ] ≥ η^lb, ∀k,    (23b)
      (1) − (4), (14e), (14g), (18), (19),
where η lb denotes the lower bound of η in the original problem (P2).
It can be shown that (P2.2) is a concave-convex fractional problem, which can be optimally solved via the Dinkelbach method [26], [27]. Then, denoting µ = Σ_{n=1}^{N} [ c1 kv[n]k³ + c2 /V1[n] + c2 ka[n]k²/(g²V1[n]) ] with a given constant λm, (P2.2) can be converted to (P2.3) as

(P2.3):  max_{ {ql+1[n],vl+1[n],a[n]}, {Gk,l+1[n],V1[n],η^lb} }  η^lb − λm µ    (24a)
s.t.  (1) − (4), (14e), (14g), (18), (19), (23b).
Based on (P2.3), we summarize the proposed iterative procedure in Algorithm 2. The convergence
and the local optimality of Algorithm 2 can be verified similar to Algorithm 1, and thus the
details are omitted for brevity.
Algorithm 2: Proposed algorithm for (P2)
Initialize {q0 [n], v0 [n], Gk,0[n]}, ∀k, n and let λ0 = 0, m = 0, and l = 0.
Repeat
Repeat
Compute {ql+1 [n], vl+1 [n], Gk,l+1 [n]} for (P2.3) with given
{ql [n], vl [n], Gk,l [n]}, ∀k, n and λm .
Update l ← l + 1.
Until Convergence.
Let F (λm ) = η lb − λm µ and λm+1 = η lb /µ.
Update m ← m + 1.
Let {q0 [n], v0 [n], Gk,0[n]} = {ql+1 [n], vl+1 [n], Gk,l+1[n]}, ∀k, n and l = 0.
Until Convergence.
Obtain pk[n] = Gk,l+1[n]/hk,l+1[n], ∀k, n.
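The Dinkelbach outer loop of Algorithm 2 can be sketched as follows; solve_P23 is a placeholder (assumed) inner SCA solver for (P2.3) that returns the new iterates together with the numerator η^lb and denominator µ of the EE objective.

def dinkelbach_ee(q, v, G, solve_P23, tol=1e-4, max_iter=30):
    """Skeleton of Algorithm 2: for a fixed lambda_m the inner SCA solves (P2.3);
    lambda is then updated as eta_lb/mu until F(lambda_m) = eta_lb - lambda_m*mu vanishes."""
    lam = 0.0
    for _ in range(max_iter):
        q, v, a, G, eta_lb, mu = solve_P23(q, v, G, lam)   # inner SCA loop on (P2.3)
        if abs(eta_lb - lam * mu) < tol:                   # F(lambda_m) ~ 0  =>  lambda* = eta_lb/mu
            break
        lam = eta_lb / mu
    return q, v, a, G, eta_lb / mu                         # converged energy efficiency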
It is worthwhile to note that we need to initialize the trajectory variables {q[n], v[n]} for (P1) and
(P2). However, it is not trivial to find such variables satisfying the UAV movement constraints
(1)-(5) and the propulsion power constraint (10d). This will be clearly explained in Section IV-B.
IV. CIRCULAR TRAJECTORY SYSTEM
Now, we examine the circular trajectory system which will be used as a baseline scheme.
First, we choose the center of the circular trajectory c = [x0 y0]^T as the geometric mean of the GNs, c = (1/K)Σ_{k=1}^{K} wk. Denoting r as the radius of the trajectory and θ[n] as the angle of the circle along which the UAV flies at time slot n, the horizontal coordinate of the UAV q[n] can be obtained by q[n] = [r cos θ[n] + x0  r sin θ[n] + y0]^T. Also, the location of GN k, wk, can be represented as wk = [ζk cos ϕk + x0  ζk sin ϕk + y0]^T, where ζk and ϕk equal the distance and the angle between the geometric center c and GN k, respectively. Thus, the distance dk[n] between the UAV and GN k in (8) can be expressed as dk[n] = √( r² + ζk² + H² − 2rζk cos(θ[n] − ϕk) ).
By adopting the angular velocity ω[n] and the angular acceleration α[n], equations in (1)-(6)
can be rewritten as
ω[n] = ω[n − 1] + α[n − 1]δt , for n = 1, ..., N,
(25)
θ[n] = θ[n − 1] + ω[n − 1]δt + (1/2)α[n − 1]δt² , for n = 1, ..., N,    (26)
θ[N] = θ[0] + 2π, ω[0] = ω[N], α[0] = α[N],
(27)
ka[n]k2 = kak [n]k2 + ka⊥ [n]k2 = r 2 α2 [n] + r 2 ω 4 [n] ≤ a2max , for n = 0, 1, ...N,
(28)
ωmin ≤ ω[n] ≤ ωmax , for n = 0, 1, ..., N,
(29)
Pprop[n] = c1 r³ω³[n] + c2 /(rω[n]) + c2 rω³[n]/g² + c2 rα²[n]/(g²ω[n]), for n = 0, 1, ..., N,    (30)
where ak [n] and a⊥ [n] are the tangential and centripetal accelerations, respectively, and ωmin ,
Vmin /r and ωmax , Vmax /r indicate the minimum and maximum angular velocity, respectively.
Similar to Section III, we address the minimum average rate maximization problem and the
EE maximization problem for the circular trajectory, which are respectively formulated as
(P3):  max_{ {θ[n],ω[n],α[n]}, {r,pk[n],τ} }  τ    (31a)
s.t.  rmin ≤ r ≤ rmax ,    (31b)
      (10b) − (10d), (25) − (29),

(P4):  max_{ {θ[n],ω[n],α[n]}, {r,pk[n],η} }  η / ( Σ_{n=1}^{N} Pprop[n] )    (32a)
s.t.  (10c), (11b), (25) − (29), (31b),

where rmin , Vmin T/(2π) and rmax , min( Vmax T/(2π), amax / max_n √(ω⁴[n] + α²[n]) ) denote the minimum and maximum radius of the circular trajectory, respectively. It is emphasized that (P3) and (P4)
are difficult to solve because of the non-convex constraints and objective functions. To deal with
the problems (P3) and (P4), similar SCA frameworks in Section III are applied.
A. Minimum Average Rate Maximization and EE Maximization
For the minimum average rate maximization problem (P3), we first find {r, pk [n]} with
given {θ[n], ω[n], α[n]} and then update {θ[n], ω[n], α[n], pk[n]} for a fixed r. For given
{θ[n], ω[n], α[n]}, we adopt the change of variables Sk[n] and Sk,max[n] as

Sk[n] , pk[n]hk[n] = pk[n]γ0 / ( (r − ζk cos(θ[n] − ϕk))² + ζk² sin²(θ[n] − ϕk) + H² ),    (33)
Sk,max[n] , Ppeak hk[n] = Ppeak γ0 / ( (r − ζk cos(θ[n] − ϕk))² + ζk² sin²(θ[n] − ϕk) + H² ).    (34)
Similar to the method in Section III-A, we employ the SCA to Sk,max[n]. Based on Lemma 1, the concave surrogate function S^lb1_{k,max}[n] of Sk,max[n] with a solution rl at the l-th iteration can be chosen as

S^lb1_{k,max}[n] , Ppeak γ0 ( −(rl+1 − b̌k[n])²/Ǎ²k[n] + B̌k[n](rl+1 − b̌k[n])(rl − b̌k[n]) + Čk[n] ) ≤ Sk,max[n], ∀n,    (35)
where the constants b̌k[n], Ǎk[n], B̌k[n], and Čk[n] are respectively given by

b̌k[n] , ζk cos(θ[n] − ϕk),
Ǎk[n] , ζk² sin²(θ[n] − ϕk) + H²,
B̌k[n] , 2( 1/Ǎ²k[n] − 1/( (rl − b̌k[n])² + Ǎk[n] )² ),
Čk[n] , 2(rl − b̌k[n])²/( (rl − b̌k[n])² + Ǎk[n] )² + 1/( (rl − b̌k[n])² + Ǎk[n] ) − (rl − b̌k[n])²/Ǎ²k[n].
By applying (15), (P3) for fixed {θ[n], ω[n], α[n]} can be reformulated as an approximated
convex problem at the (l + 1)-th iteration of the SCA
(P3.1):  max_{ {rl+1, Sk,l+1[n], τ^lb1} }  τ^lb1    (36a)
s.t.  (1/N)Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Sm,l+1[n] ) − R̆k^ub[n] ] ≥ τ^lb1, ∀k,    (36b)
      0 ≤ Sk[n] ≤ S^lb1_{k,max}[n], ∀k, n,    (36c)
      (10d), (31b),
14
P
K
where R̆kub [n] , Γ̆k [n]
log2 e
P
. (P3.1)
1+ K
j=1,j6=k Sj,l [n]
j=1,j6=k
(Sj,l+1[n] − Sj,l [n]) +log2 1 +
PK
j=1,j6=k
Sj,l [n] and Γ̆k [n] ,
can be successively solved by the CVX until convergence.
Next, we present a solution for (P3) with a given r. To obtain the concave surrogate function
of Sk,max [n], we introduce the following lemma which identifies the surrogate function of the
cosine function.
Lemma 3: For any given φl , the concave surrogate function of cos φ can be computed as
−(φ − φl + sin φl)²/2 + cos φl + sin²φl /2 ≤ cos φ.    (37)
Proof: With a similar process in Appendix A, we can conclude that the function in (37)
satisfies the conditions for a concave surrogate function [23].
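The bound in (37) is easy to verify numerically; the following illustrative check (not from the paper) samples random angle pairs and confirms that the quadratic surrogate never exceeds the cosine.

import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    phi_l, phi = rng.uniform(-np.pi, np.pi, size=2)
    surrogate = -(phi - phi_l + np.sin(phi_l))**2 / 2 + np.cos(phi_l) + np.sin(phi_l)**2 / 2
    assert surrogate <= np.cos(phi) + 1e-12   # (37): concave lower bound, tight at phi = phi_l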
By inspecting Lemmas 1 and 3, the concave surrogate function S^lb2_{k,max}[n] for Sk,max[n] can be identified as

S^lb2_{k,max}[n] , Ppeak γ0 ( −rζk(θl+1[n] − b̂k[n])²/Â²k[n] + B̂k[n] sin(θl[n] − ϕk)(θl+1[n] − b̂k[n]) + Ĉk[n] )
≤ Ppeak γ0 / ( rζk(θl+1[n] − b̂k[n])² + Âk[n] ) ≤ Sk,max[n],    (38)

where b̂k[n], Âk[n], B̂k[n], and Ĉk[n] are given by

b̂k[n] , θl[n] − sin(θl[n] − ϕk),
Âk[n] , r² + ζk² + H² − rζk( 2cos(θl[n] − ϕk) + sin²(θl[n] − ϕk) ),
B̂k[n] , 2rζk( 1/Â²k[n] − 1/( rζk sin²(θl[n] − ϕk) + Âk[n] )² ),
Ĉk[n] , 2rζk sin²(θl[n] − ϕk)/( rζk sin²(θl[n] − ϕk) + Âk[n] )² + 1/( rζk sin²(θl[n] − ϕk) + Âk[n] ) − rζk sin²(θl[n] − ϕk)/Â²k[n].
By utilizing (15) and (38), at the (l + 1)-th iteration of the SCA algorithm with a given r,
(P3) can be approximated to the following convex problem.
(P3.2):  max_{ {θl+1[n],ω[n],α[n]}, {Sk,l+1[n],τ^lb2} }  τ^lb2    (39a)
s.t.  (1/N)Σ_{n=1}^{N} [ log2( 1 + Σ_{m=1}^{K} Sm,l+1[n] ) − R̆k^ub[n] ] ≥ τ^lb2, ∀k,    (39b)
      0 ≤ Sk[n] ≤ S^lb2_{k,max}[n], ∀k, n,    (39c)
      (10d), (25) − (29).
We then successively solve (P3.2) by CVX until convergence. Similar to Algorithm 1, a
solution of problem (P3) is obtained by alternately solving (P3.1) and (P3.2) until the objective
value converges.
For the EE maximization problem (P4) in the circular trajectory case, we can apply similar
methods in Section III-B. Based on (P3.1) and (P3.2), given {θ[n], ω[n], α[n]} and r, (P4) can
be transformed into two concave-convex fractional problems. By using Algorithm 2, we can
alternately solve these problems until convergence.
B. Trajectory Initialization
To initialize the proposed algorithms, we employ a simple circular path concept in [12]. First, the initial angular velocity ω0 is set to ω0 = 2π/T, which implies θ0[n] = 2πn/N, ∀n. Next, the initial radius r0 is chosen to fulfill the constraints in (4), (5), and (10d), which can be expressed as

Vmin T/(2π) ≤ r0 ≤ min( Vmax T/(2π), amax /ω0² ),    (40)
c1 r0³ω0³ + c2 /(r0 ω0) + c2 r0 ω0³/g² ≤ Plim .    (41)

We can simply find the r0 which maximizes the minimum rate in (P1) and (P3) under constraints (40) and (41) via a one-dimensional line search. For the EE maximization problems (P2) and (P4), r0 can be computed in the range of (40). As a result, the initial trajectory q0[n] can be written by q0[n] = [r0 cos(2πn/N) + x0  r0 sin(2πn/N) + y0]^T (n = 0, 1, ..., N) and the initial velocity v0[n] can be simply obtained as v0[n] = (q0[n + 1] − q0[n])/δt (n = 0, 1, ..., N − 1) assuming δt² ≈ 0 in (2).
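A minimal sketch of this initialization, assuming the radius r0 has already been selected by the one-dimensional search over (40)-(41), is given below (illustrative, not from the paper).

import numpy as np

def initial_circular_trajectory(w, r0, T, N):
    """Initial circular path of Section IV-B: center at the geometric mean of the GNs,
    radius r0, one full revolution over the period T."""
    c = w.mean(axis=0)                              # geometric center of the GNs
    dt = T / N
    theta = 2.0 * np.pi * np.arange(N + 1) / N      # theta_0[n] = 2*pi*n/N
    q0 = np.stack([r0 * np.cos(theta) + c[0],
                   r0 * np.sin(theta) + c[1]], axis=1)
    v0 = (q0[1:] - q0[:-1]) / dt                    # finite-difference velocity, delta_t^2 ~ 0
    return q0, v0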
[Figure] Fig. 2. Optimized UAV trajectories (x (m) versus y (m)) for different periods T = 50, 100, 150, and 400 sec with Plim = 150 W.
V. NUMERICAL RESULTS
In this section, we provide numerical results to validate the effectiveness of the proposed
algorithms. For the simulations, we consider K = 6 GNs which are distributed as in Fig. 2 where
the locations of the GNs are marked with the triangles. The constant altitude, the bandwidth, the
reference SNR, and the peak transmission power are set to be H = 100 m, W = 1 MHz, γ0 = 80
dB, and Ppeak = 10 dBm, respectively. Also, the minimum velocity, the maximum velocity, and
the maximum acceleration of the UAV are determined as Vmin = 3 m/sec, Vmax = 100 m/sec, and
amax = 5 m/sec2 , respectively. For the propulsion power consumption model in (6), the constants
c1 and c2 are set as c1 = 9.26 × 10−4 and c2 = 2250, respectively, which make the minimum
propulsion power consumption Pprop,min = 100 W when kvk = 30 m/sec.
We first demonstrate the performance of the minimum rate maximization algorithms. Fig. 2
illustrates the optimized UAV trajectories with various T for Plim = 150 W. It is observed that
when T is smaller than 150 sec, as T increases, the UAV tries to get closer to all GNs in order
to improve the channel conditions from the GNs. In contrast, if T is sufficiently large (T = 400
sec), the UAV is now able to visit all the GNs within a given time period. Thus, the UAV can
[Figure] Fig. 3. Max-min rate (bps/Hz) with respect to the period T (sec) with Plim = 150 W, for the proposed scheme and the circular baselines (w/ opt. r, ω, α, & p; w/ opt. r & p; w/ opt. r).
hover over each GN for a while by traveling along a smooth path around the GNs. This is different
from the results in [12] where the UAV does not have practical movement constraints. This can
be explained as follows: Due to the constraints on the velocity and the propulsion power, the
UAV cannot stay at fixed positions as in [12]. Therefore, the UAV continuously moves around
as close to the GNs as possible to maintain good communication channels without exceeding
the propulsion power limit Plim .
Fig. 3 shows the maximized minimum (max-min) rate performance of the proposed algorithm
as a function of T . We compare the performance of the proposed algorithm with the following
circular-trajectory-based methods.
- Circular with optimum r, ω, α, and p: radius, angular velocity, angular acceleration, and
uplink transmit power are jointly optimized with (P3) in Section IV-A with the circular
trajectory.
- Circular with optimum r and p: radius and uplink transmit power are jointly optimized with
(P3.1) in Section IV-A with the circular trajectory.
- Circular with optimum r: radius is optimized with Ppeak as the initial circular trajectory in Section IV-B.

[Figure] Fig. 4. Optimized UAV trajectories (x (m) versus y (m)) for different propulsion power limits (Plim = 110 W, Plim = 130 W, and no power limit) with T = 400 sec.
First, it can be verified that the proposed algorithm outperforms the baseline schemes regardless
of the time period T . Also, we can see that the max-min rate in the proposed algorithm
monotonically increases with T , since more time is available at the UAV to hover around each
GN. In contrast, in the baseline schemes which are restricted in circular shape trajectory, the
max-min rate performance first increases as T grows, and then decreases after a certain T . This
is due to the fact that, in order to satisfy the propulsion power constraint, the radius of the circular
trajectory should increase as T gets large, and thus the UAV may become too far away from the
geometric center of the GNs after a certain T. Therefore, we can expect the performance gain of the proposed algorithm over the baseline schemes to grow with T.
Fig. 4 illustrates the optimized UAV trajectories for various propulsion power limit Plim with
T = 400 sec. It can be seen that for Plim = 110 W, the trajectory of the UAV is restricted to a
smooth path with a large turning radius to consume a low propulsion power. However, as Plim
gets larger, we observe quick changes along the trajectory path. Thus the UAV can move with
a much smaller turning radius, which enhances the max-min rate performance.
[Figure] Fig. 5. Max-min rate (bps/Hz) with respect to the propulsion power limit Plim (W) with T = 400 sec, for the proposed scheme and the circular baselines.
In Fig. 5, we depict the average max-min rate of various schemes as a function of the
propulsion power constraint Plim . For both the proposed algorithm and the baseline schemes,
the max-min rate first increases as Plim grows and then gets saturated. This can be explained
as follows: With a large Plim , the trajectory and the velocity of the UAV change more freely to
attain good channel conditions, and thus the max-min rate increases. However, even if a large
Plim is given, the max-min rate cannot continue to increase because there are practical limits on
the velocity and acceleration. Similar to Fig. 3, we can see that the proposed algorithm provides
significant performance gains over the baseline schemes.
Next, in Fig. 6, we investigate the optimized trajectory of the EE maximization problem with
various T . As T increases, the overall patterns are similar to Fig. 2. Nevertheless, to balance
between the rate performance and the propulsion power consumption, the EE maximization
trajectory shows a smooth path with a relatively large turning radius, and thus the average
propulsion power consumption becomes lower.
To present the impact of the energy efficient UAV communication designs, Fig. 7 depicts the
UAV speed of the proposed EE maximization method with T = 400 sec. For comparison, we
[Figure] Fig. 6. Optimized energy-efficient UAV trajectories (x (m) versus y (m)) for different periods T = 50, 200, 400, and 500 sec.

[Figure] Fig. 7. UAV speed (m/sec) over time (sec) for the max-min rate design without propulsion power constraint and the EE maximization design with T = 400 sec.
TABLE I
PERFORMANCE COMPARISON WITH MAX-MIN RATE AND EE MAXIMIZATION FOR T = 400 SEC

                                     Average    Average         Average         Average    Energy
                                     speed      acceleration    max-min rate    power      efficiency
                                     (m/sec)    (m/sec^2)       (bps/Hz)        (Watts)    (kbits/Joule)
Max-min rate w/o Plim    Proposed    18.42      4.73            0.99            553.01     1.80
                         Circular    13.50      1.88            0.53            541.41     0.98
EE maximization          Proposed    25.73      2.71            0.79            122.14     6.47
                         Circular    15.35      0.27            0.47            151.33     3.10
also consider the max-min rate scheme without the propulsion power constraint. It is observed
that for the max-min rate case, the UAV tries to fly between the GNs as fast as possible and
stay over the GNs with a low speed. On the other hand, the EE maximization scheme keeps the
speed of the UAV at around 30 m/sec in order not to waste the propulsion energy.
Finally, Table I presents the performance comparison of the max-min rate without propulsion
power constraint and the EE maximization designs for both the proposed and the baseline
schemes with T = 400 sec. We can see that the max-min rate methods consume much higher
propulsion power by allowing a large variation of the speed and the average acceleration. In
contrast, the speed of the proposed EE maximization design slowly varies with low acceleration,
and thus much higher EE can be achieved. We observe that the proposed EE maximization
algorithm exhibits about a 259% gain over the max-min rate design without the propulsion power constraint and about a 109% gain over the circular baseline EE maximization scheme.
VI. CONCLUSION
In this paper, we have studied the UAV-aided wireless communication optimization under
the practical propulsion energy constraint at the UAV. For both the minimum average rate
maximization problem and the EE maximization problem, the UAV trajectories and the uplink
transmit power of the GNs have been jointly optimized. By applying the SCA technique, we
have proposed efficient iterative algorithms which find local optimal solutions. Numerical results
have demonstrated that the proposed algorithms provide substantial performance gains compared
to the baseline schemes.
APPENDIX A
PROOF OF LEMMA 1

Let us define a function f1(u) , 1/(ρkuk² + z) for u = [ux uy]^T, where z and ρ are positive constants.
For any given ul ∈ R^{2×1}, in order for an arbitrary function g1(u|ul) to be a concave surrogate function of f1(u), it must satisfy the following conditions: f1(ul) = g1(ul|ul), ∇g1(ul|ul) = ∇f1(ul), and g1(u|ul) ≤ f1(u), ∀u [23]. Denoting the function g1(u|ul) as

g1(u|ul) , −ρkuk²/z² + Bu^T ul + C,    (42)

where B , 2ρ( 1/z² − 1/(ρkul k² + z)² ) and C , 1/(ρkul k² + z) + 2ρkul k²/(ρkul k² + z)² − ρkul k²/z², it can be easily shown that f1(ul) = g1(ul|ul), i.e., g1(u|ul) fulfills the first condition for the surrogate function.
Also, the gradients of f1(u) and g1(u|ul) with respect to u can be respectively computed as

∇f1(u) = −2ρu/(ρkuk² + z)²,    (43)
∇g1(u|ul) = −2ρu/z² + Bul .    (44)
Since two gradients in (43) and (44) become identical at u = ul , g1 (u|ul ) satisfies the second
condition for the surrogate function.
To prove the global lower bound condition, we can calculate the Hessian matrix ∇²_u h1(u|ul) of the function h1(u|ul) , f1(u) − g1(u|ul) as

∇²h1(u|ul) = D [ E + 4ρz²ux²    4ρz²ux uy ;  4ρz²ux uy    E + 4ρz²uy² ],    (45)

where D , 2ρ/( z²(ρkuk² + z)³ ) > 0 and E , ρ³kuk⁶ + 3ρ²zkuk⁴ + 2ρz²kuk² ≥ 0. One can easily
check that the Hessian in (45) is a positive semi-definite matrix, which implies that h1 (u|ul ) is
a convex function.
Since ∇h1 (u|ul ) = 0 at u = ul from (43) and (44), the global minimum of h1 (u|ul ) is
achieved at u = ul with h1 (ul |ul ) = 0. As a result, we can show that h1 (u|ul ) is greater than
or equal to 0 for any given ul , and thus the third condition for the surrogate function holds. By
substituting u = ql+1 [n] − wk , ul = ql [n] − wk , z = H 2 , and ρ = 1 and multiplying f1 (u) and
g1 (u|ul ) by Ppeak γ0 , Lemma 1 is thus proved.
R EFERENCES
[1] Y. Zeng, R. Zhang, and T. J. Lim, “Wireless communications with unmanned aerial vehicles: opportunities and challenges,”
IEEE Commun. Mag., vol. 54, pp. 36–42, May. 2016.
23
[2] W. Lee, I. Lee, J. S. Kwak, B.-C. Ihm, and S. Han, “Multi-BS MIMO cooperation: challenges and practical solutions in
4G systems,” IEEE Wireless Commun., vol. 19, pp. 89–96, Feb. 2012.
[3] Y. Zeng, R. Zhang, and T. J. Lim, “Throughput maximization for UAV-enabled mobile relaying systems,” IEEE Trans.
Commun., vol. 64, pp. 4983–4996, Dec. 2016.
[4] Q. Wang, Z. Chen, W. Mei, and J. Fang, “Improving physical layer security using UAV-enabled mobile relaying,” IEEE
Wireless Commun. Lett., vol. 6, pp. 310–313, Jun. 2017.
[5] C. Song, K.-J. Lee, and I. Lee, “Designs of MIMO amplify-and-forward wireless relaying networks: challenges and
solutions,” IEEE Access, vol. 5, pp. 9223–9234, May. 2017.
[6] H.-B. Kong, C. Song, H. Park, and I. Lee, “A new beamforming design for MIMO AF relaying systems with direct link,”
IEEE Trans. Commun., vol. 62, pp. 2286–2295, Jul. 2014.
[7] A. Merwaday and I. Guvenc, “UAV assisted heterogeneous networks for public safety communications,” in Proc. IEEE
WCNC, pp. 329–334, May. 2015.
[8] J. Lyu, Y. Zeng, and R. Zhang, “Spectrum sharing and cyclical multiple access in UAV-aided cellular offloading,” arXiv
preprint arXiv:1705.09024, 2017.
[9] S. Jeong, O. Simeone, and J. Kang, “Mobile edge computing via a UAV-mounted cloudlet: optimization of bit allocation
and path planning,” accepted in IEEE Trans. Veh. Technol. [Online] Available: http://arxiv.org/abs/1609.05362.
[10] A. Al-Hourani, S. Kandeepan, and S. Lardner, “Optimal LAP altitude for maximum coverage,” IEEE Wireless Commun.
Lett., vol. 3, pp. 569–572, Jul. 2014.
[11] J. Lyu, Y. Zeng, and R. Zhang, “Cyclical multiple access in UAV-aided communications: a throughput-delay tradeoff,”
IEEE Wireless Commun. Lett., vol. 5, pp. 600–603, Dec. 2016.
[12] Q. Wu, Y. Zeng, and R. Zhang, “Joint trajectory and communication design for multi-UAV enabled wireless networks,”
arXiv:1705.02723, 2017.
[13] A. Filippone, Flight performance of fixed and rotary wing aircraft. Elsevier, 2006.
[14] D. H. Choi, S. H. Kim, and D. K. Sung, “Energy-efficient maneuvering and communication of a single UAV-based relay,”
IEEE Trans. Aerosp. Electron. Syst., vol. 50, pp. 2320–2327, Jul. 2014.
[15] J. Zhang, Y. Zeng, and R. Zhang, “Spectrum and energy efficiency maximization in UAV-enabled mobile relaying,” in
Proc. IEEE ICC, pp. 1–6, May. 2017.
[16] Y. Zeng and R. Zhang, “Energy-efficient UAV communication with trajectory optimization,” IEEE Trans. Wireless Commun.,
vol. 16, pp. 3747–3760, Jun. 2017.
[17] H. Kim, S.-R. Lee, C. Song, K.-J. Lee, and I. Lee, “Optimal power allocation scheme for energy efficiency maximization
in distributed antenna systems,” IEEE Trans. Commun., vol. 63, pp. 431–440, Feb. 2015.
[18] J. Xu and L. Qiu, “Energy efficiency optimization for MIMO broadcast channels,” IEEE Trans. Wireless Commun., vol. 12,
pp. 690–701, Feb. 2013.
[19] S.-R. Lee, J. Jung, H. Park, and I. Lee, “A new energy-efficient beamforming strategy for MISO interfering broadcast
channels based on large systems analysis,” IEEE Trans. Wireless Commun., vol. 15, pp. 2872–2882, Apr. 2016.
[20] B. Du, C. Pan, W. Zhang, and M. Chen, “Distributed energy-efficient power optimization for CoMP systems with max-min
fairness,” IEEE Commun. Lett., vol. 18, pp. 999–1002, Jun. 2014.
[21] Y. Li, M. Sheng, C. W. Tan, Y. Zhang, Y. Sun, X. Wang, Y. Shi, and J. Li, “Energy-efficient subcarrier assignment and
power allocation in OFDMA systems with max-min fairness guarantees,” IEEE Trans. Commun., vol. 63, pp. 3183–3195,
Sep. 2015.
[22] Y. Li, M. Sheng, X. Wang, Y. Zhang, and J. Wen, “Max-min energy-efficient power allocation in interference-limited
wireless networks,” IEEE Trans. Veh. Technol., vol. 64, pp. 4321–4326, Sep. 2015.
24
[23] B. R. Marks and G. P. Wright, “A general inner approximation algorithm for nonconvex mathematical programs,” Operations
Research, vol. 26, pp. 681–683, Aug. 1978.
[24] Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal processing, communications, and
machine learning,” IEEE Trans. Signal Process., vol. 65, pp. 794–816, Feb. 2017.
[25] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1.” Available:
http://cvxr.com/cvx, 2017.
[26] W. Dinkelbach, “On nonlinear fractional programming,” Management science, vol. 13, pp. 492–498, Mar. 1967.
[27] A. Zappone and E. Jorswieck, “Energy efficiency in wireless networks via fractional programming theory,” Foundations
and Trends in Communications and Information Theory, vol. 11, pp. 185–396, Jun. 2015.
SIAM/ASA J. UNCERTAINTY QUANTIFICATION
© xxxx Society for Industrial and Applied Mathematics
Vol. xx, pp. x–x

Mathematical Properties of Polynomial Dimensional Decomposition∗

arXiv:1804.01647v1 [math.NA] 5 Apr 2018

Sharif Rahman†
Abstract. Many high-dimensional uncertainty quantification problems are solved by polynomial dimensional decomposition (PDD), which represents Fourier-like series expansion in terms of random
orthonormal polynomials with increasing dimensions. This study constructs dimension-wise and
orthogonal splitting of polynomial spaces, proves completeness of polynomial orthogonal basis
for prescribed assumptions, and demonstrates mean-square convergence to the correct limit – all
associated with PDD. A second-moment error analysis reveals that PDD cannot commit a larger error than polynomial chaos expansion (PCE) for appropriately chosen truncation parameters. A comparison of the computational efforts required to estimate, with the same precision, the variance of an output function involving exponentially attenuating expansion coefficients shows that the PDD approximation can be markedly more efficient than the PCE approximation.
Key words. Uncertainty quantification, ANOVA decomposition, multivariate orthogonal polynomials, polynomial chaos expansion.
1. Introduction. Polynomial dimensional decomposition (PDD) is a hierarchical, infinite series expansion of a square-integrable random variable involving measure-consistent
orthogonal polynomials in independent random variables. Introduced by the author [20, 21]
as a polynomial variant of the well-known analysis-of-variance (ANOVA) dimensional decomposition (ADD), PDD deflates the curse of dimensionality to some extent by developing
an input-output behavior of complex systems with low effective dimensions [4], wherein the
effects of degrees of interactions among input variables weaken rapidly or vanish altogether.
Approximations stemming from truncated PDD are commonly used for solving uncertainty
quantification problems in engineering and applied sciences, including multiscale fracture
mechanics [6], random eigenvalue problems [24], computational fluid dynamics [27], and
stochastic design optimization [25], to name a few. However, all existing works on PDD are
focused on practical applications with almost no mathematical analysis of PDD. Indeed, a
number of mathematical issues concerning necessary and sufficient conditions for the completeness of PDD basis functions; convergence, exactness, and optimal analyses of PDD;
and approximation quality of the truncated PDD have yet to be studied or resolved. This
paper fills the gap by establishing fundamental mathematical properties to empower PDD
with a solid foundation, so that PDD can be as credible as its close cousin, polynomial
chaos expansion (PCE) [5, 10, 28, 29], providing an alternative, if not a better, choice for
uncertainty quantification in computational science and engineering.
The principal objective of this work is to examine important mathematical properties of
PDD, not studied heretofore, for arbitrary but independent probability measures of input
random variables. The paper is organized as follows. Section 2 defines or discusses mathematical notations and preliminaries. Two sets of assumptions on the input probability
measures required by PDD are explained. A brief exposition of univariate and multivariate
orthogonal polynomials consistent with a general but product-type probability measure,
∗ This work was supported by the U.S. National Science Foundation under Grant Number CMMI-1462385.
† College of Engineering and Applied Mathematics & Computational Sciences, The University of Iowa, Iowa City, IA 52242 ([email protected]). Questions, comments, or corrections to this document may be directed to that email address.
including their second moment properties, is given in Section 3. The section also describes
relevant polynomial spaces and construction of their dimension-wise orthogonal decompositions. The orthogonal basis and completeness of multivariate orthogonal polynomials have
also been proved. Section 4 briefly explains ADD, followed by presentations of PDD for a
square-integrable random variable. The convergence and exactness of PDD are explained.
In the same section, a truncated PDD and its approximation quality are discussed. The
formulae for the mean and variance of a truncated PDD are also derived. The section ends
with an explanation on how and when the PDD can be extended for infinitely many input
variables. Section 5 briefly describes degree-wise orthogonal decompositions of polynomial
spaces, leading to PCE. In Section 6, a second-moment error analysis of PDD is conducted,
followed by a comparison with that of PCE. Finally, conclusions are drawn in Section 7.
2. Input Random Variables. Let N := {1, 2, . . .}, N0 := N ∪ {0}, and R := (−∞, +∞)
represent the sets of positive integer (natural), non-negative integer, and real numbers,
respectively. Denote by A{i} , i = 1, . . . , N , an ith bounded or unbounded subdomain of R,
{i} ⊆ RN .
so that AN := ×N
i=1 A
Let (Ω, F, P) be a complete probability space, where Ω is a sample space representing an
abstract set of elementary events, F is a σ-algebra on Ω, and P : F → [0, 1] is a probability
measure. With B N := B(AN ) representing the Borel σ-algebra on AN ⊆ RN , consider an
AN -valued input random vector X := (X1 , . . . , XN )T : (Ω, F) → (AN , BN ), describing the
statistical uncertainties in all system parameters of a stochastic problem. The input random
variables are also referred to as basic random variables [10]. The non-zero, finite integer N
represents the number of input random variables and is often referred to as the dimension
of the stochastic problem.
Denote by FX(x) := P(∩_{i=1}^{N} {Xi ≤ xi}) the joint distribution function of X, admitting the joint probability density function fX(x) := ∂^N FX(x)/∂x1 · · · ∂xN. Given the abstract
probability space (Ω, F, P) of X, the image probability space is (AN , B N , fX dx), where AN
can be viewed as the image of Ω from the mapping X : Ω → AN , and is also the support of
fX (x). Similarly, each component random variable Xi is defined on the abstract marginal
probability space (Ω{i} , F {i} , P{i} ) comprising sample space Ω{i} , σ-algebra F {i} , and probability measure P{i} . Then, the corresponding image probability space is (A{i} , B {i} , fXi dxi ),
where A{i} ⊆ R is the image sample space of Xi , B{i} is the Borel σ-algebra on A{i} , and
fXi (xi ) is the marginal probability density function of Xi . Relevant statements and objects in the abstract probability space have obvious counterparts in the associated image
probability space. Both probability spaces will be used in this paper.
Two sets of assumptions used by PDD are as follows.
Assumption 2.1.The input random vector X := (X1 , . . . , XN )T : (Ω, F) → (AN , B N )
satisfies all of the following conditions:
(1) Each input random variable Xi : (Ω{i} , F {i} ) → (A{i} , B {i} ) has absolutely continuous
marginal distribution function FXi (xi ) := P(Xi ≤ xi ) and continuous marginal density
function fXi (xi ) := ∂FXi (xi )/∂xi with a bounded or unbounded support A{i} ⊆ R.
(2) All component random variables Xi , i = 1, . . . , N , are statistically independent, but
not necessarily identical. In consequence,
X is endowed with a product-type probability density function, that is, fX(x) = ∏_{i=1}^{N} fXi(xi), with a bounded or unbounded support A^N ⊆ R^N.
(3) Each input random variable Xi possesses finite moments of all orders, that is, for all
i = 1, . . . , N and l ∈ N0,

(2.1)  µ_{i,l} := E[Xi^l] := ∫_Ω Xi^l(ω) dP(ω) = ∫_{A^N} xi^l fX(x) dx = ∫_{A^{i}} xi^l fXi(xi) dxi < ∞,
where E is the expectation operator with respect to the probability measure P or fX (x)dx.
Assumption 2.2. The moments and marginal density function of each input random variable Xi, where i = 1, . . . , N, satisfy at least one of the following conditions [10]:
(1) The density function fXi(xi) has a compact support, that is, there exists a compact interval [ai, bi], ai, bi ∈ R, such that P(ai ≤ Xi ≤ bi) = 1.
(2) For the moment sequence {µ_{i,l}}_{l∈N0} for Xi, there holds

(2.2)  lim inf_{l→∞} (µ_{i,2l})^{1/(2l)}/(2l) < ∞.

(3) For the moment sequence {µ_{i,l}}_{l∈N0} for Xi, there holds

(2.3)  Σ_{l∈N0} 1/(µ_{i,2l})^{1/(2l)} = ∞.

(4) The random variable Xi is exponentially integrable, that is, there exists a real number a > 0 such that

(2.4)  ∫_{A^{i}} exp(a|xi|) fXi(xi) dxi < ∞.

(5) If the density function fXi(xi) is symmetric, differentiable, and strictly positive, then there exists a real number a > 0 such that

(2.5)  ∫_{A^{i}} −ln fXi(xi)/(1 + xi²) dxi = ∞ and −xi (dfXi(xi)/dxi)/fXi(xi) → ∞ as (xi → ∞, xi ≥ a).
Assumption 2.1 assures the existence of an infinite sequence of orthogonal polynomials
consistent with the input probability measure. Assumption 2.2, in addition to Assumption
2.1, guarantees the input probability measure to be determinate, resulting in a complete
orthogonal polynomial basis of a function space of interest. Both assumptions impose
only mild restrictions on the probability measure. Examples of input random variables
satisfying Assumptions 2.1 and 2.2 are Gaussian, uniform, exponential, beta, and gamma
variables, which are commonly used in uncertainty quantification. These assumptions, to be
explained in the next section, are vitally important for the determinacy of the probability
measure and the completeness of the orthogonal polynomial basis. Therefore, for both
PDD and PCE, which entail orthogonal polynomial expansions, Assumptions 2.1 and 2.2
are necessary. Unfortunately, they are not always clearly specified in the PDD or PCE
literature. A prototypical example where Assumption 2.1 is satisfied, but Assumption 2.2
is not, is the case of a lognormal random variable. As noted by Ernst et al. [10], the
violation of Assumption 2.2 leads to indeterminacy of the input probability measure and
thereby fails to form a complete orthogonal polynomial basis. Finally, Assumptions 2.1 and
2.2 can be modified to account for random variables with discrete or mixed distributions
[11] or dependent random variables [23]. The discrete or mixed distributions and dependent
variables are not considered in this paper.
3. Measure-Consistent Orthogonal Polynomials and Polynomial Spaces.
3.1. Univariate orthogonal polynomials. Consider an ith random variable Xi defined
on the abstract probability space (Ω{i} , F {i} , P{i} ) with its image (A{i} , B {i} , fXi dxi ). Let
Π{i} := R[xi ] be the space of real polynomials in xi . For any polynomial pair P{i} , Q{i} ∈
Π{i} , define an inner product
(3.1)  (P{i}, Q{i})_{fXi dxi} := ∫_{A^{i}} P{i}(xi) Q{i}(xi) fXi(xi) dxi =: E[P{i}(Xi) Q{i}(Xi)]
with respect to the probability measure fXi (xi )dxi and the induced norm
‖P{i}‖_{fXi dxi} := √( (P{i}, P{i})_{fXi dxi} ) = ( ∫_{A^{i}} P{i}²(xi) fXi(xi) dxi )^{1/2} =: √( E[P{i}²(Xi)] ).
Under Assumption 2.1, moments of Xi of all orders exist and are finite, including zero-order moments µ_{i,0} := ∫_{A^{i}} fXi(xi) dxi = 1, i = 1, . . . , N, that are always positive. Clearly,
∥P{i} ∥ > 0 for all non-zero P{i} ∈ Π{i} . Then, according to Gautschi [12], the inner product
in (3.1) is positive-definite on Π{i} . Therefore, there exists an infinite set of univariate
orthogonal polynomials, say, {P{i},ji (xi ) : ji ∈ N0 }, P{i},ji ̸= 0, which is consistent with the
probability measure fXi (xi )dxi , satisfying
(3.2)  (P{i},ji , P{i},ki)_{fXi dxi} = E[P²{i},ji(Xi)] if ji = ki, and 0 if ji ≠ ki,

for ki ∈ N0, where 0 < E[P²{i},ji(Xi)] < ∞. Here, in the notation for the polynomial
P{i},ji (xi ), the first and second indices refer to the ith variable and degree ji , respectively.
Prominent examples of classical univariate orthogonal polynomials comprise Hermite, Laguerre, and Jacobi polynomials, which are consistent with the measures defined by Gaussian, gamma, and beta densities on the whole real line, semi-infinite interval, and bounded
interval, respectively. Many orthogonal polynomials, including the three classical polynomials mentioned, can be expressed in a unified way by invoking hypergeometric series,
incorporated in a tree structure of the Askey scheme [1]. For even more general measures,
established numerical techniques, such as Gram-Schmidt [13] and Stieltjes’ procedure [26],
can be used to generate any measure-consistent orthogonal polynomials.
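As an illustration of such a numerical construction (not taken from this paper), the following sketch builds orthonormal polynomial coefficients directly from raw moments of the measure via a Cholesky factorization of the Hankel Gram matrix; Stieltjes' recurrence would be the more numerically stable choice in practice.

import numpy as np

def orthonormal_polynomials(moments, degree):
    """Monomial-basis coefficients of orthonormal polynomials of degree 0..degree for a
    probability measure given through its raw moments mu_0,...,mu_{2*degree}."""
    G = np.array([[moments[i + j] for j in range(degree + 1)]
                  for i in range(degree + 1)])   # Hankel Gram matrix E[x^{i+j}]
    L = np.linalg.cholesky(G)                    # G = L L^T, requires a determinate, PD moment matrix
    return np.linalg.inv(L)                      # row j = coefficients of the degree-j orthonormal polynomial

# Example: standard Gaussian moments 1, 0, 1, 0, 3, 0, 15, ... reproduce the (normalized) Hermite polynomials.
from math import factorial
mom = [0.0 if k % 2 else factorial(k) / (2**(k // 2) * factorial(k // 2)) for k in range(9)]
A = orthonormal_polynomials(mom, 4)              # e.g. row 2 corresponds to (x^2 - 1)/sqrt(2)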
3.2. Multivariate orthogonal polynomials. For N ∈ N, denote by {1, . . . , N } an index
set, so that u ⊆ {1, . . . , N } is a subset, including the empty set ∅, with cardinality 0 ≤
|u| ≤ N . When ∅ ̸= u ⊆ {1, . . . , N }, a |u|-dimensional multi-index is denoted by ju :=
(ji1, . . . , ji|u|) ∈ N0^{|u|} with degree |ju| := ji1 + · · · + ji|u|, where jip ∈ N0, p = 1, . . . , |u|,
represents the pth component of ju .1
For ∅ ̸= u ⊆ {1, . . . , N }, let Xu := (Xi1 , . . . , Xi|u| )T , a subvector of X, be defined on
the abstract probability space (Ωu , F u , Pu ), where Ωu is the sample space of Xu , F u is a
σ-algebra on Ωu , and Pu is a probability measure. The corresponding image probability
space is (Au , B u , fXu dxu ), where Au := ×i∈u A{i} ⊆ R|u| is the image sample space of Xu ,
1
The same symbol | · | is used for designating both the cardinality of a set and the degree of a multi-index
in this paper.
B^u is the Borel σ-algebra on A^u, and fXu(xu) is the marginal probability density function of Xu supported on A^u. Under Assumption 2.1, fXu(xu) = ∏_{i∈u} fXi(xi). Denote by
Πu := R[xu ] = R[xi1 , . . . , xi|u| ]
the space of all real polynomials in xu. Then, given the inner product

(Pu, Qu)_{fXu dxu} := ∫_{A^u} Pu(xu) Qu(xu) fXu(xu) dxu =: E[Pu(Xu) Qu(Xu)],
two polynomials Pu ∈ Πu and Qu ∈ Πu in xu are called orthogonal to each other if (Pu, Qu)_{fXu dxu} = 0 [8]. Moreover, a polynomial Pu ∈ Πu is said to be an orthogonal polynomial with respect to fXu dxu if it is orthogonal to all polynomials of lower degree, that is, if [8]
(3.3)  (Pu, Qu)_{fXu dxu} = 0 ∀ Qu ∈ Πu with deg Qu < deg Pu.
Let {Pu,ju(xu) : ju ∈ N0^{|u|}}, ∅ ≠ u ⊆ {1, . . . , N}, represent an infinite set of multivariate orthogonal polynomials, which is consistent with the probability measure fXu(xu)dxu, satisfying

(3.4)  (Pu,ju , Pu,ku)_{fXu dxu} =: E[Pu,ju(Xu) Pu,ku(Xu)] = 0 ∀ ju ≠ ku, ku ∈ N0^{|u|}.
Clearly, each Pu,ju ∈ Πu is a multivariate orthogonal polynomial satisfying (3.3). Due
to the product-type probability measure of Xu , a consequence of statistical independence
from Assumption 2.1, such multivariate polynomials exist and are easily constructed by
tensorizing univariate orthogonal polynomials.
Proposition 3.1.Let X := (X1 , . . . , XN )T : (Ω, F) → (AN , B N ) be a vector of N ∈ N
input random variables fulfilling Assumption 2.1. Suppose that the sets of univariate orthogonal polynomials for all marginal measures are obtained as {P{i},ji (xi ) : ji ∈ N0 },
i = 1, . . . , N . Then, for ∅ ̸= u ⊆ {1, . . . , N }, the set of multivariate orthogonal polynomials
in xu consistent with the probability measure fXu (xu )dxu is
(3.5)  {Pu,ju(xu) : ju ∈ N0^{|u|}} = ⊗_{i∈u} {P{i},ji(xi) : ji ∈ N0},

where the symbol ⊗ denotes tensor product. In terms of an element, the multivariate orthogonal polynomial of degree |ju| = ji1 + · · · + ji|u| is

(3.6)  Pu,ju(xu) = ∏_{i∈u} P{i},ji(xi).
Proof. Consider two distinct polynomials Pu,ju(xu) and Pu,ku(xu) from the set {Pu,ju(xu) : ju ∈ N0^{|u|}} satisfying (3.5). Since ju ≠ ku, ju and ku must differ in at least one component. Without loss of generality, suppose that ji1 ≠ ki1. Then, by Fubini's theorem, with statistical independence of random variables in mind,

(3.7)  (Pu,ju , Pu,ku)_{fXu dxu} = ∫_{A^u} Pu,ju(xu) Pu,ku(xu) fXu(xu) dxu
       = ∏_{p=2}^{|u|} [ ∫_{A^{ip}} P{ip},jip(xip) P{ip},kip(xip) fXip(xip) dxip ] × ∫_{A^{i1}} P{i1},ji1(xi1) P{i1},ki1(xi1) fXi1(xi1) dxi1
       = 0,
where the equality to zero in the last line results from the recognition that the inner integral
vanishes by setting i = i1 in (3.2).
In addition, for ju ∈ N0^{|u|},

(3.8)  (Pu,ju , Pu,ju)_{fXu dxu} =: E[P²u,ju(Xu)] = ∏_{i∈u} E[P²{i},ji(Xi)] > 0

and is finite by virtue of the existence of the set of univariate orthogonal polynomials {P{i},ji(xi) : ji ∈ N0} for i = 1, . . . , N. Therefore, {Pu,ju(xu) : ju ∈ N0^{|u|}} satisfying (3.5) is a set of multivariate orthogonal polynomials consistent with the probability measure fXu(xu)dxu.
Once the multivariate orthogonal polynomials are obtained, they can be scaled to generate multivariate orthonormal polynomials, as follows.
Definition 3.2. A multivariate orthonormal polynomial Ψu,ju(xu), ∅ ≠ u ⊆ {1, . . . , N}, ju ∈ N0^{|u|}, of degree |ju| = ji1 + · · · + ji|u| that is consistent with the probability measure fXu(xu)dxu is defined as

(3.9)  Ψu,ju(xu) := Pu,ju(xu)/√( E[P²u,ju(Xu)] ) = ∏_{i∈u} P{i},ji(xi)/√( E[P²{i},ji(Xi)] ) =: ∏_{i∈u} Ψ{i},ji(xi),

where Ψ{i},ji(xi) := P{i},ji(xi)/√( E[P²{i},ji(Xi)] ) is a univariate orthonormal polynomial in xi of degree ji that is consistent with the probability measure fXi(xi)dxi.
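The tensor-product structure of (3.9) is straightforward to implement; the following minimal sketch (not from the paper) evaluates Ψu,ju(xu) given callables for the univariate orthonormal polynomials, e.g. those produced by the moment-based construction sketched in Section 3.1.

def multivariate_orthonormal(univariate_polys, u, j_u, x):
    """Evaluate the tensor-product orthonormal polynomial Psi_{u,ju}(x_u) of (3.9).
    univariate_polys[i][j] is assumed to be a callable evaluating the degree-j orthonormal
    polynomial in the i-th variable; u holds the variable indices and j_u their degrees (>= 1)."""
    val = 1.0
    for i, j in zip(u, j_u):
        val *= univariate_polys[i][j](x[i])
    return val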
3.3. Dimension-wise orthogonal decomposition of polynomial spaces. An orthogonal
decomposition of polynomial spaces entailing dimension-wise splitting leads to PDD. Here,
to facilitate such splitting of the polynomial space Πu for any ∅ ̸= u ⊆ {1, . . . , N }, limit the
power jip of the ip -th variable, where ip ∈ u ⊆ {1, . . . , N }, p = 1, . . . , |u|, and |u| > 0, to take
on only positive integer values. In consequence, ju := (ji1 , . . . , ji|u| ) ∈ N|u| , the multi-index
of Pu,ju (xu ), has degree |ju | = ji1 + · · · + ji|u| , varying from |u| to ∞ as ji1 ̸= · · · ji|u| ̸= 0.
For ju ∈ N^{|u|} and xu := (xi1, . . . , xi|u|), a monomial in the variables xi1, . . . , xi|u| is the product xu^{ju} = xi1^{ji1} · · · xi|u|^{ji|u|} and has a total degree |ju|. A linear combination of xu^{ju}, where
|ju | = l, |u| ≤ l ≤ ∞, is a homogeneous polynomial in xu of degree l. For ∅ ̸= u ⊆
{1, . . . , N }, denote by
Qul := span{xjuu : |ju | = l, ju ∈ N|u| }, |u| ≤ l < ∞,
the space of homogeneous polynomials in xu of degree l where the individual degree of each
variable is non-zero and by
Θum := span{xjuu : |u| ≤ |ju | ≤ m, ju ∈ N|u| }, |u| ≤ m < ∞,
the space of polynomials in xu of degree at least |u| and at most m where the individual
degree of each variable is non-zero. The dimensions of the vector spaces Qul and Θum ,
respectively, are
(3.10)  dim Q_l^u = #{ ju ∈ N^{|u|} : |ju| = l } = C(l − 1, |u| − 1)

and

(3.11)  dim Θ_m^u = Σ_{l=|u|}^{m} dim Q_l^u = Σ_{l=|u|}^{m} C(l − 1, |u| − 1) = C(m, |u|),

where C(·, ·) denotes the binomial coefficient. Let Z_{|u|}^u := Θ_{|u|}^u.
For each |u| + 1 ≤ l < ∞, denote by Zlu ⊂ Θul the space of orthogonal
polynomials of degree exactly l that are orthogonal to all polynomials in Θul−1 , that is,
Zlu := {Pu ∈ Θul : (Pu , Qu )fXu dxu = 0 ∀ Qu ∈ Θul−1 }, |u| + 1 ≤ l < ∞.
Then Zlu , provided that the support of fXu (xu ) has non-empty interior, is a vector space
of dimension
Mu,l := dim Z_l^u = dim Q_l^u = C(l − 1, |u| − 1).
Many choices exist for the basis of Zlu . Here, to be formally proved in Section 3.3.2, select
{Pu,ju (xu ) : |ju | = l, ju ∈ N|u| } ⊂ Zlu to be a basis of Zlu , comprising Mu,l number of basis
functions. Each basis function Pu,ju (xu ) is a multivariate orthogonal polynomial of degree
|ju | as defined earlier. Clearly,
Zlu = span{Pu,ju (xu ) : |ju | = l, ju ∈ N|u| }, |u| ≤ l < ∞.
According to Proposition 3.3, to be presented later, Pu,ju (Xu ) is orthogonal to Pv,kv (Xv )
whenever (1) u ̸= v and ju , kv are arbitrary; or (2) u = v and ju ̸= kv . Therefore, any two
distinct polynomial subspaces Zlu and Zlv′ , where ∅ ̸= u ⊆ {1, . . . , N }, ∅ ̸= v ⊆ {1, . . . , N },
|u| ≤ l < ∞, and |v| ≤ l′ < ∞, are orthogonal whenever u ̸= v or l ̸= l′ . In consequence,
there exist orthogonal decompositions of
Θ_m^u = ⊕_{l=|u|}^{m} Z_l^u = ⊕_{l=|u|}^{m} span{Pu,ju(xu) : |ju| = l, ju ∈ N^{|u|}} = span{Pu,ju(xu) : |u| ≤ |ju| ≤ m, ju ∈ N^{|u|}}

with the symbol ⊕ representing orthogonal sum and

(3.12)  Πu = 1 ⊕ ⊕_{∅≠v⊆u} ⊕_{l=|v|}^{∞} Z_l^v = 1 ⊕ ⊕_{∅≠v⊆u} ⊕_{l=|v|}^{∞} span{Pv,jv(xv) : |jv| = l, jv ∈ N^{|v|}} = 1 ⊕ ⊕_{∅≠v⊆u} span{Pv,jv(xv) : jv ∈ N^{|v|}},
where 1 := span{1}, the constant subspace, needs to be added because the subspace Zlv
excludes constant functions.
Recall that ΠN is the space of all real polynomials in x. Then, setting u = {1, . . . , N }
in (3.12) first and then swapping v for u yields yet another orthogonal decomposition of
(3.13)  Π^N = 1 ⊕ ⊕_{∅≠u⊆{1,...,N}} ⊕_{l=|u|}^{∞} Z_l^u
        = 1 ⊕ ⊕_{∅≠u⊆{1,...,N}} ⊕_{l=|u|}^{∞} span{Pu,ju(xu) : |ju| = l, ju ∈ N^{|u|}}
        = 1 ⊕ ⊕_{∅≠u⊆{1,...,N}} span{Pu,ju(xu) : ju ∈ N^{|u|}}.
Note that the last expression of (3.13) is equal to the span of
(3.14)  {Pj(x) : j ∈ N0^N} := ⊗_{i=1}^{N} {P{i},ji(xi) : ji ∈ N0},
representing an infinite set of orthogonal polynomials in x.
Given the dimension-wise orthogonal splitting of ΠN , any square-integrable function of
input random vector X can be expanded as a Fourier-like series of hierarchically ordered
multivariate orthogonal or orthonormal polynomials in X_u, ∅ ≠ u ⊆ {1, . . . , N}. The
expansion is referred to as PDD, to be formally presented and analyzed in Section 4.
3.4. Statistical properties of random multivariate polynomials. When the input random variables X1, . . . , XN, instead of real variables x1, . . . , xN, are inserted in the argument, the multivariate polynomials P_{u,j_u}(X_u) and Ψ_{u,j_u}(X_u), where ∅ ≠ u ⊆ {1, . . . , N}
and ju ∈ N|u| , become functions of random input variables. Therefore, it is important
to establish their second-moment properties, to be exploited in the remaining part of this
section and Section 4.
Proposition 3.3. Let X := (X1, . . . , XN) be a vector of N ∈ N input random variables fulfilling Assumption 2.1. For ∅ ≠ u, v ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, and k_v ∈ N^{|v|}, the first- and second-order moments of multivariate orthogonal polynomials are

(3.15)  E[P_{u,j_u}(X_u)] = 0

and

(3.16)  E[P_{u,j_u}(X_u) P_{v,k_v}(X_v)] = \begin{cases} \prod_{i \in u} E\big[P_{\{i\},j_i}^2(X_i)\big], & u = v, \ j_u = k_v, \\ 0, & \text{otherwise}, \end{cases}

respectively.
Proof. Using (3.6) and statistical independence of random variables, E[P_{u,j_u}(X_u)] = \prod_{i \in u} E[P_{\{i\},j_i}(X_i)] for any j_u ∈ N^{|u|}. Since each component of j_u is non-zero, (3.2), with the constant function P_{\{i\},0} ≠ 0 in mind, produces E[P_{\{i\},j_i}(X_i)] = 0 for any i ∈ u, j_i ∈ N, resulting in (3.15).
To obtain the non-trivial result of (3.16), set u = v and j_u = k_v and use (3.8) directly. The trivial result of (3.16) is obtained by considering two subcases. First, when u = v and j_u ≠ k_v, (3.7) yields the result already. Second, when u ≠ v and j_u, k_v are arbitrary, then u and v differ by at least one element. Suppose that i ∈ (u ∪ v) is that element with the associated degree j_i ∈ N. Using the statistical independence of random variables and the fact that E[P_{\{i\},j_i}(X_i)] = 0, as already demonstrated, produces the desired result.
Corollary 3.4. For ∅ ≠ u, v ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, and k_v ∈ N^{|v|}, the first- and second-order moments of multivariate orthonormal polynomials are

(3.17)  E[Ψ_{u,j_u}(X_u)] = 0

and

(3.18)  E[Ψ_{u,j_u}(X_u) Ψ_{v,k_v}(X_v)] = \begin{cases} 1, & u = v, \ j_u = k_v, \\ 0, & \text{otherwise}, \end{cases}

respectively.
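As an illustrative check of Proposition 3.3 and Corollary 3.4 (again under the assumption of independent standard Gaussian inputs, which is only one admissible choice of measure), the sketch below evaluates the relevant expectations exactly with Gauss-Hermite quadrature for the probabilists' weight.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

def psi(x, j):
    """Orthonormal probabilists' Hermite polynomial of degree j for N(0, 1)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(j))

# Gauss-Hermite_e nodes/weights give exact expectations under N(0,1) for
# polynomial integrands of modest degree; normalise weights to a probability measure.
nodes, weights = hermegauss(20)
weights = weights / weights.sum()

def E1(f):                                  # E[f(X)] for X ~ N(0, 1)
    return np.sum(weights * f(nodes))

# Take u = {1} with j_u = (2,) and v = {1, 2} with k_v = (1, 1).  Zero mean as in
# (3.15)/(3.17); the cross moment factorises by independence and vanishes because
# the extra factor psi_1(X2) has zero mean, as in (3.16)/(3.18).
print(E1(lambda x: psi(x, 2)))                                         # ~0
print(E1(lambda x: psi(x, 2) * psi(x, 1)) * E1(lambda x: psi(x, 1)))   # ~0
print(E1(lambda x: psi(x, 2) ** 2))                                    # ~1
```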
3.5. Orthogonal basis and completeness. An important question regarding multivariate orthogonal polynomials discussed in the preceding subsection is whether they constitute a complete basis in a function space of interest, such as a Hilbert space. Let
L2 (AN , B N , fX dx) represent a Hilbert space of square-integrable functions with respect to
the probability measure fX (x)dx supported on AN . The following two propositions show
that, indeed, measure-consistent orthogonal polynomials span various spaces of interest.
Proposition 3.5. Let X := (X1, . . . , XN)^T : (Ω, F) → (A^N, B^N) be a vector of N ∈ N
input random variables fulfilling Assumption 2.1 and Xu := (Xi1 , . . . , Xi|u| )T : (Ωu , F u ) →
(Au , B u ), ∅ ̸= u ⊆ {1, . . . , N }, be a subvector of X. Then {Pu,ju (xu ) : |ju | = l, ju ∈ N|u| },
the set of multivariate orthogonal polynomials of degree l, |u| ≤ l < ∞, consistent with the
probability measure fXu (xu )dxu , is a basis of Zlu .
Proof. Under Assumption 2.1, orthogonal polynomials consistent with the probability measure f_{X_u}(x_u)dx_u exist. Denote by P_{u,l} = (P_{u,l}^{(1)}, . . . , P_{u,l}^{(M_{u,l})})^T a column vector of the elements of {P_{u,j_u}(X_u) : |j_u| = l, j_u ∈ N^{|u|}} arranged according to some monomial order. Let a_{u,l}^T = (a_{u,l}^{(1)}, . . . , a_{u,l}^{(M_{u,l})}) be a row vector comprising some constants a_{u,l}^{(j)} ∈ R, j = 1, . . . , M_{u,l}. Set a_{u,l}^T P_{u,l} = 0. Multiply both sides of the equality from the right by P_{u,l}^T, integrate with respect to the measure f_{X_u}(x_u)dx_u over A^u, and apply transposition to obtain

(3.19)  G_{u,l} a_{u,l} = 0,

where G_{u,l} = E[P_{u,l} P_{u,l}^T] is an M_{u,l} × M_{u,l} matrix with its (p, q)th element

G_{u,l}^{(pq)} = \int_{A^u} P_{u,l}^{(p)}(x_u) P_{u,l}^{(q)}(x_u) f_{X_u}(x_u)\,dx_u = E\big[ P_{u,l}^{(p)}(X_u) P_{u,l}^{(q)}(X_u) \big]

representing the covariance between two elements of P_{u,l}. According to Proposition 3.3, any two distinct polynomials from {P_{u,j_u}(x_u) : |j_u| = l, j_u ∈ N^{|u|}} are orthogonal, meaning that E[P_{u,l}^{(p)} P_{u,l}^{(q)}] is zero if p ≠ q and positive and finite if p = q. Consequently, G_{u,l} is a diagonal, positive-definite matrix and hence invertible. Therefore, (3.19) yields a_{u,l} = 0, proving linear independence of the elements of P_{u,l} or the set {P_{u,j_u}(x_u) : |j_u| = l, j_u ∈ N^{|u|}}. Furthermore, the dimension of Z_l^u, which is M_{u,l}, matches exactly the number of elements of the aforementioned set. Therefore, the spanning set {P_{u,j_u}(x_u) : |j_u| = l, j_u ∈ N^{|u|}} forms a basis of Z_l^u.
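The structure of the Gram matrix G_{u,l} in the proof can be visualized with a small example. The sketch below assumes u = {1, 2}, degree l = 4, and independent standard Gaussian inputs; with an orthonormal basis the matrix reduces to the identity, which is a special case of the diagonal, positive-definite matrix argued above.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

def psi(x, j):
    """Orthonormal probabilists' Hermite polynomial of degree j for N(0, 1)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(j))

# Basis of Z_l^u for u = {1, 2}, l = 4: multi-indices (j1, j2) with j1, j2 >= 1 and
# j1 + j2 = 4, so M_{u,l} = C(l - 1, |u| - 1) = 3 elements.
l = 4
multi = [(j1, l - j1) for j1 in range(1, l)]

# Exact expectations under independent N(0, 1) via tensorised Gauss-Hermite_e quadrature.
nodes, w = hermegauss(12)
w = w / w.sum()
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(w, w)

G = np.zeros((len(multi), len(multi)))
for p, (a1, a2) in enumerate(multi):
    for q, (b1, b2) in enumerate(multi):
        integrand = psi(X1, a1) * psi(X2, a2) * psi(X1, b1) * psi(X2, b2)
        G[p, q] = np.sum(W * integrand)

print(np.round(G, 10))   # diagonal (identity here), hence invertible
```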
Proposition 3.6. Let X := (X1, . . . , XN)^T : (Ω, F) → (A^N, B^N) be a vector of N ∈ N input random variables fulfilling both Assumptions 2.1 and 2.2 and X_u := (X_{i_1}, . . . , X_{i_|u|})^T : (Ω^u, F^u) → (A^u, B^u), ∅ ≠ u ⊆ {1, . . . , N}, be a subvector of X. Consistent with the probability measure f_{X_u}(x_u)dx_u, let {P_{u,j_u}(x_u) : |j_u| = l, j_u ∈ N^{|u|}}, the set of multivariate orthogonal polynomials of degree l, |u| ≤ l < ∞, be a basis of Z_l^u. Then the set of polynomials from the orthogonal sum
1 \oplus \bigoplus_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \bigoplus_{l=|u|}^{\infty} \mathrm{span}\{ P_{u,j_u}(x_u) : |j_u| = l, \ j_u \in \mathbb{N}^{|u|} \}

is dense in L^2(A^N, B^N, f_X dx). Moreover,

(3.20)  L^2(A^N, B^N, f_X dx) = \overline{ 1 \oplus \bigoplus_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \bigoplus_{l=|u|}^{\infty} Z_l^u },
where the overline denotes set closure.
Proof. Under Assumption 2.1, orthogonal polynomials exist. According to Theorem 3.4
of Ernst et al. [10], which exploits Assumption 2.2, the polynomial space Π{i} = R[xi ] is
dense in L2 (A{i} , B {i} , fXi dxi ). Now use Theorem 4 of Petersen [19], which asserts that
if, for p ≥ 1 and all i = 1, . . . , N , Π{i} is dense in Lp (A{i} , B{i} , fXi dxi ), then so is ΠN =
R[x1 , . . . , xN ] in Lp (AN , B N , fX dx). Therefore, the set of polynomials from the orthogonal
sum, which is equal to ΠN as per (3.13), is dense in L2 (AN , B N , fX dx). Including the limit
points of the orthogonal sum yields (3.20).
4. Polynomial Dimensional Decomposition. Let y(X) := y(X1 , . . . , XN ) be a realvalued, square-integrable output random variable defined on the same probability space
(Ω, F, P). The vector space L2 (Ω, F, P) is a Hilbert space such that
E[y^2(X)] := \int_{\Omega} y^2(X(\omega))\,dP(\omega) = \int_{A^N} y^2(x) f_X(x)\,dx < \infty

with inner product

(y(X), z(X))_{L^2(\Omega,F,P)} := \int_{\Omega} y(X(\omega)) z(X(\omega))\,dP(\omega) = \int_{A^N} y(x) z(x) f_X(x)\,dx =: (y(x), z(x))_{f_X dx}

and norm

\| y(X) \|_{L^2(\Omega,F,P)} := \sqrt{(y(X), y(X))_{L^2(\Omega,F,P)}} = \sqrt{(y(x), y(x))_{f_X dx}} =: \| y(x) \|_{f_X dx}.
It is elementary to show that y(X(ω)) ∈ L2 (Ω, F, P) if and only if y(x) ∈ L2 (AN , B N , fX dx).
4.1. ADD. The ADD, expressed by the recursive form [17, 22]

(4.1a)  y(X) = y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} y_u(X_u),

(4.1b)  y_\emptyset = \int_{A^N} y(x) f_X(x)\,dx,

(4.1c)  y_u(X_u) = \int_{A^{N-|u|}} y(X_u, x_{-u}) f_{X_{-u}}(x_{-u})\,dx_{-u} - \sum_{v \subset u} y_v(X_v),
is a finite, hierarchical expansion of y in terms of its input variables with increasing dimensions, where u ⊆ {1, . . . , N} is a subset with the complementary set −u = {1, . . . , N}\u and y_u is a |u|-variate component function describing a constant or an |u|-variate interaction of X_u = (X_{i_1}, . . . , X_{i_|u|}) on y when |u| = 0 or |u| > 0. Here, (X_u, x_{−u}) denotes an N-dimensional vector whose ith component is X_i if i ∈ u and x_i if i ∉ u. The summation in (4.1a) comprises 2^N − 1 terms, with each term depending on a group of variables indexed by a particular subset of {1, . . . , N}. When u = ∅, the sum in (4.1c) vanishes, resulting in the expression of the constant function y_∅ in (4.1b). When u = {1, . . . , N}, the integration in the last line of (4.1c) is on the empty set, reproducing (4.1a) and hence finding the last function y_{\{1,...,N\}}. Indeed, all component functions of y can be obtained by interpreting (4.1c) literally. This decomposition, first presented by Hoeffding [16] in relation to his seminal work on U-statistics, has been studied by many other researchers, as described by Efron and Stein [9], the author [22], and references cited therein.
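A toy example may help fix ideas. The sketch below applies (4.1a)-(4.1c) to a simple bivariate function with independent Uniform(0, 1) inputs, a hypothetical choice made purely for illustration; the expectations are computed by elementary trapezoidal integration.

```python
import numpy as np

# Toy example of (4.1): y(x1, x2) = x1 + 2*x2 + 3*x1*x2 with X1, X2 ~ iid Uniform(0, 1).
def y(x1, x2):
    return x1 + 2.0 * x2 + 3.0 * x1 * x2

t = np.linspace(0.0, 1.0, 2001)            # integration grid on [0, 1]

def integrate(vals):                        # trapezoidal expectation over Uniform(0, 1)
    vals = np.asarray(vals, dtype=float)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * (t[1:] - t[:-1])))

# (4.1b): constant component
y0 = integrate([integrate(y(a, t)) for a in t])

# (4.1c): univariate components
y1 = lambda x1: integrate(y(x1, t)) - y0                 # y_{1}(x1)
y2 = lambda x2: integrate(y(t, x2)) - y0                 # y_{2}(x2)

# (4.1c): bivariate (residual) component
y12 = lambda x1, x2: y(x1, x2) - y1(x1) - y2(x2) - y0

print(round(y0, 4))             # 2.25 = 0.5 + 1.0 + 0.75
print(round(y1(0.3), 4))        # y_{1}(x1) = 2.5*(x1 - 0.5) -> -0.5
print(round(y12(0.3, 0.7), 4))  # y_{1,2}(x1, x2) = 3*(x1 - 0.5)*(x2 - 0.5) -> -0.12
```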
The ADD can also be generated by tensorizing a univariate function space decomposition
into its constant subspace and remainder, producing [14]
(4.2)  L^2(A^N, B^N, f_X dx) = 1 \oplus \bigoplus_{\emptyset \ne u \subseteq \{1,\ldots,N\}} W_u,

where

W_u := \{ y_u \in L^2(A^u, B^u, f_{X_u} dx_u) : E[y_u(X_u) y_v(X_v)] = 0 \ \text{if} \ u \ne v, \ v \subseteq \{1,\ldots,N\} \}
is an ADD subspace comprising |u|-variate component functions of y. However, the subspaces Wu , ∅ ̸= u ⊆ {1, . . . , N }, are in general infinite-dimensional; therefore, further
discretization of Wu is necessary. For instance, by introducing measure-consistent orthogonal polynomial basis discussed in Section 3, a component function yu (Xu ) ∈ Wu can be
expressed as a linear combination of these basis functions. Indeed, comparing (3.20) and
(4.2) yields the closure of an orthogonal decomposition of
(4.3)  W_u = \overline{ \bigoplus_{l=|u|}^{\infty} Z_l^u }
into polynomial spaces Zlu , |u| ≤ l < ∞. The result is a polynomial refinement of ADD,
which is commonly referred to as PDD.
4.2. PDD. The PDD of a square-integrable random variable y(X) ∈ L2 (Ω, F, P) is simply the expansion of y(X) with respect to a complete, hierarchically ordered, orthonormal
polynomial basis of L2 (Ω, F, P). There are at least two ways to explain PDD: a polynomial
variant of ADD and a dimension-wise orthogonal polynomial expansion.
4.2.1. Polynomial variant of ADD. The first approach, explained by the author in
a prior work [20], involves the following two steps: (1) expand the ANOVA component
function

(4.4)  y_u(X_u) \sim \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u} \Psi_{u,j_u}(X_u)

in terms of the basis of W_u, which originally stems from the basis of Z_l^u, |u| ≤ l < ∞, with

(4.5)  C_{u,j_u} = \int_{A^{|u|}} y_u(x_u) \Psi_{u,j_u}(x_u) f_{X_u}(x_u)\,dx_u, \quad \emptyset \ne u \subseteq \{1,\ldots,N\}, \ j_u \in \mathbb{N}^{|u|},

representing the associated expansion coefficients; and (2) apply (4.1c) to (4.5) and exploit orthogonal properties of the basis. The end result is the PDD [20] of

(4.6)  y(X) \sim y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u} \Psi_{u,j_u}(X_u),

where, eventually,

(4.7)  C_{u,j_u} = \int_{A^N} y(x) \Psi_{u,j_u}(x_u) f_X(x)\,dx.
Comparing (4.1) and (4.6), the connection between PDD and ADD is clearly palpable, where
the former can be viewed as a polynomial variant of the latter. For instance, Cu,ju Ψu,ju (Xu )
in (4.6) represents a |u|-variate, |ju |th-order PDD component function of y(X), describing
the |ju |th-order polynomial approximation of yu (Xu ). In addition, PDD inherits all desirable
properties of ADD [20].
4.2.2. Dimension-wise orthogonal polynomial expansion. The second approach entails polynomial expansion associated with the dimension-wise orthogonal splitting of polynomial spaces, as explained in Section 3.3. The latter approach has not been published
elsewhere and is, therefore, formally presented here as Theorem 4.1.
Theorem 4.1. Let X := (X1, . . . , XN)^T : (Ω, F) → (A^N, B^N) be a vector of N ∈ N input random variables fulfilling Assumptions 2.1 and 2.2. For ∅ ≠ u ⊆ {1, . . . , N} and X_u := (X_{i_1}, . . . , X_{i_|u|})^T : (Ω^u, F^u) → (A^u, B^u), denote by {Ψ_{u,j_u}(X_u) : j_u ∈ N^{|u|}} the set of multivariate orthonormal polynomials consistent with the probability measure f_{X_u}(x_u)dx_u. Then

(1) any random variable y(X) ∈ L^2(Ω, F, P) can be hierarchically expanded as a Fourier-like series, referred to as the PDD of

(4.8)  y(X) \sim y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{l=|u|}^{\infty} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u} \Psi_{u,j_u}(X_u)
            = y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u} \Psi_{u,j_u}(X_u),

where the expansion coefficients y_∅ ∈ R and C_{u,j_u} ∈ R, ∅ ≠ u ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, are defined by

(4.9)  y_\emptyset := E[y(X)] := \int_{A^N} y(x) f_X(x)\,dx,

(4.10)  C_{u,j_u} := E[y(X) \Psi_{u,j_u}(X_u)] := \int_{A^N} y(x) \Psi_{u,j_u}(x_u) f_X(x)\,dx;
and
(2) the PDD of y(X) ∈ L2 (Ω, F, P) converges to y(X) in mean-square; furthermore, the
PDD converges in probability and in distribution.
Proof. Under Assumptions 2.1 and 2.2, a complete infinite set of multivariate orthogonal polynomials in xu consistent with the probability measure fXu (xu )dxu exists. From
Proposition 3.6 and the fact that orthonormality is merely scaling, the set of polynomials
from the orthogonal sum
(4.11)  1 \oplus \bigoplus_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \bigoplus_{l=|u|}^{\infty} \mathrm{span}\{ \Psi_{u,j_u}(x_u) : |j_u| = l, \ j_u \in \mathbb{N}^{|u|} \} = \Pi^N
is also dense in L2 (AN , B N , fX dx). Therefore, any square-integrable random variable y(X)
can be expanded as shown in (4.8). Combining the two inner sums of the expansion forms
the equality in the second line of (4.8).
From the denseness, one has Bessel's inequality [7]

(4.12)  E\Big[ y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u} \Psi_{u,j_u}(X_u) \Big]^2 \le E\big[ y^2(X) \big],
proving that the PDD converges in mean-square or L2 . To determine the limit of convergence, invoke again Proposition 3.6, which implies that the set on the left side of (4.11) is
complete in L2 (AN , B N , fX dx). Therefore, Bessel’s inequality becomes an equality
(4.13)  E\Big[ y_\emptyset + \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u} \Psi_{u,j_u}(X_u) \Big]^2 = E\big[ y^2(X) \big],
known as the Parseval identity [7] for a multivariate orthonormal system, for every random
variable y(X) ∈ L2 (Ω, F, P). Furthermore, as the PDD converges in mean-square, it does
so in probability. Moreover, as the expansion converges in probability, it also converges in
distribution.
Finally, to find the expansion coefficients, define a second moment

(4.14)  e_{PDD} := E\Big[ y(X) - y_\emptyset - \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) \Big]^2

of the difference between y(X) and its full PDD. Differentiate both sides of (4.14) with respect to y_∅ and C_{u,j_u}, ∅ ≠ u ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, to write

(4.15)  \frac{\partial e_{PDD}}{\partial y_\emptyset}
        = \frac{\partial}{\partial y_\emptyset} E\Big[ y(X) - y_\emptyset - \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) \Big]^2
        = E\Big[ \frac{\partial}{\partial y_\emptyset} \Big\{ y(X) - y_\emptyset - \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) \Big\}^2 \Big]
        = 2\,E\Big[ \Big\{ y_\emptyset + \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) - y(X) \Big\} \times 1 \Big]
        = 2\,\{ y_\emptyset - E[y(X)] \}

and

(4.16)  \frac{\partial e_{PDD}}{\partial C_{u,j_u}}
        = \frac{\partial}{\partial C_{u,j_u}} E\Big[ y(X) - y_\emptyset - \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) \Big]^2
        = E\Big[ \frac{\partial}{\partial C_{u,j_u}} \Big\{ y(X) - y_\emptyset - \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) \Big\}^2 \Big]
        = 2\,E\Big[ \Big\{ y_\emptyset + \sum_{\emptyset \ne v \subseteq \{1,\ldots,N\}} \sum_{k_v \in \mathbb{N}^{|v|}} C_{v,k_v} \Psi_{v,k_v}(X_v) - y(X) \Big\} \Psi_{u,j_u}(X_u) \Big]
        = 2\,\{ C_{u,j_u} - E[y(X) \Psi_{u,j_u}(X_u)] \}.
Here, the second, third, and last lines of both (4.15) and (4.16) are obtained by interchanging the differential and expectation operators, performing the differentiation, swapping the
expectation and summation operators and applying Corollary 3.4, respectively. The interchanges are permissible as the infinite sum is convergent as demonstrated in the preceding
paragraph. Setting ∂ePDD /∂y∅ = 0 in (4.15) and ∂ePDD /∂Cu,ju = 0 in (4.16) yields (4.9)
and (4.10), respectively, completing the proof.
The expressions of the expansion coefficients can also be derived by simply replacing
y(X) in (4.9) and (4.10) with the full PDD and then using Corollary 3.4. In contrast, the
proof given here demonstrates that the PDD coefficients are determined optimally.
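To illustrate how (4.9) and (4.10) are evaluated in practice, the following sketch computes the PDD coefficients of a toy polynomial output with two independent standard Gaussian inputs, using tensorized Gauss-Hermite quadrature; the example and its function names are assumptions made for this illustration only, and the orthonormal basis Ψ_{u,j_u}(X_u) is formed as a product of normalized Hermite polynomials, which is consistent with the independence setting assumed here.

```python
import numpy as np
from math import factorial
from itertools import product
from numpy.polynomial.hermite_e import hermeval, hermegauss

def psi(x, j):                       # orthonormal Hermite basis for N(0, 1)
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(j))

# Toy output: y(X1, X2) = X1 + X1*X2 + X2**2 with X1, X2 iid N(0, 1).
def y(x1, x2):
    return x1 + x1 * x2 + x2 ** 2

nodes, w = hermegauss(16); w = w / w.sum()
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(w, w)
E = lambda vals: np.sum(W * vals)    # exact expectation for polynomial integrands

y0 = E(y(X1, X2))                    # (4.9)
coeff = {}                           # (4.10) for |u| <= 2 and degrees up to 3
for j1, j2 in product(range(4), repeat=2):
    if (j1, j2) == (0, 0):
        continue
    u = tuple(i for i, j in zip((1, 2), (j1, j2)) if j > 0)
    ju = tuple(j for j in (j1, j2) if j > 0)
    coeff[(u, ju)] = E(y(X1, X2) * psi(X1, j1) * psi(X2, j2))

print(y0)                            # E[y] = 1, contributed by X2**2
for key, c in coeff.items():
    if abs(c) > 1e-8:
        print(key, round(c, 6))
# Non-zero terms: u={1}, j=(1,): 1;  u={2}, j=(2,): sqrt(2);  u={1,2}, j=(1,1): 1.
```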
It should be emphasized that the function y must be square-integrable for the mean-square and other convergences to hold. However, the rate of convergence depends on the
smoothness of the function. The smoother the function, the faster the convergence. If the
function is a polynomial, then its PDD exactly reproduces the function. These results can
be easily proved using classical approximation theory.
A related expansion, known by the name of RS-HDMR [18], also involves orthogonal
polynomials in connection with ADD. However, the existence, convergence, and approximation quality of the expansion, including its behavior for infinitely many input variables,
have not been reported.
4.3. Truncation. The full PDD contains an infinite number of orthonormal polynomials or coefficients. In practice, the number must be finite, meaning that PDD must be
truncated. However, there are multiple ways to perform the truncation. A straightforward
approach adopted in this work entails (1) keeping all polynomials in at most 0 ≤ S ≤ N
variables, thereby retaining the degrees of interaction among input variables less than or
equal to S and (2) preserving polynomial expansion orders (total) less than or equal to m, where S ≤ m < ∞. The result is an S-variate, mth-order PDD approximation (the nouns degree and order associated with PDD or orthogonal polynomials are used synonymously in this paper)

(4.17)  y_{S,m}(X) = y_\emptyset + \sum_{s=1}^{S} \sum_{l=s}^{m} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = s}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u} \Psi_{u,j_u}(X_u)
                  = y_\emptyset + \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ 1 \le |u| \le S}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |u| \le |j_u| \le m}} C_{u,j_u} \Psi_{u,j_u}(X_u)

of y(X), containing

(4.18)  L_{S,m} = 1 + \sum_{s=1}^{S} \binom{N}{s} \binom{m}{s}
number of expansion coefficients including y∅ . It is important to clarify a few things about
the truncated PDD proposed. First, a different truncation with respect to the polynomial
expansion order based on ∞-norm as opposed to 1-norm, that is, ∥ju ∥∞ ≤ m, was employed
in prior works [20, 21, 24]. Therefore, comparing (4.17) and (4.18) with the existing truncation, if it is desired, should be done with care. Having said this, the proposed truncation
has one advantage over the existing one: a direct comparison with a truncated PCE is
possible; this will be further explained in the forthcoming sections. Second, the right side
of (4.17) contains sums of at most S-dimensional orthonormal polynomials, representing at
most S-variate PDD component functions of y. Therefore, the term “S-variate” used for
the PDD approximation should be interpreted in the context of including at most S-degree
interaction of input variables, even though yS,m is strictly an N -variate function. Third,
when S = 0, y0,m = y∅ for any m as the outer sums of (4.17) vanish. Finally, when S → N
and m → ∞, yS,m converges to y in the mean-square sense, generating a hierarchical and
convergent sequence of PDD approximations. Readers interested in an adaptive version of
PDD, where the truncation parameters are automatically chosen, are directed to the work
of Yadav and Rahman [30], including an application to design optimization [25].
It is natural to ask about the approximation quality of (4.17). Since the set of polynomials from the orthogonal sum in (4.11) is complete in L2 (AN , B N , fX dx), the truncation
error y(X) − yS,m (X) is orthogonal to any element of the subspace from which yS,m (X) is
chosen, as demonstrated below.
Proposition 4.2. For any y(X) ∈ L^2(Ω, F, P), let y_{S,m}(X) be its S-variate, mth-order PDD approximation. Then the truncation error y(X) − y_{S,m}(X) is orthogonal to the subspace

(4.19)  \Pi^N_{S,m} := 1 \oplus \bigoplus_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ 1 \le |u| \le S}} \bigoplus_{\substack{j_u \in \mathbb{N}^{|u|} \\ |u| \le |j_u| \le m}} \mathrm{span}\{ \Psi_{u,j_u}(X_u) \} \subseteq L^2(\Omega, F, P),

comprising all polynomials in X with the degree of interaction at most S and order at most m, including constants. Moreover, E[y(X) − y_{S,m}(X)]^2 → 0 as S → N and m → ∞.
Proof. Let

(4.20)  \bar{y}_{S,m}(X) := \bar{y}_\emptyset + \sum_{\substack{\emptyset \ne v \subseteq \{1,\ldots,N\} \\ 1 \le |v| \le S}} \sum_{\substack{k_v \in \mathbb{N}^{|v|} \\ |v| \le |k_v| \le m}} \bar{C}_{v,k_v} \Psi_{v,k_v}(X_v),

with arbitrary expansion coefficients \bar{y}_∅ and \bar{C}_{v,k_v}, be any element of the subspace Π^N_{S,m} of L^2(Ω, F, P) described by (4.19). Then

(4.21)  E[\{ y(X) - y_{S,m}(X) \}\,\bar{y}_{S,m}(X)]
        = E\Big[ \Big\{ \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ 1 \le |u| \le S}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ m+1 \le |j_u| < \infty}} C_{u,j_u} \Psi_{u,j_u}(X_u)
          + \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ S+1 \le |u| \le N}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |u| \le |j_u| < \infty}} C_{u,j_u} \Psi_{u,j_u}(X_u) \Big\}
          \times \Big\{ \bar{y}_\emptyset + \sum_{\substack{\emptyset \ne v \subseteq \{1,\ldots,N\} \\ 1 \le |v| \le S}} \sum_{\substack{k_v \in \mathbb{N}^{|v|} \\ |v| \le |k_v| \le m}} \bar{C}_{v,k_v} \Psi_{v,k_v}(X_v) \Big\} \Big]
        = 0,

where the last line follows from Corollary 3.4, proving the first part of the proposition. For the latter part, the Pythagoras theorem yields

(4.22)  E[\{ y(X) - y_{S,m}(X) \}^2] + E[y_{S,m}^2(X)] = E[y^2(X)].

From Theorem 4.1, E[y_{S,m}^2(X)] → E[y^2(X)] as S → N and m → ∞. Therefore, E[{y(X) − y_{S,m}(X)}^2] → 0 as S → N and m → ∞.
The second part of Proposition 4.2 entails L2 convergence, which is the same as the
mean-square convergence described in Theorem 4.1. However, an alternative route is chosen
for the proof of the proposition. Besides, Proposition 4.2 implies that the PDD approximation is optimal as it recovers the best approximation from the subspace Π^N_{S,m}, as described by Corollary 4.3.
Corollary 4.3. Let Π^N_{S,m} in (4.19) define the subspace of all polynomials in X with the degree of interaction at most S and order at most m, including constants. Then the S-variate, mth-order PDD approximation y_{S,m}(X) of y(X) ∈ L^2(Ω, F, P) is the best approximation in the sense that

(4.23)  E[y(X) - y_{S,m}(X)]^2 = \inf_{\bar{y}_{S,m} \in \Pi^N_{S,m}} E[y(X) - \bar{y}_{S,m}(X)]^2.
Proof. Consider two elements y_{S,m}(X) and \bar{y}_{S,m}(X) of the subspace Π^N_{S,m}, where the former is the S-variate, mth-order PDD approximation of y(X) with the expansion coefficients defined by (4.9) and (4.10) and the latter is any S-variate, mth-order polynomial function, described by (4.20), with arbitrarily chosen expansion coefficients. From Proposition 4.2, the truncation error y(X) − y_{S,m}(X) is orthogonal to both y_{S,m}(X) and \bar{y}_{S,m}(X) and is, therefore, orthogonal to their linear combinations, yielding

E[\{ y(X) - y_{S,m}(X) \}\{ y_{S,m}(X) - \bar{y}_{S,m}(X) \}] = 0.

Consequently,

(4.24)  E[y(X) - \bar{y}_{S,m}(X)]^2 = E[y(X) - y_{S,m}(X)]^2 + E[y_{S,m}(X) - \bar{y}_{S,m}(X)]^2 \ge E[y(X) - y_{S,m}(X)]^2,

as the second expectation on the right side of the first line of (4.24) is non-negative, thereby proving the mean-square optimality of the S-variate, mth-order PDD approximation.
The motivations behind ADD- and PDD-derived approximations are the following. In a
practical setting, the function y(X), fortunately, has an effective dimension [3] much lower
than N , meaning that the right side of (4.1a) can be effectively approximated by a sum
of lower-dimensional component functions yu , |u| ≪ N , but still maintaining all random
variables X of a high-dimensional uncertainty quantification problem. Furthermore, an S-variate, mth-order PDD approximation is grounded on a fundamental conjecture known to be true in many real-world uncertainty quantification problems: given a high-dimensional function y, its |u|-variate, |j_u|th-order PDD component function C_{u,j_u}Ψ_{u,j_u}(X_u), where S + 1 ≤ |u| ≤ N and m + 1 ≤ |j_u| < ∞, is small and hence negligible, leading to an accurate low-variate, low-order approximation of y. The computational complexity of a truncated PDD is
polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to a
substantial extent. Although PCE contains the same orthogonal polynomials, a recent work
on random eigenvalue analysis of dynamic systems reveals markedly higher convergence rate
of the PDD approximation than the PCE approximation [24].
4.4. Output statistics and other probabilistic characteristics. The S-variate, mth-order PDD approximation y_{S,m}(X) can be viewed as a surrogate of y(X). Therefore, relevant
probabilistic characteristics of y(X), including its first two moments and probability density
function, if it exists, can be estimated from the statistical properties of yS,m (X).
Applying the expectation operator on y_{S,m}(X) and y(X) in (4.17) and (4.8) and imposing Corollary 3.4, their means

(4.25)  E[y_{S,m}(X)] = E[y(X)] = y_\emptyset
are the same and independent of S and m. Therefore, the PDD truncated for any values
of 0 ≤ S ≤ N and S ≤ m < ∞ yields the exact mean. Nonetheless, E[yS,m (X)] will be
referred to as the S-variate, mth-order PDD approximation of the mean of y(X).
Applying the expectation operator again, this time on [yS,m (X) − y∅ ]2 and [y(X) − y∅ ]2 ,
and employing Corollary 3.4 results in the variances
(4.26)  \mathrm{var}[y_{S,m}(X)] = \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ 1 \le |u| \le S}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |u| \le |j_u| \le m}} C_{u,j_u}^2

and

(4.27)  \mathrm{var}[y(X)] = \sum_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \sum_{j_u \in \mathbb{N}^{|u|}} C_{u,j_u}^2
of yS,m (X) and y(X), respectively. Again, var[yS,m (X)] will be referred to as the S-variate,
mth-order PDD approximation of the variance of y(X). Clearly, var[yS,m (X)] approaches
var[y(X)], the exact variance of y(X), as S → N and m → ∞.
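In code, once the expansion coefficients are available, (4.25) and (4.26) reduce to simple bookkeeping. The sketch below uses a small hypothetical table of coefficients, keyed by the pair (u, j_u), to evaluate the truncated variance for different S and m; the mean is y_∅ regardless of the truncation, as stated in (4.25).

```python
# Illustrative only: the dictionary of PDD coefficients below is hypothetical.
coeff = {
    ((1,), (1,)): 1.0,
    ((2,), (2,)): 2.0 ** 0.5,
    ((1, 2), (1, 1)): 1.0,
    ((1, 2), (2, 3)): 0.05,
}
y0 = 1.0   # mean per (4.25)

def variance_Sm(coeff, S, m):
    """var[y_{S,m}(X)] per (4.26): sum of C_{u,j_u}^2 over 1 <= |u| <= S, |j_u| <= m."""
    return sum(c ** 2 for (u, ju), c in coeff.items()
               if len(u) <= S and sum(ju) <= m)

print(variance_Sm(coeff, S=1, m=3))   # univariate contributions only: 1 + 2 = 3
print(variance_Sm(coeff, S=2, m=2))   # adds the (1,1) interaction term: 4
print(variance_Sm(coeff, S=2, m=5))   # full variance of this toy table: 4.0025
```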
Being convergent in probability and distribution, the probability density function of
y(X), if it exists, can also be estimated by that of yS,m (X). However, no analytical formula
exists for the density function. In that case, the density can be estimated by sampling
methods, such as Monte Carlo simulation (MCS) of yS,m (X). Such simulation should not
be confused with crude MCS of y(X), commonly used for producing benchmark results
whenever possible. The crude MCS can be expensive or even prohibitive, particularly when
the sample size needs to be very large for estimating tail probabilistic characteristics. In
contrast, the MCS embedded in the PDD approximation requires evaluations of simple
polynomial functions that describe yS,m . Therefore, a relatively large sample size can be
accommodated in the PDD approximation even when y is expensive to evaluate.
4.5. Infinitely many input variables. In many fields, such as uncertainty quantification,
information theory, and stochastic processes, functions depending on a countable sequence
{Xi }i∈N of input random variables need to be considered [15]. Under certain assumptions,
PDD is still applicable as in the case of finitely many random variables, as demonstrated
by the following proposition.
Proposition 4.4. Let {X_i}_{i∈N} be a countable sequence of input random variables defined
on the probability space (Ω, F∞ , P), where F∞ := σ({Xi }i∈N ) is the associated σ-algebra
generated. If the sequence {Xi }i∈N satisfies Assumptions 2.1 and 2.2, then the PDD of
y({Xi }i∈N ) ∈ L2 (Ω, F∞ , P), where y : AN → R, converges to y({Xi }i∈N ) in mean-square.
Moreover, the PDD converges in probability and in distribution.
Proof. According to Proposition 3.6, Π^N is dense in L^2(A^N, B^N, f_X dx) and hence in L^2(Ω, F_N, P) for every N ∈ N, where F_N := σ({X_i}_{i=1}^N) is the associated σ-algebra generated by {X_i}_{i=1}^N. Here, with a certain abuse of notation, Π^N is used as a set of polynomial functions of both real variables x and random variables X. Now, apply Theorem 3.8 of Ernst et al. [10], which says that if Π^N is dense in L^2(Ω, F_N, P) for every N ∈ N, then

\Pi^\infty := \bigcup_{N=1}^{\infty} \Pi^N,

a subspace of L^2(Ω, F_∞, P), is also dense in L^2(Ω, F_∞, P). But, using (4.11),

\Pi^\infty = \bigcup_{N=1}^{\infty} \Big( 1 \oplus \bigoplus_{\emptyset \ne u \subseteq \{1,\ldots,N\}} \bigoplus_{l=|u|}^{\infty} \mathrm{span}\{ \Psi_{u,j_u} : |j_u| = l, \ j_u \in \mathbb{N}^{|u|} \} \Big)
           = 1 \oplus \bigoplus_{\emptyset \ne u \subseteq \mathbb{N}} \bigoplus_{l=|u|}^{\infty} \mathrm{span}\{ \Psi_{u,j_u} : |j_u| = l, \ j_u \in \mathbb{N}^{|u|} \},
demonstrating that the set of polynomials from the orthogonal sum in the last line is dense in
L2 (Ω, F∞ , P). Therefore, the PDD of y({Xi }i∈N ) ∈ L2 (Ω, F∞ , P) converges to y({Xi }i∈N )
in mean-square. Since the mean-square convergence is stronger than the convergence in
probability or in distribution, the latter modes of convergence follow readily.
5. Polynomial Chaos Expansion. In contrast to the dimension-wise splitting of polynomial spaces in PDD, a degree-wise orthogonal splitting of polynomial spaces results in
PCE. The latter decomposition is briefly summarized here as PCE will be compared with
PDD in the next section.
5.1. Degree-wise orthogonal decomposition of polynomial spaces. Let j := j_{\{1,...,N\}} = (j_1, . . . , j_N) ∈ N_0^N, j_i ∈ N_0, i ∈ {1, . . . , N}, define an N-dimensional multi-index. For x = (x_1, . . . , x_N) ∈ A^N ⊆ R^N, a monomial in the variables x_1, . . . , x_N is the product x^j = x_1^{j_1} · · · x_N^{j_N} and has a total degree |j| = j_1 + · · · + j_N. Denote by

\Pi_p^N := \mathrm{span}\{ x^j : 0 \le |j| \le p, \ j \in \mathbb{N}_0^N \}, \quad p \in \mathbb{N}_0,

the space of real polynomials in x of degree at most p. Let V_0^N := Π_0^N = span{1} be the space of constant functions. For each 1 ≤ l < ∞, denote by V_l^N ⊂ Π_l^N the space of orthogonal polynomials of degree exactly l that are orthogonal to all polynomials in Π_{l−1}^N, that is,

V_l^N := \{ P \in \Pi_l^N : (P, Q)_{f_X dx} = 0 \ \ \forall\, Q \in \Pi_{l-1}^N \}, \quad 1 \le l < \infty.

From Section 3, with u = {1, . . . , N} in mind, select {P_j(x) : |j| = l, j ∈ N_0^N} ⊂ V_l^N to be a basis of V_l^N. Each basis function P_j(x) is a multivariate orthogonal polynomial in x of degree |j|. Obviously,

V_l^N = \mathrm{span}\{ P_j(x) : |j| = l, \ j \in \mathbb{N}_0^N \}, \quad 0 \le l < \infty.
According to (3.7) with u = {1, . . . , N }, Pj (x) is orthogonal to Pk (x) whenever j ̸= k.
Therefore, any two polynomial subspaces VlN and VrN , where 0 ≤ l, r < ∞, are orthogonal
whenever l ̸= r. In consequence, there exists another orthogonal decomposition of
(5.1)  \Pi^N = \bigoplus_{l \in \mathbb{N}_0} V_l^N = \bigoplus_{l \in \mathbb{N}_0} \mathrm{span}\{ P_j(x) : |j| = l, \ j \in \mathbb{N}_0^N \} = \mathrm{span}\{ P_j(x) : j \in \mathbb{N}_0^N \}.
Compared with (3.13), (5.1) represents a degree-wise orthogonal decomposition of ΠN .
5.2. PCE. Given the degree-wise orthogonal decomposition of ΠN , the PCE of any
square-integrable output random variable y(X) is expressed by [5, 10, 28, 29]
(5.2)  y(X) \sim \sum_{l=0}^{\infty} \sum_{\substack{j \in \mathbb{N}_0^N \\ |j| = l}} C_j \Psi_j(X) = \sum_{j \in \mathbb{N}_0^N} C_j \Psi_j(X),
where {Ψ_j(X) : j ∈ N_0^N} is an infinite set of measure-consistent multivariate orthonormal polynomials in X that can be obtained by scaling P_j in (3.14) and C_j ∈ R, j ∈ N_0^N, are the PCE expansion coefficients. Like PDD, the PCE of y(X) ∈ L^2(Ω, F, P) under Assumptions 2.1 and 2.2 also converges to y(X) in mean-square, in probability, and in distribution.
Since the PCE of y(X) in (5.2) is an infinite series, it must also be truncated in applications. A commonly adopted truncation is based on retaining orders of polynomials less
than or equal to a specified total degree. In this regard, given 0 ≤ p < ∞, the pth-order
PCE approximation of y(X) ∈ L2 (Ω, F, P) reads
(5.3)  y_p(X) = \sum_{l=0}^{p} \sum_{\substack{j \in \mathbb{N}_0^N \\ |j| = l}} C_j \Psi_j(X) = \sum_{\substack{j \in \mathbb{N}_0^N \\ 0 \le |j| \le p}} C_j \Psi_j(X).

This kind of truncation is related to the total degree index set

\Big\{ j \in \mathbb{N}_0^N : \sum_{i=1}^{N} j_i \le p \Big\}

for defining the recovered multivariate polynomial space of a pth-order PCE approximation. Other kinds of truncation entail

\Big\{ j \in \mathbb{N}_0^N : \max_{i=1,\ldots,N} j_i \le p \Big\} \quad \text{and} \quad \Big\{ j \in \mathbb{N}_0^N : \prod_{i=1}^{N} (j_i + 1) \le p + 1 \Big\},
describing the tensor product and hyperbolic cross index sets, respectively, to name just two.
The total degree and tensor product index sets are common choices, although the latter
one suffers from the curse of dimensionality, making it impractical for high-dimensional
problems. The hyperbolic cross index set, originally introduced for approximating periodic
functions by trigonometric polynomials [2], is a relatively new idea and has yet to receive
widespread attention. All of these choices and possibly others, including their anisotropic
versions, can be used for truncating PCE. In this work, however, only the total degree index
set is used for the PCE approximation. This is consistent with the 1-norm of ju used for
truncating PDD in (4.17).
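The three index sets can be compared directly by enumeration, as the hedged sketch below does for a small, illustrative case (N = 3, p = 4); the cardinalities 35, 125, and 16 correspond to the total degree, tensor product, and hyperbolic cross sets, respectively.

```python
from itertools import product

def index_sets(N, p):
    """Enumerate the total-degree, tensor-product, and hyperbolic-cross index sets
    in N_0^N for truncation parameter p (brute force, intended for small N only).
    Any index with some j_i > p is excluded from all three sets automatically."""
    total, tensor, hyper = [], [], []
    for j in product(range(p + 1), repeat=N):
        prod_term = 1
        for ji in j:
            prod_term *= (ji + 1)
        if sum(j) <= p:
            total.append(j)
        if max(j) <= p:          # every enumerated index qualifies by construction
            tensor.append(j)
        if prod_term <= p + 1:
            hyper.append(j)
    return total, tensor, hyper

total, tensor, hyper = index_sets(N=3, p=4)
print(len(total), len(tensor), len(hyper))   # 35, 125, 16
```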
6. Error Analysis.
6.1. PDD error. Define a second-moment error,
(6.1)  e_{S,m} := E[y(X) - y_{S,m}(X)]^2,
stemming from the S-variate, mth-order PDD approximation presented in the preceding
section. Replacing y(X) and yS,m (X) in (6.1) with the right sides of (4.8) and (4.17),
respectively, produces
(6.2)  e_{S,m} = \sum_{s=1}^{S} \sum_{l=m+1}^{\infty} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = s}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u}^2
              + \sum_{s=S+1}^{N} \sum_{l=s}^{\infty} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = s}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u}^2,
where the second term vanishes expectedly when S = N as the lower limit of the outer sum
exceeds the upper limit. In (6.2), the first term of the PDD error is due to the truncation
of polynomial expansion orders involving interactive effects of at most S variables, whereas
the second term of the PDD error is contributed by ignoring the interactive effects of larger
than S variables. Obviously, the error for a general function y depends on which expansion
coefficients decay and how they decay with respect to S and m. Nonetheless, the error
decays monotonically with respect to S and/or m as stated in Proposition 6.1. Other than
that, nothing more can be said about the PDD error.
Proposition 6.1. For a general function y, e_{S+i,m+j} ≤ e_{S,m}, where 0 ≤ S < N, S ≤ m <
∞, and i and j are equal to either 0 or 1, but not both equal to 0.
Proof. Setting i = 1, j = 0 and using (6.2),

e_{S+1,m} - e_{S,m} = \sum_{l=m+1}^{\infty} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = S+1}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u}^2
                    - \sum_{l=S+1}^{\infty} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = S+1}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u}^2 \le 0,

where the inequality results from the fact that, as S ≤ m, the first term is smaller than or equal to the second term. Similarly, setting i = 0, j = 1 and using (6.2),

e_{S,m+1} - e_{S,m} = - \sum_{s=1}^{S} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = s}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = m+1}} C_{u,j_u}^2 \le 0.

Finally, setting i = 1, j = 1,

e_{S+1,m+1} - e_{S,m} = e_{S+1,m+1} - e_{S,m+1} + e_{S,m+1} - e_{S,m} \le 0,

as e_{S+1,m+1} − e_{S,m+1} ≤ 0 and e_{S,m+1} − e_{S,m} ≤ 0.
Corollary 6.2. For a general function y, e_{S′,m′} ≤ e_{S,m} whenever S′ ≥ S and m′ ≥ m.
In practice, the effects of interaction among input variables and polynomial expansion order become increasingly weaker as |u| and |j_u| grow. In this case, C_{u,j_u}^2, which is equal to the variance of C_{u,j_u}Ψ_{u,j_u}(X_u), decreases with |u| and |j_u|. Given the rates at which C_{u,j_u}^2 decreases with |u| and |j_u|, a question arises as to how fast e_{S,m} decays with respect to S and m. Proposition 6.3, Corollary 6.4, and subsequent discussions provide a few insights.
Proposition 6.3. For a class of functions y, assume that C_{u,j_u}^2, ∅ ≠ u ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, attenuates according to C_{u,j_u}^2 \le C p_1^{-|u|} p_2^{-|j_u|}, where C > 0, p_1 > 1, and p_2 > 1 are three real-valued constants. Then it holds that

(6.3)  \mathrm{var}[y(X)] \le C \Big[ \Big\{ \frac{1 + p_1(p_2 - 1)}{p_1(p_2 - 1)} \Big\}^N - 1 \Big]

and

(6.4)  e_{S,m} \le C \Big[ \sum_{s=1}^{S} \sum_{l=m+1}^{\infty} \binom{N}{s} \binom{l-1}{s-1} p_1^{-s} p_2^{-l} + \sum_{s=S+1}^{N} \sum_{l=s}^{\infty} \binom{N}{s} \binom{l-1}{s-1} p_1^{-s} p_2^{-l} \Big].

Proof. With the recognition that

\#\{ \emptyset \ne u \subseteq \{1,\ldots,N\} : |u| = s \} = \binom{N}{s}, \qquad \#\{ j_u \in \mathbb{N}^{|u|} : |j_u| = l \} = \binom{l-1}{|u|-1},

use C_{u,j_u}^2 \le C p_1^{-|u|} p_2^{-|j_u|} in (4.27) and (6.2) to obtain (6.3) and (6.4).
Corollary 6.4. For the function class described in Proposition 6.3, e_{S+i,m+j} < e_{S,m}, where
0 ≤ S < N , S ≤ m < ∞, and i and j are equal to either 0 or 1, but not both equal to 0.
According to Corollary 6.4, eS,m decays strictly monotonically with respect to S and/or
m for any rate parameters p1 > 1 and p2 > 1. When the equality holds in (6.3) and (6.4)
from Proposition 6.3, Figure 1, comprising three subfigures, presents three sets of plots of
the relative error, eS,m /var[y(X)], against m for five distinct values of S = 1, 2, 3, 5, 9. These
subfigures, each obtained for N = 20, correspond to three distinct cases of the values of p1
and p2 : (1) p1 = 500, p2 = 5; (2) p1 = 5, p2 = 500; and (3) p1 = 500, p2 = 500. In all cases,
the error for a given S decays first with respect to m, and then levels off at a respective
limit when m is sufficiently large. The limits get progressively smaller when S increases
as expected. However, the magnitude of this behavior depends on the rates at which the
expansion coefficient attenuates with respect to the degree of interaction and polynomial
expansion order. When p1 > p2 , as in case 1 [Figure 1 (top)], the error for a given S
decays slowly with respect to m due to a relatively weaker attenuation rate associated with
the polynomial expansion order. The trend reverses when the attenuation rate becomes
stronger and reaches the condition p1 < p2 , as in case 2 [Figure 1 (middle)]. For larger
values of S, for example, S = 5 or 9, the respective limits are significantly lower in case 2
than in case 1. When the attenuation rates are the same and large, as in case 3 [Figure 1
(bottom)], the decay rate of error accelerates substantially.
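The curves in Figure 1 can be reproduced, up to the truncation of the infinite sums, by evaluating (6.3) and (6.4) with equality. The Python sketch below is one such illustrative implementation; for S = 2, m = 6 in case 1 it returns approximately 8.5 × 10^-5, consistent with the value quoted in Section 6.3.

```python
from math import comb

def relative_pdd_error(S, m, N=20, p1=500.0, p2=5.0, l_max=400):
    """Relative PDD error e_{S,m}/var[y(X)] when (6.3) and (6.4) hold with equality;
    the infinite sums over l are truncated at l_max, which is ample for p2 > 1."""
    def inner(s, l_lo):
        # sum_{l = l_lo}^{infinity} C(l-1, s-1) * p2**(-l), truncated at l_max
        return sum(comb(l - 1, s - 1) * p2 ** (-l) for l in range(l_lo, l_max + 1))
    err = sum(comb(N, s) * p1 ** (-s) * inner(s, m + 1) for s in range(1, S + 1)) \
        + sum(comb(N, s) * p1 ** (-s) * inner(s, s) for s in range(S + 1, N + 1))
    var = ((1.0 + p1 * (p2 - 1.0)) / (p1 * (p2 - 1.0))) ** N - 1.0   # bracket in (6.3)
    return err / var

# One point on the case-1 curves of Figure 1 (p1 = 500, p2 = 5, N = 20):
print(relative_pdd_error(S=2, m=6))    # roughly 8.5e-5
```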
6.2. Relationship between PDD and PCE. Since PDD and PCE share the same orthonormal polynomials, they are related. Indeed, the relationship was first studied by
Rahman and Yadav [24], who determined that any one of the two infinite series from PDD
and PCE defined by (4.8) and (5.2) can be rearranged to derive the other. In other words,
the PDD can also be viewed as a reshuffled PCE and vice versa. However, due to a strong
connection to ADD endowed with a desired hierarchical structure, PDD merits its own
appellation. More importantly, the PDD and PCE when truncated are not the same. In
fact, two important observations stand out prominently. First, the terms in the PCE approximation are organized with respect to the order of polynomials. In contrast, the PDD
approximation is structured with respect to the degree of interaction between a finite number of random variables. Therefore, significant differences may exist regarding the accuracy,
efficiency, and convergence properties of their truncated sum. Second, if a stochastic response is highly nonlinear, but contains rapidly diminishing interactive effects of multiple
random variables, the PDD approximation is expected to be more effective than the PCE
approximation. This is because the lower-variate terms of the PDD approximation can be
just as nonlinear by selecting appropriate values of m in (4.17). In contrast, many more
terms and expansion coefficients are required to be included in the PCE approximation to
capture such high nonlinearity. In this work, a theoretical comparison between PDD and
PCE in the context of error analysis, not studied in prior works, is presented.
For error analysis, it is convenient to write a PCE approximation in terms of a PDD
approximation. Indeed, there exists a striking result connecting PCE with PDD approximations, as explained in Proposition 6.5.

Figure 1. PDD errors for various attenuation rates of the expansion coefficients; (top) p1 = 500, p2 = 5; (middle) p1 = 5, p2 = 500; (bottom) p1 = 500, p2 = 500.

Proposition 6.5. Let y_p(X) and y_{S,m}(X) be the pth-order PCE approximation and S-variate, mth-order PDD approximation of y(X) ∈ L^2(Ω, F, P), respectively, where 0 ≤ S ≤ N, S ≤ m < ∞, and 0 ≤ p < ∞. Then the pth-order PCE approximation and the
(p ∧ N)-variate, pth-order PDD approximation are the same, that is,

(6.5)  y_p(X) = y_{p \wedge N,\, p}(X),

where y_{0,0}(X) = y_∅ and p ∧ N denotes the minimum of p and N.
Proof. According to Rahman and Yadav [24], the right side of (5.3) can be reshuffled, resulting in a long form of the PCE approximation, expressed by

(6.6)  y_p(X) = y_\emptyset + \sum_{s=1}^{N} \underbrace{\sum_{i_1=1}^{N-s+1} \cdots \sum_{i_s=i_{s-1}+1}^{N}}_{s \ \text{sums}} \ \underbrace{\sum_{j_{i_1}=1}^{p-s+1} \cdots \sum_{j_{i_s}=1}^{p-s+1}}_{s \ \text{sums}; \ j_{i_1}+\cdots+j_{i_s} \le p} C_{\{i_1 \cdots i_s\},(j_{i_1} \cdots j_{i_s})} \Big[ \prod_{q=1}^{s} \psi_{i_q j_{i_q}}(X_{i_q}) \Big],

in terms of the PDD expansion coefficients. Note that, depending on the condition p ≤ N or p ≥ N, at most p-dimensional or N-dimensional sums survive in (6.6), meaning that the pth-order PCE approximation retains effects of at most (p ∧ N)-degree interaction and at most pth-order polynomial expansion order. Accordingly, the compact form of the PCE approximation can be written as

(6.7)  y_p(X) = y_\emptyset + \sum_{s=1}^{p \wedge N} \sum_{l=s}^{p} \sum_{\substack{\emptyset \ne u \subseteq \{1,\ldots,N\} \\ |u| = s}} \sum_{\substack{j_u \in \mathbb{N}^{|u|} \\ |j_u| = l}} C_{u,j_u} \Psi_{u,j_u}(X_u) = y_{p \wedge N,\, p}(X),

completing the proof.
Using Proposition 6.5, the number of expansion coefficients, say, Lp associated with the
pth-order PCE approximation can be calculated from that required by the (p ∧ N )-variate,
pth-order PDD approximation. Accordingly, setting S = p ∧ N and m = p in (4.18),
(6.8)  L_p = L_{p \wedge N,\, p} = 1 + \sum_{s=1}^{p \wedge N} \binom{N}{s} \binom{p}{s} = \frac{(N+p)!}{N!\,p!}
with the last expression commonly found in the PCE literature [29]. The advantage of (6.7)
over (5.3) is obvious: the PDD coefficients, once determined, can be reused for the PCE
approximation and subsequent error analysis, thereby sidestepping calculations of the PCE
coefficients.
6.3. PDD vs. PCE errors. Define another second-moment error,
(6.9)  e_p := E[y(X) - y_p(X)]^2,
resulting from the pth-order PCE approximation yp (X) of y(X). Using Proposition 6.5,
ep = ep∧N,p , meaning that the PCE error analysis can be conducted using the PDD approximation.
Proposition 6.6. For a general function y, let e_{S,m} and e_p denote the PDD and PCE
errors defined by (6.1) and (6.9), respectively. Given a truncation parameter 0 ≤ p < ∞ of
the PCE approximation, if the truncation parameters of the PDD approximation are chosen
such that p ∧ N ≤ S ≤ N and p ∨ S ≤ m < ∞, then
(6.10)  e_{S,m} \le e_p,
where p ∨ S denotes the maximum of p and S.
Proof. The result follows from Propositions 6.1 and 6.5, and Corollary 6.2.
Proposition 6.6 aids in selecting appropriate truncation parameters to contrast the
second-moment errors due to PDD and PCE approximations. However, the proposition
does not say anything about the computational effort. Proposition 6.7 and subsequent discussion explain the relationship between computational effort and error committed by both
PDD and PCE approximations for a special class of functions.
Proposition 6.7. For a special class of functions y, assume that C_{u,j_u}^2, ∅ ≠ u ⊆ {1, . . . , N}, j_u ∈ N^{|u|}, diminishes according to C_{u,j_u}^2 \le C p_1^{-|u|} p_2^{-|j_u|}, where C > 0, p_1 > 1, and p_2 > 1 are three real-valued constants. Then it holds that

(6.11)  e_p \le C \Big[ \sum_{s=1}^{p \wedge N} \sum_{l=p+1}^{\infty} \binom{N}{s} \binom{l-1}{s-1} p_1^{-s} p_2^{-l} + \sum_{s=p \wedge N + 1}^{N} \sum_{l=s}^{\infty} \binom{N}{s} \binom{l-1}{s-1} p_1^{-s} p_2^{-l} \Big].
Proof. Replacing S and m in (6.4) with p ∧ N and p, respectively, obtains the result.
Theoretically, the numbers of expansion coefficients required by the PDD and PCE
approximations can be used to compare their respective computational efforts. Table 1
presents, for N = 20, the requisite numbers of expansion coefficients when PDD is truncated at S = 1, 2, 3, 5, 9 and m = 1–20, and when PCE is truncated at p = 1–20. They are
calculated using (4.18) and (6.8) for PDD and PCE approximations, respectively. According
to Table 1, the growth of the number of expansion coefficients in PCE is steeper than that
in PDD. The growth rate increases markedly when the polynomial expansion order is large.
This is primarily because a PCE approximation is solely dictated by a single truncation
parameter p, which controls the largest polynomial expansion order preserved, but not the
degree of interaction independently. In contrast, two different truncation parameters S
and m are involved in a PDD approximation, affording a greater flexibility in retaining
the largest degree of interaction and largest polynomial expansion order. In consequence,
the numbers of expansion coefficients and hence the computational efforts by the PDD and
PCE approximations can vary appreciably.
Table 1
Growth of expansion coefficients in the PDD and PCE approximations

                                        L_{S,m}
 m or p      S = 1     S = 2       S = 3          S = 5             S = 9               L_p
    1           21                                                                        21
    2           41       231                                                             231
    3           61       631        1771                                                1771
    5          101      2001      13,401         53,130                               53,130
    9          181      7021     102,781      2,666,755        10,015,005         10,015,005
   12          241    12,781     263,581     14,941,024       211,457,454        225,792,840
   15          301    20,251     538,951     53,710,888     2,397,802,638      3,247,943,160
   20          401    36,501   1,336,101    265,184,142    51,855,874,642    137,846,528,820
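The entries of Table 1 follow directly from (4.18) and (6.8); the short sketch below regenerates them, which also serves as a sanity check on the two counting formulas.

```python
from math import comb

def L_pdd(N, S, m):
    """Number of PDD expansion coefficients, per (4.18)."""
    return 1 + sum(comb(N, s) * comb(m, s) for s in range(1, S + 1))

def L_pce(N, p):
    """Number of PCE expansion coefficients, per (6.8): L_p = L_{p^N, p} = C(N+p, p),
    where p^N denotes the minimum of p and N."""
    return comb(N + p, p)

N = 20
for m in (1, 2, 3, 5, 9, 12, 15, 20):
    row = [L_pdd(N, S, m) for S in (1, 2, 3, 5, 9)] + [L_pce(N, m)]
    print(m, row)
# e.g. m = 9 gives 181, 7021, 102781, 2666755, 10015005 and L_p = 10015005,
# matching the corresponding row of Table 1.
```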
Using the equalities in (6.3), (6.4), and (6.11), Figure 2 depicts how the relative PDD
error, e_{S,m}/var[y(X)], and the relative PCE error, e_p/var[y(X)], vary with respect to the number of expansion coefficients required for N = 20.

Figure 2. PDD vs. PCE errors for various attenuation rates of the expansion coefficients; (top) p1 = 500, p2 = 5; (middle) p1 = 5, p2 = 500; (bottom) p1 = 500, p2 = 500.

Again, the three preceding cases of
the attenuation rates p1 and p2 with respect to the degree of interaction and polynomial
expansion order are studied. In all cases, the PDD or PCE errors decay with respect to
S, m, and p as expected. However, in the PDD approximation, the error for a fixed S
may decline even further by increasing m, whereas no such possibility exists in the PCE
approximation. This behavior is pronounced in case 1, that is, when p1 > p2 [Figure 2
(top)]. For example, in case 1, the bivariate, sixth-order PDD approximation (S = 2,
m = 6) achieves a relative error of 8.54 × 10−5 employing only 2971 expansion coefficients.
In contrast, to match the same-order error, the sixth-order PCE approximation (p = 6)
is needed, committing a relative error of 7.15 × 10−5 at the cost of 230,230 expansion
coefficients. Therefore, the PDD approximation is substantially more economical than the
PCE approximation for a similar accuracy. However, when p1 > p2 , as in case 2 [Figure
2 (middle)], the computational advantage of PDD over PCE approximations disappears as
the attenuation rate associated with the polynomial expansion order is dominant over that
associated with the degree of interaction. Nonetheless, in case 2, the S-variate, mth-order
PDD approximation with the lowest m possible cannot commit more error than the mthorder PCE approximation for the same computational effort. Finally, when the attenuation
rates are the same, as in case 3 [Figure 2 (bottom)], the PDD approximation is still more
computationally efficient than the PCE approximation. For instance, the trivariate, fifthorder PDD (S = 3, m = 5) and fifth-order PCE (p = 6) approximations require 13,401
and 53,130 expansion coefficients to commit the same-order errors of 5.07 × 10−14 and
3.51×10−14 , respectively. But, unlike in case 1, an unnecessarily large polynomial expansion
order may render the PDD approximation more expensive than required.
Readers should take note that the comparative error analyses reported here are limited
to PDD and PCE approximations derived from truncations according to the total degree
index set. For other index sets, such as the tensor product and hyperbolic cross index sets,
it would be intriguing to find whether a similar conclusion arises.
7. Conclusion. The fundamental mathematical properties of PDD, representing a Fourier-like series expansion in terms of random orthogonal polynomials with increasing dimensions,
were studied. A dimension-wise splitting of appropriate polynomial spaces into orthogonal
subspaces, each spanned by measure-consistent orthogonal polynomials, was constructed,
resulting in a polynomial refinement of ADD and eventually PDD. Under prescribed assumptions, the set of measure-consistent orthogonal polynomials was proved to form a
complete basis of each subspace, leading to an orthogonal sum of such sets of basis functions, including the constant subspace, to span the space of all polynomials. In addition,
the orthogonal sum is dense in a Hilbert space of square-integrable functions, leading to
mean-square convergence of PDD to the correct limit, including for the case of infinitely
many random variables. The optimality of PDD and the approximation quality due to
truncation were demonstrated or discussed. From the second-moment error analysis of a
general function of 1 ≤ N < ∞ random variables, given 0 ≤ p < ∞, the (p ∧ N )-variate,
pth-order PDD approximation and pth-order PCE approximation are the same. Therefore,
an S-variate, mth-order PDD approximation cannot commit a larger error than a pth-order
PCE approximation if p ∧ N ≤ S ≤ N and p ∨ S ≤ m < ∞. From the comparison of computational efforts, required to estimate with the same accuracy the variance of an output
function entailing exponentially attenuating expansion coefficients, the PDD approximation
can be substantially more economical than the PCE approximation.
REFERENCES
[1] R. Askey and J. Wilson, Some Basic Hypergeometric Polynomials that Generalize Jacobi Polynomials, Mem. Amer. Math. Soc., 319, AMS, Providence, RI, 1985.
[2] K. I. Babenko, Approximation by trigonometric polynomials in a certain class of periodic functions
of several variables, Soviet Math. Dokl., 1 (1960), pp. 672–675.
[3] R. Bellman, Dynamic Programming, Princeton University Press: Princeton, NJ, 1957.
[4] R. E. Caflisch, W. Morokoff, and A. Owen, Valuation of mortgage backed securities using brownian bridges to reduce effective dimension, Journal of Computational Finance, 1 (1997), pp. 27–46.
[5] R. H. Cameron and W. T. Martin, The orthogonal development of non-linear functionals in series
of Fourier-Hermite functionals, Ann. Math., 48 (1947), pp. 385–392.
[6] A. Chakraborty and S. Rahman, Stochastic multiscale models for fracture analysis of functionally
graded materials, Engineering Fracture Mechanics, 75 (2008), pp. 2062–2086.
[7] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. I, Interscience Publishers Inc.,
1966.
[8] C. F. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics
and its Applications 155, Cambridge University Press, second ed., 2001.
[9] B. Efron and C. Stein, The jackknife estimate of variance, The Annals of Statistics, 9 (1981), pp. 586–596.
[10] O. G. Ernst, A. Mugler, H. J. Starkloff, and E. Ullmann, On the convergence of generalized
polynomial chaos expansions, ESAIM: Mathematical Modelling and Numerical Analysis, 46 (2012),
pp. 317–339.
[11] G. Freud, Orthogonal Polynomials, Akademiai: Budapest, 1971.
[12] W. Gautschi, Orthogonal polynomials: computation and approximation, Numerical mathematics and
scientific computation, Oxford University Press, 2004.
[13] G. H. Golub and C. F. van Loan, Matrix computations, The Johns Hopkins University Press,
third ed., 1996.
[14] M. Griebel, Sparse grids and related approximation schemes for higher dimensional problems, in
Foundations of Computational Mathematics (FoCM05), L. Pardo, A. Pinkus, E. Suli, and M. Todd,
eds., Cambridge University Press, 2006, pp. 106–161.
[15] M. Griebel, F. Y. Kuo, and I. H. Sloan, The anova decomposition of a non-smooth function of
infinitely many variables can have every term smooth, Mathematics of Computation, 86 (2017),
pp. 1855–1876.
[16] W. Hoeffding, A class of statistics with asymptotically normal distribution, The Annals of Mathematical Statistics, 19 (1948), pp. 293–325, http://www.jstor.org/stable/2235637.
[17] F. Y. Kuo, I. H. Sloan, G. W. Wasilkowski, and H. Wozniakowski, On decompositions of
multivariate functions, Mathematics of Computation, 79 (2011), pp. 953–966.
[18] G. Li, J. Hu, and S. W. Wang, Random sampling-high dimensional model representation (rs-hdmr)
and orthogonality of its different order component functions, Journal of Physical Chemistry A, 110
(2006), pp. 2474–2485.
[19] L. C. Petersen, On the relation between the multidimensional moment problem and the one-dimensional moment problem, Math. Scand., 51 (1982), pp. 361–366.
[20] S. Rahman, A polynomial dimensional decomposition for stochastic computing, International Journal
for Numerical Methods in Engineering, 76 (2008), pp. 2091–2116.
[21] S. Rahman, Extended polynomial dimensional decomposition for arbitrary probability distributions,
Journal of Engineering Mechanics, 135 (2009), pp. 1439–1451.
[22] S. Rahman, Approximation errors in truncated dimensional decompositions, Mathematics of Computation, 83 (2014), pp. 2799–2819.
[23] S. Rahman, A generalized anova dimensional decomposition for dependent probability measures,
SIAM/ASA Journal on Uncertainty Quantification, 2 (2014), pp. 670–697.
[24] S. Rahman and V. Yadav, Orthogonal polynomial expansions for solving random eigenvalue problems,
International Journal for Uncertainty Quantification, 1 (2011), pp. 163–187.
[25] X. Ren, V. Yadav, and S. Rahman, Reliability-based design optimization by adaptive-sparse polynomial dimensional decomposition, Structural and Multidisciplinary Optimization, 53 (2016),
pp. 425–452.
[26] T. J. Stieltjes, Quelques recherches sur la théorie des quadratures dites mécaniques, Ann. Sci. École Norm., 3 (1884), pp. 409–426.
28
S. RAHMAN
[27] K. Tang, P. M. Congedo, and R. Abgrall, Adaptive surrogate modeling by anova and sparse
polynomial dimensional decomposition for global sensitivity analysis in fluid simulation, Journal of
Computational Physics, 314 (2016), pp. 557–589.
[28] N. Wiener, The homogeneous chaos, American Journal of Mathematics, 60 (1938), pp. 897–936.
[29] D. Xiu and G. E. Karniadakis, The wiener-askey polynomial chaos for stochastic differential equations, SIAM Journal of Scientific Computing, 24 (2002), pp. 619–644.
[30] V. Yadav and S. Rahman, Adaptive-sparse polynomial dimensional decomposition for high-dimensional stochastic computing, Computer Methods in Applied Mechanics and Engineering, 274
(2014), pp. 56–83.
arXiv:1308.6074v2 [q-bio.GN] 3 Apr 2014
Exploration and retrieval of
whole-metagenome sequencing samples
Sohan Seth 1 , Niko Välimäki 2,3 , Samuel Kaski 1,3 , Antti Honkela 3
1
Helsinki Institute for Information Technology HIIT,
Department of Information and Computer Science,
Aalto University, Espoo, Finland
2
Genome-Scale Biology Program and Department of Medical Genetics,
University of Helsinki, Helsinki, Finland
3
Helsinki Institute for Information Technology HIIT,
Department of Computer Science, University of Helsinki, Helsinki, Finland
April 4, 2014
Abstract
Over the recent years, the field of whole metagenome shotgun sequencing has witnessed significant
growth due to the high-throughput sequencing technologies that allow sequencing genomic samples cheaper,
faster, and with better coverage than before. This technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities
of the microbial communities. Examples include the human microbiome project and various studies of the
human intestinal tract. With the availability of ever larger databases of such measurements, finding samples
similar to a given query sample is becoming a central operation. In this paper, we develop a content-based
exploration and retrieval method for whole metagenome sequencing samples. We apply a distributed string
mining framework to efficiently extract all informative sequence k-mers from a pool of metagenomic samples and use them to measure the dissimilarity between two samples. We evaluate the performance of
the proposed approach on two human gut metagenome data sets as well as human microbiome project
metagenomic samples. We observe significant enrichment for diseased gut samples in results of queries
with another diseased sample and very high accuracy in discriminating between different body sites even
though the method is unsupervised. A software implementation of the DSM framework is available at
https://github.com/HIITMetagenomics/dsm-framework.
1 Introduction
Metagenomics is the study of microbial communities in their natural habitat using genomics techniques [28].
It is undergoing a boom due to proliferation of high-throughput sequencing technologies. Many studies focus
at targeted sequencing of specific marker genes such as the 16S rRNA gene in bacteria, but recently there
has been a growing interest in whole metagenome sequencing (see, e.g. [18, 27]). While targeted studies
provide data for phylogenetic profiling at a lower cost, whole metagenomes provide much more information, for example, about the collective metabolism [5], and the population genetics of the community [22].
Recent studies have also found associations between features of whole human gut metagenomes and type II
diabetes [19]. New data are accumulating rapidly, with a popular web-based MG-RAST server [14] listing
almost 3000 public whole metagenomes.
Analysing whole-metagenome shotgun (WMS) sequencing data is very challenging. The original sample typically contains genetic material from hundreds to thousands of bacterial species of different abundances [9], most of which have not been fully sequenced previously. After sequencing, we obtain a huge
collection of short sequence reads whose species of origin is unknown. While significant progress has been made, analysis relying on either the limited previously annotated genomes, or assembling the reads into novel more complete genomes, remains difficult and inefficient, and potentially susceptible to annotation biases.

Figure 1: Given a set of metagenomic samples our objective is to be able to retrieve relevant samples to a query sample. For this, we need to extract relevant features and evaluate a pairwise similarity (or dissimilarity) measure. The samples are then ranked in the order of increasing dissimilarity from the query.
In this paper we introduce an efficient purely data-driven feature extraction and selection method as well
as similarity measures for WMS sequencing data sets, and apply them in retrieval of similar data sets. Such
content-based retrieval is an extremely powerful tool for exploration of the data and generating hypotheses
of disease associations, as previously demonstrated with gene expression data [2, 3]. Retrieval from existing
databases makes it possible to automatically explore a much greater variety of hypotheses than relying solely
on the more common specifically designed focused studies.
Content-based similarity measures and retrieval of similar metagenomic data sets have been suggested
previously [16, 10, 26, 6], based on quantifying abundances over a relatively small number of predetermined
features requiring existing annotation. Up to some thousands of known taxa, genes or metabolic pathways
have been used. We introduce similarity measures that are based solely on raw sequencing reads, and hence,
unbiased and insensitive to the quality of the existing annotation. A similar measure has been previously
suggested by [11], but only for pairwise comparisons using a method that is computationally too expensive
to scale to even modestly large data sets. Furthermore, instead of considering all sequences of a particular
length, also known as k-mers, as has been done earlier for other tasks and by [11], we employ an efficient
distributed string mining algorithm to find informative subsequences that can be of any length.
In order to deal with the very large number of features some feature selection is necessary. Previous
approaches for detecting relevant features in metagenomic data have been based on direct comparison of
two classes of samples. Again, most of these methods work on up to some thousands of features [30, 17, 23],
with the notable exception of one study [19] where quantification and association testing was done for
over 4.3 million predefined genes. Without feature selection one can use short k-mers [1] or limit to a
set of k-mers that are likely to be informative, such as k-mers associated with well characterised protein
families [4]. While there are no previous examples of unsupervised feature selection for metagenomics,
it is a common practice in information retrieval with text documents [31]; a particularly relevant method
assesses the entropy of the distribution of documents in which a specific term occurs [8].
We evaluate the performance of the proposed unsupervised, unconstrained retrieval method on synthetic
data, as well as metagenomic samples from human body sites [18, 19, 27]. To evaluate the performance of
the retrieval engine, we use external validation based on a ground truth similarity between two samples. To
simplify this process, we consider a binary similarity, which is crude but easily accessible. The human gut
samples in [18, 19] come from studies exploring the change in bacterial species composition between healthy
persons and either inflammatory bowel disease or type II diabetes. We utilize disease state to construct a
[Figure 2 diagram: collection of metagenomic samples → (1) normalization, (2) regularization, (3) entropy filtering, (4) distributed string mining, (5) dissimilarity computation → dissimilarity matrix; (6) evaluation against annotations.]
Figure 2: Processing steps of our method. Given a collection of metagenomic samples, we use the collection
as an input to the distributed string mining method (4). For the method, we estimate the frequency of
each k-mer (1,2), evaluate if the k-mer is informative or not (3), and compute the needed dissimilarities
(5). Finally, in this paper we evaluate the performance considering the existing annotations as ground truth;
annotations are not needed for the retrieval in general.
binary ground truth. Thus, we study if, given the metagenomic sample of a person with a disease, the
retrieval finds metagenomic samples related by having the same disease. In the body site data [27] we use
the body sites as ground truth to investigate whether it is possible to identify the bacterial communities at
different body sites in an unsupervised setting without the need of reference genomes. It should be noted
that especially for the gut data, two samples may be related in other ways too. The external validation with
one simple ground truth nonetheless provides an objective platform for comparing different methods. Given
that the method is unsupervised and hence completely oblivious of the disease labels, if such retrieval is
successful it is a promising starting point for developing methods for leveraging data from earlier patients in
early detection of disease and personalized medicine.
2 Approach
Our objective is to extract and select suitable features for representing WMS sequencing samples and to
form a pairwise dissimilarity measure for a collection of such samples. Given this dissimilarity one can
query with a sample and retrieve other samples that are similar to it (Fig. 1). The measure needs to be
reasonably rapidly computable, yet capture relevant differences between the samples, and do all this with
as little prior biological knowledge and annotations as possible, since detailed quantitative prior knowledge
is typically not yet available for metagenomics.
Evaluating dissimilarity requires representing the metagenomic sample in a suitable feature space. A
standard choice for representing objects over strings is to estimate the k-mer frequency values, where a k-mer here is a string of k letters from the DNA alphabet {A,C,T,G}. Therefore, there are 4^k possible k-mers for
any given k. It is standard practice to set k to a specific value, typically a small value to keep the estimation
problem tractable both computationally and statistically. A larger k would give better discriminability but
not without bounds, as for finite data set sizes there simply is not enough data to estimate long k-mers. We
argue that instead of setting k to a particular value, it is more effective to estimate all possible k-mers for
all possible k which the data supports. This makes the problem more challenging, since the number of such
observed different k-mers for large k becomes very large and they become more susceptible to sequencing
errors. Focusing on k-mers appearing more than once in a sample helps significantly because it is relatively
rare to have exactly the same sequencing errors in two independent reads.
To make the method computationally efficient we treat each k-mer as an independent feature. We compute a Bayesian estimate of their relative frequencies across samples. The employed prior helps in suppressing noise caused by small observed read counts. In the filtering step the abundance distribution of each
Figure 3: Technical overview of our distributed string mining framework consisting of client (left) and server
(right) processes. The client-side processes are responsible for computing the substring frequencies within
each sample s1 , s2 , . . . sd separately. Substrings and their frequencies are found using a depth-first-traversal
over a (compressed) suffix tree. Frequency information is transmitted over to the server-side by streaming
it as a balanced-parenthesis representation of a sorted trie. For example, the trie on the left results in the
parenthesis representation given in the middle. The server reads the client-streams and merges the (already
sorted) tries in recursive manner: at each node, the server computes the entropy based on the received
values and updates the affected pairwise distances. Load-balancing on the server-side is achieved by hashing
the prefix of the substring so that each server corresponds to a certain range of hash values.
k-mer over samples is used to judge informativeness of the k-mer for retrieval; a k-mer with constant abundance does not have discriminative power and, in the other extreme, a k-mer which is present in only one
sample cannot generalize over samples. We show that the filtering step significantly improves the retrieval
performance with most datasets and distance measures. Finally, we compute the dissimilarity between two
samples across the features as a weighted average of distances between relative frequencies of individual
k-mers. Treating each k-mer as an independent feature allows us to execute these steps fast and on the
fly without storing the intermediate results. Such simplified distance measures are necessary to guarantee
scalability given the extremely high dimensionality of the k-mer features.
To summarize, we introduce methods to i. estimate the frequencies of a large number of k-mers over
multiple samples, ii. decide if a k-mer is informative or uninformative in the context of a retrieval task,
iii. compute a distance metric using the filtered k-mer frequencies, and iv. execute these steps fast without
explicitly storing the frequency values. Fig. 2 summarizes the method.
3 Methods
3.1 Estimating k-mer frequencies: normalization, regularization, filtering
In order to perform the feature selection or filtering, we first compute Bayesian estimates of the relative
frequencies p(s|w) of each k-mer w over samples s ∈ S using observed frequencies fˆ(s, w) of the k-mers.
These are distributions over samples for each k-mer that are computed independently for each k-mer for
reasons of computational efficiency.
Even if the relative abundance of a k-mer is the same in every sample, the observed frequencies may
differ because of different sequencing depth or coverage in different samples. To tackle this issue we employ
normalization: we normalize the frequency fˆ(s, w) by a sample-specific constant σ(s), which is proportional
to the total number of base pairs in a sample, and σ(s) = 1 for the largest sample in the collection in terms
of total base pair count, obtaining
f(s, w) = f̂(s, w)/σ(s).    (1)
The σ(s) can be interpreted probabilistically as the probability of observing a sequence in the actual sample,
assuming every sample had the same number of base pairs to start with but some have been lost in the
processing.
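As a concrete illustration of this normalization step, the following minimal Python sketch computes σ(s) and the normalized frequencies of Eq. (1) from raw per-sample k-mer counts; the sample names, counts and base-pair totals are invented for illustration and the code is not part of the DSM implementation.

```python
# Minimal sketch of the normalization of Eq. (1): f(s, w) = f_hat(s, w) / sigma(s),
# where sigma(s) is proportional to the total base-pair count of sample s and
# equals 1 for the largest sample. All numbers below are illustrative.

raw_counts = {                              # f_hat(s, w): observed k-mer counts per sample
    "s1": {"ACGT": 12, "TTGA": 3},
    "s2": {"ACGT": 5,  "GGCA": 7},
}
base_pairs = {"s1": 2.0e9, "s2": 1.0e9}     # total base pairs sequenced per sample

largest = max(base_pairs.values())
sigma = {s: bp / largest for s, bp in base_pairs.items()}   # sigma(s) = 1 for the largest sample

normalized = {
    s: {w: c / sigma[s] for w, c in counts.items()}          # f(s, w) = f_hat(s, w) / sigma(s)
    for s, counts in raw_counts.items()
}
print(normalized["s2"]["ACGT"])   # 5 / 0.5 = 10.0
```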
In order to estimate the relative frequencies, we place a conjugate symmetric Dirichlet prior on the
parameters of the multinomial distribution over the observed counts. The common choice of uniform prior
distribution corresponds to a Dirichlet distribution with all parameters equal to 1. This yields a posterior
mean estimate of the relative frequency values as
p(s|w) = (f(s, w) + 1) / Σ_{s′∈S} [f(s′, w) + 1].    (2)
The Dirichlet prior with all parameters equal to one is ubiquitous in document retrieval. It is particularly
suitable for metagenomics due to the following observations:
1. The distributed string mining algorithm (described below) trades off low k-mer counts for speed and
ignores any k-mers that are present only once in a sample. The pseudo-count from the prior makes up
for this missing count.
2. Adding pseudo-counts assists in playing down the significance of very rare k-mers that may appear due
to sequencing errors in the filtering step without affecting other k-mers too much.
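A minimal sketch of the posterior mean estimate of Eq. (2) for a single k-mer is given below, assuming the normalized frequencies f(s, w) have already been computed; sample names and values are illustrative and not part of the DSM software.

```python
# Sketch of the posterior mean estimate of Eq. (2) for one k-mer w:
# p(s|w) = (f(s, w) + 1) / sum_{s'} (f(s', w) + 1), i.e. add-one (Dirichlet) smoothing
# applied to the normalized frequencies.

def relative_frequencies(f_w: dict[str, float]) -> dict[str, float]:
    """f_w maps sample id -> normalized frequency f(s, w) of a single k-mer w."""
    denom = sum(f + 1.0 for f in f_w.values())
    return {s: (f + 1.0) / denom for s, f in f_w.items()}

f_w = {"s1": 10.0, "s2": 0.0, "s3": 4.0}      # the k-mer is absent from s2
p_w = relative_frequencies(f_w)
print(p_w)                                     # {'s1': 0.647..., 's2': 0.058..., 's3': 0.294...}
print(sum(p_w.values()))                       # 1.0
```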
Finally, given the massive number of potential k-mers, it is crucially important to improve signal-to-noise
ratio by focusing on the informative ones. For the unsupervised tasks of comparing the samples, obviously
only k-mers which distinguish between the samples are informative. As a concrete example, consider a k-mer that is present in all samples with a similar abundance. It certainly does not give information useful for
comparing samples. In the other extreme, if a k-mer is present in one specific sample, but not in any other, it
is potentially a spurious k-mer due to sequencing error, and in any case does not help in comparing samples
either. On the other hand, if a k-mer is present in some samples, but not all, then it gives information that
those samples are similar in a specific sense. Informativeness in this sense can be measured by the entropy
H of the distribution of the k-mer over the samples: we filter the k-mers based on the conditional entropies
H(S|w) = −(1/log(|S|)) Σ_{s∈S} p(s|w) log p(s|w);    (3)
a k-mer is taken into account in distance computation only if the normalized entropy is lower than a certain
threshold e. By design 0 ≤ H ≤ 1. Notice that in standard information theory terminology higher entropy
implies higher information. However, in our context an informative k-mer has low entropy. Also, due to the
Bayesian estimation, a spurious k-mer having only very small counts will have large conditional entropy and
will be filtered out.
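The entropy filter of Eq. (3) can be sketched as follows; the helper names and example distributions are illustrative, and the threshold e is a free parameter, as discussed next.

```python
# Sketch of the entropy filter of Eq. (3): a k-mer is kept only if its normalized
# entropy over samples is below a threshold e. Uses p(s|w) as computed above.
import math

def normalized_entropy(p_w: dict[str, float]) -> float:
    """H(S|w) = -(1/log|S|) * sum_s p(s|w) log p(s|w), so that 0 <= H <= 1."""
    h = -sum(p * math.log(p) for p in p_w.values() if p > 0.0)
    return h / math.log(len(p_w))

def is_informative(p_w: dict[str, float], e: float) -> bool:
    return normalized_entropy(p_w) < e

flat   = {"s1": 1/3, "s2": 1/3, "s3": 1/3}     # constant abundance -> H = 1, filtered out
skewed = {"s1": 0.8, "s2": 0.1, "s3": 0.1}     # discriminative     -> H ~ 0.58
print(normalized_entropy(flat), normalized_entropy(skewed))
print(is_informative(skewed, e=0.9), is_informative(flat, e=0.9))   # True False
```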
The optimal value of threshold e varies with datasets. It can be ‘optimized’ in a supervised manner by
utilizing a training set where we have labelled samples. In the absence of a labelled set, we suggest taking the
‘average’ of distance metrics computed over the potential thresholds as the final metric. We refer to the final
metrics in the two cases as optimized metric and average metric. In our experimental set-up, we randomly
make a 50-50 split of a given dataset in training Str and testing Ste sets: Str ∩ Ste = ∅, and Str ∪ Ste = S.
We use Str to optimize the entropy threshold: we query with samples in Str , and retrieve relevant samples
within the same set to observe which entropy threshold results in the best retrieval result (see Sec. 3.4 for
details). While comparing the performance of two methods we always present the evaluation over Ste : we
query with samples within Ste , and we retrieve relevant samples from S (not just Ste ).
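A schematic sketch of this procedure is given below. It relies on two hypothetical helpers, dissimilarity_matrix(samples, e) and mean_average_precision(...), standing in for the pipeline described in this section; they are placeholders, not functions of the DSM software.

```python
# Skeleton of the 'optimized' and 'average' metrics described above; the two
# helper functions passed as arguments are hypothetical stand-ins.
import random

def split_train_test(samples: list[str], seed: int = 0):
    s = samples[:]
    random.Random(seed).shuffle(s)
    half = len(s) // 2
    return s[:half], s[half:]                      # S_tr, S_te

def optimized_threshold(samples, labels, thresholds,
                        dissimilarity_matrix, mean_average_precision):
    s_tr, _ = split_train_test(samples)
    # Pick the threshold maximizing MAP when querying and retrieving within S_tr.
    return max(thresholds,
               key=lambda e: mean_average_precision(dissimilarity_matrix(s_tr, e), s_tr, labels))

def average_metric(samples, thresholds, dissimilarity_matrix):
    # Entry-wise average of the dissimilarity matrices over all candidate thresholds.
    mats = [dissimilarity_matrix(samples, e) for e in thresholds]
    return {pair: sum(m[pair] for m in mats) / len(mats) for pair in mats[0]}
```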
3.2 Algorithms to extract informative k-mers
Our main computational challenge is to extract all informative k-mers from large-scale datasets in feasible
time and space. Recall that the filtering step relies on knowledge over multiple samples to decide if the
respective k-mer is informative for the retrieval task or not. Since the typical collections of WMS samples are
huge in size, we cannot assume that even the plain input fits into the main memory of any single machine. To
process these large-scale datasets, the computation needs to be done either using external memory (i.e. disk)
or in a distributed manner (i.e. a computer cluster). We review two approaches: k-mer counting [12, 21]
and distributed string mining [29]. The first one is a standard approach in the literature for fixed k, but has
several limitations when applied in our context of multiple samples and large-scale data. We show that the
latter approach is more flexible in this context and can also be generalized to extract informative k-mers over
all values of k simultaneously.
Jellyfish [12] and DSK [21] are examples of recent algorithmic improvements in k-mer counting. Both
tools use hash tables to compute the k-mer distribution for a given (fixed) k. In both tools, space-efficiency
is achieved by keeping most of the hash table on disk. The main drawback with these disk-based approaches
is that they are aimed at counting k-mers in a single sample and extending them over to multiple samples is
non-trivial. For example, Jellyfish could, in principle, be extended to count k-mers over multiple samples: the
authors give a roughly linear time algorithm to merge two or more hash tables. However, the intermediate
k-mer counts would need to be stored on disk, which requires significant amount of additional space, and
the merge-phase is not parallelized [12, User manual, Sect. Bugs].
The decision whether a particular k-mer is informative or not is made by looking at its frequency over
all the given WMS samples. We tackle this problem by a Distributed String Mining (DSM) framework [29]
that can handle multi-sample inputs by utilizing a computer cluster. The main advantages of this framework
are that (i) load-balancing divides the data and computation over multiple cluster nodes, (ii) intermediate
k-mer counts are not stored explicitly, and (iii) there is no additional disk I/O strain, except reading through
the input once. These advantages allow terabyte-scale data analysis on a cluster consisting of nodes having
limited main memory. We extend the DSM framework to be compatible with our definition of informative
k-mers (see the above subsection). It allows us to extract the informative k-mers either for a fixed k or over
all values of k in feasible time.
The DSM framework is based on a client-server model. The clients have one-to-one correspondence to
the given samples, each client being responsible for computing the frequencies within the designated sample.
The client-side computation relies heavily on suffix sorting techniques and on space-efficient data structures
for strings [29]: the input data are first preprocessed into a compressed representation, which replaces the
input data and acts as an efficient search structure. The server-side computation is more straightforward: the
server simply merges the (sorted) input from the clients, computes the entropies and updates the distance
matrices. Fig. 3 gives a toy example of the client-server interaction. Two crucial observations are needed to
keep the whole computation and transmission costs feasible. First, the informative k-mers can be seen as a
subset of left-right-branching substrings, i.e. substrings whose instances have differentiating continuation on
both left and right. More formally: a substring w of a string T[1, n] is called right-branching if there exist two
symbols a and b such that a ≠ b and both wa and wb are substrings of T. Similarly, a substring w is left-branching if aw and bw, a ≠ b, are substrings of T. If a substring is both left-branching and right-branching
we say it is left-right-branching. Second, for any string of length n, there are at most O(n) left-right-branching
substrings, and the total length of all such substrings is bounded by O(n log n) [7, Theorem 1].
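The definition can be made concrete with a brute-force enumeration; the DSM framework instead uses compressed suffix trees, so the quadratic code below is only meant as an illustration of the notion.

```python
# Naive illustration of left-right-branching substrings: a substring is kept if it
# has at least two distinct preceding symbols and at least two distinct following
# symbols among its occurrences in T.
from collections import defaultdict

def branching_substrings(t: str) -> set[str]:
    left, right = defaultdict(set), defaultdict(set)
    n = len(t)
    for i in range(n):
        for j in range(i + 1, n + 1):
            w = t[i:j]
            if i > 0:
                left[w].add(t[i - 1])        # symbol preceding this occurrence of w
            if j < n:
                right[w].add(t[j])           # symbol following this occurrence of w
    return {w for w in left if len(left[w]) >= 2 and len(right[w]) >= 2}

print(sorted(branching_substrings("ACGTACGG")))   # ['G']
```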
The first observation allows us to reduce the client-side computation to a smaller set of substrings: it is
easy to see that if k-mer w, having frequency f ′ (s, w) ≥ 2, is non-branching, then there exists a substring
w′ of length k ′ > k that is left-right-branching and has exactly the same frequency, i.e., f ′ (s, w) = f ′ (s, w′ ).
It follows that the frequency of non-branching k-mers can be deduced from the branching k ′ -mers, and the
left-right-branching substrings contain all the necessary information for us to detect informative k-mers. The
second observation guarantees a feasible transmission cost between clients and servers: the upper bound for
the concatenation of all left-right-branching substrings also acts as an upper bound for both the server-side
running time and the amount of communication needed. The drawback of restricting to left-right-branching
substrings is that the informative k-mers that we are able to detect have to appear at least twice in a sample,
although this limit may be useful in pruning spurious k-mers introduced by sequencing errors. More detailed
explanation and analysis of the DSM framework is given in [29]. A software implementation of the DSM
framework is available at https://github.com/HIITMetagenomics/dsm-framework.
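To make the client-side streaming of Fig. 3 concrete, the toy sketch below serializes a sorted trie of substring counts as a balanced-parenthesis string; the textual format used here is invented for illustration and is not the actual DSM wire protocol.

```python
# Toy sketch of the client-side streaming in Fig. 3: substring counts of one sample
# are stored in a sorted trie and emitted depth-first as a balanced-parenthesis
# stream. The server would read such streams from all clients and merge them in
# sorted order, node by node.

def build_trie(counts: dict[str, int]) -> dict:
    root: dict = {"#": 0}
    for w, c in counts.items():
        node = root
        for ch in w:
            node = node.setdefault(ch, {"#": 0})
        node["#"] = c                      # count of the substring ending at this node
    return root

def serialize(node: dict) -> str:
    """Depth-first traversal of the sorted trie as a balanced-parenthesis string."""
    parts = []
    for ch in sorted(k for k in node if k != "#"):
        child = node[ch]
        parts.append(f"({ch}:{child['#']}" + serialize(child) + ")")
    return "".join(parts)

print(serialize(build_trie({"AC": 2, "AG": 3, "GA": 2})))
# (A:0(C:2)(G:3))(G:0(A:2))
```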
3.3 Dissimilarity metrics
Having extracted the informative k-mers, we use them to compute the dissimilarity between two metagenomic samples. We consider three dissimilarity metrics that can be computed easily over a large number
of k-mers in sequential manner, i.e. one k-mer at a time, and without storing all the k-mer frequencies
explicitly. To utilize the natural variance structure of the k-mers—some are more abundant than others—we
weight the relative frequencies of each k-mer by their respective total counts, i.e., we utilize the absolute
frequencies f (s, w) as defined in (1).
We mainly use the very simple Jaccard distance which does not consider abundances at all, only whether
a k-mer occurs or not. Given two sets s1 and s2 of k-mers detected as present in two different samples,
Jaccard distance measures how many elements are shared between these two sets. Mathematically, it is
defined as
Dcount(s1, s2) = 1 − |s1 ∩ s2| / |s1 ∪ s2|.
Despite its simplicity, we observe that Jaccard distance performs well; a potential reason is its robustness
to measurement noise and effectiveness when two metagenomic samples differ in terms of presence and
absence of certain species or functionalities. We assume a k-mer is present in a sample if its frequency is
more than 2.
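A minimal sketch of this presence/absence-based distance, using the presence threshold of 2 mentioned above, is shown below; the k-mer names and frequencies are illustrative.

```python
# Sketch of the Jaccard dissimilarity D_count: only presence/absence of k-mers is
# used, and a k-mer counts as present in a sample if its frequency exceeds 2.

def present_kmers(freqs: dict[str, float], min_freq: float = 2.0) -> set[str]:
    return {w for w, f in freqs.items() if f > min_freq}

def jaccard_distance(freqs1: dict[str, float], freqs2: dict[str, float]) -> float:
    a, b = present_kmers(freqs1), present_kmers(freqs2)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

s1 = {"ACGT": 10.0, "TTGA": 3.0, "GGCA": 1.0}   # GGCA too rare to count as present
s2 = {"ACGT": 7.0,  "CCAT": 4.0}
print(jaccard_distance(s1, s2))                  # 1 - 1/3 = 0.666...
```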
We also experiment with two metrics that use the abundance information:
I. Variance-stabilized Euclidean distance: An obvious distance measure between two metagenomic samples
s1 and s2 is the Euclidean distance between their respective k-mer frequencies. We consider the distance
metric
Dsqrt(s1, s2) = Σ_w (√f(w, s1) − √f(w, s2))²
which can be computed sequentially as new informative k-mers are extracted. The square root transformation is the variance-stabilizing transformation for the Poisson distribution—a popular model for quantitative
sequencing data.
II. Log transformed Euclidean distance: We also consider the same metric but with log transformation
which is a popular approach in document retrieval, i.e.,
Dlog(s1, s2) = Σ_w (log(1 + f(w, s1)) − log(1 + f(w, s2)))².
The motivation for using the log transformation is that it decreases sensitivity to high frequency counts:
some k-mers are present in high abundance in almost every genome, for instance k-mers from the marker
gene, and the log transformation reduces their effect in the metric.
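Both abundance-sensitive metrics can be accumulated one k-mer at a time, as in the following sketch; the example frequencies are illustrative.

```python
# Sketch of the variance-stabilized (square root) and log-transformed Euclidean
# distances, accumulated sequentially over the informative k-mers.
import math

def d_sqrt(freqs1: dict[str, float], freqs2: dict[str, float]) -> float:
    total = 0.0
    for w in freqs1.keys() | freqs2.keys():
        f1, f2 = freqs1.get(w, 0.0), freqs2.get(w, 0.0)
        total += (math.sqrt(f1) - math.sqrt(f2)) ** 2
    return total

def d_log(freqs1: dict[str, float], freqs2: dict[str, float]) -> float:
    total = 0.0
    for w in freqs1.keys() | freqs2.keys():
        f1, f2 = freqs1.get(w, 0.0), freqs2.get(w, 0.0)
        total += (math.log1p(f1) - math.log1p(f2)) ** 2     # log(1 + f)
    return total

s1 = {"ACGT": 9.0, "TTGA": 4.0}
s2 = {"ACGT": 4.0, "CCAT": 1.0}
print(d_sqrt(s1, s2), d_log(s1, s2))    # (3-2)^2 + 2^2 + 1^2 = 6.0, and ~3.55
```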
3.4 Evaluation metric
We evaluate the performance of the dissimilarity metric in terms of its performance in the task of retrieving
relevant samples given a query metagenomics sample. The ground truth for relevance is either the disease
class (disease vs not) or the known body site: samples from the same class are considered relevant.
For measuring retrieval performance we use an evaluation metric which is popular in document retrieval,
the mean average precision (MAP) [25]. Given a query q, the retrieval method ranks the samples in an
increasing order of their dissimilarities from q. Given one has retrieved the top (closest) n ∈ {1, . . . , N }
samples the precision @n is defined as
Precision(n; q) = (number of relevant samples in the n retrieved samples) / n,
                         HIGH-C    MetaHIT    T2D-P2    HMP
Input size (GB)          149       536        786       3,353
Samples                  200       124        199       435
Preproc. (h)             0.4       3.6        10        65
Total memory (GB)        117       209        610       2,885
All k: Wall-clock (h)    4.9       2.0        8.0       53
All k: CPU time (h)      149       187        1,137     20,000
k = 21: Wall-clock (h)   1.8       0.4        2.8       12
k = 21: CPU time (h)     10        74         279       4,000
Table 1: Computational resources required by the distributed string mining on different datasets. We report
wall-clock times and total CPU times for both fixed k = 21 and over all k. Preprocessing is done only once,
separately from the actual computation. Total memory is the memory requirement over all computation
nodes. Experiments were run on a cluster of Dell PowerEdge M610 nodes having 32 GB of RAM and 16
cores. Simulated data and MetaHIT were run using up to 8 nodes. T2D-P2 was run using 32 nodes allowing
more parallelization on the server-side. HMP was run on a cluster of 20 nodes with 2x10-core Xeon CPUs
and 256GB RAM.
and MAP defined using average precision as,
MAP = (1/|Q|) Σ_{q∈Q} AveP(q),    AveP(q) = (1/m_q) Σ_{n∈R_q} Precision(n; q).
Here Q is the set of all queries, mq is the number of relevant samples to query q, and Rq is the set of locations
in the ranked list where a relevant sample appears. It is straightforward that a higher MAP implies better
performance. To judge if two MAP values are significantly different or not, we employ the randomization
test described in [25]: for each query, this test randomly reassigns the AvePs achieved by two methods to
one another and computes the difference between the resulting MAP for multiple such reassignments to get
a distribution, against which the true MAP value is tested in terms of p-value. In case two samples share
the same dissimilarity from a query sample, we employ the modification suggested in [13] to break ties.
When computing the mean, we follow a leave-one-out cross-validation type approach using each sample as
a query, and retrieving from the rest of the collection. For simulated data and human gut samples, we only
query with the positive samples in the testing set q ∈ Ste , whereas for body site samples we query with each
sample in the testing set. For both cases we retrieve from the entire set S \ {q}. While choosing the entropy
threshold in a supervised setting, we query from q ∈ Str and retrieve from Str \ {q}.
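A minimal sketch of the evaluation itself, following the definitions of AveP and MAP above, is given below; the rankings and relevance sets are illustrative.

```python
# Sketch of average precision per query and mean average precision (MAP) over
# queries. Rankings are lists of retrieved sample ids ordered by increasing
# dissimilarity from the query.

def average_precision(ranked: list[str], relevant: set[str]) -> float:
    hits, precisions = 0, []
    for n, sample in enumerate(ranked, start=1):
        if sample in relevant:
            hits += 1
            precisions.append(hits / n)       # Precision@n at each relevant position
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings: dict[str, list[str]],
                           relevant: dict[str, set[str]]) -> float:
    return sum(average_precision(rankings[q], relevant[q]) for q in rankings) / len(rankings)

rankings = {"q1": ["s3", "s7", "s2"], "q2": ["s1", "s4", "s9"]}
relevant = {"q1": {"s3", "s2"}, "q2": {"s4"}}
print(mean_average_precision(rankings, relevant))   # ((1 + 2/3)/2 + (1/2)/1) / 2 = 0.666...
```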
3.5 Synthetic data generation
To test the method, we simulated four datasets containing samples from separate classes, with the interpretation that samples from the same class are relevant. In all the datasets we have two classes: both
classes of samples have the same species composition but different relative abundances. We used MetaSim
[20] to generate Illumina reads of length 80 using the error configuration file provided by the developers. Each dataset contains 200 samples: 98 of them belong to the positive class and the rest belong to the
negative class. For each dataset, we used the same 100 species from the following genera: acetobacter, acetobacterium, acidiphilium, acidithiobacillus, acinetobacter, bacillus, bacteroides, bifidobacterium, chlamydia,
chlamydophila, clostridium, escherichia, haloarcula, halobacterium, lactobacillus, pasteurella, salmonella,
staphylococcus, and streptococcus. The abundance profiles were generated from two Dirichlet distributions;
one for positive and the other for negative class. The parameters of the Dirichlet distributions were shared between two classes: for half of the species (randomly chosen) the same parameters were used for both classes
and for the other half of the species the parameters were randomly permuted. For example, given 5 species
the assigned parameters could be: (0.3, 0.2, 0.6, 0.1, 0.9) and (0.9, 0.2, 0.3, 0.1, 0.6) where the parameters for
[Figure 4 plots: log10(# of k-mers) versus entropy threshold, one panel per dataset (T2D-P2 + Jaccard, MetaHIT + log, HMP + Jaccard, HIGH-C + Jaccard, HIGH-VAR + Jaccard, LOW-C + Jaccard, MIXED-C + Jaccard), with curves for k = 12, 21, 30, 'All' and FIGfam 'F'; boxes mark the 'optimized' entropy thresholds.]
Figure 4: Number of informative strings over varying entropy thresholds for the proposed approach ‘All’,
fixed k-mer lengths ‘12’, ‘21’ and ‘30’, and for protein family based comparison with FIGfam ‘F’. The box
denotes the ‘optimized’ entropy threshold that has been used to evaluate the performance of the methods.
Some general observations are as follows. The number of strings for k = 12 is lower than the rest, while
the number of strings for ‘All’ is much higher than for the rest of the methods, and the numbers of strings for k = 21
and k = 30 are very close. We observe that there are strings with low entropies—more in the real data sets
than in the simulated data sets—which indicate the presence of discriminative features. Also, the ‘optimized’
entropy threshold varies for different methods.
the second and fourth species are the same, but for the other species they were permuted. The exact species
and corresponding parameter values can be downloaded from https://github.com/HIITMetagenomics.
The resulting datasets are:
1. HIGH-C: relatively easy data with high coverage (10,000,000 reads per sample)
2. LOW-C: relatively difficult data with low coverage (2,000,000 reads per sample)
3. MIXED-C: mixed data with half the samples from HIGH-C and the rest from LOW-C to simulate varying
sequencing depth.
4. HIGH-VAR: relatively difficult data with same coverage as HIGH but additional noise in the class
distributions to simulate more overlap between classes. To elaborate, the relative abundance of species
is pHIGH−VAR = 0.5 pHIGH + 0.5 noise where noise is generated from a symmetric Dirichlet distribution
with all parameters equal to 1.
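The construction of the two class-specific Dirichlet parameter vectors described above can be sketched as follows; the parameter values are illustrative, Dirichlet samples are drawn via normalized Gamma variates, and the read simulation itself (MetaSim) is not reproduced here.

```python
# Minimal sketch of the class-specific abundance profiles: half of the Dirichlet
# parameters are shared between classes and the other half are permuted.
import random

rng = random.Random(42)

n_species = 10
alpha_pos = [rng.uniform(0.1, 1.0) for _ in range(n_species)]     # positive class

shared = set(rng.sample(range(n_species), n_species // 2))        # indices kept identical
permuted = [i for i in range(n_species) if i not in shared]
shuffled = permuted[:]
rng.shuffle(shuffled)

alpha_neg = alpha_pos[:]
for src, dst in zip(permuted, shuffled):
    alpha_neg[dst] = alpha_pos[src]                               # permute the other half

def sample_abundances(alpha: list[float]) -> list[float]:
    draws = [rng.gammavariate(a, 1.0) for a in alpha]             # Dirichlet via Gammas
    total = sum(draws)
    return [d / total for d in draws]

print(sample_abundances(alpha_pos))   # relative abundances of a positive-class sample
print(sample_abundances(alpha_neg))   # relative abundances of a negative-class sample
```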
4 Results
We evaluated the retrieval performance on three human metagenomics datasets:
1. MetaHIT [18], 124 metagenomic samples from 99 healthy people and 25 patients with inflammatory
bowel disease (IBD) syndrome. Each sample has on average 65 ± 21 million reads. Our goal was to
retrieve IBD positive patients.
[Figure 5 bar charts: mean average precision for the methods Ak, FIG, Abd and 3 (plus T, the ground-truth abundance, for the synthetic data), one panel per dataset (MetaHIT + Jaccard, MetaHIT + log, T2D-P2 + Jaccard, HMP + Jaccard, HIGH-C, HIGH-VAR, LOW-C and MIXED-C + Jaccard), with significance arrows over the methods.]
Figure 5: Retrieval performance comparison of the proposed approach using all k-mers (“Ak”) against the
following base measures: (1) “FIG”: retrieval performance using known protein family, (2) “Abd”: Hellinger
distance between relative estimated abundance, (3) “3”: d2S distance between relative abundance of 3-mers.
“Ak” uses the ‘optimized metric’ over 101 equally spaced threshold values between 0 and 1. Each errorbar
shows the MAP value along with the standard error. The grey horizontal line shows retrieval by chance:
MAP computed over zero similarity metric. An arrow (if present) over a method indicates whether the
performance of the corresponding method is significantly better (↑) or worse (↓) than “Ak”: The stars denote
significance level: 0 < *** < 0.001 < ** < 0.01 < * < 0.05. For the synthetic datasets (in the bottom row)
the relative abundance is known from experimental design. We present this result as “T”. For MetaHIT we
present the performance for both Jaccard and log metric, since the latter performs much better compared to
the former.
2. T2D Phase II [19], 199 metagenomic samples from 100 healthy people and 99 patients with type II
diabetes. Each sample has on average 47 ± 11 million reads. Our goal was to retrieve diabetic patients.
We chose to explore the phase II data instead of the phase I data since the former has higher coverage;
about 40% more reads than the latter.
3. HMP [27], 435 metagenomic samples from 10 different body sites. Out of 690 samples that passed the
QC assessment (http://www.hmpdacc.org/HMASM/), we discarded 255 samples that had less than
1% of the number of reads of the largest sample.
To recapitulate, for MetaHIT and T2D-P2, our goal is to observe if given a positive sample, e.g., from a
patient with a particular disease, one can retrieve relevant samples, i.e., with similar disease; whereas for
HMP, our goal is to observe if given a sample from a particular body site, one can retrieve relevant samples,
i.e., samples from the same body site. For all data we applied a quality threshold of 30 and ignored any base
pairs with quality less than the threshold. Table 1 gives an overview of the computational resources required
for each data set. Additionally, the number of k-mers used by different methods for each data set is shown in Fig. 4.
Retrieval of samples with similar annotation: We applied the proposed approach and a number of alternatives to retrieval of similar samples from the same data set and evaluated by how many of the retrieved
[Figure 6 violin plots: average precision for ‘All’ and for k = 12, 21, 30, one panel per dataset (MetaHIT + log, T2D-P2 + Jaccard, HMP + Jaccard, HIGH-C, HIGH-VAR, LOW-C and MIXED-C + Jaccard), with significance arrows over the methods.]
Figure 6: Comparison of best performances for different k-mer lengths. The figures show the performance
over queries by all positive samples as a violin plot. All methods use the ‘optimized metric’ chosen over 101
equally spaced threshold values between 0 and 1: the box denotes the MAP value. The horizontal lines show
retrieval by chance: AveP computed over a zero dissimilarity metric. The straight line is the mean, and the dotted
lines are the 5% and 95% quantiles, respectively, when the number of relevant samples differs between queries.
An arrow (if present) over a method implies whether the corresponding method performs significantly better
(↑) or worse (↓) than ‘All’ : The stars denote significance level: 0 < *** < 0.001 < ** < 0.01 < * < 0.05. We
observe that the considering all k-mers usually perform equally well with respect to considering a single k.
samples had the same annotation: class label, disease state or body site. A comparison of the obtained
mean average precision values averaged over queries by all positive samples is shown in Fig. 5. The results
show the performance achieved by the ‘optimized metric’. The alternatives we considered were: i. retrieval
performance based on the proposed distances but with the frequencies counted on specific 21-mers from
known protein families (FIGfams) [15]; ii. retrieval based on Hellinger distances between relative species
abundances estimated using MetaPhlAn [24]; and iii. retrieval based on d2S |M0 distances between relative
frequencies of 3-mers [6].
For the simulated data, the two classes differ only by the relative species abundance; thus, retrieval
based on the ground truth abundance can be considered to give an upper limit for the performance. For HIGH-C and HIGH-VAR, the proposed method performs closer to the ground truth performance than any other
method, although the difference from the ground truth performance is still statistically significant. For LOW-C,
the performance of all methods, except the protein family based comparison, drops compared to HIGH-C,
while for MIXED-C the performance is again close to HIGH-C despite the presence of low coverage samples.
This is an encouraging observation showing the robustness of the proposed approach to varying sequencing
depths.
For the real data sets the proposed approach yielded statistically significantly higher mean average precision than any of the alternatives (p < 0.05) for all the datasets, except T2D-P2 where protein family based
comparison works equally well. Interestingly, the abundance-based retrieval performs relatively poorly here,
suggesting that the differences between the classes cannot be easily captured by species composition alone,
while the proposed k-mer features can provide a better separation. Retrieval based on the known protein
family performed fairly well, but slightly worse than the proposed approach on MetaHIT. We observe that
[Figure 7 bar charts: mean average precision without entropy filtering, with the ‘optimized metric’ and with the ‘average metric’, for ‘All’, k = 12, 21, 30 and FIG, one panel per dataset (MetaHIT + log, T2D-P2 + Jaccard, HMP + Jaccard, HIGH-C, HIGH-VAR, LOW-C and MIXED-C + Jaccard), with significance arrows.]
Figure 7: Comparison of the best retrieval performance achieved with ‘optimized metric’ (middle), ‘average
metric’ (right) and without entropy filtering (left), for proposed approach ‘All’, individual ks as well as
FIGfam based distance metric. The metrics are ‘optimized’/‘averaged’ over 101 equally spaced threshold
values between 0 and 1. Each errorbar line shows the MAP value along with the standard error. The
grey horizontal line shows retrieval by chance: MAP computed over zero dissimilarity metric. An arrow
(if present) over a method implies whether the performance of the corresponding method (top: ‘average
metric’, bottom: ‘optimized metric’) is better (↑) or worse (↓) than when entropy filtering is employed: The
stars denote significance level: 0 < *** < 0.001 < ** < 0.01 < * < 0.05. We observe that filtering has a
positive impact on the retrieval performance.
for MetaHIT, Jaccard metric performs poorly; however, a change of metric to log significantly improves the
performance for all methods. Otherwise, all metrics usually work equally well over different data sets.
Effect of using specific or unspecific k-mer length: We next compared the proposed approach of using
all k-mers to using a specific k. The retrieval performance using ‘optimized metric’ is shown in Fig. 6. The
figures show the complete distribution of average precision values over different queries whose mean is the
mean average precision of Fig. 5. The performance of the proposed method is usually better than with any
individual k. Thus, the proposed method appears to be a relatively safe choice that does not suffer from
catastrophically bad performance on any of the data sets.
Effect of the entropy filtering: Next, we evaluated the efficacy of filtering the informative k-mers against
retrieval performance without the filtering operation. The results are presented in Fig. 7. We observed
that entropy filtering usually improved retrieval performance for all tested k-mer lengths when using the
‘optimized metric’, although the improvement might not always be statistically significant. Although the ‘average metric’ often performs well, it might not always improve over the performance without
filtering. Also, the retrieval performance of FIGfam may or may not improve with entropy filtering.
Comparison across different metrics: Finally, we evaluated the retrieval performance over different dissimilarity metrics. We present the performance using the ‘optimized metric’ for the different dissimilarity metrics in Fig. 8. We
[Figure 8 violin plots: average precision for the metrics count, sqrt and log, using all k-mers, one panel per dataset (MetaHIT, T2D-P2, HMP, HIGH-C, HIGH-VAR, LOW-C and MIXED-C), with significance arrows between metrics.]
Figure 8: Comparison of the best retrieval performance for different distance metrics using all k-mers. They
show a violin plot of the average performances over queries by all positive samples in the data sets. The
‘optimized metrics’ have been selected over 101 equally spaced threshold values between 0 and 1: the box
denotes the MAP value. The horizontal lines show retrieval by chance: AveP computed over a zero dissimilarity
metric. The straight line is the mean, and the dotted lines are the 5% and 95% quantiles, respectively, when the number
of relevant samples differs between queries. An arrow (if present) over a method implies whether the
corresponding method performs significantly better (↑) or worse (↓) than the other methods (denoted by
their colors): The stars denote significance level: 0 < *** < 0.001 < ** < 0.01 < * < 0.05. We observe that
different distance metrics usually demonstrate similar performance.
observed that the simple presence/absence-based metric Dcount performed at least as well as the abundance-sensitive log and sqrt metrics, except for the MetaHIT data, for which the other metrics performed better.
5 Conclusion
As multiple samples are being collected from similar environments, information retrieval for metagenomic
samples is expected to become a handy tool in metagenomics research. In this paper, we have addressed
the problem of retrieving relevant metagenomic samples given a query sample from the same collection.
The novelty of the proposed approach is that it is unsupervised, and does not rely on the availability of
reference databases. We have suggested employing k-mer frequencies as the feature representation; however,
rather than exploring k-mers of a fixed k, we have scanned through all possible k-mers of all possible k’s
using distributed string mining, and have proposed an appropriate filtering technique to discard uninformative
k-mers. We have evaluated our method on both real and simulated data, and observed that the approach
can effectively retrieve relevant metagenomic samples, outperforming both the FIGfams method based on
known highly informative protein families as well as retrieval based on species composition of the samples.
Acknowledgement
The authors would like to thank Ahmed Sobih for his help with the MetaPhlAn experiments on MetaHIT and
T2D-P2. Part of the calculations presented above were performed using computer resources within the Aalto
University School of Science “Science-IT” project.
Funding: This work was supported by the Academy of Finland (project numbers 140057, 250345, 251170
and 259440).
References
[1] Yael Baran and Eran Halperin. Joint analysis of multiple metagenomic samples. PLoS Comput Biol,
8(2):e1002373, February 2012.
[2] José Caldas, Nils Gehlenborg, Ali Faisal, Alvis Brazma, and Samuel Kaski. Probabilistic retrieval and
visualization of biologically relevant microarray experiments. Bioinformatics, 25:i145–i153, 2009.
[3] José Caldas, Nils Gehlenborg, Eeva Kettunen, Ali Faisal, Mikko Rönty, Andrew G. Nicholson, Sakari
Knuutila, Alvis Brazma, and Samuel Kaski. Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma. Bioinformatics,
28(2):246–253, Jan 2012.
[4] Robert A. Edwards, Robert Olson, Terry Disz, Gordon D. Pusch, Veronika Vonstein, Rick Stevens, and
Ross Overbeek. Real time metagenomics: using k-mers to annotate metagenomes. Bioinformatics,
28(24):3316–3317, Dec 2012.
[5] Sharon Greenblum, Peter J. Turnbaugh, and Elhanan Borenstein. Metagenomic systems biology of
the human gut microbiome reveals topological shifts associated with obesity and inflammatory bowel
disease. Proc Natl Acad Sci U S A, 109(2):594–599, Jan 2012.
[6] Bai Jiang, Kai Song, Jie Ren, Minghua Deng, Fengzhu Sun, and Xuegong Zhang. Comparison of
metagenomic samples using sequence signatures. BMC Genomics, 13(1):730, December 2012. PMID:
23268604.
[7] J. Kärkkäinen, G. Manzini, and S. J. Puglisi. Permuted longest common prefix array. In Proc. CPM,
LNCS 5577, pages 181–192. Springer, 2009.
[8] Christine Largeron, Christophe Moulin, and Mathias Géry. Entropy based feature selection for text
categorization. In Proceedings of the 2011 ACM Symposium on Applied Computing - SAC 11, pages
924–928. Association for Computing Machinery, 2011.
[9] Kelvin Li, Monika Bihan, Shibu Yooseph, and Barbara A. Methé. Analyses of the microbial diversity
across the human microbiome. PLoS ONE, 7(6):e32118, 2012.
[10] Zhenqiu Liu, William Hsiao, Brandi L. Cantarel, Elliott Franco Drábek, and Claire Fraser-Liggett. Sparse
distance-based learning for simultaneous multiclass classification and feature selection of metagenomic
data. Bioinformatics, 27(23):3242–3249, Dec 2011.
[11] Nicolas Maillet, Claire Lemaitre, Rayan Chikhi, Dominique Lavenier, and Pierre Peterlongo. Compareads: comparing huge metagenomic experiments. BMC Bioinformatics, 13 Suppl 19:S10, 2012.
[12] Guillaume Marçais and Carl Kingsford. A fast, lock-free approach for efficient parallel counting of
occurrences of k-mers. Bioinformatics, 27(6):764–770, March 2011.
[13] Frank McSherry and Marc Najork. Computing information retrieval performance measures efficiently
in the presence of tied scores. In Proceedings of the IR research, 30th European conference on Advances
in information retrieval, ECIR’08, pages 414–421, Berlin, Heidelberg, 2008. Springer-Verlag.
[14] F. Meyer, D. Paarmann, M. D’Souza, R. Olson, E. M. Glass, M. Kubal, T. Paczian, A. Rodriguez,
R. Stevens, A. Wilke, J. Wilkening, and R. A. Edwards. The metagenomics RAST server – a public
resource for the automatic phylogenetic and functional analysis of metagenomes. BMC Bioinformatics,
9(1):386, September 2008.
[15] Folker Meyer, Ross Overbeek, and Alex Rodriguez. FIGfams: yet another set of protein families. Nucleic
Acids Research, 37(20):6643–6654, November 2009. PMID: 19762480 PMCID: PMC2777423.
[16] Suparna Mitra, Bernhard Klar, and Daniel H. Huson. Visual and statistical comparison of metagenomes.
Bioinformatics, 25(15):1849–1855, Aug 2009.
[17] Donovan H. Parks and Robert G. Beiko. Identifying biologically relevant differences between metagenomic communities. Bioinformatics, 26(6):715–721, Mar 2010.
[18] Junjie Qin et al. A human gut microbial gene catalogue established by metagenomic sequencing.
Nature, 464(7285):59–65, March 2010.
[19] Junjie Qin et al. A metagenome-wide association study of gut microbiota in type 2 diabetes. Nature,
490(7418):55–60, Oct 2012.
[20] Daniel C. Richter, Felix Ott, Alexander F. Auch, Ramona Schmid, and Daniel H. Huson. MetaSim: a
sequencing simulator for genomics and metagenomics. PLoS ONE, 3(10):e3373, 2008.
[21] Guillaume Rizk, Dominique Lavenier, and Rayan Chikhi. DSK: k-mer counting with very low memory
usage. Bioinformatics, 29(5):652–653, Mar 2013.
[22] Siegfried Schloissnig, Manimozhiyan Arumugam, Shinichi Sunagawa, Makedonka Mitreva, Julien Tap,
Ana Zhu, Alison Waller, Daniel R. Mende, Jens Roat Kultima, John Martin, Karthik Kota, Shamil R. Sunyaev, George M. Weinstock, and Peer Bork. Genomic variation landscape of the human gut microbiome.
Nature, 493(7430):45–50, Jan 2013.
[23] Nicola Segata, Jacques Izard, Levi Waldron, Dirk Gevers, Larisa Miropolsky, Wendy S. Garrett, and
Curtis Huttenhower. Metagenomic biomarker discovery and explanation. Genome Biol, 12(6):R60,
2011.
[24] Nicola Segata, Levi Waldron, Annalisa Ballarini, Vagheesh Narasimhan, Olivier Jousson, and Curtis
Huttenhower. Metagenomic microbial community profiling using unique clade-specific marker genes.
Nature Methods, 9(8):811–814, August 2012.
[25] Mark D. Smucker, James Allan, and Ben Carterette. A comparison of statistical significance tests for
information retrieval evaluation. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM ’07, pages 623–632, New York, NY, USA, 2007. ACM.
[26] Xiaoquan Su, Jian Xu, and Kang Ning. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data. Bioinformatics,
28(19):2493–2501, Oct 2012.
[27] The Human Microbiome Project Consortium. Structure, function and diversity of the healthy human
microbiome. Nature, 486(7402):207–214, June 2012.
[28] Gene W. Tyson, Jarrod Chapman, Philip Hugenholtz, Eric E. Allen, Rachna J. Ram, Paul M. Richardson,
Victor V. Solovyev, Edward M. Rubin, Daniel S. Rokhsar, and Jillian F. Banfield. Community structure and metabolism through reconstruction of microbial genomes from the environment. Nature,
428(6978):37–43, February 2004.
[29] Niko Välimäki and Simon J. Puglisi. Distributed string mining for high-throughput sequencing data. In
12th Workshop on Algorithms in Bioinformatics (WABI), LNCS 7534, pages 441–452. Springer-Verlag,
2012.
[30] James Robert White, Niranjan Nagarajan, and Mihai Pop. Statistical methods for detecting differentially
abundant features in clinical metagenomic samples. PLoS Comput Biol, 5(4):e1000352, Apr 2009.
[31] Yiming Yang and Jan O. Pedersen. A comparative study on feature selection in text categorization. In
Proceedings of the Fourteenth International Conference on Machine Learning (ICML ’97), pages 412–420.
Morgan Kaufmann Publishers Inc., 1997.
AMENABLE UNIFORMLY RECURRENT SUBGROUPS AND
LATTICE EMBEDDINGS
arXiv:1802.04736v2 [math.GR] 29 Mar 2018
ADRIEN LE BOUDEC
Abstract. We study lattice embeddings for the class of countable groups Γ
defined by the property that the largest amenable uniformly recurrent subgroup
AΓ is continuous. When AΓ comes from an extremely proximal action and
the envelope of AΓ is co-amenable in Γ, we obtain restrictions on the locally
compact groups G that contain a copy of Γ as a lattice, notably regarding
normal subgroups of G, product decompositions of G, and more generally dense
mappings from G to a product of locally compact groups.
We then focus on a family of finitely generated groups acting on trees within
this class, and show that these embed as cocompact irreducible lattices in some
locally compact wreath products. This provides examples of finitely generated
simple groups quasi-isometric to a wreath product C ≀ F , where C is a finite
group and F a non-abelian free group.
Keywords. Lattices, locally compact groups, strongly proximal actions, Chabauty
space, groups acting on trees, irreducible lattices in wreath products.
1. Introduction
The questions considered in this article fall into the setting of the following
general problem: given a (class of) countable group Γ, study the locally compact
groups G such that Γ embeds as a lattice in G, i.e. such that Γ sits as a discrete
subgroup of G and G/Γ carries a G-invariant probability measure.
Malcev showed that every finitely generated torsion free nilpotent group embeds
as a cocompact lattice in a unique simply connected nilpotent Lie group [Rag72,
Ch. II]. Conversely if G is a locally compact group with a finitely generated nilpotent lattice, then after modding out by a compact normal subgroup, the identity
component G0 is a Lie group of polynomial growth (these have been characterized
in [Gui73, Jen73]) and G/G0 is finitely generated and virtually nilpotent. This
statement is a combination of several works. First if G has a finitely generated
nilpotent lattice Γ, then Γ is necessarily cocompact in G. Since Γ is virtually torsion free this is a classical fact when G is totally disconnected, and the general
case can be deduced from [BQ14, Prop. 3.7] (which uses notably the solution of
Hilbert’s fifth problem [MZ55]). In particular G is compactly generated with polynomial growth, and the statement then follows from the generalization of Gromov’s
polynomial growth theorem for locally compact groups [Los87].
Beyond the nilpotent case, examples of classifications of embeddings of Γ as a
cocompact lattice have been obtained by Dymarz in [Dym15] for several families
Date: March 29, 2018.
This work was carried out when the author was F.R.S.-FNRS Postdoctoral Researcher. Current
affiliation: CNRS, UMPA - ENS Lyon.
1
2
ADRIEN LE BOUDEC
of examples of solvable groups Γ. Although not directly related to our concerns,
we also mention that a certain dual problem was considered by Bader–Caprace–
Gelander–Mozes in [BCGM16] for the class of amenable groups.
Outside the setting of amenable groups, Furman addressed the above problem for
the class of lattices Γ in semi-simple Lie groups in [Fur01], improving rigidity results
of Mostow, Prasad, Margulis (see the references in [Fur01]; see also Furstenberg
[Fur67b]). In [BFS15], Bader–Furman–Sauer considered a large class of countable
groups Γ defined by certain group theoretic conditions, and established, given a
lattice embedding of Γ in G, a general arithmeticity result in the setting where the
connected component of G is non-compact.
In this article we consider the class of groups whose Furstenberg uniformly recurrent subgroup is continuous (see below for definitions). In the first part of the
article we address the question to what extent the properties of the Furstenberg
uniformly recurrent subgroup of a countable group Γ influence the locally compact
groups into which Γ embeds as a lattice. In the second part we focus on a family of
finitely generated groups within this class which embed as cocompact irreducible
lattices in some locally compact wreath products.
The groups under consideration. For a countable group Γ, the Chabauty space
Sub(Γ) of all subgroups of Γ is a compact space, on which Γ acts by conjugation.
A uniformly recurrent subgroup (URS) of Γ is a closed minimal Γ-invariant subset
of Sub(Γ) [GW15]. Glasner and Weiss showed that every minimal action of Γ on
a compact space X gives rise to a URS (see Proposition 3.2), called the stabilizer
URS associated to the action. Conversely every URS arises as the stabilizer URS
of a minimal action (see Matte Bon–Tsankov [MBT17], and Elek [Ele17] in the case
of finitely generated groups).
URS’s have been shown to be related to the study of ideals in reduced group C∗-algebras [KK17, Ken15] and reduced crossed products [Kaw17]. URS’s of several
classes of groups have been studied in [LBMB16]. For certain examples of groups
Γ, rigidity results about minimal actions on compact spaces have been obtained
in [LBMB16] from a complete description of the space URS(Γ). Various results
about homomorphisms between topological full groups of étale groupoids, notably
obstructions involving invariants of the groupoids, have been obtained in [MB18]
via URS’s considerations (more precisely via a complete description of the points
in the Chabauty space of these groups whose orbit does not approach the trivial
subgroup). In the present article we will make use of URS’s as a tool in order to
study lattice embeddings for a class of countable groups that we now define.
A URS is amenable if it consists of amenable subgroups. Every countable group
Γ admits a largest amenable URS AΓ (with respect to a natural partial order
on URS(Γ), see §3.1), which is the stabilizer URS associated to the action of Γ
on its Furstenberg boundary (see §2.2 for definitions). The URS AΓ is called the
Furstenberg URS of Γ. AΓ is either a point, in which case we have AΓ = {Rad(Γ)},
where Rad(Γ) is the amenable radical of Γ, or homeomorphic to a Cantor space.
In this last case we say that AΓ is continuous. We refer to [LBMB16] for a more
detailed discussion.
Let (C) denote the class of groups Γ for which the Furstenberg URS AΓ is
continuous. Equivalently, a group Γ belongs to (C) if and only if Γ admits an
amenable URS whose envelope is not amenable (see below for the definition of the
AMENABLE URS’S AND LATTICE EMBEDDINGS
3
envelope). The class (C) is disjoint from all classes of groups previously mentioned
in the introduction. More precisely, the class (C) is disjoint from the class of
amenable groups, the class of linear groups [BKKO14], and also from other classes
of groups specifically considered in [BFS15], such as groups with non-vanishing
ℓ2 -Betti numbers [BKKO14] or acylindrically hyperbolic groups (see [DGO11, Th.
7.19] and [BKKO14, Th. 1.4]). The class (C) is stable under taking quotient by
an amenable normal subgroup and extension by an amenable group [LBMB16,
Prop. 2.20]. Also if Γ has a normal subgroup that is in (C), then Γ belongs to (C)
[LBMB16, Prop. 2.24]. By a result of Breuillard–Kalantar–Kennedy–Ozawa, the
complement of the class (C) is also stable under extensions (see [LBMB16, Prop.
2.24]).
The study of this class of groups is also motivated by the work of Kalantar–
Kennedy [KK17], who showed the following characterization: a countable group Γ
belongs to (C) if and only if the group Γ/Rad(Γ) has a reduced C ∗ -algebra that is
not simple. For an introduction and the historical developments of the problem of
C ∗ -simplicity, we refer to the survey of de la Harpe [Har07].
Topological boundaries. We will make use of the notion of topological boundary in the sense of Furstenberg. These are compact spaces with a minimal and
strongly proximal group action (see §2.2 for definitions). Many different notions of
boundaries appear in the study of groups and group actions. What is now sometimes called “boundary theory” is particularly well described in the introduction
of [BF14]. We insist that in the present article the term boundary will always refer
to a topological boundary in the above sense. This notion should not be confused
with any of the measured notions of boundaries. In particular, despite the possibly
confusing terminology, the maximal topological boundary, called the Furstenberg
boundary, is not the same notion as the measured notion of Poisson–Furstenberg
boundary.
Lattices and direct products. Special attention will be given to products of
locally compact groups. The study of lattices in product groups is motivated
(among other things) by its connections with the theory of lattices in semi-simple
Lie groups, its rich geometric aspects, as well as the instances of groups with rare
properties appearing in this setting. We refer to the literature (see [Mar91, Wis96,
BM97, BM00b, Rém99, Sha00, BM02, Rat04, MS04, BS06, CR09, CM12, Rad17])
for developments over the last years on the study of lattices in products of locally
compact groups.
Given a countable group Γ with a continuous Furstenberg URS and a group G
containing Γ as a lattice, we are interested in understanding how close the group
G can be from a direct product of two groups, or which properties the group G
can share with a direct product. Of course various notions of closeness can be
considered. The most basic one is to ask whether the group G admits non-trivial
decompositions as a direct product. One step further, one might consider quotient
morphisms from G onto direct products of groups. In Theorems 1.1 and 1.2 below
we more generally consider continuous morphisms with dense image from G to a
direct product of groups G → G1 × G2 . We make no assumption about injectivity
of these maps or injectivity of the composition with the projection to one factor
4
ADRIEN LE BOUDEC
Gi . In particular this setting allows maps of the form G → G/N1 × G/N2 for closed
normal subgroups N1 , N2 such that N1 N2 is dense in G.
First results. A central notion in this article is the one of extremely proximal
action. Minimal and extremely proximal actions naturally arise in geometric group
theory, and are boundaries in the sense of Furstenberg. We refer to §2.3 for definitions and examples. We say that the Furstenberg URS AΓ of a countable group Γ
comes from an extremely proximal action if there exists a compact space Z and a
Γ-action on Z that is minimal and extremely proximal, whose associated stabilizer
URS is equal to AΓ . Note that typically Z will not be the Furstenberg boundary
of Γ. If H is a URS of Γ, the envelope Env(H) of H is by definition the subgroup
of Γ generated by all the subgroups H ∈ H.
Theorem 1.1. Let Γ be a countable group whose Furstenberg URS comes from
a faithful and extremely proximal action, and let G be a locally compact group
containing Γ as a lattice. The following hold:
(a) Assume that Env(AΓ ) is finitely generated and co-amenable in Γ. Then G
cannot be a direct product G = G1 × G2 of two non-compact groups.
(b) Assume that Env(AΓ ) has finite index in Γ and finite abelianization. Then
any continuous morphism with dense image from G to a product of locally
compact groups G → G1 × G2 is such that one factor Gi is compact.
The same conclusions hold for any group commensurable to G up to compact kernels.
This result has applications to the setting of groups acting on trees, see Corollary
1.3. We make several comments about the theorem:
1) We do not assume that Γ is finitely generated, nor that G is compactly generated. For statement (a), the assumption that Env(AΓ ) is finitely generated
admits variations, see Theorem 5.14.
2) Making an assumption on the “size” of the envelope of AΓ with respect to Γ is
natural, in the sense that in general there is no hope to derive any conclusion on
the entire group Γ if this envelope is too small. An extreme illustration of this
is that there are groups Γ whose Furstenberg URS comes from a faithful and
extremely proximal action but is trivial, and these can be lattices in products,
e.g. PSL(2, Z[1/p]) inside PSL(2, R) × PSL(2, Qp ) (see also the discussion right
after Corollary 1.3).
3) Under the assumption that Env(AΓ ) is co-amenable in Γ, the fact that the
Furstenberg URS AΓ comes from a faithful and extremely proximal action
is equivalent to asking that the action of Γ on AΓ is faithful and extremely
proximal; see Remark 3.27. This provides an intrinsic reformulation of the
assumption not appealing to any auxiliary space.
4) For Γ as in the theorem, the assumption in statement (b) that Env(AΓ ) has
finite index in Γ and Env(AΓ ) has finite abelianization is equivalent to Γ being
virtually simple (see Proposition 4.6).
The URS approach to the study of lattice embeddings makes it possible to consider, more generally, subgroups of finite covolume. Recall that a closed subgroup H of a locally
compact group G has finite covolume in G if G/H carries a G-invariant probability measure. Thus a lattice is a discrete subgroup of finite covolume. Before
stating the following result we need some terminology.
Recall the notion of disjointness introduced by Furstenberg in [Fur67a]. If X, Y
are compact G-spaces, X and Y are disjoint if whenever Ω is a compact G-space
and Ω → X and Ω → Y are continuous equivariant surjective maps, the map
Ω → X × Y that makes the natural diagram commute remains surjective (see
§3.3). When X, Y are minimal G-spaces, this is equivalent to asking that the
diagonal G-action on the product X × Y is minimal.
Consider the following property: two non-trivial G-boundaries are never disjoint. A group with this property will be called boundary indivisible. Glasner
characterized minimal compact G-spaces which are disjoint from all G-boundaries
as those carrying a fully supported measure whose orbit closure in the space of
probability measures is minimal [Gla75, Th. 6.2]. The relation between disjointness and boundaries that we consider here is of different spirit, as it deals with
disjointness within the class of G-boundaries, rather than disjointness from this
class. Locally compact groups with a cocompact amenable maximal subgroup are
examples of boundary indivisible groups [Fur73, Prop. 4.4]. On the contrary, many
discrete groups are not boundary indivisible. The relevance of this property in our
setting comes from the fact that, as we will show in Proposition 3.24, a discrete
group Γ as in Theorem 1.1 is boundary indivisible. Actually the only examples of
(non-amenable) boundary indivisible discrete groups that we are aware of fall into
the setting of Proposition 3.24.
Recall that a convex compact G-space is irreducible if it does not contain any
proper closed convex G-invariant subspace. We say that a subgroup L of a topological group G is weakly co-amenable in G if whenever Q is a non-trivial convex
compact G-space in which L fixes a point, Q is not irreducible. This is indeed a
weakening of the notion of co-amenability †, which asks that every convex compact
G-space Q with L-fixed points has G-fixed points [Eym72] (and hence Q is not
irreducible, unless trivial). If G has a subgroup that is both amenable and weakly
co-amenable, then G is amenable; and a normal weakly co-amenable subgroup is co-amenable. However in general weak co-amenability does not imply co-amenability,
even for discrete groups. In §6.3 we exhibit examples of finitely generated groups
such that every subgroup is either amenable or weakly co-amenable, but having
non-amenable subgroups that are not co-amenable.
Finally we say that a subgroup L ≤ G is boundary-minimal if there exists a
non-trivial G-boundary on which L acts minimally. We refer to §5.1 for context
and examples.
Theorem 1.2. Let H be a locally compact group with an amenable URS that comes
from an extremely proximal action, and whose envelope is co-amenable in H. Let
G be a locally compact group containing H as a closed subgroup of finite covolume.
Then G is boundary indivisible, and the following hold:
(a) Whenever G → G1 × G2 is a continuous morphism with dense image, one
factor Gi is amenable.
†and not a relative version of a notion of weak amenability.
(b) If L is a boundary-minimal subgroup of G, and L is uniformly recurrent,
then L is weakly co-amenable in G. In particular every boundary-minimal
normal subgroup of G is co-amenable in G.
Again we make several comments:
1) The group H is allowed to be discrete, so the theorem applies for all groups
Γ as in Theorem 1.1. While (a) will be an intermediate step in the proof of
Theorem 1.1, (b) provides additional information that is rather independent
of the conclusion of Theorem 1.1.
2) Statement (a) implies that whenever N1 , N2 are closed normal subgroups of G
such that N1 N2 is dense in G, at least one Ni must be co-amenable in G.
3) The last sentence in (b) implies that if N is a closed normal subgroup of G
such that N CN is open in G (where CN is the centralizer of N ), then N is
either amenable or co-amenable in G (see Proposition 5.3). We do not know
whether the condition that N CN is open can be removed.
4) Theorem 1.2 does not say anything about amenable normal subgroups of G. It
is worth pointing out that, as illustrated by the examples discussed in Section 6, it can happen that a discrete group Γ satisfying the assumptions of Theorem
1.2 and with trivial amenable radical, sits as a lattice in a group G with noncompact amenable radical.
5) Remark 5.13 below provides counter-examples showing that in statement (b)
the conclusion cannot be strengthened by saying that L is co-amenable in G.
6) For H, G as in the theorem, it can happen that G splits as a direct product of two
non-compact groups, even under the additional assumption that the amenable
URS of H comes from a faithful extremely proximal action (see Example 6.3),
so that “amenable” cannot be replaced by “compact” in statement (a).
We view the above remarks 4)-5)-6) as illustrations of the limitations of the use
of topological boundaries and URS’s to the problem addressed here in the rather
abstract setting of Theorem 1.2.
Group actions on trees is a natural source of extremely proximal actions, and
Theorems 1.1 and 1.2 find applications in this setting. In the following statement
T is a locally finite simplicial tree.
Corollary 1.3. Let Γ ≤ Aut(T ) be a countable group having no proper invariant
subtree and no finite orbit in T ∪ ∂T . Assume that Γξ is non-trivial and amenable
for all ξ ∈ ∂T , and that Γ is virtually simple. If G is a locally compact group containing
Γ as a lattice, then:
(a) any continuous morphism with dense image G → G1 × G2 is such that one
factor Gi is compact. In particular G itself cannot be a direct product of
two non-compact groups.
(b) The conclusion (b) of Theorem 1.2 holds for G.
A group Γ as in Corollary 1.3 is never discrete in Aut(T ). Recall that Burger
and Mozes constructed simple groups Γ acting on two locally finite regular trees
T, T ′ such that the image of Γ in Aut(T ) and Aut(T ′ ) are non-discrete, but Γ acts
freely and cocompactly on T × T ′ , so that Γ is a cocompact lattice in the product
Aut(T ) × Aut(T ′ ) [BM00b]. These examples illustrate the fact that the assumption
in Corollary 1.3 that end-stabilizers are all non-trivial is essential.
Examples of groups to which Corollary 1.3 applies can be found among the family of groups denoted G(F, F ′ ) in [LB16] (see Corollary 6.22). These are examples
of groups with a continuous Furstenberg URS. Here F ′ ≤ Sym(d) is a finite permutation group and F is a simply transitive (or regular) subgroup of F ′ . The group
G(F, F ′ ) is then a finitely generated group acting on a d-regular tree, transitively on
vertices and edges, and with local action at every vertex isomorphic to F ′ . We refer
to §6.2 for a definition. The normal subgroup structure of these groups is highly
sensitive to the permutation groups: there are permutation groups F, F ′ such that
G(F, F ′ ) virtually admits a non-abelian free quotient (Proposition 6.11), and there
are permutation groups F, F ′ such that G(F, F ′ )∗ (the subgroup of index two in
G(F, F ′ ) preserving the bipartition of Td ) is simple [LB16, Cor. 4.14]. This family
of groups and the family of Burger–Mozes lattices in the product of two trees both
contain instances of finitely generated simple groups which embed densely in some
universal group U (F )+ [BM00a, §3.2]. Despite these similarities, Corollary 6.22
shows that any group containing a virtually simple G(F, F ′ ) as a lattice is rather
allergic to any direct product behavior. Compare with Theorem 1.4.
We also mention that other examples of groups to which Corollary 1.3 can be
applied may be found among the family of piecewise prescribed tree automorphism
groups considered in [LB17, Sec. 4].
Irreducible lattices in wreath products. Leaving aside the previous abstract
situation, we then focus on the family of groups G(F, F ′ ) (see §6.2 for definitions).
The above mentioned common properties between the discrete groups G(F, F ′ ) and
certain lattices in the product of two trees provide motivation for studying which
locally compact groups can contain a group G(F, F ′ ) as a lattice. The contribution
of this article to this problem is on the one hand the conclusions given by Corollary
1.3 (see Corollary 6.22), and on the other hand to describe embeddings of these
groups as irreducible lattices in some locally compact wreath products.
If H is a group acting on a set Ω, and A is a subgroup of a group B, the semi-restricted permutational wreath product B ≀^A_Ω H, introduced by Cornulier in [Cor17], is the semi-direct product B^{Ω,A} ⋊ H, where B^{Ω,A} is the set of functions f : Ω → B such that f (x) ∈ A for all but finitely many x ∈ Ω, and H acts on B^{Ω,A} in the usual way. This definition somehow interpolates between the restricted and the unrestricted permutational wreath products, which correspond respectively to A = 1 (in which case we will write B ≀Ω H) and A = B. When B, H are locally compact and A is compact open in B, there is a natural locally compact group topology on B ≀^A_Ω H (see §6.5.1).
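To fix ideas, here are two instances of this construction (an illustration added for the reader's convenience, not taken from the original text; the second anticipates Theorem 1.4 below):
C2 ≀^1_Z Z = C2 ≀Z Z = (⊕Z C2 ) ⋊ Z (the lamplighter group, with A = 1),
Gn,d = Sn ≀^{Sn−1}_{Vd} Aut(Td ) (with B = Sn , A = Sn−1 , Ω = Vd , H = Aut(Td )).
In the first case all the groups involved are discrete; in the second, A = Sn−1 is indeed a compact open subgroup of the finite group B = Sn , as required above.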
We call a lattice Γ in B ≀^A_Ω H irreducible if Γ has a non-discrete projection to H.
The terminology is motivated by the fact that this definition prevents Γ, and more
generally any subgroup commensurable with Γ, from being of the form Γ1 ⋊ Γ2 ,
where Γ1 and Γ2 are lattices in B^{Ω,A} and H.
In the following statement Ck is the cyclic group of order k, Sk the symmetric
group on k elements, Vd the vertex set of a d-regular tree Td , and G(F, F ′ )∗ the
subgroup of index two in G(F, F ′ ) preserving the bipartition of Td .
Theorem 1.4. Let d ≥ 3, let F ≤ F ′ ≤ Sym(d) be permutation groups such that F acts freely, and let n be the index of F in F ′ . Then the following hold:
(a) The group G(F, F ′ ) embeds as an irreducible cocompact lattice in the semi-restricted permutational wreath product Gn,d = Sn ≀^{Sn−1}_{Vd} Aut(Td ).
(b) When F is a transitive permutation group, the finitely generated group
Γn,d = Cn ≀Vd (Cd ∗ Cd ) = (Cn^2 ≀ Fd−1 ) ⋊ Cd
and G(F, F ′ )∗ have isometric Cayley graphs.
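For the reader's convenience we sketch why the second equality in (b) holds (this verification is added here and is not part of the original argument, which is carried out in Section 6; we use the standard identification of Td with the Bass–Serre tree of Cd ∗ Cd , so that Cd ∗ Cd acts on Vd with two orbits). The kernel K of the map Cd ∗ Cd → Cd sending each free factor identically onto Cd meets every conjugate of the free factors trivially, hence is free; it has index d, and the Euler characteristic count χ(K) = d · χ(Cd ∗ Cd ) = d(2/d − 1) = 2 − d shows that K ≅ Fd−1 , while the first free factor provides a splitting Cd ∗ Cd ≅ Fd−1 ⋊ Cd . Moreover K acts freely and transitively on each of the two orbits of vertices, so Vd ≅ K ⊔ K as a K-set, and therefore
Cn ≀Vd (Cd ∗ Cd ) = (⊕Vd Cn ) ⋊ (Fd−1 ⋊ Cd ) ≅ ((Cn × Cn ) ≀ Fd−1 ) ⋊ Cd = (Cn^2 ≀ Fd−1 ) ⋊ Cd .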
We note that no finite index subgroup of Gn,d can split non-trivially as a product,
but the stabilizer of an edge of Td in Gn,d (for the projection action on Td ) is an
open subgroup which does split as a direct product of two non-compact groups.
The embedding of G(F, F ′ ) in Gn,d = Sn^{Vd ,Sn−1} ⋊ Aut(Td ) is not the inclusion in the subgroup 1 ⋊ Aut(Td ), but a twisted embedding associated to the cocycle given by the local action on Td . See Section 6 for details. We also note that the image of G(F, F ′ ) in Gn,d does not intersect the amenable radical Sn^{Vd ,Sn−1} ⋊ 1, but does intersect the subgroup 1 ⋊ Aut(Td ) (along a cocompact lattice of Aut(Td )).
The case n = 2 is particular as the group G2,d is actually a restricted wreath
product, and in this situation the group G(F, F ′ ) is an irreducible cocompact lattice
in C2 ≀Vd Aut(Td ) = (⊕Vd C2 ) ⋊ Aut(Td ).
Applications. Recall that the property of being virtually simple is not invariant
by quasi-isometry. Indeed the lattices constructed by Burger and Mozes in [BM00b]
show that a virtually simple finitely generated group may have the same Cayley
graph as a product of two finitely generated free groups. Theorem 1.4 together
with simplicity results from [LB16] provide another illustration of this fact, namely
finitely generated simple groups having the same Cayley graph as a wreath product.
The wreath product construction is already known to be a source of examples of
finitely generated groups whose algebraic properties are not reflected in their Cayley
graphs. Two wreath products B1 ≀ Γ and B2 ≀ Γ may have isometric or bi-Lipschitz
Cayley-graphs, one being solvable or torsion free, while no finite index subgroup of
the second has these properties [Dyu00]. The phenomenon exhibited in Theorem
1.4 is nonetheless very different, in the sense that it provides finitely generated
groups with isometric Cayley graphs such that one is a wreath product, but the
other is simple (and hence not commensurable with a wreath product).
Recall that for finitely generated groups, being amenable is a quasi-isometry
invariant. By contrast, Theorem 1.4 implies:
Corollary 1.5. Among finitely generated groups, the property of having infinite
amenable radical is not invariant by quasi-isometry.
The examples from Theorem 1.4 simultaneously show that having an infinite
elliptic radical is also not invariant by quasi-isometry. Recall that the elliptic
radical of a discrete group is the largest locally finite normal subgroup.
Recall that by a theorem of Eskin–Fisher–Whyte, any finitely generated group Γ
that is quasi-isometric to a wreath product C ≀Z, where C is a finite group, must act
properly and cocompactly on a Diestel-Leader graph DL(m, m) [EFW07, EFW12,
EFW13]. By the algebraic description of the isometry groups of these graphs given
in [BNW08] (see also [CFK12]), this implies in particular that Γ has a subgroup
of index at most two that is (locally finite)-by-Z. By contrast, Theorem 1.4 shows
that this rigidity fails in the case of C ≀ Fk when k ≥ 2.
Questions. We end this introduction with two questions. Extreme proximality
is used in a crucial way at different stages of the proofs of Theorems 1.1 and 1.2.
These results both fail without the extreme proximality assumption, simply because
then the group itself may very well be a direct product. Putting aside these trivial
counter-examples, we do not know whether serious algebraic restrictions on a locally
compact group may be derived from the existence of a lattice with a continuous
Furstenberg URS. In this direction, we find the following question natural:
Question 1.6. Does there exist Γ with a continuous Furstenberg URS which is a
lattice in a group G = G1 × G2 such that both factors are non-discrete, and Γ has an
injective and dense projection to each factor ? What if we impose moreover that
Γ has trivial amenable radical ?
Theorem 1.4 presents a situation of a locally compact group G with two cocompact lattices Γ1 , Γ2 ≤ G such that the stabilizer URS associated to the Γ1 -action on
∂sp G is {Rad(Γ1 )}, while the stabilizer URS associated to the Γ2 -action on ∂sp G is
continuous. Here ∂sp G stands for the Furstenberg boundary of G; see §2.2. In these
examples the group G splits as G = N ⋊ Q, where N is the amenable radical of G.
The lattice Γ1 preserves this splitting, meaning that we have Γ1 = (N ∩Γ1 )⋊(Q∩Γ1 )
(and hence Γ1 does not act faithfully on ∂sp G), while Γ2 has an injective projection
to Q. This naturally raises the following:
Question 1.7. Let G be a locally compact group with two lattices Γ1 and Γ2
both acting faithfully on X = ∂sp G. Is it possible that the Γ1 -action on X is
topologically free, but the Γ2 -action on X is not topologically free ? Can this
happen with ∂sp G = G/H being a homogeneous G-space ?
Note that by [Fur03, Prop. 7], the condition that Γ1 and Γ2 act faithfully on
∂sp G is equivalent to saying that Γ1 and Γ2 have trivial amenable radical. Recall
that topologically free means that there is a dense subset of points having trivial
stabilizer (equivalently, the stabilizer URS is trivial).
Outline of proofs and organization. The article is organized as follows. In the
next section we introduce terminology and preliminary results about topological
boundaries and extremely proximal actions. In Section 3 we establish the results
about uniformly recurrent subgroups that are used in later sections. In particular
we prove a certain gap property for URS’s coming from extremely proximal actions
(Proposition 3.17). Combined with an observation about compact spaces with
comparable stabilizer URS’s (Proposition 3.9), we deduce that a locally compact
group H with an amenable URS that comes from an extremely proximal action,
and whose envelope is co-amenable in H, is boundary indivisible (Proposition 3.24).
The setting of Section 4 is that of a group admitting a non-topologically free
extremely proximal action. We establish intermediate results, notably concerning
normal subgroups (Proposition 4.6) and commensurated subgroups (Proposition
4.12), and deduce non-embedding results for this class of groups (see Proposition
4.10 and Corollary 4.13).
In Section 5 we use results from Section 3 together with Proposition 5.9 of
Furstenberg and prove Theorem 1.2. We then specify to discrete groups and give
the proof of Theorem 1.1. The proof essentially splits in two steps: the first one is
the application of Theorem 1.2 to obtain amenability of one factor, and the second
consists in proving that under appropriate assumptions the amenable factor is
compact, using results from Section 4.
In Section 6 we consider groups acting on trees, and apply previous results of
the article to this setting. After giving the proof of Corollary 1.3, we focus on the
family of groups with prescribed local action G(F, F ′ ). We study boundaries of
these groups, and use results from Section 3 in order to characterize the discrete
groups within this family which are boundary indivisible (see Theorem 6.9). This
includes those which are virtually simple, but it also contains non-virtually simple
instances. Finally we study lattice embeddings of these groups and give the proof
of Theorem 1.4.
Acknowledgements. I am grateful to Alex Furman for bringing Proposition 5.9 to my attention, and to Uri Bader for enlightening discussions about the proof.
I am also grateful to Pierre-Emmanuel Caprace, Yves Cornulier, Bruno Duchesne,
Nicolás Matte Bon, Nicolas Monod and Pierre Pansu for interesting discussions
and comments related to this work. Finally I am indebted to Alain Valette for a
decisive remark made in Neuchâtel in May 2015, which eventually led to Theorem
1.4.
2. Preliminaries
2.1. Conventions and terminology. The letter G will usually refer to a topological group, while Γ will denote a discrete group. The group of automorphisms of G that are also homeomorphisms will be denoted Aut(G). Whenever G is a locally compact
group, we will always assume that G is second countable.
The notation X will refer to a topological space. The letters X, Y will be reserved for compact spaces, and Z for a compact space equipped with an extremely
proximal group action. All compact spaces are assumed to be Hausdorff.
A space X is a G-space if G admits a continuous action G × X → X . The action
of G on X (or the G-space X ) is minimal if all orbits are dense. The G-space X
is said to be trivial if X is a one-point space.
If X is locally compact, we denote by Prob(X ) the set of all regular Borel probability measures on X . The space of continuous compactly supported functions on X
is denoted CK (X ). Each µ ∈ Prob(X ) defines a linear functional on CK (X ), and we
endow Prob(X ) with the weak*-topology: a net (µi ) converges to µ if µi (f ) → µ(f )
for all f ∈ CK (X ). By the Banach–Alaoglu theorem, Prob(X ) is relatively compact in
CK (X )∗ .
We denote by 2X the set of all closed subsets of X . The sets
O(K; U1 , . . . , Un ) = { C ∈ 2X : C ∩ K = ∅; C ∩ Ui ≠ ∅ for all i },
where K ⊂ X is compact and U1 , . . . , Un ⊂ X are open, form a basis for the Chabauty topology on 2X . Endowed with the Chabauty topology, the space 2X is compact. We will freely identify X with its image in 2X by the natural inclusion x ↦ {x}. Note that when X is a G-space, so is 2X .
In the particular case where X = G is a locally compact group, the space Sub(G)
of closed subgroups of G is closed in 2G . In particular Sub(G) is a compact space,
on which G acts by conjugation. A uniformly recurrent subgroup (URS) of
G is a closed, G-invariant, minimal subset of Sub(G). The set of URS’s of G is
denoted URS(G). By extension we also say that a subgroup H ≤ G is uniformly
recurrent if the closure of the conjugacy class of H in Sub(G) is minimal.
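As an elementary illustration of these notions (added here for concreteness; it is a standard example and not part of the original text), take G = Z. Then Sub(Z) = {nZ : n ≥ 1} ∪ {{0}}, and
nk Z → {0} in the Chabauty topology whenever nk → ∞,
since for any fixed finite subset F ⊂ Z one has nk Z ∩ F = {0} ∩ F for all large k; in fact Sub(Z) is homeomorphic to {1/n : n ≥ 1} ∪ {0} ⊂ R. Since Z is abelian, the conjugation action is trivial and the URS's of Z are exactly the singletons {H}, H ≤ Z.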
2.2. Topological boundaries. If X is a compact G-space, the action of G on X
is strongly proximal if the closure of any G-orbit in Prob(X) contains a Dirac
measure. Strong proximality is stable under taking products (with diagonal action)
and continuous equivariant images (see e.g. [Gla76]).
We say that X is a boundary if X is both minimal and strongly proximal. For
every topological group G, there exists a unique boundary ∂sp G with the universal
property that for any boundary X, there exists a continuous G-equivariant surjection ∂sp G → X [Fur73, Prop. 4.6]. This universal space ∂sp G is referred to as
the Furstenberg boundary of G. It is easy to verify that any amenable normal
subgroup N of G acts trivially on any G-boundary, so that ∂sp G = ∂sp (G/N ).
If G admits a cocompact amenable subgroup, then the Furstenberg boundary
is a homogeneous space ∂sp G = G/H, and the G-spaces of the form G/L with L
containing H are precisely the G-boundaries [Fur73, Prop. 4.4]. The situation for
discrete groups is quite different: as shown in [KK17] and [BKKO14], Furstenberg
boundaries of discrete groups are always non-metrizable (unless trivial).
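By contrast, here is a classical illustration of the homogeneous case above (an example added for the reader's convenience; it is standard and not taken from the original text). For G = PSL(2, R), the image P of the upper triangular matrices is a closed, amenable (indeed solvable) and cocompact subgroup, and
∂sp G = G/P ≅ P1 (R),
the projective line with its usual G-action. Since P is a maximal closed subgroup of G, the only G-boundaries are the one-point space and P1 (R).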
The following is a fundamental property of boundaries (see [Gla76, III.2.3]):
Theorem 2.1. Any convex compact G-space contains a boundary. In fact if Q
is an irreducible convex compact G-space, then the action of G on Q is strongly
proximal, and the closure of extreme points of Q is a G-boundary.
Irreducible means that Q has no proper closed convex G-invariant subspace. In
particular Theorem 2.1 has the following consequence ([Gla76, III.3.1]):
Theorem 2.2. A group G is amenable if and only if all G-boundaries are trivial,
or equivalently ∂sp G is trivial.
2.3. Extremely proximal actions. Let X be a compact G-space. A closed subset C of X is compressible if the closure of the G-orbit of C in the space 2X
contains a singleton {x}. Equivalently, for every neighbourhood U of x, there exists g ∈ G such that g(C) ⊂ U . The action of G on X is extremely proximal if
every closed subset C ⊊ X is compressible. References where extremely proximal
actions were considered include [Gla74, LS96, JR00, FST15, LBMB16].
We will make use of the following result, which is Theorem 2.3 from [Gla74]:
Theorem 2.3. Let X be a compact G-space, and assume X has at least three
points. If the G-action on X is extremely proximal, then it is strongly proximal.
Examples of extremely proximal actions are provided by group actions on trees
or hyperbolic spaces. If G ≤ Aut(T ) acts on T with no proper invariant subtree
and no finite orbit in T ∪ ∂T , then the action of G on ∂T is minimal and extremely
proximal; and if G acts coboundedly on a proper geodesic hyperbolic space X with
no fixed point or fixed pair at infinity, then the G-action on the Gromov boundary
∂X is minimal and extremely proximal.
These two situations are particular cases of the following more general result,
that we believe is well-known. A homeomorphism g of a space X is hyperbolic if
there exist ξ− , ξ+ ∈ X, called the endpoints of g, such that for all neighbourhoods
U− , U+ of ξ− , ξ+ , for n large enough we have gn (X \ U− ) ⊂ U+ and g−n (X \ U+ ) ⊂
U− .
Proposition 2.4. If G acts on a compact space X with hyperbolic elements having
no common endpoints, and such that the set of endpoints of hyperbolic elements of
G is dense in X, then the action is minimal and extremely proximal.
Proof. Let U ⊂ X be a non-empty open invariant subset. By our density assumption, there is g ∈ G hyperbolic whose attracting endpoint ξ+ belongs to U . So for
every x ≠ ξ− , there is n > 0 such that gn (x) ∈ U since U is open, so we deduce that
U contains X \ {ξ− }. But the existence of hyperbolic elements with no common
endpoints ensures that G fixes no point of X, so finally U = X, i.e. the action is
minimal.
Now if C ⊊ X is a closed subset then again there is g ∈ G whose attracting endpoint is outside C, and C is compressible to the repelling endpoint of g.
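As a concrete instance of Proposition 2.4 (an example added for illustration; it is standard and not part of the original text), consider the free group F2 = ⟨a, b⟩ acting on its boundary
∂F2 = { infinite reduced words x1 x2 x3 · · · in the letters a±1 , b±1 }, g · ξ = the reduced word representing the concatenation gξ.
Every g ≠ 1 acts as a hyperbolic homeomorphism whose endpoints are the limits of (g n ) and (g −n ) in ∂F2 ; these endpoints are eventually periodic words and form a dense subset, and a, b have no common endpoint, so Proposition 2.4 applies: the action is minimal and extremely proximal. Equivalently, F2 acts on its Cayley tree T4 with no proper invariant subtree and no finite orbit in T4 ∪ ∂T4 , which is the tree situation described above.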
Recent work of Duchesne and Monod shows that group actions on dendrites is
also a source of extremely proximal actions. Recall that a dendrite X is a compact
metrizable space such that any two points are the extremities of a unique arc.
Duchesne and Monod show that if Γ acts on X with no invariant proper subdendrite, then there is a unique minimal closed invariant subset M ⊆ X and the
Γ-action on M is extremely proximal. See the proof of Theorem 10.1 in [DM16].
Extremely proximal actions also play a prominent role in the context of group actions on the circle. For any minimal action α : Γ → Homeo+ (S1 ), either α(Γ) is conjugate to a group of rotations, or α(Γ) has a finite centralizer CΓ in Homeo+ (S1 )
and the action of Γ on the quotient circle CΓ \S1 is extremely proximal: see Ghys
[Ghy01] and Margulis [Mar00]. We mention however that in all the examples of
countable groups Γ with an action on S1 that is minimal and not topologically free
that we are aware of, the stabilizer URS is either non-amenable, or not known to be
amenable. In particular we do not know any application of Theorem 1.1 to groups
acting on the circle.
In the sequel we will make use of the following easy lemma.
Lemma 2.5. Let G be a topological group, and H a subgroup of G such that there
is some compact subset K of G such that G = KH. Let X be a compact G-space,
and C a closed subset of X that is compressible by G. Then C is compressible by
H. In particular if the G-action on X is extremely proximal, then the H-action is
extremely proximal.
Proof. By assumption there exists x ∈ X and (gi ) such that gi (C) converges to x
in 2X . If gi = ki hi , by compactness of K we may assume that (ki ) converges to some k,
and it follows that hi (C) converges to k−1 x by continuity of G × 2X → 2X .
3. Uniformly recurrent subgroups
3.1. Generalities on uniformly recurrent subgroups. Let G be a locally compact group. For H, K ∈ URS(G), we write H 4 K when there exist H ∈ H and
K ∈ K such that H ≤ K. This is equivalent to the fact that every H ∈ H is
contained in an element of K, and every K ∈ K contains an element of H, and the
relation 4 is an order on URS(G). See e.g. §2.4 in [LBMB16].
For simplicity the URS {N } associated to a closed normal subgroup N of G
will still be denoted N . In particular N 4 H (resp. H 4 N ) means that N is
contained in (resp. contains) all the elements of H. By the trivial URS we mean
the URS corresponding to the trivial subgroup {1}. We warn the reader that in
this terminology the URS corresponding to a non-trivial normal subgroup N is
trivial as a G-space (it is a one-point space), but is not trivial as a URS.
Let X, Y be compact G-spaces. We say that X is a factor of Y , and Y is an
extension of X, if there exists a continuous equivariant map Y → X that is onto.
If π : Y → X is a continuous equivariant map, we say that π is almost 1-1 if the
set of y ∈ Y such that π −1 (π(y)) = {y} is dense in Y . When moreover π is onto
we say that Y is an almost 1-1 extension of X.
We now recall the definition of the stabilizer URS associated to a minimal action
on a compact space. If X is a compact G-space and x ∈ X, we denote by Gx the
stabilizer of x in G.
Definition 3.1. If X is a compact G-space, we denote by X0 ⊂ X the set of points
at which Stab : X → Sub(G), x ↦ Gx , is continuous.
Upper semi-continuity of the map Stab and second countability of G imply that
X0 is a dense subset of X (indeed if (Un ) is a basis of the topology on Sub(G) and
Xn is the set of x ∈ X such that Gx ∩ Un ≠ ∅, which is closed, one verifies that
Stab is continuous on ∩n (∂Xn )c ). Following [GW15], we denote
X̃ = cls {(x, Gx ) : x ∈ X0 } ⊂ X × Sub(G),
and
SG (X) = cls {Gx : x ∈ X0 } ⊂ Sub(G),
where cls stands for the closure in the ambient space. We have the obvious inclusions
X̃ ⊆ X ∗ := cls {(x, Gx ) : x ∈ X}
and
SG (X) ⊆ SG∗ (X) := cls {Gx : x ∈ X} .
We denote by ηX and πX the projections from X × Sub(G) to X and Sub(G)
respectively.
Proposition 3.2 (Prop. 1.2 in [GW15]). If X is a minimal compact G-space, then
ηX : X̃ → X is an almost 1-1 extension, and X̃ and SG (X) are the unique minimal closed G-invariant subsets of respectively X ∗ and SG∗ (X).
Definition 3.3. SG (X) is the stabilizer URS associated to the G-action on X.
The action of G on X is topologically free if SG (X) is trivial, i.e. SG (X) = {{1}}.
Remark 3.4. When G is not assumed second countable, in general X0 is no longer
dense in X. However it is still possible to define the stabilizer URS associated to
a minimal action on a compact space; see the discussion in [MBT17, p1].
In the sequel we will sometimes use the following version of Proposition 3.2.
Proposition 3.5. Let X be a compact G-space, and let H ≤ G be a subgroup acting minimally on X. Then H acts minimally on X̃ and SG (X), and X̃ and SG (X) are the unique minimal closed H-invariant subsets of X ∗ and SG∗ (X).
Proof. Let Y ⊆ X ∗ closed and H-invariant. Since X is a factor of X ∗ and H acts
minimally on X, for every x ∈ X0 there exists L ∈ Sub(G) such that (x, L) ∈ Y .
But for x ∈ X0 , the fact that (x, L) belongs to X ∗ forces L to be equal to Gx
by definition of X0 , and it follows that X̃ ⊆ Y . Moreover H acts minimally on
X̃ since ηX : X̃ → X is an almost 1-1 extension and minimality is preserved by
taking almost 1-1 extensions (if π : X1 → X2 is almost 1-1 and if C ⊆ X1 is a
closed subset such that π(C) = X2 , then C = X1 ). So the statements for X̃ and X ∗ are established, and the same hold for SG (X) and SG∗ (X) since these are factors of X̃ and X ∗ .
3.2. Envelopes. Let G be a locally compact group and H ∈ URS(G).
Definition 3.6. The envelope Env(H) of H is the closed subgroup of G generated
by all the subgroups H ∈ H.
By definition Env(H) is the smallest closed subgroup of G such that H ⊂
Sub(Env(H)). Note that Env(H) is a normal subgroup of G, and is actually the
smallest normal subgroup such that H 4 Env(H).
Let Γ be a discrete group, X a compact Γ-space and X0 the domain of continuity
of the map Stab. It is a classical fact that X0 consists of those x ∈ X such that for
every γ ∈ Γx , there exists U neighbourhood of x that is fixed by γ (see e.g. [Vor12,
Lem. 5.4] for a proof). For x ∈ X, we will denote by Γ0x ≤ Γx the set of elements
fixing a neighbourhood of x, so that x ∈ X0 if and only if Γx = Γ0x .
Lemma 3.7. Let Γ be a countable discrete group, X a compact minimal Γ-space,
n ≥ 1 and γ1 , . . . , γn ∈ Γ. The following are equivalent:
(i) ∩Fix(γi ) has non-empty interior;
(ii) there is x ∈ X such that γi ∈ Γ0x for all i;
(iii) same as in (ii) but with x ∈ X0 ;
(iv) there is H ∈ SΓ (X) such that γi ∈ H for all i.
In particular Env(SΓ (X)) is generated by the elements γ ∈ Γ such that Fix(γ) has
non-empty interior.
Proof. It is clear that (i) and (ii) are equivalent. Also (iii) clearly implies (ii),
and (ii) also implies (iii) by density of X0 in X. Finally (iii) implies (iv) since
Γ0x ∈ SΓ (X) for x ∈ X0 , and (iv) implies (iii) by density of the set of Γ0x , x ∈ X0 ,
in SΓ (X).
3.3. G-spaces with comparable stabilizer URS’s. We recall the notion of
disjointness from [Fur67a]. Two compact G-spaces X, Y are disjoint if whenever
X, Y are factors of a compact G-space Ω via ϕX : Ω → X and ϕY : Ω → Y , then
the map (ϕX , ϕY ) : Ω → X × Y is surjective. When X, Y are minimal G-spaces,
this is equivalent to saying that the product X × Y remains minimal [Fur67a, Lem.
II.1].
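A basic example of failure of disjointness (an observation added for illustration; it is standard and not part of the original text): a minimal compact G-space X with more than one point is never disjoint from itself, since the diagonal
Δ = {(x, x) : x ∈ X} ⊂ X × X
is a non-empty, proper, closed and G-invariant subset, so that X × X is not minimal.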
The following lemma presents a situation which easily implies disjointness:
Lemma 3.8. Let X, Y be minimal compact G-spaces such that there exists y0 ∈ Y
such that Gy0 acts minimally on X. Then X and Y are disjoint.
Proof. This is clear: if W is a closed invariant subset of X × Y , then by minimality
of Y there exists x0 ∈ X such that (x0 , y0 ) ∈ W . Since Gy0 acts minimally on X
we deduce that W contains X × {y0 }, and by minimality of Y it follows that W is
equal to X × Y .
The following proposition will be used notably in Proposition 3.24.
Proposition 3.9. Let X, Y be compact minimal G-spaces, and write H = SG (X)
and K = SG (Y ). Suppose that H 4 K. Then X and Y can be disjoint only if
Env(H) 4 K.
In particular if SG (X) = SG (Y ) and this URS is not a point, then X and Y are
not disjoint.
Proof. Using notation from Proposition 3.2, we have almost 1-1 extensions ηX :
X̃ → X and ηY : Ỹ → Y , and we write η = ηX × ηY : X̃ × Ỹ → X × Y , and
π = πX × πY : X̃ × Ỹ → H × K. The set W ⊆ H × K of pairs (H, K) such that
H ≤ K is non-empty by assumption, is clearly G-invariant, and is easily seen to be
closed. If W is a proper subset of H × K then π −1 (W ) is a proper subset of X̃ × Ỹ
since π is a factor, and it follows that η(π −1 (W )) is a closed G-invariant subset of
X × Y that is proper since η is almost 1-1. This contradicts disjointness of X and
Y . Therefore we have W = H × K. This means that for a fixed K ∈ K we have
H ≤ K for every H ∈ H, and hence Env(H) 4 K.
3.4. Action of a URS on a G-space. In this paragraph G still denotes a locally compact group, and X is a compact G-space. Given H ∈ URS(G), we study the properties of the action of elements of H on the space X.
The proof of the following lemma is an easy verification, and we leave it to the
reader.
Lemma 3.10. If X is a compact G-space, {H ∈ Sub(G) | H fixes a point in X} is
a closed G-invariant subset of Sub(G).
In particular the following definition makes sense.
Definition 3.11. Let X be a compact G-space, and H ∈ URS(G). We say that
H fixes a point in X if for some (all) H ∈ H, there is x ∈ X such that h(x) = x
for all h ∈ H.
Lemma 3.12. Let X be a compact G-space, Y ⊆ X a closed invariant subset of
X, and H ∈ URS(G). If there exists H ∈ H fixing x ∈ X such that cls(Gx) ∩ Y ≠ ∅,
then H fixes a point in Y .
Proof. By assumption there exist (gi ) and y ∈ Y such that gi (x) converges to y. If
K ∈ H is a limit point of (H gi ) (which exists by compactness), then K fixes y by
upper semi-continuity of the stabilizer map.
Lemma 3.12 implies the following:
Lemma 3.13. If X is a compact G-space containing a unique minimal closed G-invariant subset Xmin ⊂ X (e.g. X is proximal), and if H ∈ URS(G) fixes a point
in X, then H fixes a point in Xmin .
Proposition 3.14. Let Z be a compact G-space that is extremely proximal, and
H ∈ URS(G). Then either H fixes a point in Z, or all H ∈ H act minimally on
Z.
Proof. If there exist H ∈ H and a non-empty closed subset C ⊊ Z that is invariant
by H, then we may apply Lemma 3.12 to the space X = 2Z , the subspace Y = Z
and the point x = C, and we deduce that H fixes a point in Z.
Recall that given a compact G-space X, SG∗ (X) stands for the closure in Sub(G) of the set of subgroups Gx , x ∈ X.
Lemma 3.15. Let X be a compact G-space. Assume that K ≤ G is a closed subgroup of G which acts minimally on X and such that there exists H ∈ SG∗ (X) with H ≤ K. Then Env(SG (X)) ≤ K.
Proof. Since K acts minimally on X, the closure of the K-orbit of H in SG∗ (X) contains SG (X) according to Proposition 3.5. Since H ∈ Sub(K) and Sub(K) is
a closed subset of Sub(G), we deduce that SG (X) ⊂ Sub(K), and in particular
Env(SG (X)) ≤ K.
Definition 3.16. Let H ∈ URS(G). We say that H comes from an extremely
proximal action if there exists a compact G-space Z that is minimal and extremely proximal, and such that SG (Z) = H.
It was shown in [LBMB16] that for a discrete group Γ with a non-trivial URS
H coming from an extremely proximal action, any non-trivial K ∈ URS(Γ) must
be “relatively large” with respect to H (see [LBMB16, Th. 3.10] for a precise
statement). Appropriate assumptions on Γ and H further imply that H 4 K for
every non-trivial K ∈ URS(Γ) [LBMB16, Cor. 3.12]. The following proposition
goes in the opposite direction by considering URS’s larger than H.
Proposition 3.17. Let H ∈ URS(G) that comes from an extremely proximal action. Let K ∈ URS(G) such that H 4 K and H ≠ K. Then Env(H) 4 K.
Proof. Let Z be a compact G-space that is minimal and extremely proximal and
such that SG (Z) = H. Fix K ∈ K, and assume that K does not act minimally
on Z. According to Proposition 3.14 this implies that the URS K fixes a point
in Z, i.e. K 4 H. Since moreover H, K satisfy H 4 K by assumption, we deduce
that H = K, which is a contradiction. Therefore K acts minimally on Z. Since
moreover there exists H ∈ H such that H ≤ K, we are in position to apply Lemma
3.15, from which the conclusion follows.
It should be noted that Proposition 3.17 is false without the extreme proximality
assumption, as in general there are plenty of URS’s between H and Env(H).
Lemma 3.18. Let H ∈ URS(G) that comes from an extremely proximal action.
Then Env(H) acts minimally on H.
Proof. Let Z be a compact G-space that is minimal and extremely proximal and
such that SG (Z) = H, and let N = Env(H). Without loss of generality we may
assume that H is not a point, since otherwise there is nothing to prove. This ensures
that N acts non-trivially on Z. By extreme proximality N must act minimally on
Z (see Lemma 4.2), and therefore also on H by Proposition 3.5.
Remark 3.19. The extreme proximality assumption cannot be removed in Lemma
3.18. Indeed it is not true in general that, given H ∈ URS(G), H remains a URS
of Env(H). Indeed, as explained in [GW15], any minimal subshift on two letters
gives rise to a URS H of the lamplighter group G = C2 ≀Z, such that H is contained
in the Chabauty space Sub(L) of the base group L = ⊕C2 . In particular Env(H)
lies inside the abelian group L, and it follows that Env(H) acts trivially on H.
Proposition 3.20. Let H ∈ URS(G) that comes from an extremely proximal action, and assume H is not a point. Then:
(a) The action of G on H gives rise to the same URS, i.e. SG (H) = H.
(b) If moreover H comes from a faithful extremely proximal action, then the
action of G on H is faithful.
Proof. Write K = SG (H). By definition we have H 4 K. Argue by contradiction
and suppose H ≠ K. Then applying Proposition 3.17, we deduce that Env(H) acts
trivially on H. But Env(H) also acts minimally on H by Lemma 3.18, so we deduce
that H must be a point, a contradiction. This shows (a).
For (b), arguing as in the proof of Lemma 3.18 we see that any non-trivial normal
subgroup N of G acts minimally on H. Since H is not a point, we have in particular
that N acts non-trivially on H.
Remark 3.21. Proposition 3.20 implies that, as far as our interest lies inside the
URS associated to a minimal and extremely proximal action (and not the space
Z itself), there is no loss of generality in assuming that (G, Z) is a sub-system of
(G, Sub(G)). See also Remark 3.27.
3.5. Amenable URS’s. Recall that we say that H ∈ URS(G) is amenable if every
H ∈ H is amenable. The following lemma already appeared in [LBMB16, Prop.
2.21].
Lemma 3.22. If H ∈ URS(G) is amenable and X is a G-boundary, then H 4
SG (X).
Proof. Since H is amenable, H must fix a point in the compact G-space Prob(X).
Now X is the unique minimal G-invariant subspace of Prob(X) since X is a G-boundary, so by Lemma 3.13 we have that H fixes a point in X, i.e. H 4 SG (X).
Proposition 3.23. Let X be a compact minimal G-space such that H = SG (X)
is amenable, and let Y be a G-boundary such that X and Y are disjoint. Then
Env(H) acts trivially on Y .
In particular if Env(H) is co-amenable in G, a non-trivial G-boundary is never
disjoint from X.
Proof. The fact that Env(H) must act trivially on Y follows by applying Lemma
3.22 and Proposition 3.9. Since an amenable group has no non-trivial boundary,
the second statement follows.
Proposition 3.23 says that when G admits an amenable URS whose envelope is
co-amenable, a non-trivial G-boundary is never disjoint from X. This conclusion is
not satisfactory for our concerns as it depends on the choice of a space X and not
only on G. Although there is no hope to get a better conclusion in full generality,
the next result, which will play an important role in Section 5, will remove this
dependence under an extreme proximality assumption.
We recall from the introduction that we say that G is boundary indivisible if
two non-trivial G-boundaries are never disjoint.
Proposition 3.24. Assume that G admits an amenable H ∈ URS(G) that comes
from an extremely proximal action, and let X be a non-trivial G-boundary.
(a) Either SG (X) = H, or Env(H) acts trivially on X.
(b) Assume that Env(H) is co-amenable in G. Then SG (X) = H, and G is
boundary indivisible.
Proof. (a). Since H is amenable, we have H 4 SG (X) by Lemma 3.22. Now if we
assume H ≠ SG (X), then according to Proposition 3.17 we have Env(H) 4 SG (X),
which exactly means that Env(H) acts trivially on X.
(b). If SG (X) ≠ H then the action of G on X factors through an action of
G/Env(H) by (a). But by assumption the latter is amenable, so has no non-trivial
boundaries. So it follows that X is trivial, a contradiction. Therefore all non-trivial
G-boundaries have the same stabilizer URS H. Since moreover H cannot be a point
(because otherwise G would be amenable), the fact that G is boundary indivisible
follows from Proposition 3.9.
For a countable group Γ, the Furstenberg URS of Γ is the stabilizer URS
associated to the action of Γ on its Furstenberg boundary. We refer to [LBMB16]
for the proof of the following properties.
Proposition 3.25. Let Γ be a countable group, and AΓ its Furstenberg URS. Then
the following hold:
(a) AΓ is amenable, and H 4 AΓ for every amenable H ∈ URS(Γ).
(b) If X is a Γ-boundary, then AΓ 4 SΓ (X). If moreover there is x ∈ X such
that Γx is amenable, then AΓ = SΓ (X).
(c) AΓ is invariant under Aut(Γ).
Proposition 3.26. Let Γ be a countable group, and let Λ = Env(AΓ ) be the envelope of the Furstenberg URS of Γ. Then Λ acts minimally on AΓ , and AΓ = AΛ .
Proof. The conjugation action of Γ on the normal subgroup Λ = Env(AΓ ) induces
a map Γ → Aut(Λ). Since AΛ is invariant under Aut(Λ) by Proposition 3.25, it is
in particular Γ-invariant. Moreover the action of Γ on AΛ is clearly minimal since
it is already the case for Λ. Therefore AΛ is an amenable URS of Γ, so it follows
that AΛ 4 AΓ since AΓ is larger than any amenable URS of Γ. On the other hand
AΓ is a closed and Λ-invariant subset of Sub(Λ) consisting of amenable subgroups,
so by the domination property applied to AΛ we must have AΓ 4 AΛ . Equality
follows.
Remark 3.27. When Env(AΓ ) is co-amenable in Γ, the fact that AΓ comes from
a faithful and extremely proximal action is equivalent to saying that the Γ-action
on AΓ is faithful and extremely proximal. The direct implication is a consequence
of Proposition 3.20, and the converse follows from Proposition 3.24. This gives us
an intrinsic reformulation of the assumption of Theorem 1.1 inside the Chabauty
space of Γ.
4. Extremely proximal actions
If X is a Hausdorff Γ-space and U ⊂ X , we denote by ΓU the set of elements of
Γ acting trivially on X \ U . We say that the action of Γ on X is micro-supported
if ΓU is non-trivial for every non-empty open set U .
We will need the following easy lemma.
Lemma 4.1. Assume that the action of Γ on X is micro-supported, and let U be
a non-empty open set. Then ΓU is not solvable.
Proof. Assume that Λ is a subgroup of ΓU whose action on U is micro-supported,
and let V be a non-empty open subset of U . By assumption there exists a nontrivial λ1 ∈ ΛV , so that we may find an open set W ⊂ V such that W and λ1 (W )
are disjoint. For λ2 ∈ ΛW , the commutator [λ1 , λ2 ] coincides with λ2−1 on W , and
is therefore non-trivial provided that λ2 is non-trivial. It follows by induction that
if ΓU,n is the n-th term of the derived series of ΓU , then the action of ΓU,n on U is
micro-supported. In particular ΓU,n is never trivial, and ΓU is not solvable.
In this section we will consider the following setting:
(EP) Γ is a discrete group, Z is a compact Γ-space, and the action of Γ on Z is
faithful, minimal and extremely proximal. In order to avoid trivialities, we assume
that Z has at least three points.
Unless specified otherwise, in the remaining of this section Γ and Z will be
assumed to satisfy (EP). Our goal is to derive various properties on the group Γ
that will be used in later sections.
Lemma 4.2. Let N ≤ Homeo(Z) be a non-trivial subgroup that is normalized by
Γ. Then N acts minimally and does not fix any probability measure on Z.
Proof. Assume there exists a proper closed N -invariant subset C ⊊ Z. Since C is compressible and N is normalized by Γ, we see that N has a fixed point in Z. Now the set of N -fixed points is Γ-invariant, so it has to be the entire Z by minimality, and N is trivial, a contradiction. The same argument shows the absence of N -invariant probability
measure on Z, since an extremely proximal action is also strongly proximal by
Theorem 2.3.
In all this section the terminology topologically free (see Definition 3.3) has to
be understood with Γ viewed as a discrete group. Therefore, saying that the action is not topologically free means that there exists γ ≠ 1 which acts trivially on a non-empty
open subset of Z.
Lemma 4.3. If the action of Γ on Z is not topologically free, then it is micro-supported.
Proof. Let U be a non-empty open subset of Z. Let γ be a non-trivial element
such that there is a non-empty open set V on which γ acts trivially, and let g ∈ Γ
such that g(X \ V ) ⊂ U . Then the non-trivial element gγg−1 acts trivially outside
U , so ΓU is non-trivial.
Definition 4.4. Let Γ0 be the subgroup of Γ generated by the elements γ ∈ Γ
such that Fix(γ) has non-empty interior.
Remark 4.5. When Γ is a countable group, Γ0 is also equal to the envelope of the
URS SΓ (Z) by Lemma 3.7.
Recall that the monolith Mon(Γ) is the intersection of all non-trivial normal
subgroups of Γ. We say Γ is monolithic if Mon(Γ) is non-trivial.
Proposition 4.6. Assume that the action of Γ on Z is not topologically free. Then
the following hold:
(a) The commutators [γ1 , γ2 ], where Fix(γ1 ) ∩ Fix(γ2 ) has non-empty interior,
generate [Γ0 , Γ0 ].
(b) Γ is monolithic, and one has Mon(Γ) = [Γ0 , Γ0 ].
(c) Any non-trivial normal subgroup of Γ has trivial centralizer.
(d) If the action of [Γ0 , Γ0 ] on Z is extremely proximal, then [Γ0 , Γ0 ] is a simple
group.
(e) Γ is virtually simple if and only if Γ0 has finite index in Γ and finite abelianization.
Proof. (a) Denote by N the subgroup generated by the set of [γ1 , γ2 ], where γ1 , γ2
act trivially on a common open set. We show that for every g, h fixing non-empty
open sets U, V , the commutator [g, h] belongs to N . Since Γ0 is generated by all
these elements g, h, this will show that Γ0 /N is abelian. Hence [Γ0 , Γ0 ] ≤ N , and
the other inclusion is clear.
First note that N is not trivial by Lemmas 4.1 and 4.3. Therefore N acts
minimally on Z according to Lemma 4.2, and we may find s ∈ N such that the
open set W = U ∩ s(V ) is non-empty. Since g and hs fix W by construction, we
have [g, hs ] ∈ N . But since s ∈ N , we deduce that [g, h] = [h, s]g [g, hs ][s, h] is a
product of three elements of N , and hence belongs to N , as desired.
(b) We shall show that any non-trivial normal subgroup N contains [Γ0 , Γ0 ].
Since [Γ0 , Γ0 ] is itself a non-trivial normal subgroup, this will prove that it is the
monolith of Γ. By a classical commutator manipulation (see e.g. Lemma 4.1 from
[Nek13]), there exists an open set U such that N contains the derived subgroup
of ΓU . Now let γ1 , γ2 fixing an open set V . If γ is such that γ(Z \ V ) ⊂ U , then
γ1γ , γ2γ are supported inside U , so that [γ1γ , γ2γ ] = [γ1 , γ2 ]γ is contained in N . Since
N is normal, [γ1 , γ2 ] ∈ N . Now all these elements generate [Γ0 , Γ0 ] by (a), hence
the conclusion.
(c) If N is a normal subgroup of Γ, then so is CΓ (N ). Therefore they cannot be
both non-trivial, because otherwise the intersection would be abelian and would
contain [Γ0 , Γ0 ] by the previous paragraph, a contradiction.
(d) If the action of [Γ0 , Γ0 ] on Z is extremely proximal, then according to (b)
the monolith N of [Γ0 , Γ0 ] is non-trivial. Since N is characteristic in [Γ0 , Γ0 ], N
is normal in Γ, and hence contains [Γ0 , Γ0 ] by (b). So N = [Γ0 , Γ0 ] and [Γ0 , Γ0 ] is
simple.
(e) For Γ to be virtually simple it is clearly necessary that the normal subgroup
[Γ0 , Γ0 ] has finite index in Γ. Conversely, if this condition holds then the action of
[Γ0 , Γ0 ] on Z is extremely proximal (Lemma 2.5), and [Γ0 , Γ0 ] is simple by (d).
Definition 4.7. Let X be a topological space, and let Γ be a group acting on X .
A non-empty open set Ω ⊂ X is wandering for Γ if the translates γ(Ω), γ ∈ Γ,
are pairwise disjoint. We say that Ω is wandering for γ if it is wandering for ⟨γ⟩.
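For illustration (an observation with a sketch of proof added here; it is standard and not part of the original text): every hyperbolic homeomorphism g of Z in the sense of §2.3 admits wandering open sets. Indeed, let ξ± be the endpoints of g and let x ∉ {ξ+ , ξ− }; note that x is not periodic, since a fixed point of the hyperbolic element g k (k ≠ 0) can only be ξ+ or ξ− . Choose disjoint neighbourhoods U± of ξ± and an open set Ω0 containing x whose closure avoids U+ ∪ U− . By hyperbolicity there is N ≥ 1 with
g n (Ω0 ) ⊂ U+ and g −n (Ω0 ) ⊂ U− for all n ≥ N,
and after shrinking Ω0 to a smaller neighbourhood Ω of x such that g n (Ω) ∩ Ω = ∅ for the finitely many n with 0 < |n| < N (possible since g n (x) ≠ x), the translates g n (Ω), n ∈ Z, are pairwise disjoint, i.e. Ω is wandering for ⟨g⟩. Compare with Proposition 4.8 below, where a ping-pong argument produces an open set that is wandering for a whole free subgroup.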
Proposition 4.8. Let Γ and Z be as in (EP). Then there exist an open set Ω and a
non-abelian free subgroup F2 = ∆ ≤ Γ such that Ω is wandering for ∆.
Proof. Following Glasner [Gla74], we consider pairwise disjoint non-empty open
sets U− , U+ , V− , V+ , and elements a, b ∈ Γ such that a(Z\U− ) ⊂ U+ and b(Z\V− ) ⊂
V+ . Let W = U− ∪ U+ ∪ V− ∪ V+ . It follows from a ping-pong argument that any
non-trivial reduced word in the letters a, b sends the complement of W inside W ,
so that the subgroup ∆ generated by a, b is free [Gla74, Th. 3.4].
Upon reducing W if necessary, we may find an open set Ω such that Ω ∩ W = ∅
and a(Ω) ⊂ U+ , a−1 (Ω) ⊂ U− , b(Ω) ⊂ V+ and b−1 (Ω) ⊂ V− . Induction on the
word length shows that if the left-most letter of γ ∈ ∆ is respectively a, a−1 , b, b−1 ,
then γ(Ω) lies respectively inside U+ , U− , V+ , V− . In particular Ω ∩ γ(Ω) is empty
since Ω is disjoint from W , so Ω is wandering for ∆.
Proposition 4.9. Retain notations (EP). Then the wreath product ΓU ≀ F2 embeds
into Γ for every open subset U ⊊ Z that is not dense.
Proof. Let Ω and ∆ be as in Proposition 4.8, and let Λ be the subgroup of Γ generated
by ΓΩ and ∆. Since Ω is wandering for ∆, all the conjugates γΓΩ γ −1 pairwise
commute, and it follows that Λ is isomorphic to ΓΩ ≀ F2 . Now if U is as in the
statement, by extreme proximality the group ΓU is isomorphic to a subgroup of
ΓΩ , hence the conclusion.
The argument in the following proof is borrowed from [Cap16].
Proposition 4.10. In the setting (EP), if the action of Γ on Z is not topologically
free, then Γ cannot have any faithful linear representation.
Proof. Let U be an open subset of Z. By Lemmas 4.1 and 4.3, we may find a
non-abelian finitely generated subgroup B inside ΓU . Now if we choose U small
enough, it follows from Proposition 4.9 that the finitely generated group Λ = B ≀ F2
is isomorphic to a subgroup of Γ. Since Λ is not residually finite [Gru57], it admits
no faithful linear representation by Malcev’s theorem [Mal40], and a fortiori the
same is true for Γ.
Recall that a subgroup Λ of a group Γ is commensurated if all conjugates of
Λ are commensurable, where two subgroups are commensurable if the intersection
has finite index in both.
The beginning of the argument in the proof of the following proposition already
appeared in [LBW16]. The idea is to extend classical techniques for normal subgroups to certain commensurated subgroups.
Proposition 4.11. Retain notation from (EP), and assume that the action of Γ
on Z is not topologically free. If Λ is a commensurated subgroup of Γ such that
there exists an element of Λ admitting a wandering open set, then Λ contains the
monolith Mon(Γ).
Proof. Let λ ∈ Λ admitting a wandering open set Ω. We shall first prove that
[ΓΩ , ΓΩ ] is contained in Λ. Let g, h ∈ ΓΩ , and let also n ≥ 1. Since Ω is wandering
we have Ω ∩ λn (Ω) = ∅. It follows that the commutator [g, λn ] is trivial outside
Ω ∪ λn (Ω), and coincides with g on Ω and with λn g−1 λ−n on λn (Ω). Therefore
its commutator with h is trivial outside Ω, and coincides with [g, h] on Ω. But
since g, h ∈ ΓΩ , the elements [[g, λn ], h] and [g, h] actually coincide everywhere, i.e.
[[g, λn ], h] = [g, h].
Now since Λ is commensurated in Γ, there exists n0 ≥ 1 such that [[g, λn0 ], h]
belongs to Λ. Applying the previous argument with n = n0 , we deduce that [g, h]
belongs to Λ.
In order to prove the statement, it is enough to prove that [ΓC , ΓC ] is contained
in Λ for every closed subset C ⊊ Z according to Proposition 4.6. So let C be a proper
closed subset of Z. By minimality and extreme proximality, there is γ ∈ Γ such
that γ(C) ⊂ Ω. Fix such a γ, and choose some integer n1 ≥ 1 such that γ −1 λn1 γ
belongs to Λ. Set λ′ = γ −1 λn1 γ and Ω′ = γ −1 (Ω). This Ω′ is wandering for λ′ and
λ′ ∈ Λ, so [ΓΩ′ , ΓΩ′ ] is contained in Λ by the first paragraph. Since C ⊂ Ω′ , we
have ΓC ≤ ΓΩ′ , and the proof is complete.
Proposition 4.12. Assume that Z is a compact Γ-space and the action of Γ on
Z is faithful, minimal, extremely proximal and not topologically free. Then there
exists a free subgroup F2 ≤ Γ such that for every commensurated subgroup Λ ≤ Γ
not containing the monolith Mon(Γ), we have F2 ∩ Λ = 1.
Proof. Let F2 be a free subgroup of Γ as in the conclusion of Proposition 4.8. If
Λ ≤ Γ is a commensurated subgroup such that F2 ∩ Λ ≠ 1, then in particular Λ
contains an element admitting a wandering open set. So by Proposition 4.11 we
have Mon(Γ) ≤ Λ. This shows that every commensurated subgroup not containing
Mon(Γ) intersects F2 trivially.
Corollary 4.13. Assume that Z is a compact Γ-space and the action of Γ on Z
is faithful, minimal, extremely proximal and not topologically free. If G is a locally
compact amenable group whose connected component G0 is a Lie group, then there
exists no injective homomorphism Γ → G.
Proof. Argue by contradiction and assume Γ embeds in G. Let U be an open
subgroup of G containing G0 as a cocompact subgroup. Such a U is commensurated
in G, so the subgroup Γ ∩ U is commensurated in Γ. If there exists U such that
Γ ∩ U does not contain Mon(Γ), then according to Proposition 4.12 we may find a
non-abelian free subgroup F2 ≤ Γ such that F2 ∩ U = 1. In particular G contains
the non-amenable group F2 as a discrete subgroup, which contradicts amenability
of G. Therefore Mon(Γ) is contained in U for every choice of U . Since compact
open subgroups form a basis at 1 in G/G0 by van Dantzig’s theorem, it follows that
Mon(Γ) actually lies inside G0 . Now since G0 is a connected Lie group, the group
Aut(G0 ) is linear, so the map G → Aut(G0 ) induced by the conjugation action of
G on G0 is not injective in restriction to Γ by Proposition 4.10. Therefore this map
must vanish on Mon(Γ), which means that Mon(Γ) actually lies inside the center
of G0 . In particular Mon(Γ) is abelian, which contradicts Proposition 4.6.
5. The proofs of Theorems 1.1 and 1.2
5.1. Boundary-minimal subgroups. In this paragraph we consider the following property:
Definition 5.1. Let G be a topological group, and L a closed subgroup of G.
We say that L is boundary-minimal if there exists a non-trivial G-boundary on
which L acts minimally.
It should be noted that being boundary-minimal does not prevent L from being
amenable. For instance the action of Thompson’s group T on the circle S1 is
a boundary action, and the abelian subgroup of T consisting of rotations acts
minimally on S1 . Other examples may be found among the groups acting on trees
considered in §6.2, where the stabilizer of a vertex is an amenable subgroup acting
minimally on the ends of the tree.
In the sequel we will mainly focus on the case when L is normal in G, or more
generally when L belongs to a URS (see Proposition 5.8). By contrast with the
previous examples, a normal boundary-minimal subgroup is never amenable, as
a normal amenable subgroup of G acts trivially on any G-boundary. Recall that
Furman showed [Fur03, Prop. 7] (see also Caprace–Monod [CM14, Prop. 3]) that
if N is a non-amenable normal subgroup of a locally compact group G, there always
exists a G-boundary on which N acts non-trivially. This naturally raises the question whether any non-amenable normal subgroup of G is boundary-minimal. We
do not know the answer to this question. While the case of discrete groups is easily
settled (see below), the situation for non-discrete groups seems to be more delicate.
We recall the following result of Furstenberg (see [Gla76, II.4.3]):
Theorem 5.2. Let G be a topological group, and denote by ϕ : G → Homeo(∂sp G)
the action of G on ∂sp G. Then there exists a homomorphism ψ : Aut(G) →
Homeo(∂sp G) such that ψ ◦ Inn = ϕ, where Inn(G) ≤ Aut(G) is the group of inner
automorphisms of G.
In particular when N is a normal subgroup of a group G, the map G → Aut(N )
coming from the conjugation action of G on N induces an action of G on ∂sp N ,
which factors through G/CG (N ).
Note that this result readily answers the above question for discrete groups,
by showing that the boundary-minimal normal subgroups of G are exactly the
non-amenable normal subgroups of G. Indeed if N is non-amenable then ∂sp N
is a non-trivial space, N acts minimally on ∂sp N , and ∂sp N is a G-boundary by
Theorem 5.2. However the argument does not carry over for arbitrary groups, as
in general the G-action on ∂sp N is not continuous.
Proposition 5.3. Let G be a locally compact group, and N a closed normal non-amenable subgroup. Assume that at least one of the following holds true:
(a) N · CG (N ) is open in G (e.g. when N is a direct factor of G);
(b) N is cocompact in G;
(c) there exists H ∈ URS(N ) that is not a point, invariant by Aut(N ), and that
is an N -boundary (e.g. if N has a closed cocompact amenable subgroup).
Then N is boundary-minimal in G.
Proof. Condition (a) ensures that the image of N in G/CG (N ) is open. Therefore
the G-action on ∂sp N given by Theorem 5.2 is continuous because the N -action
is, and we deduce that ∂sp N is a non-trivial G-boundary. If (b) holds, then N acts
minimally on any G-boundary [Gla76, II.3.2]. Finally the verification of case (c)
is straightforward since G → Aut(N ) is continuous [HR79, Th. 26.7] and Aut(N )
acts continuously on Sub(N ).
5.2. Weakly co-amenable subgroups. In this paragraph we consider the following weakening of the notion of co-amenability.
Definition 5.4. Let G be a topological group, and H a subgroup of G. We say that
H is weakly co-amenable in G if whenever Q is a non-trivial convex compact
G-space in which H fixes a point, Q is not irreducible.
The following properties readily follow from the definition.
Proposition 5.5. Let K ≤ H ≤ G be subgroups of G.
(i) If H ≤ G is co-amenable then H is weakly co-amenable.
(ii) For a normal subgroup N ⊳ G, weak co-amenability is equivalent to co-amenability.
(iii) If H ≤ G is amenable and weakly co-amenable in G, then G is amenable.
(iv) If ϕ : G → G′ is continuous with dense image and H ≤ G is weakly co-amenable, then ϕ(H) is weakly co-amenable in G′ .
(v) If K is weakly co-amenable in G, then H is weakly co-amenable in G.
(vi) If K is co-amenable in H and H is weakly co-amenable in G, then K is
weakly co-amenable in G.
Proof. (i) If Q is non-trivial convex compact G-space with H-fixed points, then
there is a G-fixed point by co-amenability of H in G, so Q is not irreducible.
(ii) If N ⊳ G is not co-amenable, there is a convex Q such that Fix(N ) is nonempty but Fix(G) is empty. Since N is normal Fix(N ) is G-invariant, so that by
Zorn’s lemma Fix(N ) contains an irreducible convex G-space, which is non-trivial
since Fix(G) is empty. This shows N is not weakly co-amenable.
The proofs of (iii), (iv), (v) and (vi) are similar verifications, and we leave them
to the reader.
Remark 5.6. As for co-amenability, it is natural to wonder whether weak co-amenability of K in G implies weak co-amenability of K in H. In view of (ii),
the same counter-examples given in [MP03] show that the answer is negative in
general.
By the correspondence between irreducible convex compact G-spaces and G-boundaries, weak co-amenability admits the following characterization:
Proposition 5.7. A subgroup H ≤ G is weakly co-amenable in G if and only if
for every non-trivial G-boundary X, there is no probability measure on X that is
fixed by H.
Proof. Follows from Theorem 2.1.
The following shows how weak co-amenability naturally appears for boundary
indivisible groups (see also Proposition 6.17).
Proposition 5.8. Let G be a boundary indivisible locally compact group, and L a
closed subgroup of G that is boundary-minimal and uniformly recurrent. Then L is
weakly co-amenable in G.
Proof. Write H for the closure of L^G in Sub(G), which is a URS by assumption.
Let X be a non-trivial G-boundary on which L acts minimally, and let Y be a
G-boundary on which L fixes a probability measure. We have to show that Y is
trivial. Since H fixes a point in Prob(Y ) and the G-action on Prob(Y ) is strongly
proximal by Theorem 2.1, H fixes a point in Y by Lemma 3.13. So there exists
y ∈ Y such that L ≤ Gy , and it follows that Gy acts minimally on X. Therefore
by Lemma 3.8 X and Y are disjoint, and since X is non-trivial and G is boundary
indivisible, this is possible only if Y is trivial.
5.3. The proof of Theorem 1.2. In this paragraph we shall give the proof of
Theorem 1.2 from the introduction. We will make use of the following result.
Proposition 5.9 (Furstenberg). Let G be a locally compact group, H ≤ G a closed
subgroup of finite covolume, and X a G-boundary. Then X is an H-boundary.
For completeness we repeat the argument from [Fur81, Prop. 4.4].
Proof. Write Q = Prob(X), and consider a closed H-invariant subspace Q′ ⊆ Q.
We have to show that X ⊆ Q′ . The set
X = {(gH, µ) : µ ∈ g(Q′ )} ⊆ G/H × Q
is a well-defined, closed, G-invariant subspace of G/H × Q. Fix a G-invariant
probability measure mG/H on G/H, and consider
Y = {ν ∈ Prob(X ) : p∗G/H (ν) = mG/H },
where pG/H is the projection from G/H × Q onto the first factor, and p∗G/H is the induced push-forward operator. Then Y is a closed (and hence compact) G-invariant
subspace of Prob(X ), and p∗Q : Y → Prob(Q) is continuous. So p∗Q (Y ) is closed in
Prob(Q), and by strong proximality of the G-action on Q (Theorem 2.1), p∗Q (Y )
must intersect Q. Now X being the unique minimal closed G-invariant subspace
of Q, one has X ⊆ p∗Q (Y ). For every x ∈ X, we therefore have νx ∈ Prob(X ) such
that p∗G/H (νx ) = mG/H and p∗Q (νx ) = δx . This implies mG/H {gH : x ∈ g(Q′ )} = 1
for every x, and it easily follows that X ⊆ Q′ .
Remark 5.10. In the case when H is cocompact in G, strong proximality of the
action of H on X also follows from [Gla76, II.3.1] applied to the action on Prob(X);
and minimality follows [Gla76, IV.5.1] from disjointness of the G-spaces G/H and
X [Gla76, III.6.1].
Theorem 5.11. Assume that H admits an amenable URS H that comes from an
extremely proximal action, and such that Env(H) is co-amenable in H. Let G be a
locally compact group containing H as a closed subgroup of finite covolume. Then:
(a) G is boundary indivisible.
(b) More generally if L is a locally compact group such that there is a sequence
of topological group homomorphisms G = G0 → G1 → . . . → Gn = L such
that either Gi → Gi+1 has dense image, or Gi → Gi+1 is an embedding
of Gi as a closed subgroup of finite covolume in Gi+1 ; then L is boundary
indivisible.
In particular whenever G maps continuously and with dense image to a product
G1 × G2 , one factor Gi must be amenable.
Proof. Since H is amenable, H comes from an extremely proximal action, and
Env(H) is co-amenable in H, the group H is boundary indivisible by Proposition 3.24. Now by Proposition 5.9, the property of being boundary indivisible is
inherited from closed subgroups of finite covolume. Indeed if X, Y are disjoint G-boundaries, i.e. X × Y is a G-boundary, then X × Y is also a boundary for H by
Proposition 5.9, hence X or Y must be trivial since H is boundary indivisible.
This shows (a). Since boundary indivisibility passes to dense continuous images,
and is inherited from closed subgroups of finite covolume, (b) follows from (a).
Finally if G1 , G2 are as in the last statement and Xi = ∂sp Gi , then X1 × X2 is
a boundary for G1 × G2 , which is boundary indivisible by the previous paragraph.
So one factor Xi must be trivial, which exactly means that Gi is amenable by
Theorem 2.2.
Remark 5.12. In the proof of Theorem 5.11 we obtain that G is boundary indivisible from the same property for H, which is itself deduced from Proposition 3.24
(which in turn relies notably on Proposition 3.9). We note that the order in which
the argument is developed seems to matter, in the sense that the arguments applied
to H do not seem to be applicable directly to the group G. Indeed we do not know
whether a group G as in Theorem 5.11 falls into the setting of Proposition 3.9,
i.e. we do not know whether all non-trivial G-boundaries have the same stabilizer
URS. We actually believe this might be false in general.
We note at this point that the proof of Theorem 1.2 from the introduction is
now complete. Indeed the fact that a group G as in Theorem 1.2 is boundary
indivisible, as well as statement (a), is Theorem 5.11; and statement (b) follows
from Proposition 5.8.
The following remark explains a comment from the introduction.
Remark 5.13. Theorem 1.4 provides instances of countable groups Γ being lattices in a group G of the form G = N ⋊ Aut(Td ), such that all the assumptions of
Theorem 1.2 are satisfied (see Section 6). If M is a cocompact subgroup of Aut(Td )
acting minimally on ∂Td , then L = N ⋊ M is a cocompact (hence uniformly recurrent) subgroup of G, and L is boundary-minimal in G since ∂Td is a G-boundary.
However when M is non-unimodular, then M is not co-amenable in Aut(Td ), and
L is not co-amenable in G. This shows that the conclusion of statement (b) in
Theorem 1.2 that L is weakly co-amenable in G cannot be strengthened by saying
that L is co-amenable in G.
5.4. The proof of Theorem 1.1. Recall that if G is a topological group, the
quasi-center QZ(G) of G is the subgroup of G consisting of the elements g ∈ G
having an open centralizer. Note that QZ(G) contains the elements having a discrete conjugacy class, so in particular it contains all discrete normal subgroups.
Recall also that the elliptic radical of G is the largest normal subgroup of G in
which every compact subset generates a relatively compact subgroup. It is a closed
characteristic subgroup of G.
We say that two groups G1 , G2 are commensurable up to compact kernels
if there exist Ki ≤ Hi ≤ Gi such that Hi is open and of finite index in Gi , Ki is a
compact normal subgroup of Hi , and H1 /K1 and H2 /K2 are isomorphic.
The following is slightly more complete than Theorem 1.1 from the introduction.
Theorem 5.14. Let Γ be a countable group whose Furstenberg URS AΓ comes from
a faithful and extremely proximal action, and assume that Env(AΓ ) is co-amenable
in Γ. Let G be a locally compact group containing Γ as a lattice, and H a group
commensurable with G up to compact kernels. Consider the following properties:
(a) Env(AΓ ) is finitely generated;
(b) Env(AΓ ) has finite index in Γ, and Γ admits a finitely generated subgroup
with finite centralizer;
(c) Env(AΓ ) has finite index in Γ and Env(AΓ ) has finite abelianization.
Then:
• (a), (b) both imply that H cannot be a product of two non-compact groups.
• (c) implies that any continuous morphism with dense image from H to a
product of locally compact groups H → G1 × G2 is such that one factor Gi
is compact.
Proof. For simplicity we give the proof for H = G. The general case follows the
same lines.
Of course we may assume that Env(AΓ ) is non-trivial, since otherwise there is
nothing to prove. According to Proposition 4.6 we have in particular that Γ is
monolithic, and Mon(Γ) = [Env(AΓ ), Env(AΓ )]. For simplicity in all the proof we
write E = Env(AΓ ) and M = Mon(Γ) = [E, E].
Assume that ϕ : G → G1 × G2 is continuous with dense image, and denote by
pi the projection G1 × G2 → Gi , i = 1, 2. We will show that one factor must be
compact. Upon modding out by the maximal compact normal subgroup of the
identity component G0 , which intersects Γ trivially since Γ has no non-trivial finite
normal subgroup (Proposition 4.6), we may also assume that G0 has no non-trivial
compact normal subgroup. This implies in particular that G0 is a connected Lie
group [MZ55].
By the assumption that E is co-amenable in Γ, we can apply Theorem 5.11,
which says that one factor, say G2 , must be amenable. We then apply Corollary
4.13, which tells us that the map p2 ◦ ϕ is not injective in restriction to Γ. By
definition of M we deduce that M ≤ ϕ−1 (G1 × 1).
Assume now that (c) holds. Then M , being of finite index in Γ, is a lattice in G,
and is contained in the closed normal subgroup ϕ−1 (G1 × 1). Therefore we deduce
that ϕ−1 (G1 × 1) is cocompact in G, and that p2 ◦ ϕ(G) is a compact subgroup of
G2 . Since p2 ◦ ϕ(G) is also dense in G2 , we have that G2 is compact.
We now have to deal with (a), (b), in which case ϕ is the identity and G =
G1 × G2 . Without loss of generality, we may assume that the projections pi (Γ) are
dense. The proofs of the two cases will share a common mechanism, given by the
following easy fact:
Lemma 5.15. If there exists a subgroup L ≤ G whose centralizer CG (L) contains
G2 , CG (L) is open in G, and CG (L) ∩ Γ is finite, then G2 is compact.
Indeed, since Γ must intersect an open subgroup O ≤ G along a lattice of O, it
follows that CG (L) is compact, and a fortiori so is G2 .
We start with case (a). Consider H1 = p1 (E), which is normal in G1 by density
of p1 (Γ). Note that H1 is compactly generated in view of the assumption that E is
finitely generated. Since M = [E, E], the group H1 /M is abelian, and therefore of
the form Z^n × R^m × C for some compact group C. It follows that the group Q1 =
H1 /H10 admits a discrete cocompact normal subgroup ∆, which is an extension of
M by a free abelian group. Being characteristically simple and non-amenable, the
group M has trivial elliptic radical, so the group ∆ also has trivial elliptic radical.
Now since Q1 is compactly generated, there is a compact open normal subgroup K
of Q1 such that K ⋊ ∆ has finite index in Q1 (see e.g. [BCGM16, Lem. 4.4]), so we
deduce that Q1 has a compact open elliptic radical. Since any connected group has
compact elliptic radical [MZ55], we deduce that H1 has a compact elliptic radical
R, and H1 /R is discrete-by-connected. The compact group R is also normal in G,
and therefore we can mod out by R and assume that R is trivial, so that H10 is
open in H1 . Since H10 centralizes M , any γ ∈ E such that p1 (γ) belongs to H10
centralizes M , and therefore is trivial by Proposition 4.6. Therefore H10 is open
in H1 and intersects the dense subgroup p1 (E) trivially, so it follows that H10 is
trivial, and p1 (E) is a discrete subgroup of G.
Observe that p1 (E) is centralized by G2 and normalized by G1 , and hence is
normal in G. Being a discrete normal subgroup of G, p1 (E) therefore lies in the
quasi-center QZ(G). Since p1 (E) is finitely generated, the centralizer of p1 (E) in G
is actually open in G. Moreover the subgroup Γ ∩ CG (p1 (E)) is normal in Γ since
CG (p1 (E)) is normal in G, but clearly does not contain M , and hence is trivial
by Proposition 4.6. Therefore we can apply Lemma 5.15 with L = p1 (E), and we
obtain the conclusion.
We now deal with (b). Let Z be a minimal compact Γ-space on which the
Γ-action is faithful and extremely proximal and such that SΓ (Z) = AΓ . By Proposition 5.9 (actually an easy case of it) the action of E on Z is also minimal, and
it is extremely proximal by Lemma 2.5. Moreover the associated stabilizer URS
remains equal to AΓ , and is also the Furstenberg URS of E by Proposition 3.26.
So E satisfies all the assumptions of case (b) of the theorem, so it is enough to
prove the result under the additional assumption Γ = E. In this case we have
M = [Γ, Γ] thanks to Proposition 4.6, so it follows that p2 (Γ) is abelian. By density of the projection the group G2 is also abelian, and hence G2 lies in the center
of G. Therefore Γ is normalized by the dense subgroup ΓG2 , and it follows that Γ
is normal in G. In particular Γ ≤ QZ(G), and the conclusion follows by applying
Lemma 5.15 with L a f.g. subgroup of Γ such that CΓ (L) is finite.
6. Groups acting on trees
6.1. Amenable URS’s and groups acting on trees. In this paragraph T is a
locally finite tree, and H acts continuously on T by isometries. The assumption
that T is locally finite is not essential here, and the results admit appropriate
generalizations for non-locally finite trees (using the compactification from [MS04,
Prop. 4.2]).
Recall that the H-action on T is minimal if there is no proper invariant subtree,
and of general type if H has no finite orbit in T ∪ ∂T . The following is well-known, and essentially goes back to Tits [Tit70] (see also [PV91] and Proposition
2.4 for details).
Proposition 6.1. If the action of H ≤ Aut(T ) is minimal and of general type,
then the action of H on ∂T is minimal and extremely proximal.
Theorem 5.11 therefore implies the following result:
Corollary 6.2. Let H ≤ Aut(T ) be a locally compact group whose action on T is
continuous, minimal and of general type. Assume that end stabilizers are amenable,
and the envelope of SH (∂T ) is co-amenable in H. Assume H embeds as a subgroup
of finite covolume in G. Then whenever G maps continuously and with dense image
to a product G1 × G2 , one factor Gi must be amenable.
The conclusion of Corollary 6.2 implies in particular that whenever H embeds in
G with finite covolume, then G cannot be a product of two non-amenable groups.
The following example, which is largely inspired from [CM11, Ex. II.8], shows that
the group G can nonetheless be a product of two non-compact groups.
Example 6.3. Let k = Fp ((t)) be the field of Laurent series over the finite field Fp ,
and let α ∈ Aut(k) be a non-trivial automorphism of k. The group L = SL(2, k)
acts on a (p + 1)-regular tree, 2-transitively and with amenable stabilizers on the
boundary. This action extends to a continuous action of H = L ⋊α Z, so that H
satisfies all the assumptions of Corollary 6.2. Nevertheless H embeds diagonally in
the product G = (L ⋊ Aut(k)) × Z as a closed subgroup of finite covolume since
G/H is compact and H and G are unimodular.
We will need the following fact. If A is a subtree of T , by the fixator of A we
mean the subgroup fixing A pointwise.
Proposition 6.4. Let Γ ≤ Aut(T ) be a countable group whose action on T is
minimal and of general type, and such that end-stabilizers in Γ are amenable. Then
AΓ = SΓ (∂T ), and Env(AΓ ) is the subgroup generated by fixators of half-trees.
Proof. Since the Γ-action on ∂T is extremely proximal, it is also strongly proximal
by Theorem 2.3. So ∂T is a Γ-boundary with amenable stabilizers, and we deduce
that AΓ = SΓ (∂T ) by Proposition 3.25.
Now according to Lemma 3.7, the subgroup Env(SΓ (∂T )) = Env(AΓ ) is generated by the elements γ ∈ Γ whose fixed point set in ∂T has non-empty interior.
Since half-trees form a basis of the topology in ∂T , the statement follows.
Before going to the proof of Corollary 1.3, we make the following observation:
Remark 6.5. For Γ acting on T (action minimal and general type) such that
the action on ∂T is not topologically free, virtual simplicity of Γ is equivalent to
Γ0 being of finite index in Γ and Γ0 having finite abelianization, where Γ0 is the
subgroup generated by fixators of half-trees. See statement (e) of Proposition 4.6.
Proof of Corollary 1.3. In view of Proposition 6.4, the assumptions on Γ imply
that the Furstenberg URS of Γ comes from a faithful extremely proximal action.
The fact that end-stabilizers are all non-trivial means that the action of Γ on ∂T
is not topologically free, and by the above observation virtual simplicity of Γ is
equivalent to Env(AΓ ) being of finite index in Γ and with finite abelianization.
The first statement of the corollary therefore follows from Theorem 5.14, case (c),
and the second statement from Theorem 1.2.
6.2. Groups with prescribed local action. In the next paragraphs we will
illustrate the results of the previous sections on a family of groups acting on trees,
which contains instances of discrete and non-discrete groups. The purpose of this
paragraph is to recall the definition and give a brief description of known properties
of these groups.
We will denote by Ω a set of cardinality d ≥ 3 and by Td a d-regular tree. The
vertex set and edge set of Td will be denoted respectively Vd and Ed . We fix a
coloring c : Ed → Ω such that neighbouring edges have different colors. For every
g ∈ Aut(Td ) and every v ∈ Vd , the action of g on the star around v gives rise to
a permutation of Ω, denoted σ(g, v), and called the local permutation of g at v.
These permutations satisfy the identity
(1)
σ(gh, v) = σ(g, hv)σ(h, v)
for every g, h ∈ Aut(Td ) and v ∈ Vd .
Given a permutation group F ≤ Sym(Ω), the group U (F ) introduced by Burger
and Mozes in [BM00a] is the group of automorphisms g ∈ Aut(Td ) such that
σ(g, v) ∈ F for all v. It is a closed cocompact subgroup of Aut(Td ).
Definition 6.6. Given F ≤ F ′ ≤ Sym(Ω), we denote by G(F, F ′ ) the group of
automorphisms g ∈ Aut(Td ) such that σ(g, v) ∈ F ′ for all v and σ(g, v) ∈ F for all
but finitely many v.
That G(F, F ′ ) is indeed a subgroup of Aut(Td ) follows from (1), and we note
that we have U (F ) ≤ G(F, F ′ ) ≤ U (F ′ ). We make the following observation for
future reference.
Remark 6.7. As it follows from the definition, any element γ ∈ G(F, F ′ ) fixing
an edge e can be (uniquely) written as γ = γ1 γ2 , where each γi belongs to G(F, F ′ )
and fixes one of the two half-trees defined by e.
In the sequel we always assume that F ′ preserves the F -orbits in Ω (see [LB16,
Lem. 3.3] for the relevance of this property in this context). These groups satisfy
the following properties (see [LB16]):
(1) The group G(F, F ′ ) is dense in the locally compact group U (F ′ ). In particular G(F, Sym(Ω)) is a dense subgroup of Aut(Td ).
(2) G(F, F ′ ) admits a locally compact group topology (defined by requiring that
the inclusion of U (F ) is continuous and open), and the action of G(F, F ′ )
on Td is continuous but not proper as soon as F ≠ F ′ . Endowed with this
topology, the group G(F, F ′ ) is compactly generated.
(3) stabilizers of vertices and stabilizers of ends in G(F, F ′ ) are respectively locally elliptic and (locally elliptic)-by-cyclic. In particular they are amenable.
(4) G(F, F ′ ) is a discrete group if and only if F acts freely on Ω. When this is
so, the group G(F, F ′ ) is therefore a finitely generated group, and stabilizers
of vertices and stabilizers of ends in G(F, F ′ ) are respectively locally finite
and (locally finite)-by-cyclic.
When F acts freely on Ω and F ≠ F ′ , the groups G(F, F ′ ) are instances of
groups obtained from a more general construction described in [LB17, Sec. 4]
(more precisely, a variation of it), which provides discrete groups with a continuous
Furstenberg URS (the latter being the stabilizer URS associated to the action on
the boundary of the tree on which these groups act). In the particular case of the
groups G(F, F ′ ), the Furstenberg URS can be explicitly described, see Proposition
4.28 and Corollary 4.29 in [LBMB16].
In the sequel whenever we use letters F and F ′ , we will always mean that F, F ′
are permutation groups on a set Ω, that F ′ contains F and preserves the F -orbits
in Ω. Following [Tit70], we will denote by G(F, F ′ )+ the subgroup of G(F, F ′ ) generated by fixators of edges, and by G(F, F ′ )∗ the subgroup of index two in G(F, F ′ )
preserving the bipartition of Td .
The following result, also obtained in [CRW17, Prop. 9.16], supplements simplicity results obtained in [LB16], where the index of the simple subgroup was found
explicitly under appropriate assumptions on the permutation groups.
Proposition 6.8. The group G(F, F ′ ) has a simple subgroup of finite index if and
only if F is transitive and F ′ is generated by its point stabilizers.
Proof. These conditions are necessary by [LB16, Prop. 4.7]. Conversely, assume F
transitive and F ′ generated by its point stabilizers. By [LB16, Prop. 4.7] again,
G(F, F ′ )+ has index two in G(F, F ′ ), so in particular it is compactly generated. If
M is the monolith of G(F, F ′ ), which is simple and open in G(F, F ′ ) by [LB16, Cor.
4.9], we have to show M has finite index. According to Remark 6.7, G(F, F ′ )+ is
also the subgroup generated by fixators of half-trees, and therefore by Proposition
4.6 M is the commutator subgroup of G(F, F ′ )+ . The abelianization of G(F, F ′ )+
is therefore a finitely generated abelian group, which is generated by torsion elements since G(F, F ′ )+ is generated by locally elliptic subgroups (fixators of edges).
Therefore this abelianization is finite, and it follows that M has finite index in
G(F, F ′ ).
6.3. Boundaries of G(F, F ′ ). In this paragraph we use results from the previous
sections in order to study the boundaries of the discrete groups G(F, F ′ ). The following result shows that several properties of the set of boundaries are governed by
the permutation groups, and that rigidity phenomena occur under mild conditions
on the permutation groups.
Theorem 6.9. Assume F acts freely on Ω, F ≠ F ′ , and write Γ = G(F, F ′ ). The
following are equivalent:
(i) The subgroup of F ′ generated by its point stabilizers has at most two orbits
in Ω.
(ii) Γ/Env(AΓ ) is isomorphic to one of C2 , D∞ or D∞ ⋊ C2 .
(iii) Env(AΓ ) is co-amenable in Γ.
(iv) SΓ (X) = AΓ for every non-trivial Γ-boundary X.
(v) Γ is boundary indivisible.
We will need preliminary results before proving Theorem 6.9.
Lemma 6.10. Assume that F acts freely on Ω. The envelope of the Furstenberg
URS of G(F, F ′ ) is equal to G(F, F ′ )+ .
Proof. Write Γ = G(F, F ′ ) and Γ+ = G(F, F ′ )+ . According to Proposition 6.4,
Env(AΓ ) is the subgroup generated by fixators of half-trees in Γ. Therefore the
inclusion Env(AΓ ) ≤ Γ+ is clear. The converse inclusion also holds true by Remark
6.7, so equality follows.
In view of Lemma 6.10 and Proposition 3.24, we are led to consider the quotient
G(F, F ′ )/G(F, F ′ )+ , and in particular study when it is amenable. To this end, we
will denote by F ′+ the subgroup of F ′ generated by its point stabilizers, and write
D = F ′ /F ′+ . Since F ′+ is normal in F ′ , we have an action of F ′ on the set of
orbits of F ′+ , which factors through a free action of D.
Proposition 6.11. The group Q = G(F, F ′ )∗ /G(F, F ′ )+ is isomorphic to the group
U (D)∗ , where D = F ′ /F ′+ is viewed as a permutation group acting freely on the
set of orbits of F ′+ in Ω. If moreover F is transitive, one has Q = D ∗ D.
Proof. We let O1 , . . . , Or ⊆ Ω be the orbits of F ′+ , and we will freely identify
the set of orbits with the integers {1, . . . , r}. For every a ∈ Ω, there is a unique
i ∈ {1, . . . , r} such that a ∈ Oi , and we denote i = ia .
We view the tree Td as the Cayley graph of the free Coxeter group of rank d,
namely the group defined by generators x1 , . . . , xd and relators x_j^2 = 1 for all j.
When adding relations of the form xa = xb whenever ia = ib (i.e. a and b are in the
same F ′+ -orbit), we obtain a free Coxeter group of rank r. It has a Cayley graph
that is a regular tree of degree r, and we have a surjective map p : Td → Tr .
Two elements v, w ∈ ∗d C2 = ⟨x1 , . . . , xd ⟩ have the same image in ∗r C2 if and
only if one can write w = v ∏_j w_j x_{a_j} x_{b_j} w_j^{-1} for some words w_j and colors a_j , b_j
such that i_{a_j} = i_{b_j} . Since the inverse of w_j is equal to the word obtained from w_j
by reversing the order, we have:
Lemma 6.12. Two vertices v, w of Td have the same projection in Tr if and only
if the distance between v and w is even, say d(v, w) = 2m, and if (a1 , . . . , a2m ) is
the sequence of colors from v to w, then the word (ia1 , . . . , ia2m ) is a concatenation
of palindromes of even length.
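The criterion of Lemma 6.12 is easy to test mechanically. The following Python sketch (ours, not part of the argument; `orbit_of` is an assumed dictionary sending a color to its F′+-orbit, identified with an element of {1, . . . , r}) checks whether a given color sequence satisfies it:

```python
from functools import lru_cache

def concat_of_even_palindromes(word):
    """Return True iff `word` splits as a concatenation of even-length palindromes."""
    w = tuple(word)
    n = len(w)

    @lru_cache(maxsize=None)
    def splits_from(i):
        if i == n:
            return True
        # try every even-length palindromic prefix of w[i:]
        for j in range(i + 2, n + 1, 2):
            block = w[i:j]
            if block == block[::-1] and splits_from(j):
                return True
        return False

    return splits_from(0)

def same_projection(colors, orbit_of):
    """Lemma 6.12: the endpoints of a geodesic with colour sequence `colors`
    have the same image in T_r iff the induced orbit word satisfies the criterion."""
    return len(colors) % 2 == 0 and concat_of_even_palindromes(
        [orbit_of[c] for c in colors])
```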
Lemma 6.13. For every g ∈ G(F, F ′ ) and every vertex v on Td , the image of
σ(g, v) in D = F ′ /F ′+ does not depend on v. We denote by σg ∈ D the corresponding element, which is trivial when g ∈ G(F, F ′ )+ .
Proof. If v, w are adjacent vertices and a is the color of the edge between them,
then σ(g, v)(a) = σ(g, w)(a). So σ(g, v)σ(g, w)−1 ∈ F ′+ . The first statement
follows by connectedness. The fact that σg is trivial on G(F, F ′ )+ is then clear
because g 7→ σg is a morphism according to the first statement, which vanishes on
fixators of edges.
Note that the set of edges of Tr inherits a natural coloring by the integers 1, . . . , r.
Lemma 6.14. There is a natural morphism ϕ : G(F, F ′ ) → Aut(Tr ) such that
ker(ϕ) = G(F, F ′ )+ and Im(ϕ) = U (D).
Proof. We shall first define an action of G(F, F ′ ) on the set of vertices of Tr . Let
g ∈ G(F, F ′ ). Let v, w be two vertices of Td , and (a1 , . . . , an ) be the sequence of
colors from v to w. If v = v1 , . . . , vn+1 = w are the vertices between v and w,
then the sequence of colors from g(v) to g(w) is (σ(g, v1 )(a1 ), . . . , σ(g, vn )(an )). If
σg is the element defined in Lemma 6.13, then one has iσ(g,vj )(aj ) = σg (iaj ) for all
j = 1, . . . , n. This shows in particular that if v, w satisfy the condition of Lemma
6.12, then the same holds for g(v) and g(w). This means that for every vertex x
of Tr , the formula
(2)
ϕ(g)(x) := p(gx̃),
where x̃ is any vertex of Td such that p(x̃) = x, is a well-defined action of G(F, F ′ )
on Tr . The fact that the tree structure is preserved is clear. Note that for every
g ∈ G(F, F ′ ), all the local permutations of ϕ(g) are equal to σg : for every vertex x
of Tr , one has σ(ϕ(g), x) = σg . In particular the image of ϕ lies inside U (D).
We shall prove that ker(ϕ) = G(F, F ′ )+ . Let g ∈ G(F, F ′ ) fixing an edge in Td .
Then ϕ(g) also fixes an edge of Tr by (2). Moreover one has σg = 1 (Lemma 6.13),
and it follows from the previous paragraph that all local permutations of ϕ(g) are
trivial. This implies g ∈ ker(ϕ). Conversely, we let g be an element of ker(ϕ),
and we prove that g ∈ G(F, F ′ )+ . Note that since ϕ(g) is trivial, one has σg = 1,
i.e. all local permutations of g are in F ′+ . Now let v be any vertex. By Lemma
6.12, the sequence of colors (a1 , . . . , a2n ) from v to g(v) gives rise to a sequence
(ia1 , . . . , ia2n ) that is a concatenation of palindromes. For simplicity we treat the
case where (ia1 , . . . , ia2n ) is a palindrome; the general case consists in repeating the
argument for this case. Let v = v1 , . . . , v2n+1 = g(v) be the vertices between v
and g(v) (note that vn is the midpoint between v and g(v)). Since (ia1 , . . . , ia2n )
is a palindrome, one easily checks that there are elements g1 , . . . , gn such that gj
belongs to the stabilizer of vj in G(F, F ′+ ) and g′ = g1 . . . gn g fixes the vertex v
(this is obtained by successively folding the geodesic [v, g(v)] onto itself starting
from its midpoint in order to bring back g(v) to v with g1 . . . gn ). We now invoke
the following easy fact, whose verification is left to the reader.
Lemma 6.15. Let γ ∈ G(F, F ′ ) be an element fixing a vertex w and such that σ(γ, w) ∈ F ′+ .
Then γ ∈ G(F, F ′ )+ .
We apply Lemma 6.15 to g′ and to each gj , and deduce that g = g_n^{-1} · · · g_1^{-1} g′
belongs to G(F, F ′ )+ as desired.
The last thing that remains to be proved in the statement of Lemma 6.14 is that
the image of ϕ is equal to U (D). The fact that ϕ(g) always belongs to U (D) has
already been observed. For the converse inclusion, observe that since G(F, F ′ ) acts
transitively on the vertices of Tr (as it is already the case on Td ), it is enough to
check that the image of ϕ contains U (D)x for some vertex x of Tr . Now since D
acts freely on {1, . . . , r}, the map U (D)x → D, γ 7→ σ(γ, x), is an isomorphism.
Therefore it is enough to see that any action on the star around x on Tr can
be realized by an element of G(F, F ′ ), and this is indeed the case (see e.g. Lemma 3.4
in [LB16]).
To finish the proof of the proposition, remark that the image of G(F, F ′ )∗ by ϕ
is precisely U (D)∗ . When F is transitive then D is also transitive, so that U (D)∗
has two orbits of vertices and one orbit of edges, and therefore splits as the free
product D ∗ D.
Remark 6.16. The case F = F ′ is allowed in Proposition 6.11, so that the conclusion also holds for the groups U (F ) from [BM00a].
Proposition 6.11 naturally leads us to isolate the following three situations. We
keep the previous notation, so that r is the number of orbits of F ′+ in Ω, D =
F ′ /F ′+ and Q = G(F, F ′ )∗ /G(F, F ′ )+ :
(1) r = 1. In this case Tr is a segment of length one, and D and Q are trivial.
(2) r = 2. Tr is a bi-infinite line, and this case splits into two disjoint sub-cases:
(a) If F ′ is intransitive then D is trivial, and Q = U (1)∗ = Z (generated by a
translation of Tr of length 2).
(b) If F ′ is transitive then we have D = Sym(2) and Q = D ∗ D = D∞ .
(3) r ≥ 3. Then Q = U (D)∗ is a virtually free group (since it acts vertex transitively and with trivial edge stabilizers of Tr ).
Theorem 6.9 says that all properties stated there hold true if and only if r ∈
{1, 2}. A sufficient condition for having r = 1 is for instance that F acts transitively
on Ω, F ≠ F ′ and F ′ acts primitively, or quasi-primitively, on Ω. Recall that
a permutation group is quasi-primitive if every non-trivial normal subgroup acts
transitively.
But Theorem 6.9 also applies beyond the case of quasi-primitive permutation
groups. For example a situation giving rise to case (2) (a) is when F ′ has a fixed
point and acts transitively on the complement. Examples giving rise to case (2) (b)
are for instance obtained by taking F ′ = Sym(n) ≀ C2 = Sym(n) ≀ ⟨τ⟩ acting naturally
on 2n letters, and F the subgroup generated by ((cn , cn ), 1) and ((1, 1), τ ), where
cn is a cycle of order n.
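For a concrete sanity check of this last family of examples, the following sympy sketch (ours, written for the small value n = 3; the generator names are our own) builds F ′ = Sym(3) ≀ C2 on 6 letters, generates F ′+ from the point stabilizers of F ′ , and confirms that F ′+ has exactly two orbits, so that r = 2 and case (2) (b) indeed occurs:

```python
from sympy.combinatorics import Permutation, PermutationGroup

n = 3
N = 2 * n            # Omega = {0,...,2n-1}, blocks {0,...,n-1} and {n,...,2n-1}

# generators of F' = Sym(n) wr C2 in its imprimitive action on the two blocks
swap_b1  = Permutation([1, 0] + list(range(2, N)))                        # (0 1)
cycle_b1 = Permutation([(i + 1) % n for i in range(n)] + list(range(n, N)))
swap_b2  = Permutation(list(range(n)) + [n + 1, n] + list(range(n + 2, N)))
cycle_b2 = Permutation(list(range(n)) + [n + (i + 1) % n for i in range(n)])
tau      = Permutation([i + n for i in range(n)] + list(range(n)))        # block swap

Fprime = PermutationGroup([swap_b1, cycle_b1, swap_b2, cycle_b2, tau])

# F'^+ = subgroup generated by the point stabilizers of F'
gens = []
for point in range(N):
    gens.extend(Fprime.stabilizer(point).generators)
Fplus = PermutationGroup(gens)

print(len(Fplus.orbits()))   # expected output: 2 (the two blocks), so r = 2
```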
Proof of Theorem 6.9. (i) ⇒ (ii) follows from Lemma 6.10 and Proposition 6.11
(and the discussion following its proof). (ii) ⇒ (iii) is clear. (iii) ⇒ (iv) is Proposition 3.24. (iv) ⇒ (v) is guaranteed by Proposition 3.9 and the fact that AΓ is not
a point. Finally assume that (i) does not hold, i.e. F ′+ has at least three orbits
in Ω, and write Γ+ = G(F, F ′ )+ . By Proposition 6.11, the group Q = Γ/Γ+ has
a subgroup of finite index that is free of rank at least 2. So there exist non-trivial
Q-boundaries, and a fortiori these are non-trivial Γ-boundaries. If X is such a
boundary, then Γ+ acts trivially on X. Since Γ+ also acts minimally on ∂Td , it
follows that X and ∂Td are disjoint Γ-boundaries, contradicting (v). Therefore
property (v) implies property (i), and the proof is complete.
6.4. Weakly co-amenable subgroups. In this paragraph we show that subgroups of the groups G(F, F ′ ) satisfy the following dichotomy:
Proposition 6.17. Assume that F acts freely transitively and F ′ acts primitively
on Ω. Then any subgroup of G(F, F ′ ) is either (locally finite)-by-cyclic (and hence
amenable) or weakly co-amenable.
We will need the following lemma.
Lemma 6.18. Assume that F ′ acts primitively on Ω, and take two subgroups
H1 ≠ H2 in the Furstenberg URS of G(F, F ′ ). Then ⟨H1 , H2 ⟩ = G(F, F ′ )∗ .
Proof. Write Γ = G(F, F ′ ). Recall from [LBMB16, Prop. 4.28] that the Furstenberg
URS of Γ consists of subgroups Γ^0_ξ , ξ ∈ ∂Td , where Γ^0_ξ is the set of elements acting
trivially on a neighbourhood of ξ. Given ξ ≠ η ∈ ∂Td , we show that the subgroup
Λ generated by Γ^0_ξ and Γ^0_η must be equal to G(F, F ′ )∗ .
Take a vertex v on the geodesic from ξ to η, let e1 , e2 be the edges containing
v and pointing towards ξ and η, and a, b the colors of e1 , e2 . Denote by K(v) the
subgroup of Γ consisting of elements γ fixing v and such that σ(γ, w) ∈ F for every
w ≠ v. Since F ′ is primitive, F ′ is generated by the point stabilizers Fa′ and Fb′ .
This implies that every element of K(v) may be written as a product of elements
fixing either the half-tree defined by e1 containing ξ, or the half-tree defined by e2
containing η, so that K(v) ≤ Λ. Since v was arbitrary, we also have K(v ′ ) ≤ Λ for
v ′ a neighbour of v on the geodesic [ξ, η]. The conclusion now follows since for two
neighbouring vertices v, v ′ , the subgroups K(v), K(v ′ ) always generate G(F, F ′ )∗
[LB16, Cor. 3.10].
Proof of Proposition 6.17. Write Γ = G(F, F ′ ), and let Λ be a subgroup of Γ that is
non-amenable, equivalently whose action on Td is of general type. By Proposition
5.7 we have to show that Λ fixes no probability measure on any non-trivial Γ-boundary. Argue by contradiction and assume that X is a non-trivial Γ-boundary
on which Λ fixes a probability measure µ. According to Theorem 6.9, we have
SΓ (X) = AΓ . Therefore by Proposition 3.2 there exist an almost 1-1 extension
η : X̃ → X and a factor map π : X̃ → AΓ .
Let Q ⊂ Prob(X̃) be the set of ν such that η ∗ ν = µ, and write R = π ∗ (Q),
which is a closed Λ-invariant subset of Prob(AΓ ). Since the action of Λ on ∂Td
is strongly proximal and since AΓ is a factor of ∂Td [LBMB16, Prop. 2.10-4.28],
we deduce that R contains some Dirac measures. Let H ∈ AΓ be such that there is
ν ∈ Prob(X̃) with η ∗ ν = µ and π ∗ ν = δH . Such a measure ν must be supported
in the set of (x, H) ∈ X̃, and it follows that µ is supported in the set of H-fixed
points in X (because (x, H) ∈ X̃ implies that H ≤ Gx by upper semi-continuity
of the stabilizer map). But since Λ does not fix any point in AΓ , we may find
another H ′ ∈ AΓ such that δH ′ ∈ R, so that the same argument shows that H ′ also
acts trivially on the support of µ. By Lemma 6.18 the subgroups H, H ′ generate
G(F, F ′ )∗ , which is of index two in Γ. Therefore any point in the support of µ has
a Γ-orbit of cardinality at most two, which is absurd since Γ acts minimally on X
and X is non-trivial by assumption.
Remark 6.19. Assume F acts freely transitively and F ′ acts primitively, and
write Γ = G(F, F ′ ). Let Λ ≤ Γ be such that the Λ-action on Td is of general type but
with a proper Λ-invariant subtree. For instance one could take for Λ the subgroup
generated by two hyperbolic elements with sufficiently far apart axes. Then Λ is
not co-amenable in Γ (see the argument in the proof of Theorem 2.4 in [CM09]),
but Λ is weakly co-amenable in Γ by Proposition 6.17.
Remark 6.20. We mention that when F ′ is primitive, following the proof of Corollary 4.14 from [LBMB16] (with minor modifications), one could prove that every
non-trivial G(F, F ′ )-boundary factors onto ∂Td . This would provide an alternative
proof of Proposition 6.17.
6.5. Lattice embeddings of the groups G(F, F ′ ). In this section we study how
the discrete groups G(F, F ′ ) can embed as lattices in some locally compact groups.
The purpose of this paragraph is twofold:
(1) First we apply previous results of the article to the family of groups G(F, F ′ )
and deduce some properties of general locally compact groups containing a
group G(F, F ′ ) as a lattice (Corollary 6.22).
(2) Second we explain how the groups G(F, F ′ ) embed as lattices in some locally
compact wreath products. This will be the content of §6.5.2 below.
Remark 6.21. Maybe it is worth pointing out that instances of lattice embeddings
of the groups G(F, F ′ ) already appeared in [LB16]. Indeed under appropriate
assumptions on permutation groups F ≤ F ′ , H ≤ H ′ , the inclusion of G(F, F ′ ) in
G(H, H ′ ) has discrete and cocompact image [LB16, Cor. 7.4].
Corollary 6.22. Assume that F acts freely transitively on Ω, and that F ′ is generated by its point stabilizers. Let G be a locally compact group containing G(F, F ′ )
as a lattice. Then the conclusions of Corollary 1.3 hold.
Proof. The assumptions on F, F ′ imply that G(F, F ′ ) is virtually simple by Proposition 6.8, so Corollary 1.3 applies.
Remark 6.23. In the setting of Corollary 6.22, although G(F, F ′ ) cannot be a
lattice in a product, it happens that there exist non-discrete groups G1 , G2 such
that G(F, F ′ ) embeds as a discrete subgroup of G1 × G2 with injective and dense
projection to each factor. For instance if F1 , F2 are permutation groups such that
F ≤ Fi ≤ F ′ and we set Gi = G(Fi , F ′ ), then the diagonal embedding of G(F, F ′ )
in G1 × G2 has this property as soon as F1 ∩ F2 = F . See [LB16, Lem. 3.4 and
§7.1].
6.5.1. Locally compact wreath products. In this paragraph we introduce some terminology that will be used in the sequel.
Let Ω be a set, B a group and A a subgroup of B. We will denote by B^{Ω,A} the set of functions f : Ω → B such that f (x) ∈ A for all but finitely many x ∈ Ω. Note that B^{Ω,A} is a group.
Definition 6.24 ([Cor17]). If H is a group acting on Ω, the semi-restricted permutational wreath product B ≀_Ω^A H is the semi-direct product B^{Ω,A} ⋊ H, where h ∈ H acts on f ∈ B^{Ω,A} by (hf )(x) = f (h^{-1} x).
The extreme situations when A = 1 and when A = B correspond respectively
to the restricted and the unrestricted wreath product. When A = 1, we shall write
B ≀_Ω H for the restricted wreath product. Also for simplicity we will sometimes
say “wreath product” B ≀_Ω^A H instead of “semi-restricted permutational wreath
product”.
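As a purely illustrative aside, the group law of these wreath products is easy to spell out in code in the restricted case A = 1, where an element is a pair (f, h) with f of finite support. This is a minimal sketch, not anything used later in the text; the helper names mult_B, id_B, act and h_mult are ours.

```python
def wp_mult(x, y, mult_B, id_B, act, h_mult):
    """Product (f1,h1)(f2,h2) = (f1 · (h1·f2), h1 h2) in the restricted wreath
    product B ≀_Ω H (the case A = 1 of Definition 6.24).

    Elements are pairs (f, h): f is a dict Ω -> B with finite support (missing
    keys read as id_B), h lies in H.  act(h, w) is the action of H on Ω,
    mult_B the product of B, h_mult the product of H.
    """
    (f1, h1), (f2, h2) = x, y
    f = dict(f1)
    for w, b in f2.items():
        t = act(h1, w)                    # (h1·f2) is supported on h1·supp(f2)
        f[t] = mult_B(f.get(t, id_B), b)
        if f[t] == id_B:
            del f[t]                      # keep the finite-support normal form
    return (f, h_mult(h1, h2))

# toy example: the lamplighter-style group (Z/2) ≀_Z Z
e1 = ({0: 1}, 1)                          # lamp at 0, shift by 1
e2 = ({0: 1}, 0)
print(wp_mult(e1, e2, lambda a, b: (a + b) % 2, 0,
              lambda h, w: h + w, lambda a, b: a + b))   # -> ({0: 1, 1: 1}, 1)
```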
When A is a compact group and H a locally compact group acting continuously
on Ω, the group A^Ω ⋊ H is a locally compact group for the product topology. If
moreover B is locally compact and A is compact open in B, there is a natural
locally compact group topology on B ≀_Ω^A H, defined by requiring that the inclusion
of A^Ω ⋊ H in B ≀_Ω^A H is continuous and open. See [Cor17, Sec. 2].
In the remainder of the article we shall be interested in the study of certain
lattices in some locally compact groups B ≀_Ω^A H. A few remarks are in order:
Lemma 6.25. Let A, B, Ω and H be as above.
(a) Assume Γ1 is a lattice in B^{Ω,A} and Γ2 is a lattice in H that normalizes Γ1 . Then Γ1 ⋊ Γ2 is a lattice in B ≀_Ω^A H.
(b) For B ≀_Ω^A H to contain a lattice, it is necessary that H contains a lattice.
Proof. For the first statement, see [Rag72, Lem. I.1.6-7]. For the second statement, observe that if Γ ≤ B ≀_Ω^A H is a lattice, the intersection ΓA = Γ ∩ (A^Ω ⋊ H) is a lattice in A^Ω ⋊ H since A^Ω ⋊ H is open in B ≀_Ω^A H. The subgroup A^Ω being compact, the projection of ΓA to H is discrete, and hence is a lattice in H.
Recall that there are various notions of irreducibility for a lattice in a direct
product of groups. In general whether all these notions coincide depends on the
context. We refer to [CM12, 2.B] and [CM09, 4.A] for detailed discussions. In the
setting of wreath products, we will use the following terminology:
Definition 6.26. A lattice Γ in B ≀_Ω^A H is an irreducible lattice if Γ has a
non-discrete projection to the group H.
This definition implies that neither Γ nor its finite index subgroups can be of
the form Γ1 ⋊ Γ2 as in Lemma 6.25.
Lemma 6.27. If the group B^{Ω,A} does not contain any lattice, then any lattice in B ≀_Ω^A H is irreducible.
Proof. If Γ ≤ B ≀_Ω^A H is a lattice with a discrete projection Λ to H, then the subgroup B ≀_Ω^A Λ contains Γ as a lattice, and it follows that Γ intersects the subgroup B^{Ω,A}, which is open in B ≀_Ω^A Λ, along a lattice of B^{Ω,A}.
Remark 6.28. Of course if B admits no lattice then the same holds for B^{Ω,A}.
More interestingly, there are finite groups A, B for which B^{Ω,A} fails to admit any
lattice (provided Ω is infinite). This is for instance the case when A ≠ B and any
non-trivial element of B has a non-trivial power in A (for example A = C2 inside B = C4).
Consequently all lattices in B ≀_Ω^A H are irreducible by Lemma 6.27 (for arbitrary H).
Proof of Remark 6.28. We claim that the above condition on A, B actually implies that B^{Ω,A} has no infinite discrete subgroup. For every finite Σ ⊂ Ω, we write O_Σ ≤ B^{Ω,A} for the subgroup vanishing on Σ, and U_Σ = O_Σ ∩ A^Ω . Assume Γ is a discrete subgroup of B^{Ω,A}, so that there is a finite Σ ⊂ Ω such that Γ ∩ U_Σ = 1. The assumption on A, B is easily seen to imply that any non-trivial subgroup of O_Σ intersects U_Σ non-trivially. Therefore Γ ∩ O_Σ = 1, and O_Σ being of finite index in B^{Ω,A}, Γ is finite.
It should be noted that the existence of an irreducible lattice in B ≀_Ω^A H forces
H to be non-discrete and B to be non-trivial. However this does not force B^{Ω,A} to
be non-discrete, and as we will see below, interesting examples already arise when
B is finite and A is trivial.
6.5.2. The proof of Theorem 1.4. Let n ≥ 2 and d ≥ 3. We denote by Σn the
set of integers {0, . . . , n − 1}, and by Σn^{(Vd)} the set of functions f : Vd → Σn with
finite support, where the support of f is the set of v such that f (v) ≠ 0. We will
also write fv for the image of v by f , and sometimes use the notation (fv ) for the
function f .
We consider the graph Xn,d whose set of vertices is the set of pairs (f, e), where
f belongs to Σn^{(Vd)} and e ∈ Ed , and edges emanating from a vertex (f, e) are of two
types:
• type 1: (f, e′ ) is connected to (f, e) if e′ ∈ Ed is a neighbour of e (i.e. if e
and e′ share exactly one vertex);
• type 2: (f ′ , e) is connected to (f, e) if the function f ′ is obtained from f by
changing the value at exactly one vertex of e.
Note that since any e ∈ Ed has 2(d − 1) neighbours and Σn has cardinality n,
every vertex of Xn,d has 2(d − 1) neighbours of type 1 and 2(n − 1) neighbours of
type 2. The graph Xn,d is almost the wreath product of the complete graph on n
vertices with the tree Td , see below.
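The local structure of Xn,d just described can be made explicit in a short Python sketch (ours, for illustration only; edges_at is an assumed helper returning the d edges of Td containing a given vertex, each modelled as a frozenset of its two endpoints):

```python
def neighbours(f, e, n, d, edges_at):
    """Neighbours of the vertex (f, e) of X_{n,d}.

    f        : dict, vertices of T_d -> {0,...,n-1}, missing keys read as 0
               (finite support);
    e        : frozenset of the two endpoints of an edge of T_d;
    edges_at : assumed helper, edges_at(v) = the d edges of T_d containing v.
    """
    out = []
    # type 1: move to a neighbouring edge, keep the colouring f
    for v in e:
        for e2 in edges_at(v):
            if e2 != e:
                out.append((dict(f), e2))
    # type 2: keep the edge, change f at exactly one vertex of e
    for v in e:
        for i in range(n):
            if i != f.get(v, 0):
                f2 = dict(f)
                f2[v] = i
                if i == 0:
                    del f2[v]        # normalise: 0-values are not stored
                out.append((f2, e))
    assert len(out) == 2 * (d - 1) + 2 * (n - 1)   # degree count from the text
    return out
```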
Let Sn be the group of permutations of Σn . For σ ∈ Sn and i ∈ Σn , we will write
σ · i for the action of σ on i. The stabilizer of 0 ∈ Σn in Sn is obviously isomorphic
to Sn−1 , and by abuse of notation we will denote it Sn−1 . In particular when
viewing Sn−1 as a subgroup of Sn , we will always implicitly mean that Sn−1 is
the subgroup of Sn acting only on {1, . . . , n − 1}.
Definition 6.29. We will denote by Gn,d the wreath product Sn ≀_{Vd}^{Sn−1} Aut(Td ).
Groups of the form Gn,d were considered in [Cor17, Ex. 2.6]. We will denote
Un,d = Sn^{Vd ,Sn−1} , so that Gn,d = Un,d ⋊ Aut(Td ). We endow Gn,d with the topology
such that sets of the form ((σv )U1 , γU2 ) form a basis of neighbourhoods of ((σv ), γ),
where U1 and U2 belong to a basis of the identity respectively in S_{n−1}^{Vd} and in
Aut(Td ). This defines a totally disconnected locally compact group topology on
Gn,d (see [Cor17, Prop. 2.3]). We note that the case n = 2 is somehow particular,
as U2,d is a discrete subgroup of G2,d , and G2,d is just the restricted wreath product
G2,d = C2 ≀Vd Aut(Td ) = (⊕Vd C2 ) ⋊ Aut(Td ).
Proposition 6.30. The group Gn,d acts by automorphisms on the graph Xn,d by
preserving the types of edges. Moreover the action is faithful, continuous, proper
and transitive on the set of vertices.
Proof. The group Gn,d is a subgroup of the unrestricted permutational wreath
product of Sn and Aut(Td ). The latter group has a faithful action on the set of
functions Vd → Σn , given by
((σv ), γ) · (fv ) = (σ_v · f_{γ^{-1}v} ).
The group Gn,d preserves Σn^{(Vd)} because σv fixes 0 for all but finitely many v when (σv ) belongs to
Un,d . Now the projection from Gn,d onto Aut(Td ) induces an action of Gn,d on the
set Ed , and we will consider the diagonal action of Gn,d on Σn^{(Vd)} × Ed . In other
words, if g = ((σv ), γ) ∈ Gn,d and ((fv ), e) ∈ Xn,d ,
(3)
g · ((fv ), e) = ((σ_v · f_{γ^{-1}v} ), γe).
Fix x = ((fv ), e) ∈ Xn,d and g = ((σv ), γ) ∈ Gn,d , and let x′ be a neighbour of
x. If x′ is of type 1, then we have x′ = ((fv ), e′ ), where e and e′ share a vertex w
in Td . Then γe and γe′ have the vertex γw in common, so that by the formula (3),
g · x′ is a neighbour of type 1 of g · x in Xn,d . Now if x′ is of type 2, then we may
write x′ = ((f′_v ), e) with f′_v = f_v if and only if v ≠ w, where w is one of the two
vertices of e. It follows that σ_v · f_{γ^{-1}v} = σ_v · f′_{γ^{-1}v} if and only if v ≠ γw, so that by
(3) g · x′ is a neighbour of type 2 of g · x. This shows that the action is by graph
automorphisms and preserves the types of edges.
Lemma 6.31. Let e ∈ Ed , and let K be the stabilizer of e in Aut(Td ). Then the
stabilizer of the vertex ((0), e) in Gn,d is the compact open subgroup S_{n−1}^{Vd} ⋊ K.
Proof. That g = ((σv ), γ) fixes ((0), e) exactly means by (3) that σv fixes 0 for all
v and that γ fixes e.
So the fact that the action is continuous and proper follows from the lemma,
and the transitivity on the set of vertices is an easy verification.
Consider now the free product Cd ∗ Cd of two cyclic groups of order d, acting on
its Bass-Serre tree Td with one orbit of edges and two orbits of vertices. Denote by
Cn the cyclic subgroup of Sn generated by the cycle (0, . . . , n − 1), and set
Γn,d := Cn ≀Vd (Cd ∗ Cd ) ≤ Gn,d .
Remark that Cd ∗ Cd has a split morphism onto Cd , whose kernel acts on Td
with two orbits of vertices and is free of rank d − 1. Therefore Γn,d splits as
Γn,d = (C_n^2 ≀ F_{d−1} ) ⋊ Cd .
Lemma 6.32. Γn,d ≤ Gn,d acts freely transitively on the vertices of Xn,d .
Proof. This is clear: the image of the vertex ((0), e) by an element ((σv ), γ) is
((σv · 0), γe), so both transitivity and freeness follow from the fact that the actions
of Cn on Σn and of Cd ∗ Cd on Ed have these properties.
We now explain how the groups G(F, F ′ ) act on the graphs Xn,d . In the sequel
F, F ′ denote two permutation groups on Ω such that F ≤ F ′ and F ′ preserves the
orbits of F , and we denote by n the index of F in F ′ .
Fix a bijection between Σn = {0, . . . , n − 1} and F ′ /F , such that 0 is sent to the
class F . The action of F ′ on the coset space F ′ /F induces a group homomorphism
α : F ′ → Sn ,
such that α(F ) lies inside Sn−1 . For γ ∈ G(F, F ′ ) and v ∈ Vd , write
ργ,v = α(σ(γ, γ −1 v)) ∈ Sn .
Note that ργ,v ∈ Sn−1 if and only if σ(γ, γ −1 v) ∈ F . We also write ργ = (ργ,v ).
Proposition 6.33. Let F ≤ F ′ ≤ Sym(Ω), and n the index of F in F ′ . The map
ϕ : G(F, F ′ ) → Gn,d , γ 7→ ϕ(γ) = (ργ , γ), is a well-defined group morphism that is
injective, continuous, and with a closed and cocompact image.
Proof. The map ϕ is well-defined because ργ,v ∈ Sn−1 for all but finitely many
v, so that we indeed have ργ ∈ Sn^{Vd ,Sn−1} . The fact that ϕ is a group morphism
follows from the cocycle identity (1) satisfied by local permutations. Indeed for
γ, γ ′ ∈ G(F, F ′ ) we have ϕ(γ)ϕ(γ ′ ) = (ψ, γγ ′ ) with
ψ_v = ρ_{γ,v} ρ_{γ′,γ^{-1}v}
= α(σ(γ, γ^{-1}v)) α(σ(γ′, γ′^{-1}γ^{-1}v))
= α(σ(γγ′, (γγ′)^{-1}v))
= ρ_{γγ′,v} ,
so ψ = ργγ ′ and ϕ(γ)ϕ(γ ′ ) = ϕ(γγ ′ ).
Injectivity of ϕ is clear since the composition with the projection to Aut(Td )
is injective. The preimage in G(F, F ′ ) of the open subgroup S_{n−1}^{Vd} ⋊ Aut(Td ) is
the subgroup U (F ), which is open in G(F, F ′ ) by definition of the topology, so it
follows that the map ϕ is continuous. Also the intersection between Im(ϕ) and the
open subgroup S_{n−1}^{Vd} ⋊ Aut(Td ) is ϕ(U (F )), and it is easy to check that the latter
is indeed a closed subgroup of Gn,d , so it follows that Im(ϕ) is closed in Gn,d . The
fact that Im(ϕ) is cocompact will follow from Proposition 6.30 and Proposition
6.34 below.
In the sequel for simplicity we will also write G(F, F ′ ) for the image of
ϕ : G(F, F ′ ) → Gn,d , γ 7→ ϕ(γ) = (ργ , γ) .
In particular when speaking about an action of G(F, F ′ ) on the graph Xn,d , we
will always refer to the action defined in Proposition 6.30, restricted to G(F, F ′ ).
This means that γ ∈ G(F, F ′ ) acts on (f, e) ∈ Xn,d by
γ · (f, e) = (f^γ , γe),
where
(4)
(f^γ)_v = ρ_{γ,v} · f_{γ^{-1}v} = α(σ(γ, γ^{-1}v)) · f_{γ^{-1}v} .
This action should not be confused with the standard action ((f_v ), e) 7→ ((f_{γ^{-1}v} ), γe)
coming from the inclusion of G(F, F ′ ) in Aut(Td ).
Proposition 6.34. Let F ≤ F ′ ≤ Sym(Ω), and n the index of F in F ′ .
(a) The group G(F, F ′ )∗ acts cocompactly on Xn,d . When F is transitive on Ω,
the group G(F, F ′ )∗ acts transitively on vertices of Xn,d .
(b) The stabilizer of a vertex ((0), e) ∈ Xn,d in G(F, F ′ ) is the stabilizer of e in
U (F ). In particular the action of G(F, F ′ ) on Xn,d is proper.
Therefore when F acts freely transitively on Ω, the group G(F, F ′ )∗ acts freely
transitively on the vertices of Xn,d .
Proof. We show that for every vertex x = ((fv ), e) of Xn,d , there is g ∈ G(F, F ′ )∗
such that g · x = ((0), e). Since U (F ) preserves the vertices of this form, and since
the number of orbits of U (F )∗ ≤ G(F, F ′ )∗ on Ed is finite and is equal to one when
F is transitive [BM00a], statement (a) will follow.
We argue by induction on the cardinality N of the support of (fv ). There is
nothing to show if N = 0. Assume N ≥ 1, and let v0 ∈ Vd with f_{v0} ≠ 0 and such
that v0 maximizes the distance from e among vertices v such that f_v ≠ 0. Let e0
be the edge emanating from v0 toward e (if v0 belongs to e then e0 = e), and let
a ∈ Ω be the color of e0 . We also denote by T 1 and T 2 the two half-trees defined
by e0 , where T 1 contains v0 . For every b ∈ Ω, b ≠ a, we denote by e0,b the edge
containing v0 and having color c(e0,b ) = b, and by T 1,b the half-tree defined by e0,b
not containing v0 .
By assumption the permutation group F ′ preserves the F -orbits in Ω, so we have
F ′ = F Fa′ . The subgroup α(F ′ ) ≤ Sn being transitive, it follows from the previous
decomposition that there exists σ ∈ Fa′ such that α(σ) · f_{v0} = 0. For every b ≠ a,
we choose σb ∈ F such that σb (b) = σ(b), and we consider the unique element
h ∈ Aut(Td ) whose local permutations are σ(h, v) = 1 if v ∈ T 2 ; σ(h, v0 ) = σ;
and σ(h, v) = σb for every v ∈ T 1,b and every b ≠ a. It is an easy verification to
check that h is a well-defined automorphism of Td , and h ∈ G(F, F ′ ) because all
but possibly one of its local permutations are in F .
Note that h fixes e by construction. Write h(x) = ((φv ), e). We claim that the
support of (φv ) has cardinality N − 1. Since h fixes v0 , by (4) we have φv0 =
α(σ(h, v0 )) · fv0 = α(σ) · fv0 = 0. Moreover we also have φv = fv for every v in T 2
because h acts trivially on T 2 . Finally by the choice of v0 we had fv = 0 for every
v ≠ v0 in T 1 , and since σ(h, v) ∈ F for all these v and α(F ) fixes 0, we still have
φv = 0 for every v in T 1 , v ≠ v0 . This proves the claim, and the conclusion follows
by induction.
Statement (b) follows from Lemma 6.31, and the last statement follows from (a)
and (b) and the fact that U (F ) acts freely on Ed when F acts freely on Ω.
Propositions 6.33-6.34 and Lemma 6.32 imply Theorem 1.4 from the introduction. Note that when F acts freely transitively on Ω, we have an explicit description
of a generating subset of the group G(F, F ′ )∗ whose associated Cayley graph is Xn,d .
For, fix an edge e0 ∈ Ed , whose color is a ∈ Ω and whose vertices are v0 , v1 , and
denote x0 = ((0), e0 ) ∈ Xn,d . For i = 0, 1, let Si be the set of γ ∈ G(F, F ′ ) fixing vi
and such that σ(γ, v) ∈ F for every v 6= vi , and σ(γ, vi ) is non-trivial and belongs
to F ∪ Fa′ . Then S = S0 ∪ S1 generates G(F, F ′ )∗ , and Cay(G(F, F ′ )∗ , S) → Xn,d ,
γ 7→ γx0 , is a graph isomorphism. Moreover neighbours of type 1 (resp. type 2)
of a vertex of Xn,d are labeled by elements s ∈ Si such that σ(γ, vi ) ∈ F (resp.
σ(γ, vi ) ∈ Fa′ ).
We end the article by observing that there are possible variations in the definition
of the graph Xn,d . If Kn is the complete graph on n vertices, let Kn ≀ Td be the
wreath product of the graphs Kn and Td (sometimes also called the lamplighter
graph over Td ): the vertex set is Σn^(Vd) × Vd , and there is an edge between (f, v)
and (f ′ , v ′ ) if and only if either f = f ′ and v, v ′ are adjacent in Td , or v = v ′ and
f (w) = f ′ (w) if and only if w ≠ v. Again if n is the index of F in F ′ , the group
G(F, F ′ ) acts on Kn ≀ Td by γ · (f, v) = (f γ , γv), where f γ is given by (4). The
previous arguments for the graph Xn,d carry over to this graph, so that we have:
Proposition 6.35. Let F ≤ F ′ ≤ Sym(Ω), and n the index of F in F ′ . Then
G(F, F ′ ) acts properly and cocompactly on the graph Kn ≀ Td .
The reason why we considered the graph Xn,d instead of Kn ≀ Td is to obtain,
under the assumption that F acts freely on Ω, a free action of G(F, F ′ )∗ on the
set of vertices. In the case of Kn ≀ Td , the stabilizer of a vertex in G(F, F ′ )∗ is
finite, but non-trivial. We note that it might be interesting to investigate whether
the generalized wreath products of graphs from [Ers06] could provide other kind
of interesting groups of automorphisms.
Yet another possibility is to take the same vertex set as Xn,d , but declaring that
there is an edge between (f, e) and (f ′ , e′ ) if e ≠ e′ share a vertex w and fv = fv′
for every v 6= w. This graph Zn,d has larger degree, namely 2(d − 1)n. Again all
the results proved above for Xn,d remain true. In the case d = 2, one may check
that Zn,2 is the Diestel-Leader graph DL(n, n), so that Zn,d may be thought of as
“higher dimensional” versions of these graphs.
References
[BCGM16] U. Bader, P.-E. Caprace, T. Gelander, and S. Mozes, Lattices in amenable groups, arXiv:1612.06220 (2016).
[BF14] U. Bader and A. Furman, Boundaries, rigidity of representations, and Lyapunov exponents, Proceedings of ICM (2014).
[BFS15] U. Bader, A. Furman, and R. Sauer, On the structure and arithmeticity of lattice envelopes, C. R. Math. Acad. Sci. Paris 353 (2015), no. 5, 409–413.
[BKKO14] E. Breuillard, M. Kalantar, M. Kennedy, and N. Ozawa, C*-simplicity and the unique trace property for discrete groups, arXiv:1410.2518v2 (2014).
[BM97] M. Burger and Sh. Mozes, Finitely presented simple groups and products of trees, C. R. Acad. Sci. Paris Sér. I Math. 324 (1997), no. 7, 747–752.
[BM00a] M. Burger and Sh. Mozes, Groups acting on trees: from local to global structure, Inst. Hautes Études Sci. Publ. Math. (2000), no. 92, 113–150 (2001).
[BM00b] M. Burger and Sh. Mozes, Lattices in product of trees, Inst. Hautes Études Sci. Publ. Math. (2000), no. 92, 151–194.
[BM02] M. Burger and N. Monod, Continuous bounded cohomology and applications to rigidity theory, Geom. Funct. Anal. 12 (2002), no. 2, 219–280.
[BNW08] L. Bartholdi, M. Neuhauser, and W. Woess, Horocyclic products of trees, J. Eur. Math. Soc. (JEMS) 10 (2008), no. 3, 771–816.
[BQ14] Y. Benoist and J.-F. Quint, Lattices in S-adic Lie groups, J. Lie Theory 24 (2014), no. 1, 179–197.
[BS06] U. Bader and Y. Shalom, Factor and normal subgroup theorems for lattices in products of groups, Invent. Math. 163 (2006), no. 2, 415–454.
[Cap16] P.-E. Caprace, Non-discrete simple locally compact groups, to appear in the Proceedings of the 7th European Congress of Mathematics (2016).
[CFK12] Y. Cornulier, D. Fisher, and N. Kashyap, Cross-wired lamplighter groups, New York J. Math. 18 (2012), 667–677.
[CM09] P.-E. Caprace and N. Monod, Isometry groups of non-positively curved spaces: discrete subgroups, J. Topol. 2 (2009), no. 4, 701–746.
[CM11] P.-E. Caprace and N. Monod, Decomposing locally compact groups into simple pieces, Math. Proc. Cambridge Philos. Soc. 150 (2011), no. 1, 97–128.
[CM12] P.-E. Caprace and N. Monod, A lattice in more than two Kac-Moody groups is arithmetic, Israel J. Math. 190 (2012), 413–444.
[CM14] P.-E. Caprace and N. Monod, Relative amenability, Groups Geom. Dyn. 8 (2014), no. 3, 747–774.
[Cor17] Y. Cornulier, Locally compact wreath products, arXiv:1703.08880 (2017).
[CR09] P.-E. Caprace and B. Rémy, Simplicity and superrigidity of twin building lattices, Invent. Math. 176 (2009), no. 1, 169–221.
[CRW17] P.-E. Caprace, C. Reid, and Ph. Wesolek, Approximating simple locally compact groups by their dense locally compact subgroups, arXiv:1706.07317v1 (2017).
[DGO11] F. Dahmani, V. Guirardel, and D. Osin, Hyperbolically embedded subgroups and rotating families in groups acting on hyperbolic spaces, arXiv:1111.7048 (2011).
[DM16] B. Duchesne and N. Monod, Group actions on dendrites and curves, arXiv:1609.00303v3 (2016).
[Dym15] T. Dymarz, Envelopes of certain solvable groups, Comment. Math. Helv. 90 (2015), no. 1, 195–224.
[Dyu00] A. Dyubina, Instability of the virtual solvability and the property of being virtually torsion-free for quasi-isometric groups, Internat. Math. Res. Notices (2000), no. 21, 1097–1101.
[EFW07] A. Eskin, D. Fisher, and K. Whyte, Quasi-isometries and rigidity of solvable groups, Pure Appl. Math. Q. 3 (2007), no. 4, Special Issue: In honor of Grigory Margulis, Part 1, 927–947.
[EFW12] A. Eskin, D. Fisher, and K. Whyte, Coarse differentiation of quasi-isometries I: Spaces not quasi-isometric to Cayley graphs, Ann. of Math. (2) 176 (2012), no. 1, 221–260.
[EFW13] A. Eskin, D. Fisher, and K. Whyte, Coarse differentiation of quasi-isometries II: Rigidity for Sol and lamplighter groups, Ann. of Math. (2) 177 (2013), no. 3, 869–910.
[Ele17] G. Elek, On uniformly recurrent subgroups of finitely generated groups, arXiv:1702.01631 (2017).
[Ers06] A. Erschler, Generalized wreath products, Int. Math. Res. Not. (2006), Art. ID 57835, 14.
[Eym72] P. Eymard, Moyennes invariantes et représentations unitaires, Lecture Notes in Mathematics, Vol. 300, Springer-Verlag, Berlin-New York, 1972.
[FST15] J. Frisch, T. Schlank, and O. Tamuz, Normal amenable subgroups of the automorphism group of the full shift, arXiv:1512.00587 (2015).
[Fur67a] H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation, Math. Systems Theory 1 (1967), 1–49.
[Fur67b] H. Furstenberg, Poisson boundaries and envelopes of discrete groups, Bull. Amer. Math. Soc. 73 (1967), 350–356.
[Fur73] H. Furstenberg, Boundary theory and stochastic processes on homogeneous spaces, Harmonic analysis on homogeneous spaces (Proc. Sympos. Pure Math., Vol. XXVI, Williams Coll., Williamstown, Mass., 1972), Amer. Math. Soc., Providence, R.I., 1973, pp. 193–229.
[Fur81] H. Furstenberg, Rigidity and cocycles for ergodic actions of semisimple Lie groups (after G. A. Margulis and R. Zimmer), Bourbaki Seminar, Vol. 1979/80, Lecture Notes in Math., vol. 842, Springer, Berlin-New York, 1981, pp. 273–292.
[Fur01] A. Furman, Mostow-Margulis rigidity with locally compact targets, Geom. Funct. Anal. 11 (2001), no. 1, 30–59.
[Fur03] A. Furman, On minimal strongly proximal actions of locally compact groups, Israel J. Math. 136 (2003), 173–187.
[Ghy01] É. Ghys, Groups acting on the circle, Enseign. Math. (2) 47 (2001), no. 3-4, 329–407.
[Gla74] S. Glasner, Topological dynamics and group theory, Trans. Amer. Math. Soc. 187 (1974), 327–334.
[Gla75] S. Glasner, Compressibility properties in topological dynamics, Amer. J. Math. 97 (1975), 148–171.
[Gla76] S. Glasner, Proximal flows, Lecture Notes in Mathematics, Vol. 517, Springer-Verlag, Berlin-New York, 1976.
[Gru57] K. W. Gruenberg, Residual properties of infinite soluble groups, Proc. London Math. Soc. (3) 7 (1957), 29–62.
[Gui73] Y. Guivarc'h, Croissance polynomiale et périodes des fonctions harmoniques, Bull. Soc. Math. France 101 (1973), 333–379.
[GW15] E. Glasner and B. Weiss, Uniformly recurrent subgroups, Recent trends in ergodic theory and dynamical systems, Contemp. Math., vol. 631, Amer. Math. Soc., Providence, RI, 2015, pp. 63–75.
[Har07] P. de la Harpe, On simplicity of reduced C*-algebras of groups, Bull. Lond. Math. Soc. 39 (2007), no. 1, 1–26.
[HR79] E. Hewitt and K. Ross, Abstract harmonic analysis. Vol. I, second ed., Grundlehren der Mathematischen Wissenschaften, vol. 115, Springer-Verlag, Berlin-New York, 1979, Structure of topological groups, integration theory, group representations.
[Jen73] J. W. Jenkins, Growth of connected locally compact groups, J. Functional Analysis 12 (1973), 113–127.
[JR00] P. Jolissaint and G. Robertson, Simple purely infinite C*-algebras and n-filling actions, J. Funct. Anal. 175 (2000), no. 1, 197–213.
[Kaw17] T. Kawabe, Uniformly recurrent subgroups and the ideal structure of reduced crossed products, arXiv:1701.03413 (2017).
[Ken15] M. Kennedy, Characterizations of C*-simplicity, arXiv:1509.01870v3 (2015).
[KK17] M. Kalantar and M. Kennedy, Boundaries of reduced C*-algebras of discrete groups, J. Reine Angew. Math. 727 (2017), 247–267.
[LB16] A. Le Boudec, Groups acting on trees with almost prescribed local action, Comment. Math. Helv. 91 (2016), no. 2, 253–293.
[LB17] A. Le Boudec, C*-simplicity and the amenable radical, Invent. Math. 209 (2017), no. 1, 159–174.
[LBMB16] A. Le Boudec and N. Matte Bon, Subgroup dynamics and C*-simplicity of groups of homeomorphisms, Ann. Sci. École Norm. Sup. (to appear) (2016).
[LBW16] A. Le Boudec and Ph. Wesolek, Commensurated subgroups in tree almost automorphism groups, Groups Geom. Dyn. (to appear) (2016).
[Los87] V. Losert, On the structure of groups with polynomial growth, Math. Z. 195 (1987), no. 1, 109–117.
[LS96] M. Laca and J. Spielberg, Purely infinite C*-algebras from boundary actions of discrete groups, J. Reine Angew. Math. 480 (1996), 125–139.
[Mal40] A. Malcev, On isomorphic matrix representations of infinite groups, Rec. Math. [Mat. Sbornik] N.S. 8 (50) (1940), 405–422.
[Mar91] G. A. Margulis, Discrete subgroups of semisimple Lie groups, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), vol. 17, Springer-Verlag, Berlin, 1991.
[Mar00] G. Margulis, Free subgroups of the homeomorphism group of the circle, C. R. Acad. Sci. Paris Sér. I Math. 331 (2000), no. 9, 669–674.
[MB18] N. Matte Bon, Rigidity of graphs of germs and homomorphisms between full groups, arXiv:1801.10133 (2018).
[MBT17] N. Matte Bon and T. Tsankov, Realizing uniformly recurrent subgroups, arXiv:1702.07101 (2017).
[MP03] N. Monod and S. Popa, On co-amenability for groups and von Neumann algebras, C. R. Math. Acad. Sci. Soc. R. Can. 25 (2003), no. 3, 82–87.
[MS04] N. Monod and Y. Shalom, Cocycle superrigidity and bounded cohomology for negatively curved spaces, J. Differential Geom. 67 (2004), no. 3, 395–455.
[MZ55] D. Montgomery and L. Zippin, Topological transformation groups, Interscience Publishers, New York-London, 1955.
[Nek13] V. Nekrashevych, Finitely presented groups associated with expanding maps, arXiv:1312.5654v1 (2013).
[PV91] I. Pays and A. Valette, Sous-groupes libres dans les groupes d'automorphismes d'arbres, Enseign. Math. (2) 37 (1991), no. 1-2, 151–174.
[Rad17] N. Radu, New simple lattices in products of trees and their projections, arXiv:1712.01091 (2017), with an appendix by P.-E. Caprace.
[Rag72] M. S. Raghunathan, Discrete subgroups of Lie groups, Springer-Verlag, New York-Heidelberg, 1972, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 68.
[Rat04] D. Rattaggi, Computations in groups acting on a product of trees: Normal subgroup structures and quaternion lattices, ProQuest LLC, Ann Arbor, MI, 2004, Thesis (Dr.sc.math.)–Eidgenoessische Technische Hochschule Zuerich (Switzerland).
[Rém99] B. Rémy, Construction de réseaux en théorie de Kac-Moody, C. R. Acad. Sci. Paris Sér. I Math. 329 (1999), no. 6, 475–478.
[Sha00] Y. Shalom, Rigidity of commensurators and irreducible lattices, Invent. Math. 141 (2000), no. 1, 1–54.
[Tit70] J. Tits, Sur le groupe des automorphismes d'un arbre, Essays on topology and related topics (Mémoires dédiés à Georges de Rham), Springer, New York, 1970, pp. 188–211.
[Vor12] Y. Vorobets, Notes on the Schreier graphs of the Grigorchuk group, Dynamical systems and group actions, Contemp. Math., vol. 567, Amer. Math. Soc., Providence, RI, 2012, pp. 221–248.
[Wis96] D. Wise, Non-positively curved squared complexes: Aperiodic tilings and non-residually finite groups, ProQuest LLC, Ann Arbor, MI, 1996, Thesis (Ph.D.)–Princeton University.
UCLouvain, IRMP, Chemin du Cyclotron 2, 1348 Louvain-la-Neuve, Belgium
CNRS, Unité de Mathématiques Pures et Appliquées, ENS-Lyon, France
E-mail address: [email protected]
arXiv:1712.08146v1 [cs.SY] 21 Dec 2017
Multisensor Poisson Multi-Bernoulli Filtering with Uncertain Sensor States
Markus Fröhle, Christopher Lindberg, Karl Granström, Henk Wymeersch
Abstract—In a typical multitarget tracking (MTT) scenario,
the sensor state is either assumed known, or tracking is performed in the sensor's (relative) coordinate frame. This assumption is violated when the MTT sensor, such as a vehicular radar, is mounted on a vehicle and the target state should be represented in a global (absolute) coordinate frame.
Then it is important to consider the uncertain sensor location for
MTT. Furthermore, in a multisensor scenario, where multiple
sensors observe a common set of targets, state information from
one sensor can be utilized to improve the state of another sensor.
In this paper, we present a Poisson multi-Bernoulli MTT filter,
which models the uncertain sensor state. The multisensor case
is addressed in an asynchronous way, where measurements are
incorporated sequentially based on the arrival of new sensor
measurements. In doing so, targets observed from a well localized
sensor reduce the state uncertainty at another poorly localized
sensor, provided that a common non-empty subset of features
is observed. The proposed MTT filter has low computational
demands due to its parametric implementation.
Numerical results demonstrate the performance benefits of modeling the uncertain sensor state in feature tracking, as well as the reduction of sensor state uncertainty in a multisensor scenario compared to a per-sensor Kalman filter. Scalability results show that the computation time increases linearly with the number of sensors or features present.
I. I NTRODUCTION
Intelligent transportation systems in general, and
autonomous driving (AD) in particular, require accurate
position information [1]. Measurements provided by various on-board sensors make it possible to infer the vehicle state, e.g., position and velocity, as well as information about the surrounding
environment. For instance, a global navigation satellite system
(GNSS) receiver provides absolute position, whereas a radar
sensor provides relative position with respect to (w.r.t.)
the sensor origin. Furthermore, vehicles have access to a
pre-recorded local dynamic map (LDM) containing static
features such as, e.g., landmarks [2]. Dynamic features such
as pedestrians, cyclists, etc. are not part of the pre-recorded
map. For an AD system to be fully aware of the surrounding
environment, dynamic features need to be estimated and tracked over time using the vehicle's on-board sensors, thereby enriching the vehicle's LDM. In order to incorporate
mobile features into the LDM, which contains map features
described in a global coordinate frame, location uncertainty
of on-board sensors used to track dynamic features needs to
M. Fröhle, K. Granström, and H. Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden. E-mail: {frohle, karl.granstrom,
henkw}@chalmers.se. C. Lindberg is with Zenuity AB, Gothenburg,
Sweden. E-mail: [email protected]
be considered, i.e., the vehicle’s state uncertainty such as its
location and pose. In an ITS, vehicles communicate through
the wireless channel with other vehicles (vehicle-to-vehicle
(V2V) communication) or with road infrastructure, such as
a road side unit (RSU), through vehicle-to-infrastructure
(V2I) communication, using IEEE 802.11p or 4G/5G cellular
communication. Through information exchange, vehicles can make local LDM information available to their neighbors, thereby enriching their LDMs and their situational awareness.
Not only can LDM information be shared, it can also be fused
to improve every LDM [3], [4]. In the special case that they observe an overlapping set of dynamic features, information from one vehicle can be utilized to increase the location accuracy of other vehicles, and vice versa [5], [6]. Note that in this context no V2V measurements are performed, in contrast to a traditional cooperative localization approach [7], [8]. The problem of
vehicular localization using locally observed features with
unknown observation to feature correspondence, aggregated
at an RSU, can be interpreted as an MTT problem.
In MTT, a varying number of mobile features (targets) are
tracked, using sensors such as for example radars, LIDARS,
or cameras [9]. Thereby, it is typically assumed that the
state of the observing sensor is known. Although not true in
general, this assumption can be motivated by the fact that
sensor state uncertainty is negligible in comparison to the
sensors' measurement accuracy. If the sensor state uncertainty is significant, it needs to be modeled in the MTT filter so that it does not degrade feature tracking performance.
In this paper, we consider the case of MTT with uncertain
sensor state, where there are potentially multiple sensors with
varying sensor state uncertainty. To enable accurate feature
tracking we model the sensor state uncertainty in the MTT
filter. The main contributions are:
• an asynchronous parametric multitarget-multisensor tracking filter with uncertain sensor state information,
• fusion of multitarget-multisensor tracking information with local sensor tracking information,
• and numerical simulation results demonstrating the performance of the filter in a multisensor vehicular scenario.
In the application example, we demonstrate how the proposed
filter can be used to transfer location information from a well
localized vehicle to a poorly localized vehicle through MTT.
Hence, positioning accuracy of the poorly localized vehicle is
greatly improved compared to using a local Kalman filter with
GNSS measurements alone.
Fig. 1. Urban ITS scenario with two vehicles cooperating through the RSU and six mobile features (legend: cooperative vehicle, mobile feature, RSU, communication link).
A. Motivation
In this paper, we consider an urban intelligent transportation
system (ITS) scenario, consisting of cooperating vehicles (illustrated in Fig. 1). Each vehicle is equipped with an on-board sensor allowing it to determine its absolute position, e.g.,
a GNSS receiver, and an on-board sensor to retrieve relative
positions of mobile features present in the environment, e.g.,
a radar. Absolute position measurements are denoted GNSS
measurements and measurements taken w.r.t. features are
denoted vehicle-to-feature (V2F) measurements. Due to the
sensor used to obtain V2F measurements, it is in general
not known which feature gave rise to which measurement. A
GNSS and a set of V2F measurements from every vehicle are
transmitted in a non-time synchronized manner to the RSU,
where a centralized filter is run to track the feature as well
as the vehicle states. This information can be utilized by the RSU and sent back to the vehicles to increase their situational awareness.
B. Related Work
1) MTT with known sensor state: Many MTT filters have
been proposed to track mobile features using radar-like sensors
when the sensor state is known [9]. The multi-hypothesis
tracking (MHT) filter builds a growing hypothesis tree with
feature-to-measurements data association (DA), and needs to
be pruned to limit computation complexity [10]. The joint
probability data association (JPDA) filter finds the most likely
DA, where feature state information is reduced after each
update step to a single Gaussian per feature. In the last
years, MTT filters based on random finite set (RFS) and
finite-set statistics (FISST) (which avoid the inherent DA),
originally developed by [11], have gained much attention.
The probability hypothesis density (PHD) filter propagates
the first moment of the RFS density over time [12]–[14].
The Poisson multi-Bernoulli (PMB) filter approximates the
global joint DA by the product of (local) marginal DA similar
to the JPDA filter [15]. In [16], a derivation of this PMB
filter based on standard single target measurement models,
without using probability generating functional (p.g.fl.) and
functional derivatives, is presented. Furthermore, a connection
to the δ-generalized labelled multi-Bernoulli (δ-GLMB) filter
[17], [18] is shown, where the δ-GLMB density is a special
case of multi-Bernoulli (MB) density for labeled targets and
can therefore be seen as a special case of the PMB filter.
In [19], a Gaussian mixture (GM) multitarget-multisensor
Bernoulli tracker, with known sensor locations, is developed
and compared to a particle filter (PF) implementation for
multistatic sonobuoy fields. Target state update from multiple
sensors was achieved through sequential sensor updates. In
[20], [21], a factor graph (FG) based approach, not using
FISST, was proposed for a variant of the JPDA filter. A multiscan scenario was considered, and the filter was realized by
running loopy belief propagation on the FG containing cycles.
In [22], a PF based implementation of the PMB filter [15]
was presented. This implementation has been used in [21]
as a performance comparison of the FG based MTT. There
it was found that the PF based PMB filter implementation
scales exponentially with an increasing number of features. In
[23], vehicles perform local feature tracking and send track
information over the wireless channel to a central fusion
center. Then, track-to-track fusion [24], [25] is performed,
taking care of out-of-sequence measurements, which can arise when a shared communication medium is utilized. There, the DA problem arises in deciding which local tracks to fuse. The Mahalanobis distance is employed as the DA metric.
2) MTT with uncertain sensor state: In contrast to MTT,
simultaneous localization and mapping (SLAM) based methods determine the sensor state while mapping features of the
environment [2], [26]. Most of the proposed SLAM methods
assume static features such as, e.g., walls and street signs.
SLAM methods excel through maintaining the correlation
between features. A reduction of complexity was achieved in
FastSLAM through Rao-Blackwellization (RB), where the feature state, conditioned on the sensor state, is tracked through
a Kalman filter and the sensor state through a PF. In [27], an RFS based approach to the SLAM problem was proposed. There, the target state is conditioned on the sensor location and then tracked through a PHD filter following an RB approach. In [28], an MTT filter with uncertain (single) sensor location is derived for the SLAM problem using FISST and point process theory. Simulation results are shown with an RB PF implementation. In
[29], the problem of sensor uncertainty for Bernoulli filtering
[30] of at most one target using a single sensor is addressed.
Since the scenario is restricted to a single sensor, a suboptimal
approach is presented, where the sensor state is updated only
by measurements independent of the target state, i.e., target
tracking information is not used to update the sensor location.
Similar to [21], a FG based approach was considered in [31]
for an urban ITS scenario, where the number of features is
assumed a priori known, and in [32] a variant of SLAM
was considered for indoor environments using time-of-arrival
radio signal measurements. There, features are the (static)
source locations of line-of-sight and non line-of-sight signal
propagation paths of the transmitted radio wave.
C. Notation and Paper Organization
Scalars are described by non-bold letters r, vectors by
lower-case bold letters x; matrices and sets by upper-case bold
letters X. The cardinality of set X is denoted |X|. The set operator ⊎ denotes the disjoint set union, i.e., F u ⊎ F d = F means F u ∪ F d = F and F u ∩ F d = ∅. The vehicle state is denoted by the letter x, the feature state by the letter f, and measurements by the letter z. The identity matrix of size n × n is denoted I n. The ℓ2-norm of vector x is ‖x‖2.
The remainder of this paper is organized as follows. Section II gives some background knowledge on RFS, and Section III introduces the problem formulation and system models.
Section IV details the proposed MTT filter with uncertain
sensor state, numerical results are given in Section V, and
conclusions are drawn in Section VI.
II. BACKGROUND ON RFS
In this section, we describe some useful properties of an
RFS. If not stated otherwise, the source of all these is [15].
A. Random Finite Set Formulation
According to [15], RFS based methods have been developed
in [11] to conduct statistical inference in problems in which
the variables of interest and/or observations form finite sets.
In tracking, they address two major challenges of interest:
(i) the number of targets present in the scene is unknown,
(ii) measurements are invariant to ordering (measurement-to-target correspondence is unknown). An RFS X is a finite-set
valued random variable, which can be described by a discrete
probability distribution p(n), n ≥ 0 and a family of joint
probability densities fn (xπ(1) , . . . , xπ(n) ) yielding
f(x_1, \ldots, x_n) = p(n) \sum_{\pi} f_n(x_{\pi(1)}, \ldots, x_{\pi(n)}),   (1)
where the sum spans over the n! permutation functions π(·),
such that its RFS density f (X) is permutation invariant. The
set integral of a real-valued function g(X) of a finite-set
variable X is defined as [11, p. 361]
\int g(X)\, \delta X \triangleq g(\emptyset) + \sum_{n=1}^{\infty} \frac{1}{n!} \int g(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.   (2)
A Bernoulli process X with probability of existence r and
existence-conditioned probability density function (PDF) f (x)
has RFS density
f(X) = \begin{cases} 1 - r, & X = \emptyset, \\ r \cdot f(x), & X = \{x\}, \\ 0, & \text{otherwise.} \end{cases}   (3)
The RFS density of a MB process for the RFS X = {x_1, \ldots, x_n} is [11, p. 368]
f(X) = \sum_{1 \le i_1 \ne \ldots \ne i_n \le N} \Big[ \prod_{i=1}^{N} (1 - r_i) \Big] \prod_{k=1}^{n} \frac{r_{i_k}\, f_{i_k}(x_k)}{1 - r_{i_k}}.   (4)
A Poisson point process (PPP) with intensity function λ_c(y) has RFS density [15]
f(Y) = \exp(-\langle \lambda_c, 1 \rangle) \prod_{y \in Y} \lambda_c(y)   (5)
with inner product \langle \lambda_c, h \rangle \triangleq \int \lambda_c(y) h(y)\, dy.
Remark 1: If X and Y are independent RFSs such that Z = X ⊎ Y, then
f_Z(Z) = \sum_{X \uplus Y = Z} f_X(X) f_Y(Y).   (6)
Note that for X an MB RFS and Y a PPP, the density (6) is called a PMB density.
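To make these set densities concrete, the following minimal Python/NumPy sketch (illustrative only, not code from the paper; all names are assumptions) samples a Bernoulli RFS as in (3) and a PPP clutter set with constant intensity as in (5):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bernoulli_rfs(r, mean, cov):
    """Empty set with probability 1 - r, otherwise one sample from the Gaussian PDF f(x)."""
    if rng.random() < r:
        return [rng.multivariate_normal(mean, cov)]
    return []

def sample_ppp(lam, r_max):
    """PPP with constant intensity over [-r_max, r_max]^2:
    Poisson-distributed cardinality, points placed uniformly."""
    n = rng.poisson(lam)
    return [rng.uniform(-r_max, r_max, size=2) for _ in range(n)]

feature_set = sample_bernoulli_rfs(r=0.7, mean=np.zeros(2), cov=0.25 * np.eye(2))
clutter_set = sample_ppp(lam=20, r_max=1000.0)
```

Sampling the MB density (4) then amounts to drawing each Bernoulli component independently and taking the disjoint union of the resulting sets.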
B. State estimation from RFS density
A common way to estimate the set state of a Bernoulli process with RFS density f(X) is to compare the probability of existence r against an existence threshold r_th. For r > r_th, the target is said to exist and has PDF f(x) (c.f. (3)). Its state can then be estimated by the mean of f(x), i.e., \hat{x} = \int x f(x)\, dx.
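As a small illustration (assumed representation, not from the paper), extracting point estimates from a list of Bernoulli components stored as (r, mean, cov) tuples could read:

```python
def extract_estimates(components, r_th=0.5):
    """Return the Gaussian mean of every Bernoulli component whose
    probability of existence exceeds the threshold r_th."""
    return [mean for (r, mean, cov) in components if r > r_th]
```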
III. P ROBLEM F ORMULATION AND S YSTEM M ODELS
Here, we first present the problem formulation, and the
vehicle and feature dynamics. This is followed by the GNSS
and V2F measurement models, and the communication model.
A. Problem Formulation
The goal of the filter, which runs on the RSU, is to track
the features and the states of all vehicles in every discrete
time step t through incorporation of all sensor measurements
(GNSS and V2F measurements) up until time step t. We are
therefore interested in the joint posterior distribution of the
feature and vehicle states at every time step t.
B. Vehicle and Feature Dynamics
Vehicle state motion follows independent Markovian processes, where the time-varying vehicle state xs,t ∈ RNx of
each vehicle s ∈ S at time step t ∈ N0 is statistically modeled
as p(xs,t |xs,t−1 ), with linear state-space model
xs,t = As,t xs,t−1 + ws,t ,
(7)
where As,t denotes the state-transition matrix and ws,t ∼
N (0, W s,t ) with error covariance matrix W s,t .
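For illustration, a minimal NumPy sketch of the linear-Gaussian prediction implied by (7) is given below; it uses the constant-velocity parameters later specified in Section V-A (Ts = 0.5 s, r = 0.05 m²) and is an assumed implementation, not the authors' code:

```python
import numpy as np

Ts, r = 0.5, 0.05  # sampling time and process-noise intensity from Sec. V-A
A = np.kron(np.array([[1.0, Ts], [0.0, 1.0]]), np.eye(2))               # cf. (58)
W = r * np.kron(np.array([[Ts**3 / 3, Ts**2 / 2],
                          [Ts**2 / 2, Ts]]), np.eye(2))                 # cf. (59)

def predict_vehicle(mean, cov):
    """One prediction step for x_t = A x_{t-1} + w_t with w_t ~ N(0, W)."""
    return A @ mean, A @ cov @ A.T + W

mean, cov = predict_vehicle(np.zeros(4), np.eye(4))
```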
A single time-varying feature k ∈ K with state f k,t−1 ∈ RNf
survives to the next time step t following an independent identically distributed (IID) Markovian process with survival probability pS (f k,t ). The feature state motion follows IID Markovian processes and is statistically modeled as p(f k,t |f k,t−1 )
with linear state-space model
f k,t = B t f k,t−1 + v k,t ,
(8)
where B t denotes the state-transition model and process noise
v k,t ∼ N (0, V t ) with error covariance matrix V t . The state-transition matrices, as well as the error covariance matrices
are assumed equal among the features. Note that vehicle and
feature state motion is independent of each other.1,2
In the following, we will drop the subscript indexing on
states and measurements w.r.t. vehicle/feature/time whenever
the context allows.
C. Measurement models
At time step t, vehicle s ∈ S obtains two different kinds of measurements: (i) measurements of the vehicle state x w.r.t. the reference frame, i.e., GNSS-like measurements; and (ii) measurements w.r.t. features, i.e., from a radar-like (on-board) V2F sensor. Without loss of generality, we assume that
the on-board sensors’ state is equal to the vehicle state. Thus
an uncertain vehicle location implies an uncertain location for
the on-board V2F sensor.
1) GNSS measurement: The GNSS measurement z G ∈
RMG of vehicle s ∈ S at time t is statistically modeled through
the likelihood function p(z G |x) with linear observation model
z G = H G x + r,
(9)
where H G is the linear observation matrix and r ∼ N (0, R)
with error covariance matrix R.
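A corresponding sketch of the linear-Gaussian update for a GNSS measurement (the explicit Kalman form appears later in (47)–(51)) is given below; this is illustrative code assuming the simulation parameters of Section V-A, not the authors' implementation:

```python
import numpy as np

def gnss_update(mean, cov, z_G, H_G, R):
    """Kalman update of the vehicle state for z_G = H_G x + r, r ~ N(0, R)."""
    S = H_G @ cov @ H_G.T + R                # innovation covariance
    K = cov @ H_G.T @ np.linalg.inv(S)       # Kalman gain
    return mean + K @ (z_G - H_G @ mean), cov - K @ H_G @ cov

H_G = np.kron(np.array([[1.0, 0.0]]), np.eye(2))   # position-only observation, cf. (60)
R = 0.1 * np.eye(2)                                # sigma_G^2 = 0.1 m^2 (well localized vehicle)
```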
2) V2F measurement: Let Z F be a set of measurements
from a tracking sensor that is susceptible to measurement
noise, missed detections, and false detections. Examples of
such sensors include camera, radar and LIDAR. Consequently,
Z F = Z FA ] Z D ,
z = H 1 x + H 2 f + q,
(11)
where H 1 and H 2 denote observation matrices, and q ∼
N (0, Q) with error covariance matrix Q. Note that the
measurement-to-feature state correspondence denoted DA, is
in general not known and needs to be inferred from the
measurements. Let the measurement likelihood for a single
feature RFS F , i.e. |F | ≤ 1, be
1
if F = ∅,
ZF = ∅
0
if F = ∅,
Z F = {z}
1 − pD (x, f )
if F = {f }, Z F = ∅
η(Z F |x, F ) =
F
pD (x, f )`(z|x, f ) if F = {f }, Z =F{z}
if
|F | > 1,
or |Z | > 1,
(12)
1 For
where 0 < α ≤ 1 is a constant such that it is a valid PDF.
Depending on the specific sensor at hand, the FoV may be
different and (13) needs to be adapted.
D. Communication model
We assume that every vehicle is able to communicate
all obtained measurements (V2F and GNSS) with the RSU
instantaneously and without errors. This implies that at any
time t the number of vehicles communicating with the RSU
can vary. The incorporation of a realistic V2I channel model
and its performance impact is a point for future work.
IV. P OISSON M ULTI -B ERNOULLI FILTERING WITH
UNCERTAIN SENSOR STATE
(10)
where Z FA denotes the set of false alarm measurements due
to clutter modeled by a PPP with intensity λc (z), and Z D
denotes the set of detected features. Let a V2F measurement
z ∈ Z D with state dimension RMV2F be obtained through the
V2F sensor at vehicle s ∈ S w.r.t. feature k at time t. It is
modeled through the likelihood function `(z|x, f ) with linear
observation model
0
where `(z|x, f ) follows the measurement model (11). Note
that due to (11), the probability of detection pD (x, f ) in (12)
depends on the vehicle state x as well as on the feature state f .
For instance, a limited sensor field-of-view (FoV) affects the
probability of feature detection based on the distance between
vehicle and feature.
Remark 2: For the case the radar-like V2F sensor is able
to detect features within a radius rmax , the probability of
detection is defined
(
α, if kH 1 x + H 2 f k2 ≤ rmax
(13)
pD (x, f ) =
0, otherwise,
the sake of brevity, we present linear system and measurement models.
In case the true system dynamics and/or measurement model are (mild) nonlinear, linearization steps can be performed similar to the steps taken in an
extended Kalman filter (EKF) or the unscented Kalman filter (UKF) [33],
[34]. In doing so, the proposed filter remains valid unaltered.
2 Note that the proposed filter only predicts over small time-horizons in
the order of a few tens of milliseconds. Then the assumption on vehicle and
feature states evolving independently is reasonable, because there is very little
interaction among them within the prediction horizon.
In this section, we formulate the proposed PMB filter with
uncertain sensor state. We consider a tracking scenario subject
to Section I-A, where there may be multiple features. In Section IV-C, we proceed with the asynchronous multisensor case
allowing to track multiple features and vehicles with uncertain
vehicle state. In Section IV-D, a tractable Gaussian density
approximation of the proposed filter is given. The vehicle state
PDF at time step t − 1 is indicated by subscript ’−’, i.e.,
p− (x), the PDF predicted to the current time step t (before
updating by a measurement) is indicated by subscript ’+’,
i.e., p+ (x), and the posterior PDF is stated without subscript.
Similar definitions hold w.r.t. the feature RFS density.
The proposed filter is developed within a Bayesian framework
with alternating prediction and update steps operating on an
RFS X by [11, Ch. 14].
Z
f+ (X) = f (X|X 0 )f− (X 0 )δX 0 ,
(14)
and
f (X|Z) ∝ `(Z|X)f+ (X),
(15)
where f− (X 0 ) is the prior RFS density, f (X|X 0 ) is the RFS
transition density, f+ (X) is the predicted density, and `(Z|X)
is the RFS measurement likelihood for measurement set Z.
From the problem definition stated in Section III-A, we are
interested in the joint posterior density of the vehicle and
features considering all measurements up to the current time
step t. We now proceed with the development of the proposed
filter within this framework.
5
With a single vehicle, the prior joint vehicle-feature density
is of form
f− (x, F ) = p− (x)f− (F ),
(16)
where p− (x) is the prior PDF on the vehicle state, and f− (F )
is the prior PMB density. The latter density can be written in
u
terms of an PPP intensity of undetected features f−
(F u ), i.e.,
features which are hypothesized to exist but have never been
detected [15, Def. I], and the prior MB RFS density of detected
d
features f−
(F d ), as [11, p. 484], [15]
X
u
d
f−
(F u )f−
(F d ).
(17)
f− (F ) =
F u ]F d =F
In (17), the PPP density of undetected features is
Y
u
u
u
f−
(F u ) = e−hD− ,1i
D−
(f ),
(18)
f ∈F u
u
where D−
(f ) is the intensity of undetected features. We
are interested in a low computational complexity method
to compute the (posterior) joint vehicle-feature density
f (x, F ) in every discrete time step t through incorporation
of all sensor measurements. In doing so, the posterior density
should remain in the same form as the prior joint density (16).
A. Prediction step
With the vehicle state x, and existing feature RFS F , the
predicted joint vehicle-feature density is
f+ (x, F ) = p+ (x)f+ (F ).
(19)
Here, the predicted vehicle state PDF is given by the
Chapman-Kolmogorov equation [35]
Z
p+ (x) = p(x|x0 )p− (x0 )dx0 ,
(20)
where p(x|x0 ) is the state transition PDF described by (7),
and p− (x0 ) is the prior PDF. Similarly, the predicted feature
state PMB density is calculated by
Z
f+ (F ) = f (F |F 0 )f− (F 0 )δF 0 ,
(21)
where f (F |F 0 ) is the transition RFS density, and f− (F 0 ) is
the prior PMB density. the predicted intensity of undetected
u
u
features D+
(f ) of the predicted PPP density f+
(F u ) is given
by
Z
u
u
D+
(f ) = Db (f ) + p(f |f 0 )pS (f 0 )D−
(f 0 )df 0 .
(22)
Here, the birth intensity is denoted Db (f ), feature transition
PDF p(f |f 0 ) is described by (8), feature survival probability
u
is denoted pS (f 0 ), and D−
(f 0 ) denotes the prior intensity. The
MB RFS density of detected features of (21) is
X
d
i
(F i ),
(23)
f+
(F d ) =
f+
]i∈I+ F i =F d
where I+ denotes the set of existing features (before measurement update), and
i
,
Fi = ∅
1 − r+
i
i
i
i
f+ (F ) =
(24)
r p (f ), F i = {f i }
+ +
0,
otherwise.
Here, the predicted PDF of feature i is
Z
p+ (f i ) = p(f i |f 0i )p− (f 0i )df 0i ,
(25)
where p− (f 0i ) is the prior PDF of feature i. The probability
of existence of feature i is [15, Eqn. (40)]
Z
i
i
r+
pS (f 0i )p− (f 0i )df 0i ,
(26)
= r−
i
where r−
denotes the prior probability of existence, p− (·) the
prior PDF, and pS (·) the probability of feature survival.
B. Measurement update step
Updating the joint vehicle-feature density (19) by any of
the two types of different measurements, GNSS and V2F
measurements, involves the application of Bayes’ theorem. In
the following, we describe the update calculations using the
different type of measurements.
1) Update with vehicle state measurement: Let z G be a
measurement related to the vehicle state x, and unrelated to
the set of features F ,
p(z G |x, F ) = p(z G |x).
(27)
For example, z G could be a GNSS and/or an inertial measurment unit (IMU) measurement. Given a predicted vehiclefeature density (19), and by Bayes’ theorem the updated
density is
f (x, F |z G ) = p(x|z G )f+ (F ).
(28)
In other words, the vehicle state density is updated with the
measurement z G , the feature set density is unaffected by the
update, and the independent form is retained (c.f. (16)). Note,
this update step can be omitted in the absence of GNSS-like
measurements, e.g., in a pure SLAM application [26].
2) Update with cluttered set of feature measurements: Let
the set of measurements Z F subject to the measurement model
of Section III-C2 be indexed by M, and let A be the space
of all DAs A for the predicted MB. A DA A ∈ A is an
assignment of each measurement in Z F to a source, either
to the background (clutter or new feature) or to one of the
existing features indexed by I. It is therefore a partition of
M ∪ I into non-empty disjoint subsets C ∈ A, called index
cells3 .
Remark 3: Due to the standard MTT assumption that the
features generate measurements independent of each other
3 For example, let M = (m , m , m ) and I = (i , i ), i.e., three
1
2
3
1 2
measurements and two features. One valid partition of M ∪ I, i.e., one of
the possible associations, is {m1 }, {m2 , i1 }, {m3 }, {i2 }. The meaning of
this is that measurement m2 is associated to feature i1 , feature i2 is not
detected, and measurements m1 and m3 are not associated to any previously
detected feature, i.e., measurements m1 and m3 are either clutter or from
new features .
6
[9], an index cell contains at most one feature index, i.e.,
|C ∩ I| ≤ 1 for all C ∈ A. Any association in which there
is at least one cell, with at least two feature indices, will
have zero likelihood because this violates the independence
assumption. Further, due to the point feature assumption, any
feature generates at most one measurement in each time step,
i.e., |C ∩ M| ≤ 1 for all C ∈ A. Any association in which
there is at least one cell, with at least two measurement
indices, will have zero likelihood because this violates the
point feature assumption. If the index cell C contains a feature
index, then let iC denote the corresponding feature index.
Further, if the index cell C contains a measurement index,
then let mC denote the corresponding measurement index.
Measurements in C not assigned to any feature are associated
to the background.
With the help of Bayes’ rule, the updated joint vehicle-feature
density is
X
f u (F u |x)
f (x, F |Z F ) =
×
F u ]F d =F
X
A A
w p (x|Z F )f d,A (F d |x, Z F ),
(29)
A∈A
P
where wA denotes the weight of DA A ∈ A with A∈A wA =
F
1, and pA (x|Z ) denotes the vehicle state posterior stated
in Appendix A. For now, let us assume the DA weights
are given. The undetected feature density f u (F u |x) and the
detected feature density f d,A (F d |x, Z F ) are stated in Appendix B. Equation (29) does not factorize as f (x, F |Z F ) =
p(x|Z F )f (F |Z F ) where f (F |Z F ) is a multi-Bernoulli density such that it remains in the same form as the prior
(c.f. (19)); on the contrary, there are many dependencies
between the feature state RFS and the vehicle state. This
means that existing independence-assuming tracking frameworks cannot be applied directly [9], or introduce a significant
increase in computational complexity [28]. To overcome this,
we approximate
f u (F u |x) ≈ fˆu (F u ),
f d,A (F d |x, Z F ) ≈ f˜d,A (F d |Z F ),
(30)
(31)
where the functions on the right hand side need to be found.
Towards this end, we make the following approximations for
the vehicle and feature dependent probability of detection
u
p̂D , f ∈ F u
pD (x, f ) ≈
(32)
p̂iD , f ∈ F d ,
with F u ] F d = F , and where
ZZ
u
p̂uD =
pD (x, f )p+ (x)D+
(f )dxdf ,
ZZ
p̂iD =
pD (x, f )p+ (x)pi+ (f )dxdf .
for p̂iD .
Under approximation (32), the updated density (29) becomes
X
fˆ(x, F |Z F ) ∝
fˆu (F u )
×
F u ]F d =F
X
A A
w p (x|Z F )fˆd,A (F d |x, Z F )
(35)
A∈A
with fˆu (F u ) and fˆd,A (F d |x, Z F ) given in Appendix C. We
observe in (35) that the undetected feature density fˆu (F u )
depends only on the undetected feature RFS F u , and is
independent of the other stochastic variables. What remains are
dependencies between the detected features, and the vehicle.
To remove the dependency on the vehicle state in the detected
feature density fˆd,A (F d |x, Z F ) in (35), we map the vehicle
state uncertainty onto the V2F measurement uncertainty. This
is done by averaging the V2F measurement likelihood by the
vehicle state uncertainty. In doing so, the detected feature
density under association A becomes independent of the
vehicle state x leading to the approximation (31), where the
approximated updated feature set density f˜d,A (F d |Z F ) is
given in Appendix D. The updated joint vehicle-feature density
is approximated as
X
f (x, F |Z F ) ∝
fˆu (F u )
F u ]F d =F
X
A A
w p (x|Z F )f˜d,A (F d |Z F ).
×
(36)
A∈A
In this form, the vehicle state PDF is now independent on
the feature RFS, and so is the feature density on the vehicle
state. This allows to state the weights wA , which are given in
Appendix E. Note, (36) is a Poisson multi-Bernoulli mixture
(PMBM) density, where each DA A ∈ A denotes a hypothesis
on the posterior vehicle state x and the detected feature state
F d , weighted by wA . It can be reduced to a PMB density
using, e.g., the variational approximation presented in [36], or
based on the marginal DA probabilities [15]. We apply the
latter approach, where for the reduction of (36) to the form of
(16), the track-oriented marginal MeMBer/Poisson (TOMB/P)
algorithm (c.f. [15]) is used. This results in a single hypothesis
per detected feature described by a Bernoulli process (c.f. (3)),
and per vehicle described by its PDF; as well as the intensity of
undetected feature described by a PPP (c.f. (5)). This means,
the summation over the DA space A has vanished in (36),
retaining the form of (16).
C. Multi-scan scenario using multiple vehicles with uncertain
state
(33)
(34)
Here, (33) is the expected probability of detection for an
undetected feature under the predictive distributions for x and
f , and (34) is the expected probability of detection for a
detected feature. An alternative (and stronger)
R approximation
for (33) would be p̂uD = pD (x̂, f̂ ) with x̂ = xp+ (x)dx and
f̂ the estimated feature state (c.f. Section II-B), and similarly
Up to this point, we discussed PMB filtering with a single
vehicle and uncertain state, where GNSS and V2F measurements are used. To achieve feature tracking as described in
Section I-A, where sensors are mounted on several vehicles,
we have to consider the multisensor case. Furthermore, depending on the infrastructure, sensors are time synchronized,
i.e., take measurements at the same time step t or are not
synchronized, i.e., measurements from a sensor arrives timestamped, but the time the sensor acquires the measurement is
independent of other sensors. Let there be a single RFS F
7
modeling the features state, S = |S| vehicles with uncertain
vehicle state xs with s = 1, 2, . . . S. The set of vehicles
taking a measurement at time step t is given by C ⊆ S,
where each vehicle c ∈ C provides a vector with a GNSS
measurement z c and V2F measurements Z F
c . Furthermore,
T
T T
T
T
T T
let x̃ = [xT
1 , x2 , . . . , x|S| ] , x̂ = [x1 , x2 , . . . , x|C| ] , ẑ G =
S
F
F
T
T T
[z G T
1 , z G 2 , . . . , z G |C| ] , and Ẑ =
c∈C Z c .
In the multisensor case, the PMB filter with uncertain sensor
state follows the unisensor case proposed in Section IV: the
joint vehicle-feature density is predicted, and updated by the
GNSS and the V2F measurements. Thereby, the predicted
vehicle-feature density (19) becomes
f+ (x̃, F ) = f+ (F )p+ (x̃).
(37)
Updating the joint density (19) by the GNSS measurement
results in (28). In the multisensor case, it becomes
f (x̃, F |ẑ G ) = f+ (F )p(x̃|ẑ G ).
F u ]F d =F
X
F
F
wA pA (x̃|Ẑ )f˜d,A (F d |Ẑ ),
(39)
A∈A
where
F
F
pA (x̃|Ẑ ) ∝ p+ (x̃)`A (Ẑ |x̂),
In order to obtain a low complexity implementation, we
describe the vehicle state PDF by a Gaussian with PDF p(x) ,
N (µ, Σ) with mean parameter µ and covariance matrix Σ.
Similarly, the RFS density of a MB process of feature i is
described by a Bernoulli random variable ri and a Gaussian
PDF p(f i ) , N (µf , Σf ) with parameters µf and Σf . Under
this description and the system models of Section III, we can
express the prediction and update steps of the proposed filter in
closed form with low computational complexity, whose steps
are described next.
1) Prediction Step: The predicted vehicle state PDF of (19)
is
(41)
p+ (x) = N (µx+ , Σx+ ),
where with the use of (7)
µx+ = Aµx− ,
(38)
After incorporating the GNSS measurements we proceed with
incorporating the V2F measurements. In the unisensor case,
this resulted in the approximated joint sensor-feature density
(36). In the multisensor case, this density becomes
X
F
f (x̃, F |Ẑ ) ∝
fˆu (F u )
×
D. Gaussian Density Approximation
Σx+ = AΣx− A + W .
(43)
The notation above means that if at time step t − 1 the vehicle
state PDF p− (x) = N (µx− , Σx− ), then at time step t, the
predicted state PDF (before updating by a measurement) is
p+ (x) = N (µx+ , Σx+ ).
The intensity of undetected features (22) is modeled by GM
consisting of newborn features with weight λb (f ) = λb and
PDF p(f ) = N (µb , Σb ), where µb andΣb are the birth mean
and covariance matrix; and undetected features survived to the
current time step with prior parameters {λu , µf − , Σf − } and
predicted parameters
(40)
F
and f˜d,A (F d |Ẑ ) involves the marginalization over the vehicle prior PDF p+ (x̂), i.e., containing only vehicles which
provide a V2F measurement, according to (70). Note that
here (38) is used as prior in the joint sensor-feature density
(39). Furthermore, the space A of all DA increases with an
increase in the number of communicating sensors |C| for the
predicted MB. In terms of complexity, this increase can be
significant, because of the increase of possible feature stateto-measurement associations.
Remark 4: Several different approaches exist to tackle this
DA problem in a tractable manner. For instance, by employing
sequential sensor-by-sensor measurement updates on (36), or
by performing variational inference [37], or by solving the DA
in parallel on a sensor-by-sensor basis [21]. Here, we employ
the sequential measurement update strategy to limit the size of
the DA space A. In doing so, subsequent sensors will benefit
from updated vehicle and feature information of preceding
sensors. In our application example (c.f. Section I-A), this
means that an update of the joint vehicle-feature density,
with measurements from a well localized vehicle (certain
vehicle state), results in an improvement of feature tracking
performance when prior information on the features is low. An
update of the joint vehicle-feature density, with measurements
from a poorly localized vehicle (uncertain vehicle state),
allows to reduce the uncertainty of its own vehicle state when
prior information on the features is high.
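To summarize the sequential strategy in code form, a schematic skeleton of the per-arrival processing at the RSU is sketched below (illustrative Python; the callables and their names are assumptions introduced here, not the paper's API):

```python
def process_arrivals(batches, state, predict_to, update_gnss, update_v2f):
    """Sequentially incorporate per-vehicle measurement batches in arrival order.

    batches: iterable of (sensor_id, timestamp, z_G, Z_F); the three callables
    implement the steps of Sections IV-A and IV-B for the chosen state representation.
    """
    for sensor_id, t, z_G, Z_F in sorted(batches, key=lambda b: b[1]):
        state = predict_to(state, t)                 # joint prediction (Sec. IV-A)
        state = update_gnss(state, sensor_id, z_G)   # GNSS update (Sec. IV-B1)
        state = update_v2f(state, sensor_id, Z_F)    # V2F PMB update + TOMB/P (Sec. IV-B2)
    return state
```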
(42)
T
λu+ = hpS , λu i ,
µf + = Bµf − ,
(44)
(45)
Σf + = BΣf − B T + V .
(46)
The predicted MB density of detected features, stated in (21),
i
predicted using
has single feature Bernoulli parameters r+
i
(26), and single feature PDF p+ (f ) calculated similarly to
(45) and (46).
2) Update step: The joint vehicle-feature state density (28)
is computed by updating the predicted vehicle-feature density
(19) with the GNSS measurement z G through the Kalman
update step [33], [35], where the vehicle state PDF is given
by
p(x|z G ) = N (µx , Σx ).
(47)
Here,
µx = µx+ + K (z G − H G µx+ ),
(48)
Σx = Σx+ − K H G Σx+ ,
(49)
T
K = Σx+ H G S ,
(50)
T
S = H G Σx+ H G + R.
(51)
The matrices H G and R are defined in (9).
The updated joint vehicle-feature density (36) is computed
by updating the predicted vehicle-feature density (19) with
the V2F measurement Z F . Note that depending on the time
difference between the GNSS and V2F measurements, (28)
may be used instead of (19) as prior on the joint vehicle-feature
8
density. In order to calculate (36), the vehicle measurementstate likelihood `(z|x) used in (64) and (65), and the feature
measurement-state likelihood `(z|f ) used in (74) are needed.
The measurement-feature state likelihood (74) for z given, can
be written in terms of f in closed form by
`(z|f ) ∝ N (µz|f , Σz|f )
(52)
with
−1
T
T
H 2,
Σ−1
z|f = H 2 Q + H 1 Σx+ H 1
µz|f = H +
2 z − H 1 µx+ .
(53)
(54)
Here, p+ (x) = N (µx+ , Σx+ ), and (·)+ denotes the MoorePenrose pseudo inverse.
Proof: See Appendix F.
The vehicle measurement-state likelihood (64) for a given
z and marginalized over a detected feature, and in (65)
marginalized over an undetected feature, can be written in
terms of x in closed form by
`(z|x) = N (µz|x , Σz|x )
(55)
described by a Bernoulli component (c.f. (3)) with Gaussian
PDF, has a memory footprint of a Bytes needed to store
{r, f , cov(f )}, where f ∈ RNf . Storing the vehicle state requires b Bytes, where the state of each vehicle is described by
a Gaussian PDF with parameters {x, cov(x)} with x ∈ RNx .
Without pruning of low-probability Bernoulli components in
F , the memory footprint of the proposed filter is a|F | + b
Bytes.
In the multisensor approach of Section IV-C, the RSU receives
GNSS measurement z G and V2F measurement Z F from a
vehicle and then performs the filter update computation. The
RSU broadcasts the vehicle state estimates either whenever
a new measurement has been processed, or based on a fixed
schedule. Should information of detected (and tracked) features be required at the vehicles, then the single-feature pdfs
need to be transmitted as well (c.f. Section II-B).
V. N UMERICAL R ESULTS
We consider a scenario similar to the one outlined in Fig.1,
where we apply the proposed multifeature-multisensor state
tracking filter presented in Section IV.
with
−1
T
T
H 1,
Q
+
H
Σ
H
Σ−1
=
H
2
f
2
1
z|x
+
µz|x = H +
1 z − H 2 µf + .
(56)
(57)
Here, (55) equals (64) for feature PDF p(f ) , pi+C (f ) =
u
N (µf + , Σf + ), and in (65) for feature PDF p(f ) , D+
(f ) =
N (µf + , Σf + ).
Proof: The proof is analogous to the proof of the feature
measurement-state likelihood (52) with the only difference that
the unknown is x instead of f .
E. Computational complexity, memory footprint, and communication demand
Computational complexity is dominated by the matrix inversion needed to update the feature and vehicle densities, and the
measurement-to-feature state DA. Updating the joint vehiclefeature density f (x, F |z G ) using GNSS measurement z G
requires a matrix inversion which scales as O(Nx3 ). The update
of an MB component in f˜d,A (F |x, Z F ) by V2F measurement
z mC ∈ Z F scales as O(Nf3 ), and consequently as O(|Z F |Nf3 )
for the whole measurement set. Computational complexity of
DA is O(|F ||Z F |) [38]. Hence, the update of the joint vehiclefeature density (36) by V2F measurement set Z F scales as
O(|F ||Z F | + Nf3 |Z F | + Nx3 |Z F |), where the last term comes
from vehicle state update of Z F .
In each time step t, the size of undetected features RFS
F u increases by |Db (f )| new born targets. The number of
existing feature-tracks increases by |Z F |, a new feature-track
per measurement [15], using a Bernoulli component per track.
For each existing feature-track |Z F | + 1 hypotheses (plus one
for a missed detection) are computed. The TOMB/P algorithm
reduces each feature-track to a single hypothesis track. Pruning
of feature-tracks with low probability of existence r allows to
keep the number of feature-tracks tractable. Each hypothesis,
A. Setup
The state of a vehicle at time step t is x = [pT , v T ]T with
position p ∈ R2 and velocity v ∈ R2 . Vehicle dynamics follow
a linear constant velocity (CV) model described by (7) with
1 Ts
A=
⊗ I 2,
(58)
0 1
where Ts = 0.5 s, and
W =r
Ts3 /3
Ts2 /2
Ts2 /2
Ts
⊗ I2
(59)
with r = 0.05 m2 , and ⊗ denoting the Kronecker product. The
state of a feature at time t, denoted f ∈ R4 , is comprised of
Cartesian position and velocity, similar to the vehicle state x.
There are maximal five features present, if not noted otherwise.
Furthermore, feature dynamics follow the CV model with
the same parameters used for the vehicles. To generate a
challenging scenario for DA, we initialize the feature states
f ∼ N (0, 0.25I 4 ) at t = 175 for all features and run the CV
model forward and backward in time similar to [15, Sec. VI].
The first feature enters the scene after t = 0, the second after
t = 20 and so on. Once present, features stay alive for the
remaining simulation time. Vehicle and feature trajectories
are shown in Fig. 2. The observation matrix of the GNSS
measurement model (9) is
H G = [1 0] ⊗ I 2 ,
(60)
2
where R = σG
I 2 . For vehicle 1, we assume it has low
2
location uncertainty with σG
= 0.1 m2 and for vehicle 2
2
high location uncertainty with σG
= 10 m2 , corresponding
to a vehicle with high quality GNSS receiver and one with
low quality GNSS receiver. In the single sensor case, only
vehicle 1 is present, and in the multisensor case both vehicles
are present, if not noted otherwise. The V2F measurement
model follows (11), where H 1 , H G , H 2 , −H G and
Fig. 2. Vehicle and feature trajectories.
Fig. 3. V2F measurements in x (top panel) and y dimension (bottom panel)
for each time step t.
2
Q = σV2F
I 2 . In Fig. 3, V2F measurements are shown for
each time step t including clutter measurements. Following
u
[15], we set the initial undetected feature intensity to D−
(f ) =
2
2
T
10N (0, P ), where P = diag([100 , 1, 100 , 1] ) to cover
the ranges of interest of the feature state. The feature birth
intensity is set to Db (f ) = 0.05N (0, P ), the average number
of false alarms per scan to λc = 20, with uniform spatial
distribution on [−rmax , rmax ] with parameter rmax = 1000 m.
Furthermore, the probability of survival is pS = 0.7 and the
probability of detection is pD (x, f ) , pD = 1.
To assess feature tracking performance, we use the optimal sub-pattern assignment (OSPA) metric with cut-off parameter c = 20 m and order p = 2 [39]. The vehicle tracking performance is assessed in terms of the root mean square error (RMSE).
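For reference, a compact implementation of the OSPA distance between the estimated and true feature position sets could look as follows (an illustrative sketch using SciPy's linear_sum_assignment, not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=20.0, p=2):
    """OSPA distance of order p with cut-off c between two point sets (lists of 2-D points)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    if m > n:                                   # ensure m <= n
        X, Y, m, n = Y, X, n, m
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c) ** p
    rows, cols = linear_sum_assignment(D)       # optimal sub-pattern assignment
    return ((D[rows, cols].sum() + (n - m) * c ** p) / n) ** (1.0 / p)
```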
B. Discussion
First, we discuss the impact of an uncertain vehicle state
on feature tracking performance using a single vehicle and
multiple-features. After that, we consider the multitarget-
multisensor case from Section IV-C, and show scaling results
in terms of numbers of features and vehicles tracked.
1) Impact of uncertain vehicle state on feature tracking
performance: The features and vehicle trajectories are outlined
in Fig. 2 with V2F measurements in Fig. 3. In Fig. 4, the
feature state OSPA is plot for each time step t. We observe that
there are peaks with a high OSPA value when a new feature
enters the scene. These peaks are due to a cardinality mismatch
between the feature RFS estimate and the true feature set.
Furthermore, there is a high OSPA value around time step
t = 175. At this point in time, features are closely spaced
2
together w.r.t. the V2F measurement variance σV2F
resulting in
a challenging scenario for measurement-to-state DA. In Fig. 5
the cardinality of the feature RFS is plotted over time. Around
time step t = 175, the filter overestimates the feature RFS cardinality, which may be caused by clutter measurements. Note,
different Monte-Carlo (MC) realizations produce a slightly
different outcome and feature OSPA value, but with the same
tendency. This behavior w.r.t. feature appearance and the effect
when they are spatially close agrees with the findings in [15]
for a known sensor (vehicle) state. Furthermore, we observe
in Fig. 4, that the OSPA is low for time steps where features
are spatially separated and already present in the scene. Then
the MTT filter is able to produce feature estimates with low
error.
In Fig. 6, the feature state OSPA averaged over all 351 time steps is plotted for different values of GNSS measurement variance σ_G^2. The increase of GNSS measurement variance leads to an increased vehicle state uncertainty, with the effect of an increase of the average feature OSPA. This OSPA increase consists of two components. First, an increased feature state estimation error due to a higher value of σ_G^2. Second, this results in features staying spatially close together w.r.t. the feature state measurement uncertainty (σ_G^2 and σ_V2F^2 together)
for a longer period of time around time step t = 175. Hence,
DA is more challenging with the effect of an increased feature
OSPA in this regime. In the same figure, the average feature
OSPA without modeling the present vehicle state uncertainty
is plotted using the conventional PMB filter [15]. We observe
that not modeling the present vehicle state uncertainty has a
negative effect on feature tracking performance.
In Fig. 7, the average feature state OSPA is plotted for different values of V2F measurement noise variance σ_V2F^2. We observe that a higher V2F noise variance leads to an increased OSPA value. This is because the single-feature state estimation error increases and DA becomes more challenging. Note that the results of Fig. 6 and Fig. 7 are averages over 10 MC realizations.
2) Multitarget-multisensor tracking performance: The RMSE of the vehicle state is plotted for each time step t for vehicle 1 with low location uncertainty (σ_G^2 small) in Fig. 8, and in Fig. 9 for vehicle 2 with high location uncertainty (σ_G^2 high). As a benchmark, results from a centralized Kalman filter (KF) are plotted as well, where measurement-to-feature DA is known and where the augmented state vector contains all vehicle and all feature states. Furthermore, the tracking performance using a local KF is plotted. The local KF performs filtering only on the individual vehicle state separately using
Fig. 4. Feature OSPA with σ_G^2 = 10^−3 m^2 and σ_V2F^2 = 0.25 m^2.

Fig. 5. Feature cardinality with σ_G^2 = 10^−3 m^2 and σ_V2F^2 = 0.25 m^2.

Fig. 6. Average feature OSPA for different values of GNSS measurement variance σ_G^2. The V2F noise variance is set to σ_V2F^2 = 0.25 m^2.

Fig. 7. Average feature OSPA for different values of V2F measurement variance σ_V2F^2. The GNSS noise variance is set to σ_G^2 = 10^−3 m^2.

Fig. 8. Vehicle state RMSE of vehicle 1.

Fig. 9. Vehicle state RMSE of vehicle 2.

Fig. 10. CDF plot of vehicle state RMSE.
only GNSS measurements and does not estimate feature states.
Note that the performance of the local KF can be considered as
the worst-case performance on vehicle state estimation, since
V2F measurements are not considered at all. We observe from
Fig. 8, that for vehicle 1, which has low GNSS measurement
noise, all three filter methods deliver a similar performance.
The reason for this is that, due to the high accuracy of GNSS
measurements, not a lot of information (to improve the vehicle
state) is provided from feature tracking, i.e., feature tracking
error is high w.r.t. the vehicle state tracking error of vehicle
1 (after updating by the GNSS measurement). In Fig. 10, the
cumulative distribution function (CDF) of the RMSE is plotted.
Here, the low RMSE of vehicle 1 using the three different
filters can be observed as well. Moving the focus to vehicle 2,
we observe that the RMSE of the local KF is much higher
compared to the central KF, which is caused by the high
noise in the GNSS measurements. Due to the low RMSE
of vehicle 1’s state there is relevant position information in
the system, which can be transferred from vehicle 1 to vehicle
2 via the features utilizing the V2F measurements. In 80%
of all cases, the RMSE of vehicle 2 is below 0.5 m with
the proposed filter, compared to 1.4 m with the local KF.
Despite this great improvement of the proposed filter over
the local KF, it does not achieve the performance of the
central KF, where the RMSE is below 0.3 m. The reason
for the difference is that the central KF has knowledge of the
correct DA, knows the true number of features present, and
ignores clutter V2F measurements. Furthermore, it tracks any
present correlations between features and vehicles not modeled
by the proposed filter. The proposed filter needs to infer the measurement-to-feature DA, estimate the number of features currently present, and needs to appropriately handle clutter in the V2F measurement set Z^F.

Fig. 11. Average computation time per time step for different numbers of present features. The number of vehicles is set to two.

Fig. 12. Average computation time per time step for different numbers of vehicles. The number of features is set to five.
3) Scaling results: In Fig. 11, the average computation time per time step t is plotted for a simulation with two vehicles and different numbers of present features. We observe that computation time increases linearly as the number of present features increases. This scaling result is different from the PF-based implementation of the PMB filter in [21] with a known vehicle state. There, the authors reported an exponential increase of computation time. In Fig. 12, the average computation time per time step t is plotted for a simulation with five features and different numbers of vehicles. Here, computation time increases linearly as the number of vehicles increases. Furthermore, we investigated the average computation time per time step t for different values of the GNSS measurement variance σ_G^2 and of the V2F measurement variance σ_V2F^2. In the simulation scenario with five features and two vehicles, the average computation time remained constant around 2.4 · 10^−2 s for 10^−3 m^2 ≤ σ_G^2 ≤ 10 m^2. For a V2F measurement variance 10^−3 m^2 ≤ σ_V2F^2 ≤ 10 m^2, the average computation time linearly increased from 2 · 10^−2 s to 3 · 10^−2 s as σ_V2F^2 increased.
VI. CONCLUSIONS

This paper presented a Poisson multi-Bernoulli filter for multisensor-multitarget tracking with uncertain sensor states. Two different kinds of measurements, observations of the sensor state and observations of the features, were used to obtain accurate feature and sensor state tracking. The proposed parametric filter implementation scales linearly with the number of features or sensors. Information from multiple sensors can be incorporated either in a time-synchronized manner, where sensor measurements are aggregated in a super-sensor state, or in a non-time-synchronized manner through asynchronous update steps, executed whenever sensor measurements arrive at the central node. Simulation results showed, for a unisensor-multitarget tracking scenario with known sensor state, that tracking performance assessed by the OSPA distance metric is equivalent to Williams' PMB filter.

In a scenario with present vehicle state uncertainty, the proposed filter showed superior feature tracking performance over the conventional PMB filter due to the modeling of this type of additional uncertainty. In a multisensor-multitarget tracking scenario, feature information from a well-localized vehicle (sensor) allows to significantly reduce the vehicle state uncertainty of (previously) poorly localized vehicles. This improvement is possible through joint observation of a subset of the present features and is supported by simulation results.

APPENDIX

A. Vehicle State Posterior
The vehicle state posterior of (29) and (36) is proportional to the vehicle's prior PDF p_+(x) times the measurement likelihood ℓ^A(Z^F | x) as

p^A(x | Z^F) ∝ p_+(x) ℓ^A(Z^F | x),    (61)

where

ℓ^A(Z^F | x) ∝ ∏_{C∈A: C∩I≠∅, C∩M≠∅} ∫_{f∈F^d} ℓ(z_{m_C} | x, f) p_+^{i_C}(f) df × ∏_{C∈A: C∩I=∅, C∩M≠∅} ∫_{f_C∈F^u} ℓ(z_{m_C} | x, f) D_+^u(f) df    (62)

             ∝ ∏_{C∈A: C∩I≠∅, C∩M≠∅} ℓ^{i_C}(z_{m_C} | x) ∏_{C∈A: C∩I=∅, C∩M≠∅} ℓ^u(z_{m_C} | x)    (63)

with

ℓ^{i_C}(z | x) = ∫ ℓ(z | x, f) p_+^{i_C}(f) df,    (64)

ℓ^u(z | x) = ∫ ℓ(z | x, f) D_+^u(f) df.    (65)
Here, (64) and (65) map the feature uncertainty on the V2F
measurement likelihood.
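In the linear-Gaussian setting used in the simulations, the integral (64) has a closed form; the sketch below spells it out for a Gaussian feature density p_+^{i_C}(f) = N(f; μ_f, P_f) and the V2F model z = H_1 x + H_2 f + q with q ~ N(0, Q). The function name and the remark that (65) is handled analogously per mixture component of D_+^u are conventions of this sketch, not a restatement of the paper's derivation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def lik_detected(z, x, mu_f, P_f, H_1, H_2, Q):
    """Closed form of (64) for a Gaussian feature density
    p_+^{iC}(f) = N(f; mu_f, P_f) and a linear-Gaussian V2F model
    z = H_1 x + H_2 f + q, q ~ N(0, Q):
        l^{iC}(z | x) = N(z; H_1 x + H_2 mu_f, H_2 P_f H_2^T + Q).
    (65) is the analogous weighted sum over the Gaussian mixture
    components of the undetected intensity D_+^u."""
    mean = H_1 @ x + H_2 @ mu_f
    cov = H_2 @ P_f @ H_2.T + Q
    return multivariate_normal.pdf(z, mean=mean, cov=cov)
```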
B. Updated Undetected and Detected Feature Density
The updated joint vehicle-feature density (29) has undetected feature density
f^u(F^u | x) ∝ ∏_{f∈F^u} (1 − p_D(x, f)) D_+^u(f)    (66)

and detected feature density

f^{d,A}(F^d | x, Z^F) ∝ Σ_{⊎_{C∈A} F^C = F^d} ∏_{C∈A: C∩I≠∅, C∩M=∅} η(∅ | x, F^C) f_+^{i_C}(F^C)
    × ∏_{C∈A: C∩I≠∅, C∩M≠∅} η({z_{m_C}} | x, F^C) f_+^{i_C}(F^C)
    × ∏_{C∈A: C∩I=∅, C∩M≠∅} η({z_{m_C}} | x, F^C) f_+^u(F^C).    (67)

Here, the first product considers the cases where no measurement is associated to any of the existing features, the second product considers the cases where a measurement is associated to an existing feature, and the last line considers the case where a measurement is associated to the background (clutter or undetected feature).

C. Updated Undetected and Detected Feature Density Approximation

The joint vehicle-feature density approximation (35) has undetected feature density

f̂^u(F^u) ∝ ∏_{f∈F^u} (1 − p̂_D^u) D_+^u(f)    (68)

and detected feature density

f̂^{d,A}(F^d | x, Z^F) ∝ Σ_{⊎_{C∈A} F^C = F^d} ∏_{C∈A: C∩I≠∅, C∩M=∅} (1 − p̂_D^u Δ_{|F^C|}) f_+^{i_C}(F^C)
    × ∏_{C∈A: C∩I≠∅, C∩M≠∅} η({z_{m_C}} | x, F^C) f_+^{i_C}(F^C)
    × ∏_{C∈A: C∩I=∅, C∩M≠∅} η({z_{m_C}} | x, F^C) f_+^u(F^C),    (69)

where Δ_{|F^C|} = 1 for F^C ≠ ∅, and zero otherwise. Here, the three products consider similar cases to (67).

D. Approximated Updated Feature Set Density

The approximated updated feature set density of (31) and (36) is a MB density,

f̃^{d,A}(F^d | Z^F) = Σ_{⊎_{C∈A} F^C = F^d} ∏_{C∈A} f̃^{i_C}(F^C | Z^F)    (70)

with

f̃^{i_C}(F^C | Z^F) = ∫ ∫ f̂^{d,A}(F^d | x, Z^F) p_+(x) (∏_{C'∈A: C'≠C} δF^{C'}) dx,    (71)

where we average over the predicted vehicle state and marginalize over all subsets F^{C'} not equal to F^C. Eqn. (70) has Bernoulli parameters [15, Eqns. (45) to (57)]

r̃^{i_C} | Z^F =
    [(1 − p̂_D^{i_C}) r_+^{i_C}] / [1 − p̂_D^{i_C} r_+^{i_C}]    if C ∩ I ≠ ∅, C ∩ M = ∅,
    1    if C ∩ I ≠ ∅, C ∩ M ≠ ∅,
    [p̂_D^u ⟨ℓ_{z_{m_C}}, D_+^u⟩] / [λ_c(z_{m_C}) + p̂_D^u ⟨ℓ_{z_{m_C}}, D_+^u⟩]    if C ∩ I = ∅, C ∩ M ≠ ∅,    (72)

p̃^{i_C}(f | Z^F) =
    p_+^{i_C}(f)    if C ∩ I ≠ ∅, C ∩ M = ∅,
    [ℓ_{z_{m_C}}(f) p_+^{i_C}(f)] / ⟨ℓ_{z_{m_C}}, p_+^{i_C}⟩    if C ∩ I ≠ ∅, C ∩ M ≠ ∅,
    [ℓ_{z_{m_C}}(f) D_+^u(f)] / ⟨ℓ_{z_{m_C}}, D_+^u⟩    if C ∩ I = ∅, C ∩ M ≠ ∅,    (73)

where

ℓ_z(f) = ℓ(z | f) = ∫ ℓ(z | x, f) p_+(x) dx.    (74)

E. Approximated Updated Feature Density Weights

The weight of a global association hypothesis A ∈ A stated in (36) is [15, Eqn. (67)]

w_A ∝ ∏_{C∈A: C∩I≠∅, C∩M=∅} (1 − r_+^{i_C} p̂_D^{i_C}) ∏_{C∈A: C∩I≠∅, C∩M≠∅} r_+^{i_C} p̂_D^{i_C} ⟨ℓ_{z_{m_C}}, p_+^{i_C}⟩ ∏_{C∈A: C∩I=∅, C∩M≠∅} (λ_c(z_{m_C}) + p̂_D^u ⟨ℓ_{z_{m_C}}, D_+^u⟩).    (75)

F. Proof of Measurement-Feature State Likelihood

Here, we prove (52). With (11) and p_+(x) = N(μ_+^x, Σ_+^x) known, define

y ≜ z − H_1 x − q,    (76)

and consequently

p(y) = N(z − H_1 μ_+^x, H_1 Σ_+^x H_1^T).    (77)

Now, we have y − H_2 f = 0 and solve for f with the help of [40, Rule 4 in Table 3], which results in eqs. (52) to (54).

REFERENCES
[1] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman et al., "A perception-driven autonomous urban vehicle," Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008.
[2] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira,
I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous
localization and mapping: Toward the robust-perception age,” IEEE
Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.
[3] S.-W. Kim, Z. J. Chong, B. Qin, X. Shen, Z. Cheng, W. Liu, and M. H.
Ang, “Cooperative perception for autonomous vehicle control on the
road: Motivation and experimental results,” in Intelligent Robots and
Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE,
2013, pp. 5059–5066.
[4] S.-W. Kim, B. Qin, Z. J. Chong, X. Shen, W. Liu, M. H. Ang, E. Frazzoli, and D. Rus, “Multivehicle cooperative driving using cooperative
perception: Design and experimental validation,” IEEE Transactions on
Intelligent Transportation Systems, vol. 16, no. 2, pp. 663–680, 2015.
[5] F. Meyer, O. Hlinka, H. Wymeersch, E. Riegler, and F. Hlawatsch,
“Distributed localization and tracking of mobile networks including
noncooperative objects,” IEEE Transactions on Signal and Information
Processing over Networks, vol. 2, no. 1, pp. 57–71, March 2016.
[6] G. Soatti, M. Nicoli, N. Garcia, B. Denis, R. Raulefs, and H. Wymeersch,
“Enhanced Vehicle Positioning in Cooperative ITS by Joint Sensing of
Passive Features,” in IEEE 20th International Conference on Intelligent
Transportation Systems (ITSC), Oct 2017.
[7] H. Wymeersch, J. Lien, and M. Z. Win, “Cooperative localization in
wireless networks,” Proceedings of the IEEE, vol. 97, no. 2, pp. 427–
450, 2009.
[8] G.-M. Hoang, B. Denis, J. Härri, and D. T. Slock, “Breaking the gridlock
of spatial correlations in gps-aided ieee 802.11 p-based cooperative
positioning,” IEEE Transactions on Vehicular Technology, vol. 65,
no. 12, pp. 9554–9569, 2016.
[9] Y. Bar-Shalom, P. K. Willett, and X. Tian, Tracking and data fusion: A
handbook of algorithms. Storrs, CT: YBS Publishing, 2011.
[10] R. L. Streit and T. E. Luginbuhl, “Probabilistic Multi-Hypothesis Tracking,” Naval Underwater Systems Center Newport RI, Tech. Rep., 1995.
[11] R. P. Mahler, Statistical multisource-multitarget information fusion.
Artech House, Inc., 2007.
[12] B.-N. Vo and W.-K. Ma, “The Gaussian Mixture Probability Hypothesis
Density Filter,” IEEE Transactions on Signal Processing, vol. 54, no. 11,
pp. 4091–4104, 2006.
[13] B. N. Vo, S. Singh, and A. Doucet, “Sequential monte carlo methods
for multitarget filtering with random finite sets,” IEEE Transactions on
Aerospace and Electronic Systems, vol. 41, no. 4, pp. 1224–1245, Oct
2005.
[14] B.-T. Vo, B.-N. Vo, and A. Cantoni, “Analytic implementations of the
cardinalized probability hypothesis density filter,” IEEE Transactions on
Signal Processing, vol. 55, no. 7, pp. 3553–3567, 2007.
[15] J. L. Williams, “Marginal multi-bernoulli filters: RFS derivation of
MHT, JIPDA, and association-based MeMBer,” IEEE Transactions on
Aerospace and Electronic Systems, vol. 51, no. 3, pp. 1664–1687, 2015.
[16] Á. F. García-Fernández, J. L. Williams, K. Granström, and L. Svensson,
“Poisson multi-bernoulli mixture filter: direct derivation and implementation,” arXiv preprint arXiv:1703.04264, 2017.
[17] B.-N. Vo, B.-T. Vo, and D. Phung, “Labeled random finite sets and
the bayes multi-target tracking filter,” IEEE Transactions on Signal
Processing, vol. 62, no. 24, pp. 6554–6567, 2014.
[18] S. Reuter, B.-T. Vo, B.-N. Vo, and K. Dietmayer, “The labeled multibernoulli filter,” IEEE Transactions on Signal Processing, vol. 62, no. 12,
pp. 3246–3260, 2014.
[19] B. Ristic, D. Angley, S. Suvorova, B. Moran, F. Fletcher, H. Gaetjens,
and S. Simakov, “Gaussian mixture multitarget–multisensor bernoulli
tracker for multistatic sonobuoy fields,” IET Radar, Sonar & Navigation,
2017.
[20] F. Meyer, P. Braca, P. Willett, and F. Hlawatsch, “Scalable Multitarget
Tracking Using Multiple Sensors: A Belief Propagation Approach,” in
18th International Conference on Information Fusion, 2015, pp. 1778–
1785.
[21] ——, “A scalable algorithm for tracking an unknown number of targets using multiple sensors,” IEEE Transactions on Signal Processing,
vol. 65, no. 13, pp. 3478–3493, 2017.
[22] T. Kropfreiter, F. Meyer, and F. Hlawatsch, “Sequential monte carlo
implementation of the track-oriented marginal multi-bernoulli/poisson
filter,” in Information Fusion (FUSION), 2016 19th International Conference on. IEEE, 2016, pp. 972–979.
[23] A. Berg and A. Käll, “Track-to-track fusion for multi-target tracking
using asynchronous and delayed data,” Master’s thesis, Department of
Signals and Systems, Chalmers University of Technology, Gothenburg,
Sweden, 2017.
[24] C.-Y. Chong, S. Mori, W. H. Barker, and K.-C. Chang, “Architectures
and algorithms for track association and fusion,” IEEE Aerospace and
Electronic Systems Magazine, vol. 15, no. 1, pp. 5–13, 2000.
[25] M. E. Liggins, C.-Y. Chong, I. Kadar, M. G. Alford, V. Vannicola, and
S. Thomopoulos, “Distributed fusion architectures and algorithms for
target tracking,” Proceedings of the IEEE, vol. 85, no. 1, pp. 95–107,
1997.
[26] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: part i,” IEEE robotics & automation magazine, vol. 13, no. 2, pp.
99–110, 2006.
[27] J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo, “A random-finite-set
approach to bayesian slam,” IEEE Transactions on Robotics, vol. 27,
no. 2, pp. 268–282, 2011.
[28] E. Brekke, B. Kalyan, and M. Chitre, “A novel formulation of the bayes
recursion for single-cluster filtering,” in Aerospace Conference, 2014
IEEE. IEEE, 2014, pp. 1–16.
[29] S. J. Julier and A. Gning, “Bernoulli filtering on a moving platform,”
in Information Fusion (Fusion), 2015 18th International Conference on.
IEEE, 2015, pp. 1511–1518.
[30] B. Ristic, B.-T. Vo, B.-N. Vo, and A. Farina, “A tutorial on bernoulli
filters: theory, implementation and applications,” IEEE Transactions on
Signal Processing, vol. 61, no. 13, pp. 3406–3430, 2013.
[31] M. Fröhle, C. Lindberg, and H. Wymeersch, “Cooperative localization
of vehicles without inter-vehicle measurements,” in IEEE Wireless
Communications and Networking Conference, April 2018.
[32] E. Leitinger, F. Meyer, F. Tufvesson, and K. Witrisal, “Factor graph
based simultaneous localization and mapping using multipath channel
information,” in Communications Workshops (ICC Workshops), 2017
IEEE International Conference on. IEEE, 2017, pp. 652–658.
[33] D. Simon, Optimal state estimation: Kalman, H infinity, and nonlinear
approaches. John Wiley & Sons, 2006.
[34] E. A. Wan and R. Van Der Merwe, “The unscented kalman filter
for nonlinear estimation,” in Adaptive Systems for Signal Processing,
Communications, and Control Symposium 2000. AS-SPCC. The IEEE
2000. Ieee, 2000, pp. 153–158.
[35] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A Tutorial
on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–
188, 2002.
[36] J. L. Williams, “An efficient, variational approximation of the best fitting
multi-Bernoulli filter,” IEEE Transactions on Signal Processing, vol. 63,
no. 1, pp. 258–273, 2015.
[37] J. L. Williams and R. A. Lau, “Multiple scan data association by convex
variational inference,” arXiv preprint arXiv:1607.07942, 2016.
[38] J. Williams and R. Lau, “Approximate Evaluation of Marginal Association Probabilities with Belief Propagation,” IEEE Transactions on
Aerospace and Electronic Systems, vol. 50, no. 4, pp. 2942–2959, 2014.
[39] D. Schuhmacher, B.-T. Vo, and B.-N. Vo, “A consistent metric for
performance evaluation of multi-object filters,” IEEE Transactions on
Signal Processing, vol. 56, no. 8, pp. 3447–3457, 2008.
[40] H.-A. Loeliger, “An introduction to factor graphs,” IEEE Signal Processing Magazine, vol. 21, no. 1, pp. 28–41, 2004.
arXiv:1701.07473v1 [cs.DS] 25 Jan 2017

Implementation of Tetris as a Model Counter∗

Jimmy Dobler
University at Buffalo, SUNY
[email protected]

Atri Rudra
University at Buffalo, SUNY
[email protected]
Abstract
Solving #SAT problems is an important area of work. In this paper, we discuss implementing
Tetris, an algorithm originally designed for handling natural joins, as an exact model counter for the
#SAT problem. Tetris uses a simple geometric framework, yet manages to achieve the fractional
hypertree-width bound. Its design allows it to handle complex problems involving extremely large
numbers of clauses on which other state-of-the-art model counters do not perform well, yet still performs strongly on standard SAT benchmarks.
We have achieved the following objectives. First, we have found a natural set of model counting benchmarks on which Tetris outperforms other model counters. Second, we have constructed
a data structure capable of efficiently handling and caching all of the data Tetris needs to work on
over the course of the algorithm. Third, we have modified Tetris in order to move from a theoretical,
asymptotic-time-focused environment to one that performs well in practice. In particular, we have
managed to produce results keeping us within a single order of magnitude as compared to other
solvers on most benchmarks, and outperform those solvers by multiple orders of magnitude on others.
∗ This research was supported in part by grant# NSF CCF-1319402.
1 Introduction
#SAT is the prototypical #P-complete problem. #SAT (as well as its NP-complete cousin SAT) is not
only of great interest in computational complexity but, by its completeness, turns out to be a great tool
to model a wide host of practical problems. This has led to an explosion of SAT solvers that try to solve
practical instances of #SAT or SAT by exploiting structure in these instances. For this paper, we will
assume the importance of designing #SAT and SAT solvers as a given. We refer the reader to the book
chapters by Gomes, Sabharwal and Selman on #SAT solvers (also known as model counters) [15] and
by Gomes et al. on SAT solvers [14] for more details.
A common technique is the DPLL procedure, a depth-first search procedure where the algorithm
makes guesses on the assignments one variable at a time, determines at each stage whether or not this
produces a conflict, and uses that information to learn new clauses and get closer to finding the satisfying
assignment [17].
Recently in the database literature, the work of Abo Khamis et al. [6] connected the DPLL procedure
to computing natural joins. In particular, they presented the Tetris algorithm, which computes the natural join with beyond worst-case theoretical guarantees. As a special case, Tetris also recovers some of the
recent worst-case optimal join results [24, 30, 25]. Abo Khamis et al. then showed that Tetris is an DPLL
procedure and pointed out how one of the main step in their algorithm is exactly the resolution step that
is ubiquitous in DPLL-based SAT solvers. Given the close ties of SAT solvers to DPLL, they left open the
following intriguing possibility:
Can Tetris be implemented as a SAT solver or model counter that can compete with state-of-the-art solvers?
Our contributions Our main result in this paper is to show that Tetris can indeed be implemented as a
model counter that is competitive with state-of-the-art model counters on actual datasets.
While [6] presented a nice geometric framework to reason about algorithms to compute the natural
join query, some of its simplicity arose from inefficiencies that matter when implementing Tetris as a
model counter. Before we present the issues we tackle, we give a quick overview of Tetris. The fundamental idea is that, rather than working to create the output of a join directly, it instead attempts to rule
out large sections of the cross product of the joined tables [6]. Initially, Tetris is given a set of sets whose
union is the set of all incorrect solutions to the problem Tetris is solving. In other words, any solution to
the problem must not be a member of this union. By efficiently querying this set of sets, and by adding to
it intelligently at various times (such as by adding a new exclusion whenever an output point is found),
Tetris is able to rule out increasingly large sets of potential solutions. Once it has ruled out all possible
solutions, it terminates and outputs the list of solutions.
We tackle the following three issues with the theoretical presentation of Tetris in [6]:
1. At any point, Tetris needs to keep track of the union of all the potential solutions it has ruled out.
To do this, [6] used a simple trie data structure to keep track of the union. However, this loses some
poly-logarithmic factors and proves detrimental to practical performance. To deal with this, we
design a new data structure that essentially compresses consecutive layers in the traditional trie
into one ‘mega layer.’ Inspired by the use of SIMD instructions by EmptyHeaded [5] (to speed
up implementations of worst-case optimal join algorithms [25]), we set up the compression in a
manner that lends itself to speedup via SIMD instructions.
2. The analysis of Tetris in [6] was for data complexity. This implies they could afford to use an exponential (in the size of the join query) time algorithm to find an appropriate ordering in which to explore
different variables. In #SAT instances, we can no longer assume that the number of variables is a
constant, and hence we cannot obtain an optimal ordering using a brute force algorithm. We deal
with this by designing heuristics that take the structure of Tetris into account.
3. As mentioned earlier, Tetris (like a DPLL procedure) performs a sequence of resolutions and theoretically, it can store the outcomes of all the resolutions it performs. However, for practical efficiency, we use a heuristic to decide which resolution results to cache and which ones to discard.
Our experimental results are promising. On some natural #SAT benchmarks based on counting
the number of occurrences of small subgraphs in a large graph (which we created), our implementation of
Tetris is at least two orders of magnitude (and in most cases more than three orders of magnitude) faster
than the standard model counters (sharpSAT [28], Cachet [26] and dSharp [22]). We also compared Tetris
with these model counters on standard SAT benchmarks, where Tetris was either comparable or at most
25x slower.
Theoretical Implications While this paper deals with an experimental validation of the theoretical result from [6], we believe that it highlights certain theoretical questions that are worth investigating by the
database community. We highlight some of our favorite ones that correspond to each of our three main
contributions:
1. Extending Tetris beyond join queries. As our work has shown, Tetris can be used to solve problems
beyond the original natural join computation. Recently, the worst-case optimal join algorithms
were shown to be powerful enough to solve problems in a host of other areas such as CSPs (of which
MaxSAT is a prominent example), probabilistic graphical models and logic [7]. (Also see the followup work [18].) The beyond worst-case results in [6] have so far seemed more of a theoretical
novelty. However, given that this paper demonstrates the viability of Tetris in practice, this work
opens up the tantalizing possibility of extending the theoretical results of Tetris to problems captured by [7, 18]. Such a result even for MaxSAT would be of interest in practice.
2. Computing orderings efficiently. As mentioned earlier, the theoretical results for Tetris assume
that the required ordering among variables can be computed in exponential time. However, for
applications in SAT (as well as other areas such as probabilistic graphical models, assuming the
question in the item above can be answered), we need to compute orderings that are approximately good in polynomial time. Thus, a further avenue of theoretical investigation is to come up
with a polynomial time algorithm to compute the ordering, and to prove some guarantees on the
loss of performance from the case where Tetris has access to the ‘optimal’ ordering. Some of the
heuristics developed in our paper might prove to be good starting points for this investigation. We
would like to point out that the importance of efficiently computing variable orderings has been
studied a lot in AI and database literature. Some of the very recent work on Generalized Hypertree
Decompositions (which are well known to be equivalent to variable elimination orderings) could
potentially be useful towards this goal [13].
3. Time-space tradeoff. Recent results on worst-case optimal algorithms to compute natural joins [24,
30, 25] and to compute joins with functional dependencies [8] all focus exclusively on time complexity. However, as highlighted by our work, being more prudent with space usage in fact benefits
actual performance. This point was also indirectly highlighted in [6], where it was shown that resolution schemes that did not cache their intermediate results are strictly less powerful than those
that do (in the context of computing the natural join). However, we believe that a systematic theoretical study of the tradeoff between time and space needed to compute the natural join is an
attractive route to pursue.
We will begin in Section 2 by introducing the fundamental concepts necessary to understand both
SAT problems and details on Tetris itself, all while giving a hands-on example of how Tetris would handle
a toy example. From there, we will move into Section 3, an in-depth analysis of our major contributions.
Afterwards, we will continue with our experimental results in Section 4. Then we will discuss related
work in the field in Section 5.
2 Background
In this section, we will introduce the concepts necessary to understand how Tetris functions, introduce
the concept of resolutions, and walk through how Tetris would handle a simple input.
2.1 SAT and Boxes
We begin by defining several key terms and ideas. Recall that a SAT problem consists of a series of
Boolean variables, x 1 , x 2 , ..., x n , joined together in a series of AND and OR clauses. Problems are generally
presented in the conjunctive normal form (CNF), a simplification wherein the entire formula is written
as a series of ANDs over a set of disjunctive clauses. One such example would be (x 1 ∨ x 2 ) ∧ (x 1 ∨ x¯2 ).
A solution, or satisfying assignment, to a SAT problem is an assignment of true or false to each of the
variables such that the Boolean formula is satisfied; that is, that all clauses are satisfied.
Next, we will consider the idea of boxes, which is how our algorithm will interpret SAT problems.
Each box is an n-dimensional structure in {0, 1}n , where n is the number of variables in the original SAT
problem. We will define this set as the output space; that is, all potential outputs will be elements of this
set. Each of our boxes exists within this hypercube, and along each dimension has the value 0, the value
1, or extends along the full length of the edge. The reason for this is simple: 0 corresponds to false, 1 to
true, and the length of the edge to both. Henceforth, we will use λ to refer to edges with length 1. We thus
form the following definition:
Definition 1 (Box Notation). A box takes the form 〈b 1, b 2 , ..., b n 〉, where each b i ∈ {T, F, λ}.
Observe that, from these definitions, we can consider every assignment to be a 0-dimensional box;
this will be important later.
Then, our goal will be to find the set of points within the output space that are not contained (see
Definition 2) by any boxes. Any such point will be termed an output point, and the goal of an algorithm
working on these boxes is to find all such output points.
Definition 2 (Containment). A box b is said to contain another box c if, for all points p ∈ {0, 1}n such that
p ∈ c, it is true that p ∈ b. Equivalently, the box 〈b 1, b 2 , ..., b n 〉 contains the box 〈c 1 , c 2 , ..., c n 〉 if, for all i ,
b i = c i or b i = λ.
However, there is one key difference between these two representations. Each clause in a CNF formula is essentially a subproblem wherein at least one variable’s assignment must match its value in the
clause for the assignment to possibly be satisfying. But with boxes, the exact opposite is true: if an assignment matches the value for the boxes on all non-λ dimensions, we reject the assignment. In other
words, if we consider a geometric visualization of these boxes, any and all assignments that fall within a
box are rejected.
Hence, our next step is to devise a means by which to convert any given SAT problem in CNF form to
the boxes format that Tetris can understand. As follows from our above observation, the most important
step is simply the negation of the CNF formula; the rest is all bookkeeping. For the exact algorithm, see
Algorithm 1.
Algorithm 1 Conversion from CNF to boxes
1: for each CNF clause do
2:   Negate the clause
3:   Set all x̄_i to F, and all x_j to T
4:   Set all variables not present in the clause to λ
5:   Insert into the database {See Definition 3}
Let us consider the following toy example CNF problem:
Example 1. (x 1 ∨ x 2 ) ∧ (x 1 ∨ x¯2 ) ∧ (x 2 ∨ x 3 ).
Our first step is to negate each clause, which will give us a disjunctive normal form (DNF) co-problem:
(x¯1 ∧ x¯2 ) ∨ (x¯1 ∧ x 2 ) ∨ (x¯2 ∧ x¯3 ).
Next, we will convert to boxes (by replacing a variable with T and its negation with F ) and add all
missing variables (as λ). After conversion, our three clauses become 〈F, F, λ〉, 〈F, T, λ〉, and 〈λ, F, F 〉.
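To make Algorithm 1 and Definitions 1–2 concrete, here is a small Python sketch (not the paper's actual implementation) that converts DIMACS-style CNF clauses into boxes and checks containment; it reproduces the three boxes of Example 1.

```python
# Boxes are tuples over {'T', 'F', 'L'}, where 'L' stands for lambda.
def clause_to_box(clause, n):
    """Algorithm 1 for one CNF clause, given DIMACS-style (+i for x_i, -i for
    its negation).  Negating the clause turns a positive literal x_i into F
    and a negated literal into T; absent variables become lambda."""
    box = ['L'] * n
    for lit in clause:
        box[abs(lit) - 1] = 'F' if lit > 0 else 'T'
    return tuple(box)

def contains(b, c):
    """Definition 2: box b contains box c iff b_i = c_i or b_i = lambda for all i."""
    return all(bi == ci or bi == 'L' for bi, ci in zip(b, c))

# Example 1: (x1 or x2) and (x1 or not x2) and (x2 or x3)
cnf = [[1, 2], [1, -2], [2, 3]]
boxes = [clause_to_box(cl, 3) for cl in cnf]
print(boxes)                                  # [('F','F','L'), ('F','T','L'), ('L','F','F')]
print(contains(boxes[0], ('F', 'F', 'F')))    # True: <F,F,lambda> contains <F,F,F>
```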
Panels (left to right): (x_1 ∨ x_2) as 〈F, F, λ〉; (x_1 ∨ x̄_2) as 〈F, T, λ〉; (x_2 ∨ x_3) as 〈λ, F, F〉.

Figure 1: Our starting boxes, and the corresponding SAT clauses. While boxes are, technically speaking, strictly the corners of what we depict as the boxes, we depict them with the edges and surfaces drawn for the purpose of visual clarity.
At this point, it is time to insert the boxes into our data structure. Let us list the fundamental operations the data structure must be able to perform:
Definition 3 (Tetris data structure). The Tetris data structure shall be able to perform the following operations:
1) Insert, input is the box to be inserted, no output
2) Contains, input is the box we are seeing if the structure contains, output is the containing box (see Definition 2)
3) GetAllContainingBoxes, input is the box we are seeing if the structure contains, output is the set of all
containing boxes
We will return to the details of the data structure implementation in Section 3.1.
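Meanwhile, a deliberately naive stand-in for the Definition 3 interface is sketched below; it keeps boxes in a plain list and is meant only to pin down the three operations, not to reflect the compressed trie of Section 3.1. The preference for boxes with more λs in Contains anticipates the discussion in Section 3.1.2.

```python
class NaiveBoxDB:
    """A simple stand-in for Definition 3: boxes in a plain list, containment
    checked box by box (the real structure is a compressed, SIMD-queried trie)."""

    def __init__(self):
        self.boxes = []

    def insert(self, box):
        self.boxes.append(box)

    def get_all_containing_boxes(self, box):
        return [b for b in self.boxes
                if all(bi == ci or bi == 'L' for bi, ci in zip(b, box))]

    def contains(self, box):
        """Return one containing box, preferring boxes with more lambdas
        (i.e. larger boxes), or None if no box contains the input."""
        hits = self.get_all_containing_boxes(box)
        if not hits:
            return None
        return min(hits, key=lambda b: sum(v != 'L' for v in b))
```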
2.2 Resolution
We then come to the concept of resolution, a key aspect of Tetris and most state-of-the-art SAT solvers.
Resolution can be defined over both CNF clauses and boxes; let us begin with the former. Let us consider
two clauses in our CNF Example 1 once again: specifically, (x 1 ∨x 2 ) and (x 1 ∨ x¯2 ). We see that these are two
very similar clauses; they differ only in that the x 2 term is negated in one and not the other. Therefore, we
can resolve these two clauses by removing the x 2 term and then taking the OR of all remaining variables.
In this case, this gives us (x 1 ). We then remove the original two clauses from the CNF problem and insert
this new clause in its place. This is a significant simplification.
Similarly, we can resolve any two clauses such that there is exactly one pivot point, by which we mean
a variable that appears in both clauses, but is negated in one and not the other. For instance, looking back
at our example, we can also resolve (x_1 ∨ x̄_2) with (x_2 ∨ x_3) to form (x_1 ∨ x_3). In this case, we would not
be able to remove the original two clauses, but we would have gained information. Let us now formally
define this process:
Definition 4 (Resolution on Clauses). Two clauses (x i 1 ∨ x i 2 ∨...∨ x i m ∨ v) and (y j 1 ∨ y j 2 ∨...∨ y j l ∨ v̄), i ∈ I ,
j ∈ J , I , J ⊂ [n], can be resolved if and only if there exists exactly one variable v, the pivot point, such that
v ∈ x and v̄ ∈ y.
The resolution of the two clauses is (x_{i_1} ∨ x_{i_2} ∨ ... ∨ x_{i_m} ∨ y_{j_1} ∨ y_{j_2} ∨ ... ∨ y_{j_l}).
Since boxes are simply another representation of the same problem, it follows that resolution can be
performed on boxes as well. First, we will require that there must exist exactly one variable on which one
box is true and the other box is false.1 Call this the pivot variable. In the output, set this variable to λ.
Then, for each other variable, if it is T in one or both boxes, set it to T in the output box; if it is F in one
or both boxes, set it to F in the output box; and if both variables are λ, then the resolution of the two is
also λ.
We see two possible resolutions in our Example 1. The resolution of 〈F, F, λ〉 and 〈F, T, λ〉, two coplanar and parallel edges, is the square 〈F, λ, λ〉, as depicted in Figure 2, and the resolution of the askew
edges 〈F, T, λ〉 and 〈λ, F, F 〉 is the edge 〈F, λ, F 〉, as depicted in Figure 3. For a formal definition of the
resolution operator, henceforth ⊕, see Definition 5.
Figure 2: The resolution of 〈F, F, λ〉 and 〈F, T, λ〉 on the vertex x_2 is the square 〈F, λ, λ〉. This is equivalent to (x_1 ∨ x_2) resolved with (x_1 ∨ x̄_2) being the clause (x_1).
1 This is exactly analogous to the requirement that we resolve on a pivot point in the clause version.
Figure 3: The resolution of 〈F, T, λ〉 and 〈λ, F, F〉 on the vertex x_2 is the edge 〈F, λ, F〉. This is equivalent to (x_1 ∨ x̄_2) resolved with (x_2 ∨ x_3) being the clause (x_1 ∨ x_3).
Definition 5 (Resolution on Boxes). Two boxes 〈b 1 , b 2 , ..., b n 〉 and 〈c 1 , c 2 , ..., c n 〉 can be resolved if and only
if there exists exactly one i such that b i is true and c i is false, or vice-versa.
In the resolved box a, a i = λ. Each a j , j 6= i is equal to b j ⊕ c j , where ⊕ is defined as follows:
T ⊕T = T
F ⊕F = F
λ⊕T = T
λ⊕F = F
T ⊕λ = T
F ⊕λ = F
λ⊕λ = λ
T ⊕ F is undefined.
Observe that resolution on boxes and resolution on clauses are identical:
Lemma 1. Resolution on boxes, with the additional restriction that exactly one variable must be true in
one box and false in the other, is exactly equivalent to resolution on SAT clauses.
Proof. Let (x_{i_1} ∨ ... ∨ x_{i_m}) and (y_{j_1} ∨ ... ∨ y_{j_l}), i ∈ I, j ∈ J, where I, J ⊂ [n], be the clauses we are resolving.
Assume WLOG that i 1 = j 1 is the pivot point. Then the resolved clause is (x i 2 ∨ ... ∨ x i m ∨ y j 2 ∨ ... ∨ y j l ).
The boxes equivalent to our starting clauses are 〈b_1, ..., b_n〉 and 〈c_1, ..., c_n〉, where b_k = F if x_k ∈ x, b_k = T
if x¯k ∈ x, and λ otherwise, and c k is defined similarly with respect to y. The resolution of these boxes a is
then defined as 〈a 1, ..., a n 〉, where a k = λ if k = i 1 = j 1 , and a k = b k ⊕ c k otherwise.
Now, let us calculate the box equivalent of the output of the clause-based resolution, t . It can be shown
that t k = λ if k = i 1 = j 1 , t k = x k = y k if k ∈ I , k ∈ J and k 6= i 1 , t k = x k if k ∈ I and k ∉ J , t k = y k if k ∈ J
and k ∉ I , and t k = λ if k ∉ I and k ∉ J . Inspection with the above definition of ⊕ reveals that t is exactly
equivalent to a. Since the same problem with arbitrary, equivalent inputs produced equivalent outputs,
the two operations must be equivalent.
Tetris introduces one additional restriction on resolution.
Definition 6 (Resolution on Boxes in Tetris). Two boxes b and c can be resolved if and only if there is
exactly one spot i such that b i = T and c i = F, or vice-versa, and for all j > i , b j = c j = λ.
In other words, we will demand that the pivot variable be the final non-λ variable. Therefore, while Tetris will perform the resolution of 〈F, F, λ〉 and 〈F, T, λ〉 (Figure 2), it will not perform the resolution of 〈F, T, λ〉 and 〈λ, F, F〉 (Figure 3). We see, then, that the ordering of the variables determines whether or not a resolution is even possible. This makes determining the global ordering of the variables a key issue,
as mentioned earlier as Theoretical Implication 2, which we will address later in Section 3.2.
In general, Tetris performs resolution on pairs of recently found boxes. Let k be the location of the
last non-λ variable in a box b. Then b k must be either true or false. If it is false, we will store the box for
future use. If it is true, then we will take this box b and resolve it with the stored box with the same value
for k whose last non-λ variable was false. By doing so, we will guarantee the production of a box where
the last n −k +1 variables have the value λ. For more details on how this works, along with the reasoning
for why such pairs can always be found, see Section 2.3.
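The two notions of resolution translate directly into code; the following sketch (with 'L' standing for λ) implements Definition 5 and the extra restriction of Definition 6, and reproduces the Figure 2 and Figure 3 examples.

```python
def resolve(b, c):
    """Definition 5: resolve two boxes on their unique pivot variable (one
    has T, the other F there); every other position combines by the rule
    T+L=T, F+L=F, T+T=T, F+F=F, L+L=L."""
    pivots = [i for i, (x, y) in enumerate(zip(b, c)) if {x, y} == {'T', 'F'}]
    if len(pivots) != 1:
        return None                       # not resolvable
    out = []
    for i, (x, y) in enumerate(zip(b, c)):
        out.append('L' if i == pivots[0] else (x if x != 'L' else y))
    return tuple(out)

def tetris_resolve(b, c):
    """Definition 6: Tetris only resolves when the pivot is the *last*
    non-lambda variable of both boxes."""
    pivots = [i for i, (x, y) in enumerate(zip(b, c)) if {x, y} == {'T', 'F'}]
    if len(pivots) != 1:
        return None
    k = pivots[0]
    if any(v != 'L' for v in b[k + 1:]) or any(v != 'L' for v in c[k + 1:]):
        return None
    return resolve(b, c)

print(resolve(('F', 'F', 'L'), ('F', 'T', 'L')))         # ('F', 'L', 'L'), Figure 2
print(tetris_resolve(('F', 'T', 'L'), ('L', 'F', 'F')))  # None: Tetris skips Figure 3
```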
2.3 Tetris
For now, let us return to our Example 1. When we last left off, we were just inserting the three clauses
into our data structure, which was loosely defined in Definition 3. For a formal definition of the database
and details for how it allows the set of boxes Tetris knows about to be quickly and efficiently queried, see
Section 3.1; for now, one can simply assume it to be a trie-based structure. Additionally, we can resolve
the first two boxes while leaving the third untouched, as in Figure 2; therefore, the database will contain
exactly the boxes 〈F, λ, λ〉 and 〈λ, F, F 〉 (see Figure 4). Furthermore, we will prepare an empty array of
boxes L of size n, which will be used later. The purpose of this array is to store and retrieve boxes that we
wish to resolve with other boxes.
Now that we have our database established, it is time to perform Tetris proper. The basic idea here
is very simple. We will pick a point P in the output space, which we will call the probe point; recall
that this point is itself a 0-dimensional box. We then determine whether or not any box in the database
contains this point. If one does, we will store this box in an additional data structure referred to as the
cache, which functions identically to the main database, and probe a new point. If no box contains P ,
we will list the point as a solution and furthermore add this point into the cache. Along the way, we will
perform resolution in order to create new and larger boxes. This process continues until the entirety of
the output space is covered by a single box, at which point we must have found every output point and
are done. Algorithm 3 has the details. It should be noted that this algorithm was originally presented
recursively in [6]; here, we present it iteratively both for the purposes of speed and because this allows
for non-chronological backtracking; in other words, we can backtrack more than one layer at a time.
Algorithm 2 Advance(box b, probe point &p) (note: p is a global variable)
1: while b Contains p do
2:   if the last non-λ variable of p is F then
3:     Set that variable to T {Return to the previous branching point and take the right, or true, branch}
4:   else
5:     while the last non-λ variable of p is T do
6:       Set that variable to λ {Return to the most recent level where we branched left}
7:     Set the last non-λ variable of p to T {Branch right here}
8:     Replace all λs after this variable with F {Repeatedly branch left}
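A direct transcription of Algorithm 2 on the tuple representation used above is given below; returning a fresh tuple instead of mutating p, and signalling exhaustion of the output space with None, are conventions of this sketch rather than of the paper.

```python
def last_non_lambda(p):
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 'L':
            return i
    return -1

def advance(b, p):
    """Algorithm 2: move the probe point p forward (in the depth-first order
    F-before-T) until it escapes the containing box b."""
    p = list(p)
    while all(bi == pi or bi == 'L' for bi, pi in zip(b, p)):
        i = last_non_lambda(p)
        if i >= 0 and p[i] == 'F':
            p[i] = 'T'                      # take the right (true) branch
        else:
            while i >= 0 and p[i] == 'T':   # retreat past exhausted right branches
                p[i] = 'L'
                i = last_non_lambda(p)
            if i < 0:
                return None                 # the whole output space is covered
            p[i] = 'T'                      # branch right at the last left branch
            for j in range(i + 1, len(p)):  # then repeatedly branch left
                p[j] = 'F'
    return tuple(p)

print(advance(('F', 'L', 'L'), ('F', 'F', 'F')))   # ('T', 'F', 'F'), as in the walkthrough
```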
Now, let us consider how this algorithm behaves with regards to our earlier Example 1. We pick as our
first probe point 〈F, F, F 〉, giving us the situation illustrated in Figure 4. We first scan our local cache, C , for
any boxes that contain this point; however, since this is the first probe point, the cache is trivially empty.
Next, we scan the database D. D contains all the boxes corresponding to the clauses in the original SAT
Algorithm 3 General Tetris for SAT
1: Establish variable ordering
2: Build the database D using Algorithm 1
3: C ← ∅
4: L ← An empty array of size n {This array is implicit in [6]}
5: p ← 〈F, F, ..., F〉
6: while 〈λ, λ, ..., λ〉 ∉ C do
7:   if (b ← C.Contains(p)) is nonempty then
8:     Advance(b, p) {Advance (see Algorithm 2) the probe point past b}
9:   else if (A ← D.GetAllContainingBoxes(p)) is nonempty then
10:    for all boxes b ∈ A do
11:      C.Insert(b)
12:      Advance(b, p) {Advance the probe point past b}
13:  else
14:    Add p to the output {There is no containing box, so p is an output point}
15:    C.Insert(p)
16:    Advance(p, p) {Advance the probe point past itself}
17:  k ← the location of the last non-λ variable in b w.r.t. the variable ordering
18:  if b_k = F then
19:    L[k] = b {Store the most recent left-branching box for a given depth}
20:  else
21:    r ← b ⊕ L[k] {Resolve this right-branching box with the corresponding left-branching box}
22:    C.Insert(r)
problem. The database just so happens to contain two containing boxes; for reasons that will become
clear shortly, the operation will choose to output 〈F, λ, λ〉. We insert the box 〈F, λ, λ〉 into C .
Our next task is to advance the probe point until it lies beyond our box. To do this, we proceed
according to Algorithm 2. The idea is to think of the set of all possible probe points as a tree that we are
performing a depth-first search on, with F representing left-branching paths and T representing rightbranching paths. We will continue along this depth-first search until we find a point not covered by the
most recently discovered box. This takes us to 〈T, F, F 〉. Note that if the database had fetched 〈F, F, λ〉, we
would not have been able to advance the probe point as far.
Finally, we insert our containing box into the array L at location 1, since only the first variable is nonλ, for future use. We know to do insertion here, rather than trying to resolve with a non-existent box,
because the value of that first non-λ variable is false. This takes us to the situation depicted in Figure 5.
Again we scan C , this time for the probe point 〈T, F, F 〉, and again we find no containing box in C .
So we scan D once again and find the containing box 〈λ, F, F 〉. We then insert this box into the cache
and advance the probe point to 〈T, F, T 〉. This time, although our containing box features a λ at the first
location, we determine the location in L into which we will insert based on the location of the last non-λ
variable, so we insert it into L[3].
This time, we find no containing boxes in either the cache or the database. Therefore, we have found
an output point (see Figure 6 for an illustration). We add 〈T, F, T 〉 to our output set, then add the box
〈T, F, T 〉 to our cache, which marks the point as found.
At this juncture, we find that the last non-λ variable is at location 3, but this time, it is true. Therefore,
Figure 4: The initial state of the database D, cache C , and the location of the first probe point p, which
will search the output space in a depth-first manner that can be tracked using the map on its right. D
is simply the union of all the boxes we created from the initial SAT problem, while p is set to an initial
value of 〈F, F, F 〉. L is currently empty.
Figure 5: The state of the database, cache, and probe point after the first round of the algorithm. Our
probe point found the box 〈F, λ, λ〉, so it added this box to C . Then, p was advanced until it reached a
point not contained by this box, which turned out to be 〈T, F, F 〉. L(1) is the box 〈F, λ, λ〉, which is the box
corresponding to the orange vertex in the map; the array is empty elsewhere.
we will extract the same-length box we stored previously at L[3] and resolve it with this box. We know
that this will be a legal resolution because we are scanning the output space in a tree-like fashion. This
means that, when retreating from a right branch, the box containing the corresponding left branch must
be able to contain the right branch if the final non-λ variable were set to λ instead. It follows that this
final non-λ variable must be the one and only pivot point between the two.
Therefore, we can and do perform this resolution; in this example, it is 〈T, F, T 〉 resolved with 〈λ, F, F 〉.
This outputs the box 〈T, F, λ〉. We furthermore store this box in L at location 2, since this box ends with
false at that index.
We continue forth with probe points 〈T, T, F〉 and 〈T, T, T〉. Neither is found in either D or C, so both are output points. Both again have their final non-λ variables at index 3, with 〈T, T, F〉 being
inserted into L at that index and then 〈T, T, T 〉 recovering that box so it can resolve with it to form the
box 〈T, T, λ〉. This time, the output of our resolution ends with true, so we recover the box at index 2 in
L, 〈T, F, λ〉, and take the resolution of these two boxes, giving us 〈T, λ, λ〉. Once again this ends in T, so
we can resolve it with the box we found back at the beginning that has been waiting in slot 1, 〈F, λ, λ〉, to
form the box 〈λ, λ, λ〉. This box completely covers the output space; therefore, the algorithm knows that
it has found all possible output points and terminates (see Figure 7 for an illustration).
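As a quick cross-check of this walkthrough, one can brute-force Example 1 directly from the box semantics: a point is an output point exactly when no starting box contains it. This is of course not Tetris itself, just a sanity check that the three output points above are the complete answer (model count 3).

```python
from itertools import product

boxes = [('F', 'F', 'L'), ('F', 'T', 'L'), ('L', 'F', 'F')]   # Example 1

def contained(p):
    """True iff some starting box contains the 0-dimensional box p."""
    return any(all(bi == pi or bi == 'L' for bi, pi in zip(b, p)) for b in boxes)

outputs = [p for p in product('FT', repeat=3) if not contained(p)]
print(outputs)        # [('T','F','T'), ('T','T','F'), ('T','T','T')]
print(len(outputs))   # 3, the model count of Example 1
```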
Figure 6: The state of the database, cache, and probe point after the first output point is found at 〈T, F, T 〉.
In getting here, we first found the box 〈λ, F, F 〉, and then probed the output point. After finding the output
point, that box was resolved with the aforementioned box to produce 〈T, F, λ〉, which was also added to
C . Note that, while the box 〈λ, F, λ〉 could be produced at this juncture, Tetris will not do so. L(1) contains
〈F, λ, λ〉 (the orange dot); L(2) contains 〈T, F, λ〉 (the purple dot); L(3) is empty.
Figure 7: The state of the database, cache, and probe point after the entire output space has been covered.
With each further output point discovered, boxes were added to the cache, which produced a chain of
resolutions that eventually resulted in the production of the box 〈λ, λ, λ〉. Note that there is no longer a
probe point, as there is nothing left to probe.
3 Our Improvements
Here, we will discuss the major additions introduced into Tetris in order to handle CNF inputs and increase practical efficiency. These include a new data structure, work on heuristically determining a global
variable ordering, and selectively caching only certain boxes.
3.1 Data Structure and Compression
For its data structure, the original Tetris paper simply states that a trie will suffice to achieve asymptotic
runtime guarantees. While this is true, a simple trie still leaves much to be desired; attempts to implement Tetris in such a simple manner produced a system significantly slower than other state-of-the-art
model counters. Our contribution is to design a novel system of tries that takes advantage of the nature
of the problem space to improve both runtime and memory usage.
3.1.1 Data Structure Description
As described above, the database must allow each variable to store three values: false, true, and λ. Therefore, the immediate approach is to use 3-tries as the base data structure. However, when #SAT instances
routinely have hundreds of variables, this results in an extremely deep problem space that requires a lot
of time to probe. Our next step, then, is to compress multiple layers into a single node that can be queried
in a single instruction.
To this end, we will first come up with a means to enumerate all possible boxes:
Definition 7. Let φ be a bijective function from the set of boxes onto the integers.
Example 2. One such way is as follows: Let λ be assigned 1; false, 2; true, 3. Then, for each box 〈b_1, b_2, ..., b_n〉, its numerical value is 3^{n−1} b_1 + 3^{n−2} b_2 + ... + b_n.² This gives us a bijective ternary numeration.
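A sketch of the enumeration from Example 2 follows. The observation that this bijective ternary code packs all 121 sub-boxes of length at most four into the range 0–120, and hence can index a cluster's bit vector directly, is a property of the numeration; the implementation's exact bit layout is not spelled out here.

```python
from itertools import product

DIGIT = {'L': 1, 'F': 2, 'T': 3}       # the assignment from Example 2

def phi(box):
    """Example 2: phi(<b_1,...,b_n>) = 3^(n-1)*b_1 + ... + b_n with
    lambda -> 1, false -> 2, true -> 3 (bijective ternary numeration)."""
    val = 0
    for b in box:
        val = 3 * val + DIGIT[b]
    return val

# Because no digit is zero, phi maps the 1 + 3 + 9 + 27 + 81 = 121 sub-boxes
# of length <= 4 onto 0..120 with no collisions.
codes = sorted(phi(b) for n in range(5) for b in product('LFT', repeat=n))
print(codes == list(range(121)))       # True
print(phi(('F', 'T')))                 # 2*3 + 3 = 9
```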
From there, we observe that there is exactly one box for which n is 0 (namely, the empty box 〈〉),
three boxes with n equal to 1, nine boxes with n equal to 2, twenty-seven boxes with n equal to 3, and
eighty-one boxes with n equal to 4. We will require that a single node within the trie be able to record a
box of any of these lengths. Additionally, it must be able to store the children of all possible boxes with n
equal to 4. Therefore, by compacting four logical layers into a single layer within the database, the result
is a trie that can store 121 possible boxes, the sum of the five aforementioned values, and can have 81
possible children. We will refer to this collection of variables as a cluster.
Definition 8 (Cluster). A Cluster is the set of variables that the database handles in a single operation. By
default, each cluster contains 4 variables.
Of course, this raises a new issue: When checking if the database contains a given input string x,
there can exist up to sixteen children of x that must be checked (since each T or F can be replaced by a
λ), and an even greater number of boxes that could be contained in this cluster may contain the input
string. This creates the need for a way to quickly and efficiently determine if a containing box exists in
this cluster and to create the list of children to be searched.
3.1.2 SIMD-based Trie
Here, we take inspiration from EmptyHeaded, a relational database engine [5], and utilize SIMD. As an
example, let us consider the simplified version of the data structure that contains only two layers, and
with n = 3. Suppose that 〈F, λ, λ〉 was known to be a box, and that 〈λ, T, F 〉 and 〈T, T, F 〉 are boxes that,
to be found, necessitate traversing into child clusters. We can see this depicted in Figure 8. Now, we
will determine whether or not this data structure contains the box 〈F, T, F 〉. Since each cluster contains
two layers, clusters with depth 0 will look at only the first two variables in the input box to determine
input; therefore, let us consider the sub-box 〈F, T 〉. Using φ, we can find in a lookup table the two 128-bit
bitstrings corresponding to this input sub-box. The first lists the set of boxes that, if they exist within
the cluster, would contain 〈F, T 〉, and is the second line of Figure 9. The second bitstring does much the
same for the set of children that, if truncated to two variables, contain 〈F, T 〉. Additionally, a cluster stores
two more 128-bit bitstrings: BOXES, which marks the boxes contained by the cluster; and CHILDREN,
which marks the child nodes of the cluster. The BOXES bitstring is specifically the top line of Figure
9. BOXES is marked for the box 〈F 〉 (which is equivalent to 〈F, λ, λ〉), while CHILDREN has bits marked
corresponding to the box prefixes 〈F, T 〉 and 〈T, T 〉.
2 This is exactly the mapping used in our implementation.
It follows that the intersection of these two pairs of bitstrings is the set of boxes and child prefixes
present in the data structure that contain the input; a single AND operation suffices to calculate it.
Figure 8: The SIMD operation, shown as the associated clusters, with each box marked by a blue box.
Each layer corresponds to a variable (with Layer 4 being empty because all boxes have only three variables), and checkmarks mark the boxes that are found to be containing boxes. The central branch is
created from the box 〈F, λ, λ〉. In the left branch, created by 〈λ, T, F 〉, a child is created corresponding
to the sub-box 〈λ, T 〉, and then the box is inserted in the child cluster at 〈λ, T, F 〉. In the right branch,
created by 〈T, T, F 〉, we similarly create a child corresponding to the sub-box 〈T, T 〉, and then inserting
〈T, T, F 〉 in the child cluster. We have drawn the database here as a 3-trie; to see it as a single, flattened
cluster, see Figure 10.
Let us inspect our output. In this particular example, we find that we have matched the box 〈F, λ〉
and the child with prefix 〈λ, T 〉.
At this point, we are faced with an interesting choice. Suppose for the sake of argument that the
child eventually leads to some containing box for the input. Now, if this is a check for containment,
the algorithm can only return the one box that it considers the “best" containing box. Which one, then,
should the algorithm choose? In general, it will choose the shortest-length box; in this example, it will
choose the box 〈F, λ〉. The reason for this is simple: the more λs in a box, the more space it covers; and the
more space it covers, the more output space it can cover. Additionally, when those λs are at the tail end
of a box, we know that we can immediately advance the probe point considerably. On the other hand,
when the λs are in the middle, the algorithm must re-find these boxes every time it scans a point that is
contained within this hypothetical box. This repetition is costly; we would rather avoid it.
Now, let us consider how our algorithm would handle this input if we were simply using traditional
tries; in other words, consider what would happen if our cluster size were 1, as in a standard 3-trie. This
algorithm would query the input box for its first value, find that it is F , and know that it could check
both the λ branch and the F branch. Since λs are generally to be preferred, it would take the λ branch.
Then, it would match the T value to the T branch, and proceed to the third layer, where it would match
the F value to the F branch and find a containing box, which it will return. In this way, it has taken us
three comparisons to find a containing box. Furthermore, the box found in the 3-trie version is of lower
quality: we would have preferred to find the 〈F, λ, λ〉 box instead.
On the other hand, let us consider what occurs when using full, four-variable clusters (Figure 10). In
this case, a single operation immediately finds all three containing boxes.
Database's BOXES:    0 0 1 0 0 0 0 0 0 0 0 0 0
                               &
Input 〈F, T 〉:       1 1 1 0 1 1 0 1 0 1 0 0 0
                               =
Containing Boxes:    0 0 1 0 0 0 0 0 0 0 0 0 0
(bit positions 0 through 12)
Figure 9: The algorithm takes the logical AND of the stored bitstrings, top, with the bitstring corresponding to the input, row 2, in order to produce the list of containing boxes and children, bottom. The BOXES
bitstring contains a single 1 to mark the box 〈F, λ〉. The second line contains all the potential boxes that
would contain 〈F, T 〉; in addition to the boxes that do exist, these include 〈λ〉, 〈F, T 〉, and many more. The
output of the operation has bits set corresponding to the box 〈F, λ〉.
We have therefore outperformed the traditional variant both in terms of the number of comparisons and in terms of the quality of the output.
In summary, each box is stored as a single bit in a 128-bit vector (with the last 7 bits unused), as is
a record of whether or not a given child exists. A lookup table is used to find the 128-bit vectors corresponding to the possible outputs. Then, the former two are concatenated, as are the latter two, and all
are compared using a single 256-bit AND operation. It can be seen that the output of this operation must
be the intersection of the potential containing boxes and children, found from the lookup table, and the
ones that actually exist. Hence, by calculating φ(b) for some box b, we can quickly find a box containing
b, if it exists; and if it does not, we can quickly generate the exact list of children to examine.
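To make the bitstring mechanics concrete, the following is a minimal Python sketch of the lookup-table-plus-AND trick, using arbitrary-precision integers in place of the 128-bit SIMD registers. The tiny two-variable setting and the helper names (phi, CONTAINING_LOOKUP) are our own assumptions, not the actual CNFTetris code.

# Minimal sketch of the lookup-table-plus-AND containment test.
from itertools import product

VALUES = ("F", "T", "L")                      # "L" stands for the wildcard lambda
SUBBOXES = list(product(VALUES, repeat=2))    # toy setting: 2-variable sub-boxes

def phi(subbox):
    """Map a sub-box to its bit position, analogous to the paper's phi."""
    return SUBBOXES.index(tuple(subbox))

def contains(general, specific):
    """Does `general` contain `specific`? (lambda matches anything)"""
    return all(g == "L" or g == s for g, s in zip(general, specific))

# Precomputed lookup table: for every possible input sub-box, a bitmask of all
# sub-boxes that would contain it if they were stored in the cluster.
CONTAINING_LOOKUP = {
    phi(inp): sum(1 << phi(box) for box in SUBBOXES if contains(box, inp))
    for inp in SUBBOXES
}

# A cluster stores which boxes actually exist as one bitmask (its BOXES bitstring).
boxes_mask = (1 << phi(("F", "L"))) | (1 << phi(("T", "T")))

# Query: which stored boxes contain the input <F, T>?  A single AND suffices.
input_subbox = ("F", "T")
hits = boxes_mask & CONTAINING_LOOKUP[phi(input_subbox)]
matched = [SUBBOXES[i] for i in range(len(SUBBOXES)) if hits & (1 << i)]
print(matched)   # [('F', 'L')] -- the box <F, lambda> contains <F, T>

The CHILDREN query works the same way with a second lookup table; the real implementation evaluates both masks together in a single wide AND.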
Additionally, in practice, it turns out that there is a certain sub-box value that shows up far more
often than any other: the sequence 〈λ, λ, λ, λ〉. Notably, this is the very sequence that accepts every single
possible input string. Therefore, in the event that a layer contains only this child and no boxes, it is in
fact possible to skip the entire layer. In practice, this produces significant savings on both computational
costs and memory usage, which contributes towards Theoretical Implication 3.
Let us now formally define the data structure and the algorithms used in our implementation:
Definition 9 (Data Structure). The data structure is a 121-ary trie, where each node of the trie is called a
cluster. The top level consists of a pointer to the root cluster, which is the cluster covering the first four variables in the ordering, and can perform the following operations: Insert (Algorithm 4), Contains (Algorithm
6), and GetAllContainingBoxes (Algorithm 8).
Definition 10 (Cluster). Each cluster in the database contains two bitstrings, BOXES and CHILDREN, identifying the sets of boxes and children it stores, respectively, along with an integer DEPTH that records the cluster's depth. The operation .at(i) on a bitstring calculates the intersection of that bitstring with a bitstring retrieved from a lookup table that lists the set of boxes or children that contain or potentially contain, respectively, the box with value i; setting .at(i) to 1 sets the specific bit referring exactly to that box. Each cluster corresponds to four layers of a standard 3-ary trie.
Definition 11 (Index of a Box). The index of a box b, index(b), is the location of the last non-λ variable in that box.
Figure 10: The clusters in Figure 8, flattened out into a single node with four variables to a cluster. This
is how the node would be stored in the actual database. Note that it is far more compressed than when
drawn out as a trie as in Figure 8. This cluster would have a DEPTH of 0, have three bits set in BOXES,
and no bits set in CHILDREN.
Example 3. The index of the box b = 〈F, T, λ, F, λ〉, index(b), is 4.
Algorithm 4 Insert
1: Given box b to insert, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}...b_{i4}
2: k ← index(b)
3: Call InsertCluster(b, k) (Algorithm 5) on the root cluster
Algorithm 5 InsertCluster
1: Input: a cluster T with depth DEPTH to insert on; box b to insert, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}, ..., b_{i4}; and the location k of the final non-λ cluster
2: if DEPTH = k then
3:     if BOXES.at(φ(c_DEPTH)) is nonempty then
4:         Return {If a containing box of b is already in the data structure, stop.}
5:     else
6:         Set BOXES.at(φ(c_DEPTH)) to 1
7: else
8:     if CHILDREN.at(φ(c_DEPTH)) ≠ 1 then
9:         Create a child cluster at location φ(c_DEPTH) with depth (DEPTH + 1).
10:        Set CHILDREN.at(φ(c_DEPTH)) to 1
11:    Call InsertCluster on the cluster indexed as φ(c_DEPTH).
12: Return
In the Insert algorithms, Algorithms 4 and 5, we recursively traverse clusters until we find the appropriate location in the data structure, and then set the appropriate bit to 1. Along the way, we will check
for containing boxes and immediately cease operation if one is found. Furthermore, if a child cluster that
contains the box we are inserting, or that contains a cluster along the path to that cluster, does not exist,
we create it.
Algorithm 6 Contains
1: Given box b to find a containing box of, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}...b_{i4}
2: output ← ContainsCluster(b) (Algorithm 7) called on the root cluster
3: Return output
Algorithm 7 ContainsCluster
1: Input: cluster T with depth DEPTH that is being checked for a containing box; box b to check containment of, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}...b_{i4}.
2: if (f = BOXES.at(φ(c_DEPTH))) is nonempty (i.e., there is at least one box in the intersection) then
3:     o = min_{x∈f} index(x)
4:     Return o
5: else if (f = CHILDREN.at(φ(c_DEPTH))) is nonempty (i.e., there is at least one child in the intersection) then
6:     for all children k ∈ f do
7:         a = k.ContainsCluster(b) {Scan these in order of increasing index}
8:         if a is nonempty then
9:             Return a
10: else
11:    Return ∅
In the Contains algorithms, Algorithms 6 and 7, we check the database to see if it contains any box
that contains some box b. Therefore, we traverse along clusters in our path, first checking to see if any
containing boxes exist; if we find one, we return that box immediately and cease checking further. If none
exists, we will perform a depth-first search of the children that could potentially contain a containing
box. This process continues until either a containing box is found, or else the search space is exhausted
and it is determined that no containing box exists.
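The following toy Python sketch mirrors the control flow of Algorithms 4-7 using plain sets and dictionaries instead of bitstrings. It is an assumed, simplified rendering (for instance, it returns the first containing sub-box rather than the minimal-index one), not the authors' implementation.

# Toy sketch of cluster insert/contains.  A box is a tuple of sub-boxes; each
# sub-box is a tuple over {"F", "T", "L"} ("L" = lambda).

def contains_sub(general, specific):
    return all(g == "L" or g == s for g, s in zip(general, specific))

class Cluster:
    def __init__(self, depth=0):
        self.depth = depth
        self.boxes = set()        # plays the role of the BOXES bitstring
        self.children = {}        # plays the role of the CHILDREN bitstring

    def insert(self, box, k=None):
        if k is None:             # Algorithm 4: index of the last non-lambda sub-box
            k = max(i for i, s in enumerate(box) if any(v != "L" for v in s))
        sub = box[self.depth]
        if self.depth == k:       # Algorithm 5, lines 2-6
            if any(contains_sub(b, sub) for b in self.boxes):
                return            # a containing box already exists: stop
            self.boxes.add(sub)
        else:                     # Algorithm 5, lines 7-11
            child = self.children.setdefault(sub, Cluster(self.depth + 1))
            child.insert(box, k)

    def contains(self, box):
        sub = box[self.depth]
        hits = [b for b in self.boxes if contains_sub(b, sub)]   # Algorithm 7, line 2
        if hits:
            return hits[0]
        for prefix, child in self.children.items():              # Algorithm 7, lines 5-9
            if contains_sub(prefix, sub):
                found = child.contains(box)
                if found is not None:
                    return found
        return None

# Example: the three boxes of Figure 8, split into 2-variable sub-boxes.
root = Cluster()
for b in [(("F", "L"), ("L", "L")), (("L", "T"), ("F", "L")), (("T", "T"), ("F", "L"))]:
    root.insert(b)
print(root.contains((("F", "T"), ("F", "F"))))   # ('F', 'L') -- a containing box is found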
Algorithm 8 GetAllContainingBoxes
1: Given box b to find all containing boxes of, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}...b_{i4}
2: output ← GetAllContainingBoxesCluster(b) (Algorithm 9) called on the root cluster
3: Return output
Algorithm 9 GetAllContainingBoxesCluster
1: Input: cluster T with depth DEPTH that is being checked; box b to find all containing boxes of, b = 〈c_1, c_2, ..., c_n〉, where each c_i here is a cluster consisting of b_{i1}...b_{i4}.
2: O ← ∅
3: if (F = BOXES.at(φ(c_DEPTH))) is nonempty (i.e., there is at least one box in the intersection) then
4:     O = F
5: else if (f = CHILDREN.at(φ(c_DEPTH))) is nonempty (i.e., there is at least one child in the intersection) then
6:     for all children k ∈ f do
7:         A = k.GetAllContainingBoxesCluster(b)
8:         O = O ∪ A
9: Return O
The GetAllContainingBoxes algorithms, Algorithms 8 and 9, are similar to the Contains algorithms.
There are, however, two key differences. First, while Contains terminates as soon as it finds a single
containing box, GetAllContainingBoxes will continue. Secondly, it returns the set of all containing boxes,
rather than just one; hence, the name. In all other regards, it behaves exactly as Contains does.
3.2 Global Variable Ordering
Up until this point, we have simply been assuming that all boxes must order their variables in exactly
the same order as they appear in the original SAT formulas. In other words, in each box b, b 1 must
correspond to x 1 , b 2 must correspond to x 2 , and so on. However, this does not need to be the case. We
can reorder the variables, and by doing so can greatly improve the runtime of our system.
The original Tetris paper cites the importance of the variable ordering. However, it assumes that
there exists an algorithm, exponential in n, to compute the optimal variable ordering. While this is
justifiable in the context of join problems, where n is small compared to the size of the database, in SAT
problems it is unacceptable. Furthermore, as computing the optimal ordering is NP-hard, we cannot
hope to improve upon this result. Indeed, even approximating the ordering is intractable [20].
Nevertheless, initial experimentation with Tetris made clear how impactful this choice can be. A
slight variation in ordering can result in a large difference in runtime. Thus, we turned to various heuristics and intuitions in order to find a quick and effective means to generate an ordering that works well in
practice, thereby contributing to Theoretical Implication 2.
First, let us define a few terms that we will use in our discussion of various ordering strategies:
Definition 12 (Degree). The degree of a variable is the number of clauses of which the variable is a part.
Example 4. In Example 1, (x 1 ∨x 2 )∧(x 1 ∨ x¯2 )∧(x 2 ∨x 3 ), x 1 has degree 2, x 2 has degree 3, and x 3 has degree
1.
Definition 13 (Closeness). Variables x i and x j are said to be close if there exists a clause that includes
both x i and x j . The fewer terms in the clause, the closer the two variables are said to be. Specifically, the
closeness of two variables, θ(x_1, x_2), is equal to 1/(s − 1), where s is the size of the smallest clause containing both variables.
Example 5. In the clause (x_1 ∨ x_2), x_1 and x_2 would have a closeness of 1; and in the clause (x_1 ∨ x_2 ∨ x_3), x_1 and x_3 would have a closeness of 1/2. If both clauses were part of the same SAT problem, x_1 and x_2 would still have a closeness of 1 because the first clause has a smaller size than the second.
Definition 14 (Interconnectedness). The interconnectedness of a cluster C, IC(C), is the sum of the closeness values over all C(4,2) pairs of variables in the cluster.
Example 6. Using Example 5, if x_1, x_2, and x_3 compose a cluster, its interconnectedness would be 2, since θ(x_1, x_2) = 1, θ(x_1, x_3) = 1/2, and θ(x_2, x_3) = 1/2.
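A small, hypothetical Python helper that computes Definitions 12-14 directly from a clause list may make these quantities concrete; the encoding of clauses as lists of signed variable ids is our own assumption.

# Sketch: degree, closeness, and interconnectedness from a clause list.
from collections import Counter
from itertools import combinations

clauses = [[1, 2], [1, -2], [2, 3]]            # Example 1: (x1 v x2)(x1 v ~x2)(x2 v x3)

def degree(clauses):
    d = Counter()
    for c in clauses:
        for var in set(abs(l) for l in c):     # count each variable once per clause
            d[var] += 1
    return d

def closeness(clauses, xi, xj):
    sizes = [len(c) for c in clauses
             if xi in map(abs, c) and xj in map(abs, c)]
    return 1.0 / (min(sizes) - 1) if sizes else 0.0

def interconnectedness(clauses, group):
    return sum(closeness(clauses, a, b) for a, b in combinations(group, 2))

print(degree(clauses))                                       # Counter({2: 3, 1: 2, 3: 1})
print(closeness(clauses, 1, 2))                              # 1.0
print(interconnectedness([[1, 2], [1, 2, 3]], (1, 2, 3)))    # 2.0, matching Example 6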
In general, we note two high-level strategies that we have found improve the performance of a given
global variable ordering:
a) High-degree First In Tetris, handling high-degree variables early tends to improve performance. To
see the reason for this, we must consider the nature of the algorithm. Scattering high-degree variables
throughout the ordering forces the algorithm to branch frequently, which means that, when testing for
inclusion or for containing boxes, the algorithm must scan all possible branches. This is highly inefficient. Instead, we focus the branches as much as possible to the beginning, with the hope being that, as
the algorithm progresses down from there, most layers will have few if any divergent choices. If this is
true, then the inclusion check can be handled quickly. Let us proceed to see why this is so. First, we will
introduce an example of why placing high-degree variables early in the ordering proves effective. Let us
consider the following sample problem:
Example 7. Consider the SAT formula (x 1 ∨x 3 )∧(x 2 ∨x 3 ). The equivalent box problem, using the ordering
(x 1 , x 2 , x 3 ), would begin with database D containing the boxes {〈F, λ, F 〉, 〈λ, F, F 〉}. x 1 has degree 1, x 2 has
degree 1, and x 3 has degree 2.
Now, let us consider how the algorithm would attack this problem if we used this naive, low-degree
first ordering, which would be (x 1 , x 2 , x 3 ). We immediately note two things. First, there are no boxes
that have λ for the final variable. This means that the algorithm will never find a box that allows it to
skip multiple probe points unless it can use resolution to create a new box that happens to have that
property. In fact, that will not occur. Additionally, we can consider all 8 possible probe points and track
how many comparisons the algorithm would need to make on each point, assuming that each cluster
only covers a single variable instead of four. We can see that the algorithm will always have to calculate
the set intersection for the cluster of depth 1, will have to perform the set intersection for the cluster with
depth 2 on 50% of probe points, and will have to perform the set intersection for the cluster with depth 3
on 75% of probe points.
Let us contrast this with the high-degree ordering, which moves x 3 to the front and x 1 to the back to
give us the ordering (x 3 , x 2 , x 1 ). Now, our database contains the boxes 〈F, λ, F 〉 and 〈F, F, λ〉. This time, we
do have a box with a λ at the back; specifically, 〈F, F, λ〉. Therefore, after the probe point 〈F, F, F 〉 finds this
box, the algorithm will advance past 〈F, F, T 〉 entirely. Additionally, while we still have to perform a set
intersection for the cluster with depth 1 100% of the time, and 50% of the time for the cluster with depth
2, we only have to perform the set intersection at depth 3 25% of the time — and all of these numbers
ignore how we skipped one of the probe points entirely.
While this is of course a very simple example, this illustrates the principles that cause the strategy to
be effective in larger datasets.
b) Local Interconnectedness As a direct result of the 121-ary trie-based system described in Section
3.1, if a box has multiple non-λ variables within the same 4-variable cluster, they can all be recovered with
a single operation. Therefore, maximizing the interconnectedness of these 4-variable blocks provides an
advantage.
To illustrate, let us consider the following example:
Example 8. (x 1 ∨ x 3 ) ∧ (x 2 ∨ x 4 ), with each cluster containing 2 variables rather than 4.
If we use the naive strategy of keeping the variables ordered as-is, the first cluster contains x 1 and
x 2 while the second cluster contains x 3 and x 4 . Therefore, both clusters have interconnectedness 0,
and we find that all boxes transcend a cluster boundary. In other words, it will always take at least two
comparisons to find either of these boxes. However, if we had gone with the ordering (x 1 , x 3 , x 2 , x 4 ) instead, the box corresponding to (x 1 ∨ x 3 ) would be entirely contained within the first cluster, and the box
corresponding to (x 2 ∨ x 4 ) would be entirely contained within the second cluster. Therefore, each box
would be entirely contained within a single cluster, each cluster would have an interconnectedness of 1, and each box could be recovered with only a single comparison. This saves a large number of
comparisons over the long term.
3.2.1 Ordering Algorithms
Two major methods were employed in order to achieve these aims. The first was a descending degree
sort. While this only directly achieved goal (a), in practice it did an acceptable job with goal (b). Additionally, we constructed three variations on this method. The first, naive degree descent, is where we
simply order the variables according to their degree, using Algorithm 10.
Algorithm 10 Naive Degree Descent Ordering
1: Given a set of variables V and, for each variable v, its degree v d :
2: O ← SORT(V on v d , descending)
3: Return O
The second, optimally grouped degree descent, forms all possible groups of four variables, finds
the greatest possible interconnectivity among these groups, and then selects from all groups with the
greatest interconnectivity on the basis of the combined degree of the group of four, using Algorithm 11.
While this proved effective, it is a slow algorithm with a runtime of Θ(n^8).
Algorithm 11 Optimally Grouped Degree Descent Ordering
1: Given a set of variables V and, for each variable v, its degree v_d:
2: O ← ∅
3: G ← all possible sets of four variables {v_i, v_j, v_k, v_l} from V
4: while G is nonempty do
5:     maxIC ← max_{g∈G} IC(g) {Determine the maximum possible interconnectedness of all remaining groups.}
6:     X ← argmax_{g∈G s.t. IC(g)=maxIC} (Σ_{v∈g} v_d) {Of the groups with max interconnectedness, select the group for which the sum of the degrees of all variables is the greatest.}
7:     O ← (O, X) {Append this group to the ordering.}
8:     for v ∈ X do
9:         for Y ∈ G do
10:            if v ∈ Y then
11:                G ← G \ Y {Remove each grouping that contains one of the variables in the group that was selected.}
12: Return O
This necessitated the creation of the third subtype, the heuristically grouped degree descent ordering. This ordering works in groups of four. When creating a group, the first node chosen is the highest-degree remaining variable. Then, for each of the remaining three variables, the algorithm picks the variable with the highest interconnectedness to the nodes already chosen for this group of four, breaking ties based on degree. The result, Algorithm 12, is an algorithm that can compute its ordering significantly faster than the optimally grouped variant, while Tetris run on this ordering remains competitive with Tetris run on the optimal ordering.
Algorithm 12 Heuristically Grouped Degree Descent Ordering
1: Given a set of variables V and, for each variable v, its degree d_v:
2: i ← 0
3: X ← ∅
4: O ← ∅
5: while V is nonempty do
6:     if i = 0 then
7:         x ← y where Degree(y) = max_{v∈V}(d_v)
8:     else
9:         maxIC ← max_{v∈V}(Σ_{x∈X} θ(v, x)) {Calculate which variable has the best interconnectedness with the already chosen variables}
10:        x ← y where d_y = max_{v∈V s.t. v_IC = maxIC}(d_v) {Break ties based on degree}
11:    O ← (O, x)
12:    X ← X ∪ {x}
13:    i ← i + 1
14:    if i = 3 then
15:        i ← 0
16:        X ← ∅
17: Return O
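The following runnable Python sketch is a loose rendering of Algorithm 12 under our own assumptions (it removes each chosen variable from V and parameterizes the group size); it is meant only to illustrate the greedy pick-by-degree-then-closeness loop, not to reproduce the implementation.

# Sketch of the heuristically grouped degree descent ordering.
def heuristic_grouped_order(variables, degree, theta, group_size=4):
    """degree: dict var -> degree; theta: closeness function theta(u, v)."""
    remaining = set(variables)
    order, group = [], []
    while remaining:
        if not group:                      # first pick of a group: highest degree
            best = max(remaining, key=lambda v: degree[v])
        else:                              # otherwise: best closeness to the group,
            def score(v):                  # breaking ties by degree
                return (sum(theta(v, x) for x in group), degree[v])
            best = max(remaining, key=score)
        order.append(best)
        group.append(best)
        remaining.discard(best)
        if len(group) == group_size:       # start a new group
            group = []
    return order

# Example 8: (x1 v x3) and (x2 v x4), with 2-variable groups.
clauses = [[1, 3], [2, 4]]
deg = {v: sum(v in c for c in clauses) for v in (1, 2, 3, 4)}
def theta(u, v):
    sizes = [len(c) for c in clauses if u in c and v in c]
    return 1.0 / (min(sizes) - 1) if sizes else 0.0
print(heuristic_grouped_order([1, 2, 3, 4], deg, theta, group_size=2))   # e.g. [1, 3, 2, 4]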
Additionally, we employ the Treewidth tree decomposition, which was introduced in [16]. In essence,
the idea here is to minimize the width of the search tree; in our domain, this corresponds to increasing
the locality and local interconnectedness of variables. This naturally did a very good job with interconnectivity, while doing a decent job of placing high-degree variables early.
We also experimented with the minfill ordering, described in [12]. This ordering sets the elimination
order such that the node to be eliminated is the node whose removal makes the smallest impact on the
overall graph. While this ordering has proved effective in similar applications, we found it to perform
poorly with Tetris.
In Table 1, we can see how these various orderings performed in practice on representative graph-based and non-graph-based benchmarks. For instance, while the Treewidth sort outperformed all others on the AIS8 dataset [3], on the WikiVotes dataset (created using a SNAP [19] dataset; see Section 4.1.1) the same ordering caused Tetris to time out. Notably, we see that the Heuristically Grouped Degree Descent
takes only slightly longer to process the input compared to the Naive Degree Descent, but significantly
less time than the Optimally Grouped Degree Descent; however, the runtime does not suffer significantly
when going from the optimal ordering to the heuristic one.
3.3 Selective Insertion
While the original Tetris paper calls for every box created through the resolution process to be inserted into the database, this proved to be inefficient in practice. Very frequently, this will result in
a huge increase in the number of branches that the algorithm must scan while trying to find the output
point without notably improving the quality of the containing boxes found. Therefore, we only insert
those boxes that contain a suitably high percentage of λs. The best results generally come from requiring
slightly less than 50 percent of the layers to be composed entirely of λs.
In Table 2, we have posted the runtime for the AIS10 dataset [3], showing the relationship between the number of λs we require in order to store a box and the runtime of Tetris. Performance suffers at extreme settings, with optimal performance resulting from an insertion ratio of close to 1/2.
RUNTIME WITH DIFFERENT ORDERINGS

Dataset     Ordering                               Load Time (Seconds)   Runtime (Seconds)
AIS8        Naive Degree Descent                   .001                  1.494
AIS8        Heuristically Grouped Degree Descent   .009                  2.607
AIS8        Treewidth                              .002                  .45
AIS8        Minfill                                .008                  7.005
AIS8        Optimally Grouped Degree Descent       1.629                 0.573
WikiVotes   Naive Degree Descent                   .927                  34.124
WikiVotes   Heuristically Grouped Degree Descent   1.024                 21.509
WikiVotes   Treewidth                              2.349                 Timeout
WikiVotes   Minfill                                2.032                 4059.426
WikiVotes   Optimally Grouped Degree Descent       2.06                  23.142
Table 1: The performance of various ordering schemes on two datasets. As one can see, no ordering does
best on both datasets; indeed, the best ordering on one is the worst on the other. Insertion Ratio (see
Section 3.3) was set to .5 for these tests.
Hence, with regard to Theoretical Implication 3, we find that by decreasing space complexity, we furthermore improve runtime.
4 Our Experimental Results
Here, we compare CNFTetris (that is, Tetris designed to solve CNF problems) with other model counters, contrasting their ability to tackle model counting problems. A model counting problem, simply put, asks: given a CNF formula, output the number of satisfying solutions to that formula.
Since Tetris was originally designed to handle database joins, these are more natural problems for the
algorithm to solve than the corresponding SAT problems, which are to simply determine whether or not
any solution exists.
While the model counting problem allows for a solver to simply find the number of solutions without
finding the solutions themselves, CNFTetris does in fact output all of the solutions. While this admittedly
poses a disadvantage compared to the solvers we are comparing against, for some datasets, CNFTetris
runs faster in spite of this.
We compare our results with those of the sharpSAT [28], dSharp [22], and Cachet [26] model counters
due to their recognition as state-of-the-art model counters. All tests were performed using a single thread
on an 8-core E5 v3 2.6GHz processor with 64GB of RAM.
Additionally, we include two types of datasets. The first is derived from join problems on graphs.
These are the sort of problems that Tetris was originally designed to solve; as such, CNFTetris does a very
good job on them. The second set is a selection of standard model counting benchmarks from various
competitions held over the past several years. Most model counters have been trained to solve such
problems, so they serve as an apt second set of benchmarks for CNFTetris to compete against.
INSERTION RATIO VS. RUNTIME

Ratio    Time (Seconds)
.00      140.999
.05      123.49
.10      79.003
.15      71.229
.20      41.956
.25      36.685
.30      32.064
.35      24.405
.40      22.749
.45      21.784
.50      24.171
.55      25.205
.60      28.157
.65      35.738
.70      44.657
.75      51.809
.80      54.622
.85      66.529
.90      64.16
.95      75.829
1.00     88.001
Table 2: A comparison of insertion ratios with the time to solve the AIS10 dataset [3]. For these tests, we
used the Treewidth ordering; similar behavior was observed from all ordering strategies.
4.1 Graph Results
Here, we will compare and contrast how various solvers performed on model counting problems created
from graphs.
4.1.1 Dataset Generation
The CNF graph datasets were created using the publicly available SNAP datasets [19]. These are graph
datasets; that is, each consists of a set of vertices and a set of edges connecting those vertices. Each of
these datasets is a natural problem; some arose from social networks, while others are anonymized data
from other corners on the Internet. We can then use this data to run various queries; for instance, we
can determine how many triangles exist in the graph. Our goal, then, is to convert these problems into
an equivalent CNF problem so that we can use CNFTetris and the other model counters to solve them.
To do this, each vertex is first assigned a unique binary encoding using log(n) bits. We furthermore
increase the number of bits such that there are log(n) bits times the size of the data structure being
looked for in the graph. For instance, if we are performing the triangle query on the dataset, there will be
3 · log(n) bits used in the encoding. Henceforth, let k represent the size of this query. Each of these bits
will correspond to a variable in the CNF encoding of the problem. In essence, each of these repetitions represents a vertex in, e.g., the triangle.
RUNTIME ON VARIOUS DATASETS

                                 CNFTetris              sharpSAT               dSharp                 Cachet
Query     Base Graph/Dataset  Loadtime   Runtime     Runtime    Speedup     Runtime    Speedup     Runtime    Speedup
3-clique  Wikivotes           1.080      21.55       Timeout    n/a         Timeout    n/a         Timeout    n/a
3-clique  Facebook            .593       11.22       Timeout    n/a         Timeout    n/a         Timeout    n/a
3-clique  Soc-Epinions        77.420     309.418     Timeout    n/a         Timeout    n/a         Timeout    n/a
2-path    Wikivotes           2.32       35.171      31801.8    904.2       36947.5    1050.51     28838.1    819.94
2-path    Facebook            .752       10.236      4637.25    453.03      3871.86    378.26      2210.42    215.95
2-path    Soc-Epinions        6.22       236.439     Timeout    n/a         Timeout    n/a         Timeout    n/a
-         AIS6                0.0021     .013        .004       .308        .0074      .569        .0135      1.034
-         AIS8                0.0027     .45         .0466      0.104       .3386      0.752       .300       .667
-         AIS10               .00798     21.55       2.629      .122        14.757     .685        16.4724    .764
-         AIS12               .0198      1732.03     124.937    .0721       1002.73    .579        1020.64    .589
-         ls8-simplified4     .00063     .123        .004933    .0401       .021       .171        .01        .0813
-         LS5 firstr          .000656    .409        .0156      .0381       .095       .232        .025       .0611
Table 3: This table shows the comparative results of various solvers on CNF datasets created using various SNAP graphical datasets and SAT datasets. All runtimes are in seconds; timeout was set at 40,000
seconds. For these tests, we used an insertion ratio of .45 and the Heuristically Grouped Degree Descent ordering for
CNFTetris. Wikivote [19] contains 39 variables and 745485 clauses; Facebook [19] contains 36 variables
and 464234 clauses; and Soc-Epinions [19] contains 51 variables and 4578589 clauses (all clause data is
for 3-clique; 2-path has approximately 2/3rd that number of clauses). For the SAT datasets, AIS6 [3] has
61 variables and 581 clauses; AIS8 [3] has 113 variables and 1520 clauses; AIS10 [3] has 181 variables and
3151 clauses; AIS12 [3] has 265 variables and 5666 clauses; ls8-simplified4 [2] has 119 variables and 410
clauses; and LS5-firstr [2] has 125 variables and 529 clauses. In CNFTetris, loadtime refers to the time to
determine the variable ordering and insert the boxes into the database, while runtime is the time to find
all satisfying solutions.
Next, we will encode each absent edge (i.e., a pair of vertices v_1 and v_2 such that the edge (v_1, v_2) ∉ E, where E is the edge set of the graph) as C(k,2) total Boolean formulas for the k-clique query, and k formulas for the k-path query. Each of these formulas corresponds to, e.g., one of the three edges of a triangle.
Observe that any possible satisfying solution to the SAT problem cannot select an edge that does not
exist; therefore, any assignment that matches one of these formulas on all variables must be rejected.
Equivalently, any accepting assignment must match at least one variable in the inverse. This naturally
leads to a CNF definition, which is what we create. We will repeat this encoding over each possible
set of vertex pairings such that the lower-indexed vertex is always written before the higher-indexed
vertex, while adding additional clauses to reject all edges that would be from the higher-indexed vertex
to the lower-indexed vertex. Simplifying resolutions are also performed where possible. Therefore, we
have created a CNF problem where, for each output to the query in the original problem, there exists a
solution. We can and do use this problem instance as input for both Tetris and other model counters.
Let us examine an example instance of this. Consider the very simple example graph depicted in Figure
11 below, using the triangle query. In order to encode the non-edge (v 2 , v 4 ), we must first calculate the
binary encodings of each of these vertices. These are (01) and (11), respectively. We then flip all of the
bits, giving us (10) and (00). Then we construct three CNF clauses, each of which corresponds to being
the first, the second, or the third edge of the triangle. The first will be (x 1 ∨ x¯2 ∨ x¯3 ∨ x¯4 ), with (x 1 ∨ x¯2 )
corresponding to (10) and (x¯3 ∨ x¯4 ) corresponding to 00. Similarly, the second will be (x 1 ∨ x¯2 ∨ x¯5 ∨ x¯6 );
and the third will be (x 3 ∨ x¯4 ∨ x¯5 ∨ x¯6 ). Additionally, we insert clauses forbidding “bad" orderings of
the points; in other words, we are making sure we do not count (v 1 , v 2 , v 3 ) and (v 2 , v 3 , v 1 ) as separate
triangles.
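A short, hypothetical Python generator for this non-edge encoding (triangle query, with the lower-indexed vertex kept in the earlier slot, and without the extra ordering clauses) is sketched below; the variable numbering and helper names are our assumptions, not the authors' generator.

# Sketch: clauses produced by one missing edge under the triangle query.
from itertools import combinations

BITS = 2                       # log2(number of vertices) in the toy graph
K = 3                          # size of the query pattern (triangle)

def vertex_literals(vertex_id, slot):
    """Literals forbidding `vertex_id` in triangle slot `slot` (0-based).
    Bit b of the id maps to CNF variable slot*BITS + b + 1; each literal is the
    flipped bit, so the clause is falsified exactly by this vertex id."""
    lits = []
    for b in range(BITS):
        var = slot * BITS + b + 1
        bit = (vertex_id >> (BITS - 1 - b)) & 1
        lits.append(var if bit == 0 else -var)      # flip the bit
    return lits

def encode_non_edge(u, v):
    clauses = []
    for s_u, s_v in combinations(range(K), 2):      # u and v cannot co-occupy any two slots
        clauses.append(vertex_literals(u, s_u) + vertex_literals(v, s_v))
    return clauses

# Non-edge (v2, v4) from Figure 11, with ids v2 = 0b01 and v4 = 0b11:
for clause in encode_non_edge(0b01, 0b11):
    print(clause)
# Prints [1, -2, -3, -4], [1, -2, -5, -6], [3, -4, -5, -6], i.e. the three
# clauses (x1 v ~x2 v ~x3 v ~x4), (x1 v ~x2 v ~x5 v ~x6), (x3 v ~x4 v ~x5 v ~x6) from the text.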
Now let us consider what happens when Tetris, run on this CNF input to recover the number of triangles, uses the probe point (assuming the naive ordering) 〈T, T, T, F, F, T 〉, that is, the probe point that corresponds to the inverted binary representations of v_1, v_2, and v_3, the three vertices in the top-right triangle. Since each of the selected edges does not correspond to the missing edge, we
know that all three of those clauses must be satisfied; and since each edge is in the index order, we know
that the additional clauses that we added will also accept our input. Therefore, Tetris will add this probe
point to the output list. This continues for all other probe points until Tetris has found all triangles.
Figure 11: A sample graph on four vertices, used in the above example. We will encode the non-edge
(v 2 , v 4 ) as our SAT formula, along with additional clauses that ensure we only count triangles once,
which will allow us to run the query as a #SAT problem.
4.1.2 Results Analysis
As can be seen in Table 3, while all of the other solvers find these problems to be difficult, CNFTetris
solves them quickly. Queries that take seconds on CNFTetris wind up taking hours on the competition, with CNFTetris running nearly a thousand times faster on some problems. This is largely due to
the extremely high number of clauses relative to the number of variables, along with the fact that these
clauses contain a large number of variables; these factors are not present in many of the standard SAT
benchmarks. For instance, while the average clause in many SAT benchmarks contains two or three variables, here the average clause has thirty or more. And while SAT benchmarks rarely have over ten times
as many clauses as variables, here the system is forced to tackle an environment where the number of
clauses is exponentially larger than the number of variables. Note that all of the solvers we are comparing
against use unit propagation techniques in order to count models [9]; see Section 5 for details. Because
of this, the increased number of clauses directly corresponds to increased work for these solvers.
4.2 Nongraph Results
In this section, we will discuss how CNFTetris performed as compared to other solvers on standard
model counting benchmarks.
4.2.1 About the Datasets
These datasets are a combination of datasets from the SATLIB datasets [3] and the SampleCount benchmarks for model counting taken from International Joint Conference on Artificial Intelligence ’07 [2]. We
chose to use the AIS datasets for several reasons. First, each of the datasets terminates in a reasonable
amount of time on all solvers, allowing us to find interesting comparisons. Secondly, due to the existence of increasingly-sized versions of this dataset, we can use this as insight into whether or not Tetris is
scaling efficiently with the size of the dataset. Additionally, we featured the ls8-simplified4 and LS5firstr
datasets, which gave us insights into our implementation’s strengths and weaknesses.
4.2.2 Results Analysis
As Table 3 shows, Tetris is competitive with dSharp and Cachet on many of the datasets. Indeed, a factor
of 2 separates us from either solver on all of the AIS datasets, a difference that engineering work alone
can easily overcome. While there is significantly more space between it and sharpSAT, a factor of 10 on
average, we believe the distance is not insurmountable.
The largest gaps exist on the ls8-simplified and LS5-firstr datasets. On these, CNFTetris is roughly a
factor of 5 off of the worst of the competition, and a factor of over 20 as compared to sharpSAT. The reason for this is simple: both of these datasets contain pure variables. In other words, there exist variables
x such that, in all clauses, x̄ never appears, or vice-versa. This is an important piece of information, one
that can and must be utilized, but CNFTetris in its current state does not know how to do so. However,
since we know what the problem is, we expect to be able to quickly and efficiently attack this issue.
5 Related Work
Our work builds on Tetris as developed by Abo Khamis et al. in [6]. In that work, the authors introduced
Tetris as a beyond-worst-case algorithm for geometrically solving the database join problem. This in
turn built on work on the Minesweeper [23], NPRR [24] and Leapfrog [29] algorithms, of which Tetris
is a generalization. Furthermore, Tetris itself can be considered a version of the DPLL algorithm [10]
with clause learning. In DPLL, which is itself an evolution of the earlier DP [11] algorithm, a variable is
chosen at every stage and assigned to be either true or false. The algorithm then uses unit propagation
in order to simplify clauses under these assumptions. In this technique, after the solver assigns a value
to a variable, every other clause is inspected to see if this assignment creates a unit clause (i.e. a clause
with only one variable in it), and to see if resolutions can be performed. This process continues until
a conflicting clause (that is, a clause that is violated by the assignments) is found, at which point the
algorithm is forced to backtrack. In the clause learning versions, introduced in [27], the solver takes this
as an opportunity. It determines where it went astray, adds a new clause to its cache that is the negation
of this errant assignment, non-chronologically backtracks to where this decision-making took place, and
then proceeds in the opposite direction.
The reason why CNFTetris is a form of this algorithm follows from the aforementioned method of
converting from SAT clauses to boxes, and vice-versa (see Algorithm 1). Since these two representations
are exactly equivalent, any operation performed on one representation can be translated into an operation on the other. Hence, every single operation Tetris performs on the boxes over its execution must
correspond exactly to a set of operations on the original clauses.
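As a minimal illustration of that clause-box correspondence (the paper's Algorithm 1 itself is not reproduced in this excerpt, so the helpers below are an assumed rendering), the following Python functions convert between the two representations; they reproduce the boxes of Example 7.

# A box lists, per variable, the assignments it rejects: a clause is falsified
# exactly when every literal is false, so each literal contributes the negation
# of its satisfying value and every other variable gets lambda ("L").

def clause_to_box(clause, num_vars):
    box = ["L"] * num_vars
    for lit in clause:
        box[abs(lit) - 1] = "F" if lit > 0 else "T"   # falsifying value of the literal
    return tuple(box)

def box_to_clause(box):
    return [(i + 1) if v == "F" else -(i + 1)
            for i, v in enumerate(box) if v != "L"]

# Example 7: (x1 v x3) and (x2 v x3) over the ordering (x1, x2, x3)
print(clause_to_box([1, 3], 3))          # ('F', 'L', 'F')
print(clause_to_box([2, 3], 3))          # ('L', 'F', 'F')
print(box_to_clause(("F", "L", "F")))    # [1, 3]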
For instance, the Contains operation matches up with the idea of a conflicting clause. When a containing box is found, we can consider this as finding a box that rejects the current probe point as a potential output point. Meanwhile, a conflicting clause rejects a potential satisfying assignment in much
the same way. Furthermore, over the course of Tetris, the algorithm tentatively assigns a variable to either true or false, and then proceeds along with this assumption until a contradiction is found, all while
learning additional clauses where possible through the resolution process. When a containing box is
found or synthesized through resolution, and we advance the probe point accordingly, we are in essence
backtracking to the earliest decision point and choosing to go in the opposite direction, just as DPLL
with clause learning does. Therefore, this is exactly the DPLL algorithm with clause learning, with the
added restriction of a fixed global variable ordering [6].
Tetris additionally utilizes a three-value logic system. While similar systems have been utilized in
database schemes, such as by Zaniolo in [31], in these systems the three values are true, false, and unknown. Here, however, the three values we are considering can be summarized as true, false, and both.
This causes a number of key differences. For instance, true ∧ unknown is equivalent to unknown, while
true ∧ both is equivalent to true. Similarly, true ∨ unknown is equivalent to true, while true ∨ both is equivalent to both.
Much work has been done in creating SAT solvers. Let us briefly discuss those state-of-the-art solvers
we are comparing our work against. First, let us consider Cachet[26]. This solver was originally released
in 2005, with minor compatibility updates continuing through the most recent version, which came out
in 2015 [1].
Next, there is sharpSAT. First released in 2006, sharpSAT significantly eclipsed contemporary solvers
[28]. sharpSAT has been maintained over time, with the most recent release in 2013 [4].
Finally, we come to dSharp. The most recently released of our three competitors, dSharp was introduced in 2012 in order to efficiently compile CNF problems into the Decomposable Negation Normal
Form language [22]. Further work allowed it to function as a model counter, which is how we utilize it.
The version we use was released in 2016.
What all of these solvers have in common, including CNFTetris, is that they have at their core a form
of the DPLL algorithm with clause learning; indeed, almost all modern SAT and #SAT solvers do so
[9]. The differences, then, come in terms of efficiency. Each solver uses a different array of techniques in
order to effectively cache and recover learned clauses, to determine the variable ordering, and to identify
clause conflicts.
With Cachet, the authors focused on adding component caching capabilities on top of an existing
SAT solver, ZChaff [26], the theoretical grounds for which were themselves introduced in [21]. This
caching involved the storing of subproblems in a local cache, so that these clauses would not have to
be re-derived by Cachet at a later juncture, thereby reducing redundant calculations over the course of
the algorithm. This can be viewed as analogous to how CNFTetris stores learned boxes in a local cache,
which it checks for containing boxes before examining the original database. A subproblem, meanwhile,
could be thought of as a box with a high percentage of variables set to λ.
However, one key difference here is the nature of the cached components. In Cachet, due to how
the algorithm functions, it must regularly prune the cache of siblings that would otherwise cause it to
undercount the number of models [26]. CNFTetris, in contrast, needs to perform no such pruning; it will
naturally determine the exact number of models without any additional work.
sharpSAT built on the work in Cachet while adding new ideas of its own [28]. Boolean constraint
propagation (also known as the failed literal rule [15]) and unit propagation heuristics are used by sharpSAT to identify failed literals with greater efficiency than was done in Cachet [28]. However, by fixing
the variable order, CNFTetris simplifies this process. Ultimately, this means that it finds its conflicting
boxes in a fundamentally different manner than sharpSAT does, which provides room for CNFTetris to
outperform sharpSAT.
dSharp, much like how sharpSAT built on Cachet, uses sharpSAT as a core component [22]. The
authors perform a DNNF translation, and then use properties of decomposability and determinism to
perform model counting [15]. Though these differences do allow it to outperform more pure DPLL-based solvers on some benchmarks [15], since this system still uses sharpSAT as a core component, it
still shares many of the same advantages and disadvantages in comparison to CNFTetris.
As we have seen, all of the competing solvers can be viewed as evolutions along a single line. While
CNFTetris does not throw the baby out with the bathwater — that is, while CNFTetris still continues to
implement the classic DPLL algorithm — it does represent a distinct deviation from that line, challenging
assumptions such as the necessity of allowing a non-fixed global variable ordering and the much more
complex data storage scheme necessary in order to accommodate this. While this has necessitated much
work in order to implement, it has also shown vast promise.
6 Acknowledgments
We would like to thank Mahmoud Abo Khamis, Hung Q. Ngo, Christopher Ré, and Ce Zhang for very
helpful discussions.
References
[1] Cachet. http://www.cs.rochester.edu/users/faculty/kautz/Cachet/index.htm. Accessed: 2016-11-24.
[2] International Joint Conference on Artificial Intelligence ’07 dataset collection. http://www.cs.cornell.edu/~sabhar/software/benchmarks/IJCAI07-suite.tgz. Accessed: 2016-11-25.
[3] Sat-encoded all-interval series problems. http://www.cs.ubc.ca/~hoos/SATLIB/Benchmarks/SAT/AIS/descr.
Accessed: 2016-11-25.
[4] sharpSAT – Marc Thurley. https://sites.google.com/site/marcthurley/sharpsat. Accessed: 2016-11-24.
[5] Christopher R. Aberger, Susan Tu, Kunle Olukotun, and Christopher Ré. Emptyheaded: A relational
engine for graph processing. In Proceedings of the 2016 International Conference on Management
of Data, SIGMOD Conference 2016, San Francisco, CA, USA, June 26 - July 01, 2016, pages 431–446,
2016.
[6] Mahmoud Abo Khamis, Hung Q. Ngo, Christopher Ré, and Atri Rudra. Joins via geometric resolutions: Worst-case and beyond. In Proceedings of the 34th ACM SIGMOD-SIGACT-SIGAI Symposium
on Principles of Database Systems, PODS ’15, pages 213–228, New York, NY, USA, 2015. ACM.
[7] Mahmoud Abo Khamis, Hung Q. Ngo, and Atri Rudra. FAQ: questions asked frequently. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS
2016, San Francisco, CA, USA, June 26 - July 01, 2016, pages 13–28, 2016.
[8] Mahmoud Abo Khamis, Hung Q. Ngo, and Dan Suciu. Computing join queries with functional
dependencies. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of
Database Systems, PODS 2016, San Francisco, CA, USA, June 26 - July 01, 2016, pages 327–342, 2016.
[9] Armin Biere, Marijn Heule, and Hans van Maaren. Conflict-driven clause learning SAT solvers. In Handbook of Satisfiability, pages 131–153. 2009.
[10] Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem-proving.
Commun. ACM, 5(7):394–397, July 1962.
[11] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. Journal of the
ACM (JACM), 7(3):201–215, 1960.
[12] Rina Dechter. Constraint Processing. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA,
2003.
[13] W. Fischl, G. Gottlob, and R. Pichler. General and Fractional Hypertree Decompositions: Hard and
Easy Cases. ArXiv e-prints, November 2016.
[14] Carla P. Gomes, Henry A. Kautz, Ashish Sabharwal, and Bart Selman. Satisfiability solvers. In Handbook of Knowledge Representation, pages 89–134. 2008.
[15] Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Model counting. In Handbook of Satisfiability,
pages 633–654. 2009.
[16] Rudolf Halin. S-functions for graphs. Journal of Geometry, 8(1-2):171–186, 1976.
[17] Federico Heras, Javier Larrosa, and Albert Oliveras. Minimaxsat: An efficient weighted max-sat
solver. J. Artif. Intell. Res.(JAIR), 31:1–32, 2008.
[18] Manas R. Joglekar, Rohan Puttagunta, and Christopher Ré. AJAR: aggregations and joins over annotated relations. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of
Database Systems, PODS 2016, San Francisco, CA, USA, June 26 - July 01, 2016, pages 91–106, 2016.
[19] Jure Leskovec and Andrej Krevl.
SNAP Datasets: Stanford large network dataset collection.
http://snap.stanford.edu/data, June 2014.
[20] Dániel Marx. Approximating fractional hypertree width. ACM Trans. Algorithms, 6(2):29:1–29:17,
April 2010.
[21] Matthew W Moskewicz, Conor F Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. Chaff: Engineering an efficient sat solver. In Proceedings of the 38th annual Design Automation Conference,
pages 530–535. ACM, 2001.
[22] Christian Muise, Sheila A McIlraith, J Christopher Beck, and Eric I Hsu. Dsharp: fast d-dnnf compilation with sharpsat. In Canadian Conference on Artificial Intelligence, pages 356–361. Springer,
2012.
[23] Hung Q. Ngo, Dung T. Nguyen, Christopher Ré, and Atri Rudra. Towards instance optimal join
algorithms for data in indexes. CoRR, abs/1302.0914, 2013.
[24] Hung Q Ngo, Ely Porat, Christopher Ré, and Atri Rudra.
Worst-case optimal join algorithms:[extended abstract]. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on
Principles of Database Systems, pages 37–48. ACM, 2012.
[25] Hung Q. Ngo, Christopher Ré, and Atri Rudra. Skew strikes back: new developments in the theory
of join algorithms. SIGMOD Record, 42(4):5–16, 2013.
[26] Tian Sang, Fahiem Bacchus, Paul Beame, Henry A Kautz, and Toniann Pitassi. Combining component caching and clause learning for effective model counting. In Theory and Applications of Satisfiability Testing (SAT 2004), 2004.
[27] João P Marques Silva and Karem A Sakallah. GRASP - a new search algorithm for satisfiability. In Proceedings of the 1996 IEEE/ACM International Conference on Computer-Aided Design, pages 220–227. IEEE Computer Society, 1997.
[28] Marc Thurley. sharpsat–counting models with advanced component caching and implicit bcp.
In International Conference on Theory and Applications of Satisfiability Testing, pages 424–429.
Springer, 2006.
[29] Todd L Veldhuizen. Leapfrog triejoin: A simple, worst-case optimal join algorithm. arXiv preprint
arXiv:1210.0481, 2012.
[30] Todd L. Veldhuizen. Triejoin: A simple, worst-case optimal join algorithm. In Proc. 17th International Conference on Database Theory (ICDT), Athens, Greece, March 24-28, 2014., pages 96–106,
2014.
[31] Carlo Zaniolo. Database relations with null values. In Proceedings of the 1st ACM SIGACT-SIGMOD
Symposium on Principles of Database Systems, PODS ’82, pages 27–33, New York, NY, USA, 1982.
ACM.
Neural Affine Grayscale Image Denoising
arXiv:1709.05672v1 [cs.CV] 17 Sep 2017
Sungmin Cha, Taesup Moon
College of Information and Communication Engineering
Sungkyunkwan University, Suwon, Korea 16419
[email protected]
Abstract
We propose a new grayscale image denoiser, dubbed Neural Affine Image Denoiser (Neural AIDE), which utilizes a neural network in a novel way. Unlike other neural network based image denoising methods, which typically apply simple supervised learning to learn a mapping from a noisy patch to a clean patch, we formulate the problem as training a neural network to learn an affine mapping that gets applied to a noisy pixel, based on its context. Our formulation enables both supervised training of the network from the labeled training dataset and adaptive fine-tuning of the network parameters using the given noisy image subject to denoising. The key tool in devising Neural AIDE is an estimated loss function for the MSE of the affine mapping that can be computed solely from the noisy data. As a result, our algorithm can outperform most of the recent state-of-the-art methods on the standard benchmark datasets. Moreover, our fine-tuning method can nicely overcome one of the drawbacks of the patch-level supervised learning methods in image denoising; namely, a model supervised-trained with a mismatched noise variance can be mostly corrected as long as the matched noise variance is available during the fine-tuning step.
1 Introduction
Image denoising is one of the oldest problems in image processing and various denoising methods
have been proposed over the past several decades, e.g., BM3D [1], wavelet shrinkage [2], field of
experts [3], sparse-coding based approach [4], WNNM [5], EPLL [6] and CSF [7], etc.
In this paper, we propose a new image denoiser, dubbed Neural Affine Image Denoiser (Neural AIDE), which utilizes a neural network in a novel way. The method is inspired by the recent work in discrete denoising [8], in which novel “pseudo-labels” were devised to train a denoiser solely based on the noisy data. We extend the approach to the continuous-valued data case and devise a novel estimated loss function based on the noisy data that is an unbiased estimate of the true MSE. By investigating the devised estimated loss function, we formulate the training of a neural network to learn an affine mapping that gets applied to a noisy pixel based on its context. Such a formulation enables
both supervised training of the network from the labeled training dataset and adaptive fine-tuning of
the network parameters using the given noisy image subject to denoising. Our experimental results
extensively show how we made subtle design choices in developing our algorithm. Furthermore, we
show that Neural AIDE significantly outperforms strong state-of-the-art baselines in the standard
benchmark test datasets.
2 Notations and Problem Setting
We denote x^{n×n} as the clean grayscale image, and each pixel x_i ∈ {0, . . . , 255} is corrupted by independent additive noise to result in a noisy pixel Z_i, i.e.,

Z_i = x_i + N_i,  i = 1, . . . , n²,   (1)

where the continuous noise variables N_i's are independent (not necessarily identically distributed nor Gaussian) over i and E(N_i) = 0, E(N_i²) = σ² for all i. Moreover, as is standard in grayscale image denoising, we normalize both the x_i's and Z_i's by 255 and treat them as real numbers. Importantly, following the universal setting in discrete denoising [9, 8], we treat the clean image x^{n×n} as an individual image without any probabilistic model and only treat Z^{n×n} as random.
Generally, a denoiser can be denoted as X̂^{n×n} = {X̂_i(Z^{n×n})}_{i=1}^{n²}, denoting that each reconstruction at location i is a function of the noisy image Z^{n×n}. The standard loss function used in grayscale image denoising to measure the denoising quality is the mean-squared error (MSE), denoted as

Λ_{X̂^{n×n}}(x^{n×n}, Z^{n×n}) = (1/n²) Σ_{i=1}^{n²} Λ(x_i, X̂_i(Z^{n×n})),   (2)

where Λ(x, x̂) = (x − x̂)² is the per-symbol squared error. Conventionally, the MSE is compared in the dB scale using the Peak Signal-to-Noise Ratio (PSNR), defined as 10 log_10(1/Λ_{X̂^{n×n}}(x^{n×n}, Z^{n×n})).
2.1 Estimated loss function for the affine denoiser
In this paper, we consider the denoiser of the form X̂i (Z n×n ) = a(Z \i ) · Zi + b(Z \i ) for each i, in
which Z \i stands for the entire noisy image except for Zi . Namely, the reconstruction at location i
has the affine function form of the noisy symbol Zi , but the slope and the intercept parameters, i.e.,
a(Z^{\i}) and b(Z^{\i}), of the affine function can be functions of the surrounding pixels. Hence, separate parameters can be learned from data for each location. Before presenting the more concrete form of our denoiser, we first consider the following lemma.
Lemma 1 Consider a single-symbol case Z = x + N with E(N) = 0 and E(N²) = σ², and suppose a single-symbol denoiser has the form X̂(Z) = aZ + b. Then,

L(Z, (a, b); σ²) = (Z − (aZ + b))² + 2aσ²   (3)

is an unbiased estimate of E_x[Λ(x, X̂(Z))] + σ², in which Λ(x, x̂) = (x − x̂)² and the E_x(·) notation stands for the expectation over Z given that the clean symbol is x.
Remark: Note that while the true MSE, Λ(x, X̂(Z)), can be evaluated only when the clean symbol x is known, the estimated loss L(Z, (a, b); σ²) can be evaluated solely with the noisy symbol Z, the affine mapping (a, b), and the noise variance σ². Thus, L(Z, (a, b); σ²) plays a key role in adaptively learning the neural network-based affine denoiser, as shown in the next section.
Proof: By simple algebra, we have the following chain of equalities:

E_x(x − X̂(Z))² = E_x(x² + (aZ + b)² − 2x(aZ + b))
              = E_x(x² + (aZ + b)² − 2ax² − 2bx)                        (4)
              = E_x(Z² − σ² + (aZ + b)² − 2a(Z² − σ²) − 2bZ)            (5)
              = E_x[(Z − (aZ + b))² + (2a − 1)σ²]                       (6)
              = E_x[L(Z, (a, b); σ²)] − σ²,

in which (4) follows from E_x(Z) = x, (5) follows from E_x(Z²) = x² + σ² and replacing x² with E_x(Z² − σ²), and (6) follows from simply rearranging the terms. Thus, we have the lemma.
From Lemma 1, we can also show that for denoisers of the form X̂_i(Z^{n×n}) = a(Z^{\i}) · Z_i + b(Z^{\i}),

E_{x_i}[Λ(x_i, X̂_i(Z^{n×n})) | Z^{\i}] = E_{x_i}[L(Z_i, (a(Z^{\i}), b(Z^{\i})); σ²) | Z^{\i}] − σ²   (7)

holds, since a(Z^{\i}) and b(Z^{\i}) become constants given Z^{\i} and the noise is independent over i. The E_{x_i}(· | Z^{\i}) in (7) stands for the conditional expectation over Z_i given the clean symbol x_i and the noisy symbols Z^{\i}. Note that an estimated loss function similar to (3) has also been used in the filtering problem [10].
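As a quick illustration (not part of the original paper), the unbiasedness in Lemma 1 can be checked numerically with a short Monte Carlo script; the particular values of x, σ, a, and b below are arbitrary.

# Numerical sanity check of Lemma 1: the average estimated loss should approach
# the true MSE plus sigma^2.
import numpy as np

rng = np.random.default_rng(0)
x, sigma, a, b = 0.4, 0.1, 0.8, 0.05               # arbitrary clean symbol and affine map
Z = x + sigma * rng.standard_normal(1_000_000)

true_mse = np.mean((x - (a * Z + b)) ** 2)                          # E_x[Lambda(x, aZ+b)]
est_loss = np.mean((Z - (a * Z + b)) ** 2 + 2 * a * sigma ** 2)     # E_x[L(Z,(a,b);sigma^2)]

print(true_mse + sigma ** 2, est_loss)   # the two numbers agree up to Monte Carlo error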
3 Neural AIDE: Neural Affine Image DEnoiser
3.1 Neural network-based affine denoiser
Our proposed Neural Affine Image DEnoiser (Neural AIDE) considers the denoiser of the form

X̂_i(Z^{n×n}) = a(C^{\i}_{k×k}) · Z_i + b(C^{\i}_{k×k}),  i = 1, . . . , n × n,   (8)

in which C^{\i}_{k×k} stands for the noisy image patch, or the context, of size k × k surrounding Z_i that does not include Z_i. Thus, the patch has a hole in the center. Then, we define a neural network

g(w, ·) : [0, 1]^{k²−1} → R²_+   (9)

that takes the context C^{\i}_{k×k} as input and outputs the slope and intercept parameters a(C^{\i}_{k×k}) and b(C^{\i}_{k×k}) for each location i. We denote w as the weight parameters of the neural network, which will be learned by the process described in the later sections. As will become clear in our arguments below, the specific form of our denoiser in (8) enables learning the parameters by both supervised learning with labelled training data and adaptive fine-tuning with the given noisy image.
Note that in (9) we put a constraint that the slope and intercept of the affine function, i.e., the outputs of the network, should be nonnegative. While the benefit of such a constraint will become apparent in our experimental results, it also makes intuitive sense; the denoiser (8) tries to estimate x_i from Z_i, which are both in the interval [0, 1], hence nonnegative slope and intercept parameters should suffice. The nonnegativity constraint is realized in the neural network by applying

f(x) = log(1 + e^x)   (10)
as the activation function for the final output layer of the neural network. The
rest of the network architecture is the ordinary fully-connected neural network
with ReLU activation functions, as depicted in Figure 1.
Figure 1: The architecture of Neural AIDE.

There are two sharp differences between our Neural AIDE and other neural network based denoisers, e.g., [11, 12]. First, the other schemes take the full noisy image patch (including the center location) as input to the network, and the network is trained to directly infer the corresponding clean image patches. In contrast, Neural AIDE is trained to first learn an affine mapping based on the noisy image patch with a hole (i.e., the context of Z_i), then the learned mapping is applied to Z_i to obtain the reconstruction X̂_i. Such a difference
enables the development of the estimated loss function in Lemma 1 and the
adaptive training process described in the next section. The principle of learning a mapping first and
applying the mapping to the noisy symbol for denoising or filtering has been utilized in [13, 10, 8].
Second, unlike the other schemes, in which the patch-level reconstructions should somehow be
aggregated to generate the final denoised image, Neural AIDE simply generates the final pixel-by-pixel reconstructions. Thus, there is no need for a step to aggregate multiple reconstructed
patches, which simplifies the denoising step. Furthermore, since the neural network of Neural AIDE
only has to estimate the two parameters of the affine mapping from each context, Neural AIDE can
make much more efficient usage of the data with a simpler model compared to the networks in other
schemes that need to estimate the full k × k-patch, e.g., [11].
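A hedged PyTorch sketch of such a network is given below. The context size, hidden width, and depth are placeholders of our own choosing, not the architecture used in the paper, but the structure (fully-connected ReLU layers with a two-unit softplus output applied as an affine map to the noisy center pixel) follows (8)-(10).

# Sketch of the Neural AIDE network (assumed sizes).
import torch
import torch.nn as nn

k = 7                                   # context window size (assumption)
ctx_dim = k * k - 1                     # k x k patch with the center removed

class NeuralAIDE(nn.Module):
    def __init__(self, hidden=512, layers=3):
        super().__init__()
        dims = [ctx_dim] + [hidden] * layers
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        blocks += [nn.Linear(dims[-1], 2), nn.Softplus()]   # f(x) = log(1 + e^x), eq. (10)
        self.net = nn.Sequential(*blocks)

    def forward(self, context, z_center):
        ab = self.net(context)                  # (batch, 2): slope a and intercept b
        a, b = ab[:, 0], ab[:, 1]
        return a * z_center + b                 # affine reconstruction, eq. (8)

model = NeuralAIDE()
context = torch.rand(16, ctx_dim)               # 16 noisy contexts in [0, 1]
z_center = torch.rand(16)                       # the corresponding noisy center pixels
x_hat = model(context, z_center)                # pixelwise reconstructions
print(x_hat.shape)                              # torch.Size([16])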
3.2 Adaptive training with noisy image
We first describe how the network parameters w can be adaptively learned from the given noisy
image Z^{n×n} without any additional labelled training data. That is, by denoting each output element of the neural network g(w, ·) for the context C^{\i}_{k×k} as g(w, C^{\i}_{k×k})_1 ≜ a(C^{\i}_{k×k}) and g(w, C^{\i}_{k×k})_2 ≜ b(C^{\i}_{k×k}), we can define an objective function for the neural network to minimize as

L_adaptive(w, Z^{n×n}) ≜ (1/n²) Σ_{i=1}^{n²} L(Z_i, (g(w, C^{\i}_{k×k})_1, g(w, C^{\i}_{k×k})_2); σ²)   (11)
by using the estimated loss function L(Z, (a, b); σ²) defined in Lemma 1. The training process using (11) is identical to ordinary neural network learning, i.e., start with a randomly initialized w, then use backpropagation and variants of mini-batch SGD for updating the parameters.
The formulation (11) may seem similar to training a neural network for a regression problem; namely, {(C^{\i}_{k×k}, Z_i)}_{i=1}^{n²}, which are solely obtained from the noisy image Z^{n×n}, can be analogously thought of as the input-target label pairs for supervised regression. But, unlike regression, which tries to directly learn a mapping from the input to the target label, our network learns the affine mapping for each context and applies it to Z_i to estimate the unobserved clean symbol x_i. The fact that (11) only depends on the given noisy image Z^{n×n} (and the assumed σ²) makes the learning adaptive.
The rationale behind using L(Z, (a, b); σ 2 ) in (11) is the following; as shown in (7), the estimated
loss is an unbiased estimate of the true expected squared error given the context C^{\i}_{k×k}. Therefore,
minimizing (11) may result in a network that produces the slope and intercept parameters that minimize the true MSE for the reconstructions of the corresponding affine mappings. This formulation
of training neural network parameters solely based on the noisy data is inspired by the recent work in
discrete denoising [8].
Once the training is done, we can then denoise the noisy image Z^{n×n} that was used for training by
applying the affine mapping at each location as in (8). That is, denoting by w∗ the parameter learned
by minimizing (11), the reconstruction at location i by Neural AIDE becomes
X̂_{i,Neural AIDE}(Z^{n×n}) = g(w∗, C^{\i}_{k×k})_1 · Zi + g(w∗, C^{\i}_{k×k})_2.        (12)

3.3  Supervised training and adaptive fine-tuning
While the formulation in (11) gives an effective way of adaptively training a denoiser based on the
given noisy image Z n×n , the specific form of the denoiser in (8) makes it possible to carry out the
supervised pre-training of w before the adaptive training step. That is, we can collect abundant clean
images, x̃n×n , from the various image sources (e.g., World Wide Web) and corrupt them with the
assumed additive noise with variance σ² in (1) to generate the corresponding noisy images, Z̃^{n×n},
and the labelled training data of size N ,
D = {(x̃i, C̃i,k×k)}_{i=1}^{N}.        (13)
In (13), C̃i,k×k stands for the noisy image patch of size k × k at location i that includes the noisy
symbol Z̃i, and x̃i is the clean symbol that corresponds to Z̃i. Now, the subtle point is that, unlike
the usual supervised learning that may directly learn a mapping from C̃i,k×k to x̃i, we keep using
the neural network defined in (9) and learn w by minimizing
L_supervised(w, D) ≜ (1/N) Σ_{i=1}^{N} Λ( x̃i, g(w, C̃^{\i}_{k×k})_1 · Z̃i + g(w, C̃^{\i}_{k×k})_2 ).        (14)
Note Λ(x, x̂) = (x − x̂)2 as before. The training process of minimizing (14) is again done by the
usual backpropagation and the variants of mini-batch SGD.
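The sketch below (our illustration; the data layout and the noise scale are assumptions) shows one way to implement the supervised objective (14): the network still predicts (a, b) from the hole-context, and the loss compares a · Z̃i + b against the clean pixel x̃i, with the pairs generated by corrupting clean images as in (13):

import numpy as np
import tensorflow as tf

def supervised_loss(pairs_true, ab_pred):
    # pairs_true[:, 0] holds the clean pixel x_i, pairs_true[:, 1] the noisy center Z_i.
    x_clean, z_center = pairs_true[:, 0], pairs_true[:, 1]
    a, b = ab_pred[:, 0], ab_pred[:, 1]
    return tf.reduce_mean(tf.square(x_clean - (a * z_center + b)))  # Lambda(x, x_hat) = (x - x_hat)^2

# Building labelled data as in (13), assuming images scaled to [0, 1] and sigma given on that scale:
# noisy = np.clip(clean + np.random.normal(0.0, 25.0 / 255.0, size=clean.shape), 0.0, 1.0)
# model.compile(optimizer="adam", loss=supervised_loss)
# model.fit(contexts, np.stack([clean_pixels, noisy_pixels], axis=1), batch_size=128, epochs=50)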
Once the objective function (14) converges after sufficient iterations of weight updates, we denote
the converged parameter as w̃. Then, for a given noisy image to denoise, Z n×n , we can further
update w̃ adaptively for Z n×n by minimizing Ladaptive (w, Z n×n ) in (11) starting from w̃. That is,
we adaptively fine-tune w̃ until Ladaptive (w, Z n×n ) converges, then denoise Z n×n with the converged
parameter as (12). This capability of adaptively fine-tuning the supervised trained weight parameter
is the unique characteristic of Neural AIDE that differentiates it from other neural network-based
denoisers.
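The following is a minimal end-to-end sketch of the fine-tune-then-denoise procedure (our illustration; it reuses the adaptive_loss objective sketched in Section 3.2 above, border handling is omitted, and all names are placeholders):

import numpy as np

def contexts_and_centers(noisy, k):
    # Hole-contexts (centered by subtracting 0.5) and center pixels Z_i for interior locations.
    r, n = k // 2, noisy.shape[0]
    coords = [(i, j) for i in range(r, n - r) for j in range(r, n - r)]
    ctx, centers = [], []
    for i, j in coords:
        patch = noisy[i - r:i + r + 1, j - r:j + r + 1].astype(np.float64)
        centers.append(patch[r, r])
        ctx.append(np.delete(patch.flatten(), r * k + r) - 0.5)
    return np.asarray(ctx), np.asarray(centers)[:, None], coords

def fine_tune_and_denoise(model, noisy, sigma2, k=17, epochs=50):
    # `model` starts from the supervised weights w~; adaptive_loss is the Section 3.2 sketch.
    ctx, centers, coords = contexts_and_centers(noisy, k)
    model.compile(optimizer="adam", loss=adaptive_loss(sigma2))
    model.fit(ctx, centers, epochs=epochs, batch_size=256, verbose=0)  # minimize (11) on this image only
    ab = model.predict(ctx, verbose=0)
    denoised = noisy.copy()
    for (i, j), (a, b), z in zip(coords, ab, centers[:, 0]):
        denoised[i, j] = a * z + b                                     # reconstruction (12)
    return denoised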
4
Experimental Results
We compared the denoising performance of the proposed Neural AIDE with several state-of-the-art
denoising methods, including BM3D [1], MLP [11], EPLL [6], WNNM [5] and CSF [7].
4.1
Data and experimental setup
For the supervised training, we generated the labelled training set using 2000 images available in
public datasets. Out of 2000 images, 300 images are taken from train/validation set in the Berkeley
Segmentation Dataset and the remaining 1700 images are taken from Pascal VOC 2012 Dataset.
For the Pascal VOC images, we resized them to match the resolution of the Berkeley Segmentation
Dataset [14], 481 × 321. We corrupted the images with additive Gaussian noise and tested with
multiple noise levels, namely, σ = 5, 10, 15, 20, 25. That is, we built a separate training set of size
2000 for each noise level. The total number of training data points (i.e., N in (13)) in each dataset
was thus about 308 million. We evaluated the performance of the denoisers with 11 standard test
images, i.e., {Barbara, Boat, C.man, Couple, F.print, Hill, House, Lena, Man, Montage and Peppers},
and 68 standard Berkeley images [3].
Our network had 9 fully connected layers with 512 nodes in each layer, which showed the best result
among a few tried models 1 . ReLU was used as the activation function, and we used Adam [15] as the
optimizer to train the network. For the supervised training, we trained the network up to 50 epochs
and halved the learning rate every 10 epochs starting from 10−4 . For the adaptive fine-tuning, we
also trained up to 50 epochs and halved the learning rate every 20 epochs starting from 10−5 . We
did not use any regularization methods while training. Moreover, for the context data C^{\i}_{k×k}, we
subtracted 0.5 from the values so that the input to the network is centered around 0. (Note that Zi, to
which the affine mapping is applied in (12), is still in the original scale.)
For all our experiments, we used Keras (version 1.2.2) with Tensorflow (version 0.11.0) backend and
NVIDIA’s GPU (GeForce GTX1080) with CUDA library version 8.0.
4.2
Training Neural AIDE
In this section, we systematically show the reasoning behind choosing the context size k, the empirical
justification of the nonnegative constraint on the outputs of g(w, ·), and the validity of the combination
of the supervised pre-training with adaptive fine-tuning.
4.2.1
Adaptive training with noisy image
We first carried out the adaptive training solely with the given noisy image as described in Section
3.2. That is, for each given noisy image, we randomly initialized the weight parameters of the neural
network and trained with the objective function (11). After training, the image was denoised as (12).
Figure 2(a) shows the PSNR results on the standard 11 test images with varying k values and output
activation functions, i.e., Linear (f (x) = x), Positive (f (x) = log(1 + ex ) in (10)) and Sigmoid
(f (x) = 1/(1 + e−x )). The noise level was σ = 25.
From the figure, we can see that the adaptive training alone can still result in a decent denoiser,
although some PSNR gap exists compared to the state-of-the-arts as shown in Table 1. We see that
k = 7 tends to be the best context size for adaptive training. Moreover, the choice of the output
activation functions turns out to be important, and more discussion is given on the activation function
in the next section.
4.2.2
Supervised training and adaptive fine-tuning
Since the limitation of the adaptive training alone was apparent, we then carried out the supervised
training in Section 3.3. That is, we took the 300 images from the Berkeley Segmentation Dataset and
trained the network with varying k values as shown in Figure 2(b). Denoising of the noisy image was
done identically as before by applying the learned affine mapping to each noisy pixel. Note in this
case, we only carried out the experiments with the Linear activation function. We can see that the
supervised training can result in a much higher PSNR values than the adaptive training, already very
close to the state-of-the-arts. Also, the performance seems to get saturated around k = 17, so in all
our experiments below, we used k = 17.
Encouraged by this result, we moved on to adaptively fine-tuning the weight parameters by minimizing
the objective function (11) for each image initialized with the parameters learned by supervised
learning. This is where the subtle issue regarding the activation function that we describe below comes up.
1
The difference among the models was not huge.
(a) Adaptive training (random initialization)
(b) Supervised training (300 training images)
Figure 2: Adaptive and supervised training results on the standard 11 test images (σ = 25)
In Figure 3, we trained supervised learning models with Linear and Positive output activation
functions using 800 images for σ = 25, then adaptively fine-tuned the parameters for each given noisy
image (the F.print and Montage images). Figures 3(a)-3(h) show the distributions of the slope (a) and
intercept (b) parameters that each model outputs for the given image, and 3(i) shows the change of
PSNR value in the process of adaptive fine-tuning. From Figure 3(a) and 3(e), we can see that when
trained with supervised learning with Linear output activation function, the values of a and b all lie in
the interval [0, 1]. However, when fine-tuned for each image, Figure 3(b) and 3(f) show that many
negative a values are produced for the Linear activation. This can be readily seen by examining the
form of L(Z, (a, b); σ 2 ) in (3), which does not hinder a from having negative values when there is no
constraint. As shown in Figure 3(i), such negative a values for the affine mapping sometime does not
have big effect on the fine-tuning process and the final denoising performance as in the case of F.print,
in which the PSNR increases significantly from the supervised model by fine-tuning. However, as in
the case of Montage in Figure 3(i), we suspect such negative a values sometimes hurt the denoising
performance greatly. In contrast, when we put the nonnegativitiy contstraint on a and b in the neural
network, we observe a stable fine-tuning process, as is observed in Figure 3(d), 3(h) and 3(i). Thus,
the results of Neural AIDE from now on all uses the positive activation function. 2
Figure 4 shows the adaptive fine-tuning process of the standard 11 images for σ = 15. The supervised
model was trained with the full training set of 2000 images. From the figures, we can see that the
learning is done appropriately and the PSNR does improve with fine-tuning.
2
We also tested with the sigmoid activation and the result was more or less the same.
(a) F.print (Lin., s)    (b) F.print (Lin., ft)    (c) F.print (Pos., s)    (d) F.print (Pos., ft)
(e) Montage (Lin., s)    (f) Montage (Lin., ft)    (g) Montage (Pos., s)    (h) Montage (Pos., ft)
(i) PSNR values during adaptive fine-tuning.
Figure 3: (a-h) Distribution of a and b values for F.print and Montage after supervised training (s)
and fine-tuning (ft) for Linear (Lin.) and Positive (Pos.) activation functions. The distributions
obtained for fine-tuning are from the models at 50 epoch. (i) PSNR values during fine-tuning.
(a) PSNR
(b) Objective function (11)
Figure 4: PSNR and objective function value during fine-tuning for the standard 11 images (σ = 15)
4.3  Quantitative evaluation
4.3.1  Standard 11 images
Table 1 summarizes our denoising results compared to the recent state-of-the-arts on the standard 11
images for various noise levels. We show both mean and standard deviation of PSNR values. For
the baseline methods, we downloaded the codes from the authors’ webpages and ran the code on
the noisy images, thus, the numbers can be compared fairly. (MLP and CSF57×7 could run only on
selected noise levels.) N-AIDES stands for the Neural AIDE that is only supervised trained (with
2000 images). N-AIDEfB and N-AIDEfH are fine-tuned models after supervised learning; N-AIDEfB
is the best model (in terms of epoch) chosen based on PSNR (thus, not practical) and N-AIDEfH is
the model that is chosen with a heuristic rule - i.e., stop fine-tuning when the training loss becomes
smaller than σ 2 , otherwise fine-tune until 50 epochs.
From the table, we can see that N-AIDEfH significantly outperforms all other baselines on average
except for WNNM. The difference of mean PSNR between WNNM and N-AIDEfH is almost
negligible, and N-AIDEfH tends to have smaller variance in terms of PSNR than WNNM. By comparing
N-AIDES and N-AIDEfH , we can definitely see that adaptive fine-tuning is effective. Also, when
the noise level is low, the improvement gets larger. Furthermore, by comparing N-AIDES with
MLP, which is another neural network based denoiser and uses many more data points (362 million
examples) and a larger model, we can confirm that our model uses the data more efficiently.
σ      PSNR    BM3D    MLP     EPLL    WNNM    CSF5×7×7   N-AIDEs   N-AIDEfB   N-AIDEfH
5      Mean    38.24   –       37.88   38.43   –          38.14     38.44      38.44
       Std      1.24   –        1.07    1.28   –           1.17      1.18       1.18
10     Mean    34.71   34.45   34.27   34.95   –          34.66     34.92      34.91
       Std      1.37    1.12    1.18    1.42   –           1.31      1.33       1.33
15     Mean    32.76   –       32.29   32.99   32.40      32.77     32.97      32.96
       Std      1.48   –        1.35    1.54    1.27       1.44      1.42       1.42
20     Mean    31.43   –       30.90   31.59   –          31.38     31.58      31.55
       Std      1.50   –        1.34    1.57   –           1.50      1.46       1.44
25     Mean    30.40   30.24   29.81   30.51   29.93      30.36     30.51      30.47
       Std      1.51    1.43    1.38    1.56    1.41       1.53      1.45       1.46
Table 1: PSNR comparisons on the 11 standard benchmark images for σ = 5, 10, 15, 20, 25 (a dash marks noise levels on which a baseline could not be run; see the text).
Figure 5(a) shows the competitive comparison between N-AIDEfH and the baselines. That is, the
figure plots the number of images of which the PSNR of N-AIDEfH is better than the baseline methods.
We can see that our method mostly outperforms all baselines competitively, including WNNM.
One of the main drawbacks of MLP [11] is that the neural networks have to be trained separately
for all noise levels and the mismatch of σ significantly hurts the denoising performance. While the
supervised training of Neural AIDE is also done in a similar way, Figure 5(b)-5(c) show that the
adaptive fine-tuning can be very effective in overcoming such limitation. Figure 5(b) shows the PSNR
results of the mismatched N-AIDEs models before fine-tuning. Each row is normalized with the
PSNR of the matched case, i.e., the diagonal element, and the PSNR values are color-coded. We
clearly see the sensitivity of PSNR in the mismatch of σ as the off-diagonal values show significant
gaps compared to the diagonal values in each row. On the other hand, Figure 5(c) shows the PSNR
values of N-AIDEfH ’s that have mismatched supervised models but are adaptively fine-tuned with
the correct σ’s. We can clearly see that the PSNR gaps of the mismatched supervised models can be
significantly closed by adaptive fine-tuning, which gives a significant edge over MLP in [11].
(a) Competitive comparison
(b) PSNR of N-AIDEs
(c) PSNR of N-AIDEfH
Figure 5: (a) Competitive comparison of N-AIDEfH with baselines (b) PSNR of mismatched N-AIDEs
(c) PSNR of N-AIDEfH with mismatched N-AIDEs but fine-tuned with correct σ
4.3.2
Standard 68 Berkeley images
Table 2 shows the PSNR results on the 68 standard Berkeley images from [3]. We can clearly see
that N-AIDEfH again outperforms the baseline state-of-the-art methods, including WNNM, with
significant margins.
σ      MLP     EPLL    WNNM    CSF5×7×7   N-AIDEs   N-AIDEfB   N-AIDEfH
5      –       37.50   37.71   –          37.72     37.82      37.79
10     33.41   33.32   33.48   –          33.62     33.71      33.66
15     –       31.09   31.18   31.10      31.45     31.52      31.47
20     –       29.60   29.63   –          29.98     30.05      30.00
25     28.73   28.47   28.46   28.41      28.93     28.97      28.90
Table 2: PSNR comparisons on the 68 standard Berkeley images (a dash marks noise levels on which a baseline could not be run; see the text).
5
Concluding remarks
We devised a novel neural network based image denoiser, Neural AIDE. The algorithm is devised
with a different principle from the other state-of-the-art methods. As a result, we show that a very
simple adaptive affine model, which Neural AIDE learns differently for each pixel, can significantly
outperform many strong baselines. Also, the adaptive fine-tuning of Neural AIDE can successfully
overcome the σ mismatch problem, which is a serious drawback of other neural network based
methods.
As future work, we would like to carry out experiments more thoroughly in even noisier
regimes. Also, since our algorithm does not require the noise to be Gaussian (only the additivity
of the noise and σ² are assumed), we would like to try other types of noise, e.g., Laplacian noise.
Furthermore, extending our framework to non-additive noise such as multiplicative noise would be
another interesting direction. Finally, theoretical analyses of our method based on information theory
and learning theory would be another direction worth pursuing.
References
[1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Trans. Image Processing, 16(8):2080–2095, 2007.
[2] E.P. Simoncelli and E.H. Adelson. Noise removal via bayesian wavelet coring. In ICIP, 1996.
[3] S. Roth and M. J. Black. Fields of experts. IJCV, 82(2):205–229, 2009.
[4] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image
restoration. In ICCV, 2009.
[5] S. Gu, L. Zhang, W. Zuo, and X. Feng. Weighted nuclear norm minimization with applications
to image denoising. In CVPR, 2014.
[6] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image
restoration. In ICCV, 2011.
[7] U. Schmidt and S. Roth. Shrinkage fields for effective image restoration. In CVPR, 2014.
[8] T. Moon, S. Min, B. Lee, and S. Yoon. Neural universal discrete denoiser. In NIPS, 2016.
[9] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M. Weinberger. Universal discrete
denoising: Known channel. IEEE Trans. Inform. Theory, 51(1):5–28, 2005.
[10] T. Moon and T. Weissman. Universal FIR MMSE filtering. IEEE Transactions on Signal
Processing, 57(3):1068–1083, 2009.
[11] H. Burger, C. Schuler, and S. Harmeling. Image denoising: Can plain neural networks compete
with BM3D? In CVPR, 2012.
[12] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In NIPS,
2012.
[13] T. Weissman, E. Ordentlich, M. Weinberger, A. Somekh-Baruch, and N. Merhav. Universal
filtering via prediction. IEEE Trans. Inform. Theory, 53(4):1253–1264, 2007.
[14] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images
and its application to evaluating segmentation algorithms and measuring ecological statistics.
In ICCV, 2001.
[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
| 1 |
Convolutional Neural Networks for Histopathology Image
Classification: Training vs. Using Pre-Trained Networks
Brady Kieffer1 , Morteza Babaie2 Shivam Kalra1 , and H.R.Tizhoosh1
1 KIMIA Lab, University of Waterloo, ON, CANADA
2 Mathematics and Computer Science Department, Amirkabir University of Technology, Tehran, IRAN
arXiv:1710.05726v1 [cs.CV] 11 Oct 2017
e-mails: [email protected], [email protected], [email protected], [email protected]
Abstract— We explore the problem of classification within a
medical image data-set based on a feature vector extracted from
the deepest layer of pre-trained Convolution Neural Networks.
We have used feature vectors from several pre-trained structures,
including networks with/without transfer learning to evaluate
the performance of pre-trained deep features versus CNNs
which have been trained by that specific dataset as well as the
impact of transfer learning with a small number of samples. All
experiments are done on Kimia Path24 dataset which consists of
27,055 histopathology training patches in 24 tissue texture classes
along with 1,325 test patches for evaluation. The result shows that
pre-trained networks are quite competitive against training from
scratch. As well, fine-tuning does not seem to add any tangible
improvement for VGG16 to justify additional training while we
observed considerable improvement in retrieval and classification
accuracy when we fine-tuned the Inception structure.
Keywords— Image retrieval, medical imaging, deep learning,
CNNs, digital pathology, image classification, deep features, VGG,
Inception.
I. I NTRODUCTION
We are amid a transition from traditional pathology to
digital pathology where scanners are replacing microscopes
rapidly. Capturing the tissue characteristics in digital formats
opens new horizons for diagnosis in medicine. On one hand,
we will no longer need to store thousands and thousands of specimens
in large physical archives of glass samples. This will be a
relief for many hospitals with limited space. On the other
hand, acquiring an image from the specimen enables more
systematic analysis, collaboration possibilities and, last but
not least, computer-aided diagnosis for pathology, arguably
the final frontier of vision-based disease diagnosis. However,
like any other technology, digital pathology comes with its
own challenges; whole-scan imaging generally generates gigapixel files that also require (digital) storage and are not easy
to analyze via computer algorithms. Detection, segmentation,
and identification of tissue types in huge digital images, e.g.,
50,000×70,000 pixels, appears to be a quite daunting task for
computer vision algorithms.
Looking at the computer vision community, the emergence
of deep learning and its vast possibilities for recognition and
classification seems to be a lucky coincidence when we intend
to address the above-mentioned obstacles of digital pathology.
Diverse deep architectures have been trained with large set of
images, e.g., ImageNet project or Faces in the Wild database,
to perform difficult tasks like object classification and face
recognition. The results have been more than impressive; one
may objectively speak of a computational revolution. Accuracy
numbers in mid and high 90s have become quite common
when deep networks, trained with millions of images, are
tested to recognize unseen samples.
In spite of all progress, one can observe that the applications
of deep learning in digital pathology has not fully started
yet. The major obstacle appears to be the lack of large
labelled datasets of histopathology scans to properly train
some type of multi-layer neural networks, a requirement that
may still be missing for some years to come. Hence, we have
to start designing and training deep nets with the available
datasets. Training from scratch when we artificially increase
the number of images, i.e., data augmentation, is certainly the
most obvious action. But we can also use nets that have been
trained with millions of (non-medical) images to extract deep
features. As a last possibility, we could slightly train (fine-tune) the pre-trained nets to adjust them to the nature of our
data before we use them as feature extractors or classifiers.
In this paper, we investigate the usage of deep networks
for Kimia Path24 via training from scratch, feature extraction,
and fine-tuning. The results show that employing a pre-trained
network (trained with non-medical images) may be the most
viable option.
II. BACKGROUND
Over recent years researchers have shown interest in leveraging machine-learning techniques for digital pathology images. These images pose unique issues due to their high variation, rich structures, and large dimensionality. This has led
researchers to investigate various image analysis techniques
and their application to digital pathology [1]. For dealing
with the large rich structures within a scan, researchers have
attempted segmentation on both local and global scales. For
example, researchers have conducted works on the segmentation of various structures in breast histopathology images using
methods such as thresholding, fuzzy c-means clustering, and
adaptive thresholding with varying levels of success [1]–[4].
When applying these methods to histopathological images,
it is often desired that a computer aided diagnosis (CAD)
method be adopted for use in a content-based image retrieval
(CBIR) system. Work has been done to propose various CBIR
systems for CAD by multiple groups [5]. Recently, hashing
methods have been employed for large-scale image retrieval.
Among the hashing methods, kernelized and supervised hashing are considered the most effective [5], [6]. More recently
Radon barcodes have been investigated as a potential method
for creating a CBIR [7]–[9].
Yi et al. utilized CNNs on a relatively small mammography
dataset to achieve a classification accuracy of 85% and an
ROC AUC of 0.91 whereas handcrafted features were only
able to obtain an accuracy of 71% [10].
Currently, there is interest in using pre-trained networks to
accomplish a variety of tasks outside of the original domain
[11]. This is of great interest for medical tasks where there is
often a lack of comprehensive labeled data to train a deep
network [12]. Thus, other groups have leveraged networks
trained on the ImageNet database which consists of more than
1.2 million categorized images of 1000+ classes [12]–[14].
These groups have reported a general success when attempting
to utilize pre-trained networks for medical imaging tasks [12],
[14], [15].
In this study we explore and evaluate the performance of
a CNN when pre-trained on non-medical imaging data [12],
[16]. Specifically, when used as feature extractors with and
without fine tuning for a digital pathology task.
III. DATA SET

The data used to train and test the CNNs was the Kimia Path24 dataset, consisting of 24 whole scan images (WSIs), manually selected from more than 350 scans, depicting diverse body parts with distinct texture patterns. The images were captured by TissueScope LE 1.0¹ bright field using a 0.75 NA lens. For each image, one can determine the resolution by checking the description tag in the header of the file. For instance, if the resolution is 0.5µm, then the magnification is 20x, and if the resolution is 0.25µm, then the magnification is 40x. The dataset offers 27,055 training patches and 1,325 (manually selected) test patches of size 1000×1000 (0.5mm×0.5mm) [17]. The locations of the test patches in the scans have been removed (whitened) such that they cannot be mistakenly used for training. The color (staining) is neglected in the Kimia Path24 dataset; all patches are saved as grayscale images. The Kimia Path24 dataset is publicly available².

A. Patch Selection

To create the Kimia Path24 dataset, each scan is divided into patches that are 1000×1000 pixels in size with no overlap between patches. Background pixels (i.e., very bright pixels) are set to white and ignored using a homogeneity measure for each patch. The selection criterion is that every patch with a homogeneity of less than 99% is ignored. The high threshold ascertains that no patch with a significant texture pattern is ignored. From the set of patches, each scan had 100 randomly sampled patches selected to be used for the fine-tuning process (we do not use all of them in order to emulate cases where no large dataset is available; besides, more extensive training may destroy what a network has already learned). The values of each patch were subsequently normalized into [0, 1]. The patches were finally downsized to 224×224 to be fed into the CNN architecture.
Following the above steps, we first obtained 27,055 patches from the scans based purely on the homogeneity threshold. Then, we randomly sampled 100 patches from each class, leading to the much smaller training set of 2,400 patches. A selection of patches from the training set can be viewed in Fig. 1. As Fig. 2 shows, the testing samples are relatively balanced in the Kimia Path24 dataset, whereas the training set is rather imbalanced. The different sizes and frequencies of the specimens are the main reasons for the imbalance.

B. Accuracy Calculation

The accuracy measures used for the experiments are adopted from [17]. These were chosen so that results between the papers could be compared. There are ntot = 1,325 testing patches P_s^j that belong to 24 sets Γs = {P_s^i | s ∈ S, i = 1, 2, . . . } with s = 0, 1, 2, . . . , 23 [17]. Looking at the set of retrieved images for an experiment, R, the patch-to-scan accuracy, ηp, can be defined as

ηp = (1/ntot) Σ_{s∈S} |R ∩ Γs|,        (1)

and the whole-scan accuracy, ηw, can be defined as

ηw = (1/24) Σ_{s∈S} |R ∩ Γs|,        (2)

with the total accuracy defined as ηtotal = ηp × ηw. By incorporating both accuracy measurements, the resulting problem becomes much more difficult when attempting to obtain acceptable results [17].

IV. METHODS

Each experiment was run using the architectures of both the VGG16 and Inception-v3 networks as provided in the Keras Python package [18]–[20]. Utilizing a pre-trained network, we then analyze the effectiveness of the network when used just as a feature extractor, and when transferring the network (some of its weights) to the medical imaging domain.
A. Fine-Tuning Protocols
When fine-tuning a deep network, the optimal setup varies
between applications [21]. However, using a pre-trained network and applying it to other domains has yielded better
performing models [12]. It was decided that only the final
convolutional block (block 5) within VGG16 and the final
two inception blocks within Inception-v3 would be re-trained
[12], [18], [19], [21]. As in [14] a single fully connected layer
1 http://www.hurondigitalpathology.com
2 http://kimia.uwaterloo.ca
Fig. 1. A selection of patches from each training scan within the Kimia Path24 dataset. The patches are 1000×1000 pixels in size or 0.5mm×0.5mm. From
top left to bottom right: scan/class 0 to scan/class 23.
Fig. 2. Instance distribution for training set (left) and testing set (right) of Kimia Path24 .
of size 256 (followed by an output layer of size 24) was
chosen to replace the default VGG16 fully connected layers
when fine-tuning. This was found to give better results. The
optimizer we used follows the logic from [12], [14] where the
learning rate chosen was very small (10−4 ) and the momentum
used was large (0.9), both of which were selected to ensure
no drastic changes within the weights of the network during
training (which would destroy what had been already learned).
The Keras data augmentation API was used to generate extra
training samples and the network was trained for a total of 200
epochs (after which the accuracy was no longer changing) with
a batch size of 32 [20].
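The following is a minimal sketch of the TL-VGG16 setup described above, written against the modern Keras API rather than the authors' original Keras 1.x code; the replication of the grayscale patches into three channels and the exact augmentation settings are assumptions:

import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")   # re-train only the final convolutional block

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),      # replacement fully connected layer
    tf.keras.layers.Dense(24, activation="softmax"),    # one output per scan/class
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, train_labels, batch_size=32, epochs=200, ...)

The bottleneck-feature pre-training of the new fully connected head (Section IV-C) is omitted here for brevity.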
softmax classification layer. The fully connected layers were
pretrained on bottleneck features and then attached to the
convolutional layers and training on the final two inception
blocks was then performed. The resulting networks (Transfer
Learned VGG16 or TL-VGG16 and TL-Inception-v3) were
then used to classify the test patches. The class activation
mappings (CAMs) for the fine-tuned Inception-v3 network on
randomly selected test patches can be viewed in Fig. 3.
V. R ESULTS
The results of our experiments are summarized in Table 1.
It can be stated that the results for VGG16 and CNN1 are quite
similar; training from scratch, using a pre-trained network as
feature extractor, and fine-tuning a pre-trained network are all
delivering comparable results for Kimia Path24 . Whereas the
results for Inception-v3 are similar with the transfer-learned
model outperforming the feature extractor. As TL-Inception-v3 produced the best results, ηtotal = 56.98%, and minimally
updating the weights of a pre-trained network is not a time
consuming task, one may prefer to utilize it. However, one
may prefer using Inception-v3 to training from scratch and
fine-tuning a pre-trained net as it requires no extra effort and
produces similar results with a linear SVM.
B. Pre-Trained CNN as a Feature Extractor
By using the provided implementation of the specified architectures within Keras, the pre-trained network was first used as
a feature extractor without any fine-tuning (Feature Extractor
VGG16 or FE-VGG16 and FE-Inception-v3) [20]. The last
fully connected layer of the network – prior to classification –
was extracted to be used as a feature vector. As pre-trained
networks are trained in other domains (very different image
categories) and hence cannot be used as a classifier, we used the
deep features to train a linear Support Vector Machine (SVM)
for classification. The Python package scikit-learn as well as
LIBSVM were used to train SVM classifiers with a linear
kernel [22], [23]. Both NumPy and SciPy were leveraged to
manipulate and store data during these experiments [24], [25].
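A minimal sketch (our illustration; the resizing and grayscale-to-RGB replication steps are assumptions) of the FE-VGG16 pipeline: deep features are taken from the last fully connected layer of the ImageNet-trained VGG16 and passed to a linear SVM:

import numpy as np
import tensorflow as tf
from sklearn.svm import LinearSVC

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
extractor = tf.keras.Model(vgg.input, vgg.get_layer("fc2").output)  # last FC layer before the classifier

def deep_features(images):
    # images: (N, 224, 224, 3) array of patches already resized and replicated to three channels.
    x = tf.keras.applications.vgg16.preprocess_input(images.astype(np.float32))
    return extractor.predict(x, verbose=0)

# clf = LinearSVC().fit(deep_features(train_patches), train_labels)
# predicted_classes = clf.predict(deep_features(test_patches))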
VI. D ISCUSSIONS
It was surprising to find out that simply using features from
a pre-trained network (trained on non-medical images, see
Fig. 4) can deliver results comparable with a network that,
with considerable effort and resources, has been trained from
scratch for the domain in focus (here histopathology). As well,
such simpler approach was even able to achieve a noticeable
accuracy increase of ≈ 8.74% in overall performance for
Kimia Path24 dataset. Another surprising effect was that
transfer learning via fine-tuning for VGG16 was not able to
provide any improvement compared to extracting deep features
from a pre-trained network without any change in the learned
of its weights whereas with Inception-v3 the improvement was
immediate.
Perhaps the most obvious reaction to this finding is that
if we had enough samples, i.e., millions of histopathological
images, and if we would use proper computational devices
for efficient training, then CNN would perhaps deliver the
C. Fine-Tuned CNN as a Classifier
The proposed network was then fine-tuned to the Kimia
Path24 dataset. Using the Keras library, the convolutional
layers were first separated from the top fully connected layers
[20]. The training patches were fed through the model to create
a set of bottleneck features to initially pre-train the new fully-connected layers [26]. These features were used to initialize
the weights of a fully connected MLP consisting of one 256
dense ReLU layer and a softmax classification layer. Next, the
fully connected model was attached to the convolutional layers
and, with every convolutional block except the last kept frozen, training
was performed to adjust the classification weights [12], [14].
Similarly, for the Inception-v3 network the fully connected
layers were replaced with one 1024 dense ReLU layer and a
4
Table 1. Comparing the results of training from scratch (CNN1 reported in [17]), using deep features via a pre-trained network with no change (FE-VGG16),
and classification after fine-tuning a pre-trained network (TL-VGG16, TL-Inception-v3). The best scores are highlighted in bold.
Scheme                              Approach            ηp        ηw        ηtotal
Train from scratch                  CNN1 [17]           64.98%    64.75%    41.80%
Pre-trained features                FE-VGG16            65.21%    64.96%    42.36%
Fine-tuning the pre-trained net     TL-VGG16            63.85%    66.23%    42.29%
Pre-trained features                FE-Inception-v3     70.94%    71.24%    50.54%
Fine-tuning the pre-trained net     TL-Inception-v3     74.87%    76.10%    56.98%
Fig. 4. Sample images from ImageNet project. One may object to using
features that have been learned from such images in order to classify highly
sensitive images of histopathology for medical diagnosis. However,
experiments with the Kimia Path24 dataset show that features extracted from
these images are expressive enough to compete against networks trained by
histopathology images from scratch [Source: http://openai.com/ ].
a deep network, an architecture not well suited to the problem,
or an overly simplistic fully connected network. However, as
previously discussed in [17], the problem given by the Kimia
Path24 dataset is indeed a hard problem, most likely due to the
high variance between the different patches within a given scan
(intra-class variability). This is further validated when looking
at the results in Fig. 3. The two columns contain patches
that have distinct patterns with their own unique features.
The CAM from the first column shows that the network
responds strongly to the unique structures within the 4 label
(very strongly for the final patch). Whereas when presented
with completely different patterns in the second column, the
network responds strongly to other areas, typically ones that
embody inner edges within the sample. This shows evidence
that the model has at the very least begun to learn higher level
Fig. 3. Activation maps using randomly selected patches from the Kimia
Path24 testing data. The patches within each column are the same class and
the labels per column are 4 and 8, respectively. The activation maps are
created using the Keras Visualization Toolkit and the Grad-CAM algorithm
[27]–[29]. Red areas had more influence on the label prediction [28].
best results clearly better than transfer learning. Although this
statement is supported by comparable empirical evidence, it
remains speculation for a sensitive field like medical imaging.
But why is it so difficult to train a CNN for this case? It is
most likely due to a number of factors such as a relative lack
of image data, the effect of scaling down a patch for use within
structures within individual patches. Further investigation with
different architectures would likely improve upon these results
as would more aggressive augmentation.
[10] D. Yi, R. L. Sawyer, D. C. III, J. Dunnmon, C. Lam,
X. Xiao, and D. Rubin, “Optimizing and visualizing deep
learning for benign/malignant classification in breast tumors,”
CoRR, vol. abs/1705.06362, 2017. [Online]. Available: http:
//arxiv.org/abs/1705.06362
[11] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–
1359, Oct 2010.
[12] H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao,
D. Mollura, and R. M. Summers, “Deep convolutional neural networks
for computer-aided detection: Cnn architectures, dataset characteristics
and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35,
no. 5, pp. 1285–1298, May 2016.
[13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet:
A large-scale hierarchical image database,” in Computer Vision and
Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE,
2009, pp. 248–255.
[14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 38,
no. 1, pp. 142–158, Jan 2016.
[15] Y. Bar, I. Diamant, L. Wolf, and H. Greenspan, “Deep learning with
non-medical training used for chest pathology identification,” in Proc.
SPIE, vol. 9414, 2015, p. 94140V.
[16] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning
applied to document recognition,” Proceedings of the IEEE, vol. 86,
no. 11, pp. 2278–2324, Nov 1998.
[17] M. Babaie, S. Kalra, A. Sriram, C. Mitcheltree, S. Zhu, A. Khatami,
S. Rahnamayan, and H. R. Tizhoosh, “Classification and Retrieval
of Digital Pathology Scans: A New Dataset.” [Online]. Available:
http://arxiv.org/abs/1705.07522
[18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
[19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking
the inception architecture for computer vision,” in Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, 2016,
pp. 2818–2826.
[20] F. Chollet et al., “Keras,” https://github.com/fchollet/keras, 2015.
[21] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B.
Gotway, and J. Liang, “Convolutional neural networks for medical image
analysis: Full training or fine tuning?” IEEE Transactions on Medical
Imaging, vol. 35, no. 5, pp. 1299–1312, May 2016.
[22] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion,
O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine
Learning Research, vol. 12, pp. 2825–2830, 2011.
[23] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector
machines,” ACM Transactions on Intelligent Systems and Technology,
vol. 2, pp. 27:1–27:27, 2011, software available at http://www.csie.ntu.edu.tw/∼cjlin/libsvm.
[24] S. van der Walt, S. C. Colbert, and G. Varoquaux, “The numpy array:
A structure for efficient numerical computation,” Computing in Science
& Engineering, vol. 13, no. 2, pp. 22–30, 2011. [Online]. Available:
http://aip.scitation.org/doi/abs/10.1109/MCSE.2011.37
[25] E. Jones, T. Oliphant, P. Peterson et al., “SciPy: Open source scientific
tools for Python,” 2001–. [Online].
Available: http://www.scipy.org/
[26] D. Yu and M. L. Seltzer, “Improved bottleneck features using pretrained
deep neural networks,” in Twelfth Annual Conference of the International
Speech Communication Association, 2011.
[27] R. Kotikalapudi and contributors, “keras-vis,” https://github.com/
raghakot/keras-vis, 2017.
[28] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and
D. Batra, “Grad-cam: Visual explanations from deep networks via
gradient-based localization,” See https://arxiv. org/abs/1610.02391 v3,
2016.
[29] M. D. Zeiler and R. Fergus, Visualizing and Understanding
Convolutional Networks. Cham: Springer International Publishing,
2014, pp. 818–833. [Online]. Available: http://dx.doi.org/10.1007/
978-3-319-10590-1 53
VII. C ONCLUSIONS
Retrieval and classification of histopathological images are
useful but challenging tasks in analysis for diagnostic pathology. Whole scan imaging (WSI) generates gigapixel images
that are immensely rich in details and exhibit tremendous interand intra-class variance. Both a feature extractor and transferlearned network were able to offer increases in classification
accuracy on the Kimia Path24 dataset when compared to a
CNN trained from scratch. Comparatively low performance
of the latter could be due to the architecture not being well
suited for the problem, lack of sufficient number of training
images, and/or the inherent difficulty of the classification
task for high-resolutional and highly variable histopathology
images. Further work would warrant using different architectures for comparison, more aggressive data augmentation, and
potentially increasing the size of training samples used from
the Kimia Path24 dataset. However, both transfer-learned and
feature extractor models were able to compete with the stateof-the-art methods reported in literature [17], and therefore
show potential for further improvements.
ACKNOWLEDGEMENTS
The authors would like to thank Huron Digital Pathology
(Waterloo, ON, Canada) for its continuing support.
R EFERENCES
[1] M. N. Gurcan, L. E. Boucheron, A. Can, A. Madabhushi, N. M.
Rajpoot, and B. Yener, “Histopathological image analysis: A review,”
IEEE reviews in biomedical engineering, vol. 2, pp. 147–171, 2009.
[2] S. Naik, S. Doyle, M. Feldman, J. Tomaszewski, and A. Madabhushi,
“Gland segmentation and computerized gleason grading of prostate histology by integrating low-, high-level and domain specific information,”
in MIAAB workshop, 2007, pp. 1–8.
[3] P. S. Karvelis, D. I. Fotiadis, I. Georgiou, and M. Syrrou, “A watershed based segmentation method for multispectral chromosome images
classification,” in Engineering in Medicine and Biology Society, 2006.
EMBS’06. 28th Annual International Conference of the IEEE. IEEE,
2006, pp. 3009–3012.
[4] S. Petushi, F. U. Garcia, M. M. Haber, C. Katsinis, and A. Tozeren, “Large-scale computations on histology images reveal gradedifferentiating parameters for breast cancer,” BMC medical imaging,
vol. 6, no. 1, p. 14, 2006.
[5] X. Zhang, W. Liu, M. Dundar, S. Badve, and S. Zhang, “Towards largescale histopathological image analysis: Hashing-based image retrieval,”
IEEE Transactions on Medical Imaging, vol. 34, no. 2, pp. 496–506,
Feb 2015.
[6] W. Liu, J. Wang, R. Ji, Y. G. Jiang, and S. F. Chang, “Supervised hashing
with kernels,” in 2012 IEEE Conference on Computer Vision and Pattern
Recognition, June 2012, pp. 2074–2081.
[7] H. R. Tizhoosh, “Barcode annotations for medical image retrieval:
A preliminary investigation,” in Image Processing (ICIP), 2015 IEEE
International Conference on. IEEE, 2015, pp. 818–822.
[8] H. R. Tizhoosh, S. Zhu, H. Lo, V. Chaudhari, and T. Mehdi, “Minmax
radon barcodes for medical image retrieval,” in International Symposium
on Visual Computing. Springer, 2016, pp. 617–627.
[9] A. Khatami, M. Babaie, A. Khosravi, H. R. Tizhoosh, S. M. Salaken,
and S. Nahavandi, “A deep-structural medical image classification for a
radon-based image retrieval,” in 2017 IEEE 30th Canadian Conference
on Electrical and Computer Engineering (CCECE), April 2017, pp. 1–4.
| 1 |
arXiv:1709.07929v2 [math.AC] 12 Dec 2017
TRIANGULATED EQUIVALENCES AND RECONSTRUCTION OF
CLASSIFYING SPACES
HIROKI MATSUI
Abstract. In algebra such as algebraic geometry, modular representation theory and
commutative ring theory, we study algebraic objects through associated triangulated
categories and topological spaces. In this paper, we consider the relationship between
such triangulated categories and topological spaces. To be precise, we explore necessary
conditions for derived equivalence of Noetherian schemes, stable equivalence of finite
groups, and singular equivalence of commutative Noetherian rings by using associated
topological spaces.
1. Introduction
As is a common approach in many branches of algebra including algebraic geometry,
modular representation theory and commutative ring theory, we assign to an algebraic
object A (e.g., a scheme X, a finite group G, a commutative Noetherian ring R) a triangulated category T (e.g., the perfect derived category Dperf (X), the stable module category
mod kG, the singularity category Dsg (R)) and a topological space S (e.g., the underlying
topological spaces X, Proj H∗ (G; k), Sing R). By studying such a triangulated category
and a topological space, we aim to grasp the structure of the original algebraic object.
From this motivation, it is natural to ask what kind of relationship there exists between
T and S.
                    algebraic objects A:  X, G, R
                     /                         \
                    v                           v
  triangulated categories T :     <~~~ ??? ~~~>     topological spaces S:
  Dperf (X), mod kG, Dsg (R)                        X, Proj H∗ (G; k), Sing R
In this paper, we consider this question, more precisely, the following:
Question 1.1. Let A, A′ be algebraic objects, T , T ′ corresponding triangulated categories, and S, S ′ corresponding topological spaces, respectively. Does the implication
T ∼= T ′ =⇒ S ∼= S ′
hold?
2010 Mathematics Subject Classification. 13C14, 13D09, 14F05, 18E30, 19D23, 20C20.
Key words and phrases. triangulated category, triangulated equivalence, classifying space, (classifying)
support data, quasi-affine scheme, finite p-group, complete intersection.
The author is partly supported by Grant-in-Aid for JSPS Fellows 16J01067.
1
2
HIROKI MATSUI
We introduce the notion of a classifying space of a triangulated category (see Definition
2.9), and prove the following result, which gives a machinery to answer the above question.
Theorem 1.2 (Theorem 3.13). Let T , T ′ be essentially small triangulated categories and
S, S ′ classifying spaces for T and T ′ , respectively. Then the implication
T ∼= T ′ =⇒ S ∼= S ′
holds.
The key role in proving this theorem is played by the support theory for triangulated
categories. For tensor triangulated categories, the support theory has been developed
by Balmer [Bal02, Bal05] and is a powerful tool to show such a reconstruction theorem.
Since we focus on triangulated categories without tensor structure, we need to invent the
support theory without tensor structure.
1.1. Algebraic geometry. Let X be a scheme. The derived category of perfect complexes on X is called the perfect derived category and denoted by Dperf (X). The case
where X = Spec R is affine, it is well known that the original scheme is reconstructed
from Dperf (R) := Dperf (X). Indeed, for two commutative rings R and S, if the perfect derived categories of R and S are equivalent, then R is isomorphic to S (see [Ric, Proposition
9.2]), and hence
Dperf (R) ∼
= Dperf (S) =⇒ Spec R ∼
= Spec S as topological spaces.
(∗)
However, such a result no longer holds for non-affine schemes. In fact, there exist a lot of
non-isomorphic schemes X and Y such that Dperf (X) ∼
= Dperf (Y ); see [Muk, Orl97]. When
perf
perf
∼
there is a triangulated equivalence D (X) = D (Y ), X and Y are said to be derived
equivalent. In section 3, we shall prove that the underlying topological spaces of a certain
class of schemes can be reconstructed from their perfect derived categories:
Theorem 1.3 (Theorem 3.10). Let X and Y be Noetherian quasi-affine schemes (i.e.,
open subschemes of affine schemes). Then the implication
Dperf (X) ∼= Dperf (Y ) =⇒ X ∼= Y as topological spaces
holds.
This theorem recovers (∗) for Noetherian rings as any affine scheme is quasi-affine. A
typical example of a non-affine quasi-affine scheme is the punctured spectrum of a local
ring. As an application of this theorem, we obtain that a derived equivalence of X and
Y yields the equality of the dimensions of X and Y .
1.2. Modular representation theory. In modular representation theory, finite groups
are studied in various contexts. From an algebraic viewpoint, a finite group G has been
studied through its group algebra kG and stable module category mod kG, where k is a
field whose characteristic divides the order of G. Here, mod kG is a triangulated category
consisting of finitely generated kG-modules modulo projectives. On the other hand, the
cohomology ring H∗ (G; k) gives an approach to study a finite group G from the topological
aspect because it is isomorphic to the cohomology ring of a classifying space BG of G;
see [Ben, Chapter 2] for instance. The second main result in section 3 is the following:
TRIANGULATED EQUIVALENCES AND RECONSTRUCTION OF CLASSIFYING SPACES
3
Theorem 1.4 (Theorem 3.13). Let k (resp. l) be a field of characteristic p (resp. q),
and let G (resp. H) be a finite p-group (resp. q-group). Then the implication
mod kG ∼= mod lH =⇒ Proj H∗ (G; k) ∼= Proj H∗ (H; l) as topological spaces
holds.
If there exists a triangulated equivalence mod kG ∼= mod lH, we say that kG and lH
are stably equivalent. As an application of this theorem, we have that a stable equivalence
of kG and lH yields that the p-rank of G and the q-rank of H are equal.
1.3. Commutative ring theory. Let R be a left Noetherian ring. The singularity
category of R is by definition the Verdier quotient
Dsg (R) := Db (mod R)/Dperf (R),
which was introduced by Buchweitz [Buc] in the 1980s. Here, mod R stands for the
category of finitely generated left R-modules and Db (mod R) its bounded derived category. Singularity categories have been deeply investigated from algebro-geometric
and representation-theoretic motivations [Che, IW, Ste, Tak] and connected to the Homological Mirror Symmetry Conjecture by Orlov [Orl04].
One of the important subjects in representation theory of rings is to classify rings up
to certain category equivalence. For example, left Noetherian rings R and S are said to
be:
• Morita equivalent if mod R ∼= mod S as abelian categories,
• derived equivalent if Db (mod R) ∼= Db (mod S) as triangulated categories,
• singularly equivalent if Dsg (R) ∼= Dsg (S) as triangulated categories.
It is well known that these equivalences have the following relations:
Morita equivalence ⇒ derived equivalence ⇒ singular equivalence.
Complete characterizations of Morita and derived equivalence have already been obtained
in [Mor, Ric], while singular equivalence is quite difficult to characterize even in the case of
commutative rings. Indeed, only a few examples of singular equivalences of commutative
Noetherian rings are known. Furthermore, for all of such known examples, the singular
loci of rings are homeomorphic. Thus, it is natural to ask the following question.
Question 1.5. Let R and S be commutative Noetherian rings. Are their singular loci
homeomorphic if R and S are singularly equivalent?
In section 4, we show that this question is affirmative for certain classes of commutative
Noetherian rings. To be precise, we shall prove the following theorem.
Theorem 1.6 (Theorem 4.4). Let R and S be commutative Noetherian local rings that
are locally hypersurfaces on the punctured spectra. Assume that R and S are either
(a) complete intersection rings, or
(b) Cohen-Macaulay rings with quasi-decomposable maximal ideal.
Then the implication
Dsg (R) ∼= Dsg (S) =⇒ Sing R ∼= Sing S as topological spaces
holds.
4
HIROKI MATSUI
Here, we say that an ideal I of a commutative ring R is quasi-decomposable if there is
an R-regular sequence x in I such that I/(x) is decomposable as an R-module. Moreover,
we prove that singular equivalence localizes by using such a homeomorphism.
The organization of this paper is as follows. In section 2, we introduce the notions
of a support data and a classifying support data for a given triangulated category and
develop the support theory without tensor structure, and finally prove Theorem 1.2. In
section 3, we connect the results obtained in section 2 with the support theory for tensor
triangulated categories and study reconstructing the topologies of the Balmer spectra
without tensor structure. Using this method, we prove Theorem 1.3 and 1.4. In section 4,
we prove Theorem 1.6 and give examples of commutative rings which are not singularly
equivalent.
Throughout this paper, all categories are assumed to be essentially small. For two
triangulated categories T , T ′ (resp. topological spaces X, X ′ ), the notation T ∼= T ′ (resp.
X ∼= X ′ ) means that T and T ′ are equivalent as triangulated categories (resp. X and X ′
are homeomorphic) unless otherwise specified.
2. The support theory without tensor structure
In this section, we discuss the support theory for triangulated categories without tensor
structure. Throughout this section, T denotes a triangulated category with shift functor
Σ.
First of all, let us recall some basic definitions which are used in this section.
Definition 2.1. Let X be a topological space and T a triangulated category.
(1) We say that X is sober if every irreducible closed subset of X is the closure of exactly
one point.
(2) We say that X is Noetherian if every descending chain of closed subspaces stabilizes.
(3) We say that a subset W of X is specialization-closed if it is closed under specialization,
namely if an element x of X belongs to W , then the closure {x} is contained in W .
Note that W is specialization-closed if and only if it is a union of closed subspaces of
X.
(4) We say that a non-empty additive full subcategory X of T is thick if it satisfies the
following conditions:
(i) closed under taking shifts: ΣX = X .
(ii) closed under taking extensions: for a triangle L → M → N → ΣL in T , if L
and N belong to X , then so does M.
(iii) closed under taking direct summands: for two objects L, M of T , if the direct
sum L ⊕ M belongs to X , then so do L and M.
For a subcategory X of T , denote by thickT X the smallest thick subcategory of T
containing X .
We introduce the notion of a support data for a triangulated category.
Definition 2.2. Let T be a triangulated category. A support data for T is a pair (X, σ)
where X is a topological space and σ is an assignment which assigns to an object M of
T a closed subset σ(M) of X satisfying the following conditions:
(1) σ(0) = ∅.
(2) σ(Σn M) = σ(M) for any M ∈ T and n ∈ Z.
(3) σ(M ⊕ N) = σ(M) ∪ σ(N) for any M, N ∈ T .
TRIANGULATED EQUIVALENCES AND RECONSTRUCTION OF CLASSIFYING SPACES
5
(4) σ(M) ⊆ σ(L) ∪ σ(N) for any triangle L → M → N → ΣL in T .
Support data naturally appear in various areas of algebras.
Example 2.3. (1) Let R be a commutative Noetherian ring. For M ∈ Dsg (R), we define
the singular support of M by
SSuppR (M) := {p ∈ Sing R | Mp ≇ 0 in Dsg (Rp )}.
Then (Sing R, SSuppR ) is a support data for Dsg (R). Indeed, it follows from [AIL,
Theorem 1.1] and [BM, Lemma 4.5] that SSuppR (M) is a closed subset of Sing R and
that SSuppR satisfies the condition (1) in Definition 2.2. The remained conditions
(2)-(4) are clear because the localization functor Dsg (R) → Dsg (Rp ) is exact.
Assume that R is Gorenstein. Denote by CM(R) the category of maximal CohenMacaulay R-modules (i.e., modules M satisfying ExtiR (M, R) = 0 for all integers
i > 0). Recall that the stable category CM(R) of CM(R) is the category whose
objects are the same as CM(R) and the set of morphisms from M to N is given by
HomR (M, N) := HomR (M, N)/PR (M, N),
where PR (M, N) consists of all R-linear maps from M to N factoring through some
free R-module. Then the stable category CM(R) has the structure of a triangulated
category; see [Hap]. Moreover, the natural inclusion induces a triangle equivalence
F : CM(R) −→ Dsg (R) by [Buc]. Thus we obtain the support data (Sing R, SuppR ) for
CM(R) by using this equivalence. Here,
SuppR (M) := SSuppR (F (M)) = {p ∈ Sing R | Mp ≇ 0 in CM(Rp )}
for M ∈ CM(R).
(2) Let X be a Noetherian scheme. For F ∈ Dperf (X), we define the cohomological support
of F by
SuppX (F ) := {x ∈ X | Fx ≇ 0 in Dperf (OX,x )}.
Then, SuppX (F ) = ⋃_{n∈Z} SuppX (Hn (F )) is a finite union of supports of coherent OX -modules and hence is a closed subspace of X. Moreover, (X, SuppX ) is a support data
for Dperf (X) because the localization is exact. For details, please see [Tho].
(3) Let k be a field of characteristic p > 0 and G a finite group such that p divides the
order of G. Then as in the case of Gorenstein rings, we can define the stable category
mod kG of mod kG and it is also a triangulated category.
We denote by
H∗ (G; k) := ⊕_{i∈Z} H^i (G; k)  (p = 2),    H∗ (G; k) := ⊕_{i∈2Z} H^i (G; k)  (p odd),
the direct sum of cohomologies of G with coefficient k. Then H∗ (G; k) has the structure of a graded-commutative Noetherian ring by using the cup product and we can
consider its homogeneous prime spectrum Proj H∗ (G; k). Denote by VG (M) the support
variety for a finitely generated kG-module M which is a closed space of Proj H∗ (G; k).
Then the pair (Proj H∗ (G; k), VG ) becomes a support data for mod kG. For details,
please refer to [Ben, Chapter 5].
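As a small illustration (a standard computation, recalled here for illustration and not taken from the text): for a cyclic group G = Z/p of prime order and p odd, one has H∗ (G; k) ≅ Λ(x) ⊗ k[y] with |x| = 1 and |y| = 2, so the graded ring appearing above is the polynomial ring k[y]; for p = 2 one has H∗ (G; k) ≅ k[x]. In both cases
Proj H∗ (G; k) = {(0)}
is a single point, so VG (M) is either empty or this one point for every finitely generated kG-module M.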
Remark 2.4. Actually, the above examples of support data satisfy the following stronger
condition:
(1′ ) σ(M) = ∅ if and only if M ≅ 0.
Definition 2.5. Let U be a full subcategory of T . We say that U is a ⊕-ideal if it
satisfies
M ∈ U, N ∈ T ⇒ M ⊕ N ∈ U.
Remark 2.6. U ⊆ T is a ⊕-ideal if and only if T \ U is closed under taking direct
summands.
Example 2.7. (1) The full subcategory T \ {0} is a ⊕-ideal.
(2) The full subcategory T(T ) of test objects (see Definition 4.8 below) of T is a ⊕-ideal.
Let us fix the following notations:
Notation 2.8. Let T be a triangulated category, U ⊆ T a ⊕-ideal, and X a topological
space. Then we set:
• Th(T ) := {thick subcategories of T },
• ThU (T ) := {thick subcategories of T containing an object of U},
• Spcl(X) := {specialization closed subsets of X},
• Nesc(X) := {non-empty specialization-closed subsets of X},
• Nec(X) := {non-empty closed subsets of X},
• Irr(X) := {irreducible closed subsets of X}.
Let (X, σ) be a support data for T , X a thick subcategory of T , and W a specialization-closed subset of X. Then one can easily check that fσ (X ) := σ(X ) := ⋃_{M ∈X} σ(M) is a specialization-closed subset of X and gσ (W ) := σ −1 (W ) := {M ∈ T | σ(M) ⊆ W } is a thick subcategory of T . Therefore, we obtain two order-preserving maps fσ : Th(T ) → Spcl(X) and gσ : Spcl(X) → Th(T ) with respect to the inclusion relations.
Definition 2.9. Let (X, σ) be a support data for T and U ⊆ T a ⊕-ideal. Then we say
that (X, σ) is a classifying support data for T with respect to U if
(i) X is a Noetherian sober space, and
(ii) the above maps fσ and gσ restrict to mutually inverse bijections:
fσ : ThU (T ) ⇄ Nesc(X) : gσ .
When this is the case, we say that X is a classifying space of T with respect to U.
When we simply say a classifying support data for T (resp. a classifying space of T ), we mean a classifying support data for T (resp. a classifying space of T ) with respect to T \ {0}.
Remark 2.10. A classifying support data (X, σ) for T classifies all thick subcategories
of T containing gσ (∅) = σ −1 (∅). Indeed, the map gσ : Nesc(X) → ThU (T ) is injective
with image {X ∈ Th(T ) | X ⊋ σ −1 (∅)}. Thus, we obtain a one-to-one correspondence
fσ : {X ∈ Th(T ) | X ⊇ σ −1 (∅)} ⇄ Spcl(X) : gσ .
In particular, if (X, σ) satisfies the condition (1′ ) in Remark 2.4, we obtain a one-to-one
correspondence:
fσ : Th(T ) ⇄ Spcl(X) : gσ .
Every classifying support data automatically satisfies the following realization property.
Lemma 2.11. Let (X, σ) be a classifying support data for T with respect to U. Then for
any non-empty closed subset Z of X, there is an object M of U, such that Z = σ(M).
Proof. Since X is a Noetherian sober space and σ(M) ∪ σ(N) = σ(M ⊕ N), we may assume that Z = \overline{\{x\}} for some x ∈ X. From the assumption, one has Z = fσ gσ (Z) = ⋃_{M ∈ gσ (Z)} σ(M). Hence, x ∈ σ(M) for some M ∈ gσ (Z). Then we obtain x ∈ σ(M) ⊆ Z = \overline{\{x\}} and this implies that σ(M) = \overline{\{x\}} = Z.
By definition of a classifying support data with respect to U, gσ (Z) = {N ∈ T | σ(N) ⊆ σ(M)} contains an object T of U. We conclude that σ(T ⊕ M) = σ(T ) ∪ σ(M) = σ(M) = Z and T ⊕ M ∈ U.
Let us give two more definitions.
Definition 2.12. Let U be a ⊕-ideal of T .
(1) We say that a thick subcategory X of T is U-principal if there is an object M of U such
that X = thickT M. Denote by PThU (T ) the set of all U-principal thick subcategories
of T .
(2) We say that a U-principal thick subcategory X of T is U-irreducible if X = thickT (X1 ∪
X2 ) (X1 , X2 ∈ PThU (T )) implies that X1 = X or X2 = X . Denote by IrrU (T ) the set
of all U-irreducible thick subcategories of T .
The following lemma shows that by using classifying support data with respect to U,
we can also classify U-principal thick subcategories and U-irreducible thick subcategories.
Lemma 2.13. Let (X, σ) be a classifying support data for T with respect to U. Then the one-to-one correspondence
fσ : ThU (T ) ⇄ Nesc(X) : gσ
restricts to one-to-one correspondences
fσ : PThU (T ) ⇄ Nec(X) : gσ    and    fσ : IrrU (T ) ⇄ Irr(X) : gσ .
Proof. Note that fσ (thickT M) = σ(M) for any M ∈ T . Therefore, the injective map
fσ : ThU (T ) → Nesc(X) induces a well defined injective map fσ : PThU (T ) → Nec(X).
The surjectivity has already been shown in Lemma 2.11.
Next, we show the second one-to-one correspondence. For X1 , X2 ∈ ThU (T ), one has
(1)   fσ (thickT (X1 ∪ X2 )) = ⋃_{M ∈ thickT (X1 ∪X2 )} σ(M) = ⋃_{M ∈ X1 ∪X2} σ(M) = (⋃_{M ∈X1} σ(M)) ∪ (⋃_{M ∈X2} σ(M)) = fσ (X1 ) ∪ fσ (X2 ).
On the other hand, for Z1 , Z2 ∈ Nesc(X), one has
fσ (thickT (gσ (Z1 ) ∪ gσ (Z2 ))) = fσ (gσ (Z1 )) ∪ fσ (gσ (Z2 )) = Z1 ∪ Z2 .
Applying gσ to this equality, we get
(2)   thickT (gσ (Z1 ) ∪ gσ (Z2 )) = gσ (Z1 ∪ Z2 ).
Let W be an irreducible closed subset of X. Assume gσ (W ) = thickT (X1 ∪ X2 ) for some
X1 , X2 ∈ PThU (T ). Then from the above equality (1), we obtain an equality
W = fσ (gσ (W )) = fσ (thickT (X1 ∪ X2 )) = fσ (X1 ) ∪ fσ (X2 ).
Since W is irreducible, fσ (X1 ) = W or fσ (X2 ) = W and hence X1 = gσ (fσ (X1 )) = gσ (W )
or X2 = gσ (fσ (X2 )) = gσ (W ). This shows that gσ (W ) is U-irreducible.
Conversely, take a U-irreducible thick subcategory X of T and assume fσ (X ) = Z1 ∪ Z2
for some non-empty closed subsets Z1 , Z2 of X. From the above equality (2), we get
X = gσ (fσ (X )) = gσ (Z1 ∪ Z2 ) = thickT (gσ (Z1 ) ∪ gσ (Z2 )).
Since X is U-irreducible, X = gσ (Z1 ) or X = gσ (Z2 ) and therefore, Z1 = fσ (gσ (Z1 )) =
fσ (X ) or Z2 = fσ (gσ (Z2 )) = fσ (X ). Thus, fσ (X ) is irreducible.
These observations show the second one-to-one correspondence.
From this lemma, we can show the following uniqueness result for classifying support
data with respect to U.
Proposition 2.14. Let (X, σ) and (Y, τ ) be classifying support data for T with respect
to a ⊕-ideal U. Then X and Y are homeomorphic.
Proof. First note that for a topological space X, the natural map ιX : X → Irr(X), x ↦ \overline{\{x\}}, is bijective if and only if X is sober.
Define maps ϕ : X → Y and ψ : Y → X to be the composites
ϕ : X −−ιX−→ Irr(X) −−gσ−→ IrrU (T ) −−fτ−→ Irr(Y ) −−ιY^{−1}−→ Y,
ψ : Y −−ιY−→ Irr(Y ) −−gτ−→ IrrU (T ) −−fσ−→ Irr(X) −−ιX^{−1}−→ X.
Then ϕ and ψ are well defined and mutually inverse bijections by Lemma 2.13.
Fix x ∈ X. For x′ ∈ \overline{\{x\}}, one has ιX (x′ ) ⊆ ιX (x) and hence
\overline{\{ϕ(x′ )\}} = ιY (ϕ(x′ )) = fτ (gσ (ιX (x′ ))) ⊆ fτ (gσ (ιX (x))) = ιY (ϕ(x)) = \overline{\{ϕ(x)\}}.
In particular, ϕ(x′ ) belongs to \overline{\{ϕ(x)\}}. Therefore, ϕ(\overline{\{x\}}) ⊆ \overline{\{ϕ(x)\}}.
Conversely, for y ∈ \overline{\{ϕ(x)\}}, the above argument shows
ψ(y) ∈ ψ(\overline{\{ϕ(x)\}}) ⊆ \overline{\{ψϕ(x)\}} = \overline{\{x\}}.
Applying ϕ to this inclusion, we obtain y ∈ ϕ(\overline{\{x\}}) and therefore, \overline{\{ϕ(x)\}} ⊆ ϕ(\overline{\{x\}}).
Thus, we conclude that ϕ(\overline{\{x\}}) = \overline{\{ϕ(x)\}}. Since X is Noetherian, this equation means that ϕ is a closed map. Similarly, ψ is also a closed map.
The following theorem is the main result of this section.
Theorem 2.15. Consider the following setting:
• T and T ′ are triangulated categories.
• U and U ′ are ⊕-ideals of T and T ′ , respectively.
• (X, σ) and (Y, τ ) are classifying support data for T and T ′ with respect to U and
U ′ , respectively.
Suppose that there is a triangle equivalence F : T → T ′ with F (U) = U ′ . Then X and Y
are homeomorphic.
Proof. From the assumption, F induces a one-to-one correspondence
F̃ : ThU (T ) −→ ThU′ (T ′ ),  X ↦ F̃ (X ),
where F̃ (X ) := {N ∈ T ′ | N ≅ F (M) for some M ∈ X }. For an object M of T , set
τ F (M) := τ (F (M)). Then we can easily verify that the pair (Y, τ F ) is a support data for
T . Furthermore, it becomes a classifying support data for T with respect to U. Indeed,
for X ∈ ThU (T ) and W ∈ Nesc(Y ), we obtain
fτF (X ) = ⋃_{M ∈X} τF (M) = ⋃_{M ∈X} τ (F (M)) = ⋃_{N ∈F̃ (X )} τ (N) = fτ (F̃ (X )),
F̃ (gτF (W )) = F̃ ({M ∈ T | τF (M) ⊆ W }) = {N ∈ T ′ | τ (N) ⊆ W } = gτ (W ).
From these equalities, we get equalities fτ F = fτ ◦ F̃ and F̃ ◦ gτ F = gτ and thus fτ F
and gτ F give mutually inverse bijections between ThU (T ) and Nesc(Y ). Consequently, we
obtain two classifying support data (X, σ) and (Y, τ F ) for T with respect to U, and hence
X and Y are homeomorphic by Proposition 2.14.
3. Comparison with tensor triangulated structure
In this section, we discuss the relation between the support theory discussed in Section 2 and the support theory for tensor triangulated categories.
Recall that a tensor triangulated category (T , ⊗, 1) consists of a triangulated category
T together with a symmetric monoidal tensor product ⊗ with unit object 1 which is
compatible with the triangulated structure of T . For the precise definition, please refer
to [HPS, Appendix A].
Example 3.1. (1) Let X be a Noetherian scheme. Then (Dperf (X), ⊗LOX , OX ) is a tensor
triangulated category. Here, ⊗LOX denotes the derived tensor product.
(2) Let k be a field and G a finite group. Then (mod kG, ⊗k , k) is a tensor triangulated
category.
Throughout this section, fix a tensor triangulated category (T , ⊗, 1). We begin with
recalling some basic definitions which are used in the support theory of tensor triangulated
categories.
Definition 3.2. (1) A full subcategory X of T is called a thick tensor ideal if it is a thick
subcategory of T and is closed under the action of T by ⊗: M ⊗ N ∈ X for any
M ∈ X and N ∈ T . For a subcategory X of T , denote by ⟨X ⟩ the smallest thick
tensor ideal of T containing X .
(2) For a thick subcategory X of T , define its radical by
√X := {M ∈ T | ∃n > 0 such that M⊗n ∈ X }.
Here, M⊗n denotes the n-fold tensor product of M. By [Bal05, Lemma 4.2], the radical of a thick subcategory is always a thick tensor ideal.
A thick tensor ideal X of T is called radical if it satisfies √X = X .
(3) A thick tensor ideal X of T is called prime if it satisfies
M ⊗ N ∈ X ⇒ M ∈ X or N ∈ X .
Denote by Spc T the set of all prime thick tensor ideals of T .
(4) For M ∈ T , the Balmer support of M is defined as Spp M := {P ∈ Spc T | M ∉ P}. The set Spc T is a topological space with closed basis {Spp M | M ∈ T }, and it is called the Balmer spectrum of T .
(5) Let X be a topological space. We say that a subset W of X is a Thomason subset
if it is a union of closed subsets whose complements are quasi-compact. Denote by
Thom(X) the set of all Thomason subsets of X. Note that Thom(X) ⊆ Spcl(X).
We say that a support data (X, σ) for T is tensorial if it satisfies:
σ(M ⊗ N) = σ(M) ∩ σ(N)
for any M, N ∈ T . In [Bal05], tensorial support data are called simply support data.
Then gσ (W ) is a radical thick tensor ideal of T for every specialization-closed subset W
of X. We say that a tensorial support data (X, σ) is classifying if X is a Noetherian sober
space and there is a one-to-one correspondence:
fσ : {radical thick tensor ideals of T } ⇄ Spcl(X) : gσ .
Balmer showed the following celebrated result:
Theorem 3.3. [Bal05, Lemma 2.6, Theorem 4.10]
(1) The pair (Spc T , Spp) is a tensorial support data for T .
(2) There is a one-to-one correspondence:
fSpp : {radical thick tensor ideals of T } ⇄ Thom(Spc T ) : gSpp .
Remark 3.4. If a topological space X is Noetherian, then every specialization-closed
subset of X is Thomason. Therefore, the above theorem shows that (Spc T , Spp) is a
classifying tensorial support data for T provided Spc T is Noetherian.
Recall that a tensor triangulated category T is rigid if
(1) the functor M ⊗ − : T → T has a right adjoint F (M, −) : T → T for each M ∈ T
and
(2) every object M is strongly dualizable (i.e., the natural map F (M, 1) ⊗ N →
F (M, N) is an isomorphism for each N ∈ T ).
If T is rigid, then (Spc T , Spp) satisfies the stronger condition.
Lemma 3.5. Assume that T is rigid. Then the support data (Spc T , Spp) satisfies the
condition (1′ ) in Remark 2.4.
Proof. Take an object M ∈ T with Spp(M) = ∅. By [Bal05, Corollary 2.4], there is a positive integer n such that M⊗n ≅ 0. On the other hand, by [HPS, Lemma A.2.6], M belongs to thickT (M⊗2^i ) for any positive integer i since every object is strongly dualizable. Therefore, by using induction, we conclude that M ≅ 0.
Note that a tensorial classifying support data for T is a classifying tensorial support
data for T . Indeed, for a tensorial classifying support data (X, σ) for T and X ∈ Th(T ),
we obtain equalities
X = gσ (fσ (X )) = gσ (fσ (√(thick⊗ X ))) = √(thick⊗ X ).
The following lemma gives a criterion for the converse implication of this fact.
Lemma 3.6. Let (X, σ) be a classifying tensorial support data for T . Suppose that T is
rigid. Then the following are equivalent:
(1) There is a one-to-one correspondence:
fσ : Th(T ) ⇄ Spcl(X) : gσ .
(2) (X, σ) is a classifying support data for T .
(3) Every thick subcategory of T is a thick ⊗-ideal.
(4) T = thickT 1.
Proof. By Lemma 3.5 and [Bal05, Theorem 5.2], (X, σ) satisfies the condition (1′ ) in Remark 2.4. Therefore, conditions (1) and (2) are equivalent by Remark 2.10.
(1) ⇒ (3): From the assumption, every thick subcategory X of T is of the form
X = gσ (W ) for some specialization-closed subset W of X. On the other hand, gσ (W ) is
a radical thick ⊗-ideal as (X, σ) is a tensorial support data.
(3) ⇒ (4): By assumption, the thick subcategory thickT 1 is a thick tensor ideal. Thus,
for any M ∈ T , M ≅ M ⊗ 1 belongs to thickT 1.
(4) ⇒ (1): Note that 1 is strongly dualizable and the family of all strongly dualizable
objects forms a thick subcategory of T by [HPS, Theorem A.2.5 (a)]. Therefore, every
object of T = thickT 1 is strongly dualizable. Thus, for any object M ∈ T , M belongs to
thick⊗_T (M ⊗ M) by [HPS, Lemma A.2.6]. Then [Bal05, Proposition 4.4] shows that every
thick tensor ideal of T is radical.
On the other hand, for any thick subcategory X of T , one can easily verify that the
subcategory
Y := {M ∈ T | M ⊗ X ⊆ X }
is a thick ⊗-ideal of T containing 1. Thus, we obtain Y = thickT 1 = T and hence X is
a thick ⊗-ideal.
From this discussion, we conclude that every thick subcategory of T is a radical thick
⊗-ideal and this shows the implication (4) ⇒ (1).
The following corollaries are direct consequences of this lemma, Proposition 2.14 and
Theorem 2.15.
Corollary 3.7. Let T be a rigid tensor triangulated category. Assume that the Balmer
spectrum Spc T of T is Noetherian and that T = thickT 1. Then for any classifying
support data (X, σ) for T , X is homeomorphic to Spc T .
Corollary 3.8. Let T and T ′ be rigid tensor triangulated categories such that
(1) Spc T and Spc T ′ are Noetherian, and
(2) T and T ′ are generated by their unit objects.
If T and T ′ are equivalent as triangulated categories, then Spc T and Spc T ′ are homeomorphic.
Next, we consider applications of these corollaries to the tensor triangulated categories appearing in Example 3.1.
Thomason showed the following classification theorem of thick tensor ideals of Dperf (X):
Theorem 3.9. [Tho, Theorem 3.15] Let X be a Noetherian scheme. Then (X, SuppX ) is
a classifying tensorial support data for Dperf (X).
As an application of Corollary 3.8, we can reconstruct the underlying topological spaces of
a certain class of schemes from their perfect derived categories without tensor structure.
Theorem 3.10. Let X and Y be Noetherian quasi-affine schemes (i.e., open subschemes
of affine schemes). If X and Y are derived equivalent, then X and Y are homeomorphic.
In particular, topologically determined properties of quasi-affine Noetherian schemes, such as the dimension and the number of irreducible components, are preserved by derived equivalences.
Proof. First, we remark that the functor F ⊗L_OX − : Dperf (X) → Dperf (X) has a right adjoint RHomOX (F , −) : Dperf (X) → Dperf (X) for each F ∈ Dperf (X), and moreover Dperf (X) is rigid.
Note that a scheme X is quasi-affine if and only if its structure sheaf OX is ample.
Thus, every thick subcategory of Dperf (X) is a thick tensor ideal by [Tho, Proposition
3.11.1]. Applying Corollary 3.8, we obtain the result.
Remark 3.11. Let X and Y be Noetherian schemes.
(1) As we have already remarked in the introduction, if X and Y are affine, then a derived
equivalence Dperf (X) ≃ Dperf (Y ) implies that X and Y are isomorphic as schemes.
(2) By [Bal02, Theorem 9.7], if Dperf (X) and Dperf (Y ) are equivalent as tensor triangulated
categories, then X and Y are isomorphic as schemes.
Next consider stable module categories over group rings of finite groups. In this case,
the following classification theorem is given by Benson-Carlson-Rickard for an algebraically closed field k and by Benson-Iyengar-Krause for a general field k.
Theorem 3.12. [BCR, BIK] Let k be a field of characteristic p > 0 and G a finite
group such that p divides the order of G. Then the support data (Proj H∗ (G; k), VG ) is a
classifying tensorial support data for mod kG.
Applying Corollary 3.8 to this classifying tensorial support data, we obtain the following
result:
Theorem 3.13. Let k (resp. l) be a field of characteristic p (resp. q), and let G (resp. H) be a
finite p-group (resp. q-group). If kG and lH are stably equivalent, then Proj H∗ (G; k) and
Proj H∗ (H; l) are homeomorphic.
Proof. For each M ∈ mod kG, the functor M ⊗k − : mod kG → mod kG has a right adjoint
Homk (M, −) : mod kG → mod kG and in addition mod kG is rigid. Moreover, for a p-group G, kG has only one simple module k. Therefore, we have mod kG = thickmod kG k.
Applying Corollary 3.8, we are done.
Recall that the p-rank of a finite group G is, by definition,
rp (G) := sup{r | (Z/p)r ⊆ G}.
Quillen [Qui] showed that the dimension of the cohomology ring H ∗ (G; k) is equal to the
p-rank of G. Thus, the p-rank is an invariant of stable equivalences:
Corollary 3.14. Let k, l, G, H be as in Theorem 3.13. Assume that there is a stable
equivalence between kG and lH, then rp (G) = rq (H).
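As an illustration of Quillen's theorem (a standard example, not taken from the text): for an elementary abelian group E = (Z/2)^r and a field k of characteristic 2 one has
H∗ (E; k) ≅ k[x1 , . . . , xr ],    dim H∗ (E; k) = r = r2 (E),
so Proj H∗ (E; k) is a projective space of dimension r − 1; its homeomorphism type already records the 2-rank, in accordance with Corollary 3.14.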
Remark 3.15. Let G and H be p-groups and k a field of characteristic p.
(1) By [Lin, Corollary 3.6], if there exists a stable equivalence between kG and kH, then
|G| = |H|.
(2) By [Lin, Corollary 3.2], if there exists a stable equivalence of Morita type between kG
and kH, then G ≅ H.
4. A necessary condition for singular equivalences
Recall that commutative Noetherian rings R and S are said to be singularly equivalent
if their singularity categories are equivalent as triangulated categories. The only known
examples of singular equivalences are the following:
Example 4.1. (1) If R ≅ S, then Dsg (R) ≃ Dsg (S).
(2) If R and S are regular, then Dsg (R) ≃ 0 ≃ Dsg (S).
(3) (Knörrer’s periodicity [Yos, Chapter 12]) Let k be an algebraically closed field of
characteristic 0. Set R := k[[x0 , x1 , ..., xd ]]/(f ) and S := k[[x0 , x1 , ..., xd , u, v]]/(f +uv).
Then Dsg (R) ≃ Dsg (S).
Remark 4.2. For all of these singular equivalences, the singular loci Sing R and Sing S are homeomorphic. In fact, the cases (1) and (2) are clear. Consider the case of R := k[[x0 , x1 , ..., xd ]]/(f ) and S := k[[x0 , x1 , ..., xd , u, v]]/(f + uv). Then
Sing S = V(∂f /∂x0 , . . . , ∂f /∂xd , u, v)
       ≅ Spec(S/(∂f /∂x0 , . . . , ∂f /∂xd , u, v))
       ≅ Spec(k[[x0 , x1 , ..., xd , u, v]]/(f + uv, ∂f /∂x0 , . . . , ∂f /∂xd , u, v))
       ≅ Spec(k[[x0 , x1 , ..., xd ]]/(f, ∂f /∂x0 , . . . , ∂f /∂xd ))
       ≅ V(∂f /∂x0 , . . . , ∂f /∂xd ) = Sing R.
Here, the first and the last equalities follow from the Jacobian criterion.
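For instance (a sketch with one concrete choice of f , not taken from the paper), take f = x0^2 + x1^2 over a field of characteristic 0:
Sing k[[x0 , x1 ]]/(f ) = V(2x0 , 2x1 ) = {m},    Sing k[[x0 , x1 , u, v]]/(f + uv) = V(2x0 , 2x1 , u, v) = {m},
so both singular loci are one-point spaces and in particular homeomorphic, as the remark asserts.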
Let us recall some definitions appearing in the statement of the main theorem of this section.
Definition 4.3. Let (R, m, k) be a commutative Noetherian local ring.
(1) We say that an ideal I of R is quasi-decomposable if there is an R-regular sequence
x of I such that I/(x) is decomposable as an R-module.
(2) A local ring R is said to be a complete intersection if there is a regular local ring
S and an S-regular sequence x such that the completion R̂ of R is isomorphic
to S/(x). We say that R is a hypersurface if we can take x to be an S-regular
sequence of length 1.
(3) A local ring R is said to be locally a hypersurface on the punctured spectrum if Rp
is a hypersurface for every non-maximal prime ideal p.
The following theorem is the main result of this section.
Theorem 4.4. Let R and S be commutative Noetherian local rings that are locally hypersurfaces on the punctured spectra. Assume that R and S are either
(a) complete intersection rings, or
(b) Cohen-Macaulay rings with quasi-decomposable maximal ideal.
If R and S are singularly equivalent, then Sing R and Sing S are homeomorphic.
For a ring R satisfying the condition (b) in Theorem 4.4, Nasseh-Takahashi [NT, Theorem B] shows that (Sing R, SSuppR ) is a classifying support data for Dsg (R). In this case, the statement of Theorem 4.4 therefore follows from Theorem 2.15. Thus, the remaining problem is case (a).
For a ring R satisfying the condition (a) in Theorem 4.4, Takahashi [Tak] classified
thick subcategories of Dsg (R) containing the residue field k of R by using the singular
locus Sing R and the singular support SSuppR . We would like to apply Theorem 2.15 also
for this case. The problem is whether the condition “containing the residue field k” is preserved by singular equivalences. As we will show later, this condition is actually
preserved by singular equivalences for local complete intersection rings. To do this, we
discuss replacing the residue field k with some categorically defined object.
First of all, let us recall the notion of a test module.
Definition 4.5. Let R be a Noetherian ring. We say that a finitely generated R-module
T is a test module if for any finitely generated R-module M,
Tor^R_n (T, M) = 0 for n ≫ 0 ⇒ pdR M < ∞.
Example 4.6. For a Noetherian local ring (R, m, k), the syzygy Ωn k of its residue field
is a test module for each n.
For commutative Noetherian rings admitting dualizing complexes (e.g., Gorenstein
rings), there is another characterization for test modules:
Theorem 4.7. [CDT, Theorem 3.2] Let R be a commutative Noetherian ring admitting
a dualizing complex. Then, test modules are nothing but finitely generated R-modules T
satisfying the following condition: for any finitely generated R-module M,
ExtnR (T, M) = 0 for n ≫ 0 ⇒ idR M < ∞.
Motivated by this theorem, we introduce the following notion.
Definition 4.8. Let T be a triangulated category. We say that T ∈ T is a test object if
for any object M of T ,
HomT (T, Σn M) = 0 for n ≫ 0 ⇒ M = 0.
Denote by T(T ) the full subcategory of T consisting of test objects.
The following lemma shows that the notion of a test object can be considered as a generalization of the notion of a test module.
Lemma 4.9. Let R be a Gorenstein ring. Then one has
T(CM(R)) = {T ∈ CM(R) | T is a test module}.
Proof. By Theorem 4.7, we have only to show
T(CM(R)) = {T ∈ CM(R) | every N ∈ mod R with Ext^{≫0}_R (T, N) = 0 satisfies idR N < ∞}.
Fix a maximal Cohen-Macaulay R-module T and a finitely generated R-module M. Since R is Gorenstein and T is maximal Cohen-Macaulay, one has Ext^i_R (T, R) = 0 for all i > 0. Therefore, we get isomorphisms
Ext^i_R (T, M) ≅ Ext^{i+1}_R (T, ΩR M) ≅ Ext^{i+2}_R (T, Ω^2_R M) ≅ · · ·
for any positive integer i. Therefore, we get isomorphisms
Hom_R (T, Σ^{d+n} Ω^d_R M) ≅ Ext^{d+n}_R (T, Ω^d_R M) ≅ Ext^n_R (T, M)
for n > 0, where Hom denotes the morphism set in CM(R) and d denotes the dimension of R. Thus, we are done since Ω^d_R M is free if and only if M has finite injective dimension.
Let us recall several classes of subcategories of modules.
Definition 4.10. (1) An additive subcategory X of mod R is called resolving if it satisfies
the following conditions:
(i) X is closed under extensions: for an exact sequence 0 → L → M → N → 0 in
mod R, if L and N belong to X , then so does M.
(ii) X is closed under kernels of epimorphisms: for an exact sequence 0 → L →
M → N → 0 in mod R, if M and N belong to X , then so does L.
(iii) X contains all projective R-modules.
For a finitely generated R-module M, denote by resR (M) the smallest resolving subcategory of mod R containing M.
(2) A non-empty additive subcategory X of mod R is called thick if X satisfies the 2-out-of-3 property: for an exact sequence 0 → L → M → N → 0 in mod R, if two of {L, M, N} belong to X , then so does the third. For a finitely generated R-module M,
denote by thickR (M) the smallest thick subcategory of mod R containing M.
Lemma 4.11. Let T be a triangulated category and T an object of T . If thickT (T )
contains a test object of T , then T is also a test object.
Proof. Take an object M ∈ T with HomT (T, Σn M) = 0 for n ≫ 0. Set
X := {N ∈ T | HomT (N, Σn M) = 0 for n ≫ 0}.
Then one can easily verify that X is a thick subcategory of T . Since X contains T , it contains thickT (T ) and hence, by assumption, a test object. Thus, M must be zero and hence T is a test object.
The next proposition plays a key role to prove our main theorem.
Proposition 4.12. Let (R, m, k) be a d-dimensional local complete intersection ring and
T a finitely generated R-module. Then the following are equivalent:
(1) T is a test module.
(2) ΩdR k ∈ resR (T ).
(3) k ∈ thickR (T ⊕ R).
(4) k ∈ thickDb (mod R) (T ⊕ R).
(5) k ∈ thickDsg (R) (T ).
(6) ΩdR k ∈ thickCM(R) (Ωd T ).
Proof. Notice resR (Ω^i_R T ) ⊆ resR (T ), thickR (T ⊕ R) = thickR (Ω^i_R T ⊕ R), thickDb (mod R) (T ⊕ R) = thickDb (mod R) (Ω^i_R T ⊕ R), thickDsg (R) (T ) = thickDsg (R) (Ω^i_R T ), and T is a test module if and only if so is ΩR T . Hence we may assume that T is maximal Cohen-Macaulay. Then we have
resR (T ) ⊆ thickR (T ⊕ R) = thickDb (mod R) (T ⊕ R) ∩ mod R.
Here, the first inclusion directly follows from the definition, and the second equality is given by [KS, Theorem 1]. Moreover, the composition functor Db (mod R) → Dsg (R) −→ CM(R) sends k to Ω^d_R k[d], and the inverse image of thickCM(R) (T ) is thickDb (mod R) (T ⊕ R).
Therefore, the implications (2) ⇒ (3) ⇔ (4) ⇔ (5) ⇔ (6) hold true. Furthermore, by
using Lemma 4.9 and Lemma 4.11, the implication (5) ⇒ (1) follows.
Thus, it remains to show the implication (1) ⇒ (2). Assume that T is a test module.
Recall that the complexity cxR (M) of a finitely generated R-module M is the dimension
of the support variety VR (M) associated to M; see [AB] for details. By [CDT, Proposition
2.7], T has maximal complexity, namely cxR (T ) = codim(R) =: c. Thanks to the prime
avoidance lemma, we can take an R-regular sequence x of length d from m \ m². Set R̄ = R/(x) and T̄ = T /(x). Then R̄ is an Artinian complete intersection ring and cx_R̄ (T̄ ) = cxR (T ) = c = codim(R) = codim(R̄). Moreover, one has
V_R̄ (T̄ ) = A^c_{k^a} = V_R̄ (k),
where k^a denotes the algebraic closure of k. This follows from the fact that V_R̄ (T̄ ) and V_R̄ (k) are c-dimensional closed subvarieties of the c-dimensional affine space A^c_{k^a}. Hence, by [CI, Theorem 5.6], k belongs to thick_{Db (mod R̄)} (T̄ ). As a result, we get
k ∈ thick_{Db (mod R̄)} (T̄ ) ∩ mod R̄ ⊆ thick_{Db (mod R̄)} (T̄ ⊕ R̄) ∩ mod R̄ = thick_R̄ (T̄ ⊕ R̄).
Again, the second equality uses [KS, Theorem 1]. Since thick_R̄ (T̄ ⊕ R̄) = res_R̄ (T̄ ) by [DT, Corollary 4.16], we deduce Ω^d_R k ∈ resR (T ) by using [Tak, Lemma 5.8].
Gathering [Tak, Theorem 6.7], [NT, Theorem B], Lemma 4.9 and Proposition 4.12, we
obtain the following proposition.
Proposition 4.13. Let R be a Noetherian local ring.
(1) If R satisfies the condition (a) in Theorem 4.4, then (Sing R, SSuppR ) is a classifying
support data for Dsg (R) with respect to T(Dsg (R)).
(2) If R satisfies the condition (b) in Theorem 4.4, then (Sing R, SSuppR ) is a classifying
support data for Dsg (R).
Now, the proof of Theorem 4.4 has almost been done.
Proof of Theorem 4.4. Use Proposition 4.13 and Theorem 2.15. Here, note that test objects are preserved by singular equivalences.
Remark 4.14. For a hypersurface ring R, the triangulated category Dsg (R) becomes a pseudo tensor triangulated category (i.e., a tensor triangulated category without unit). It is shown implicitly by Yu [Yu] that for two hypersurfaces R and S, if a
singular equivalence between R and S preserves tensor products, then Sing R and Sing S
are homeomorphic. Indeed, Sing R is reconstructed from Dsg (R) by using its pseudo tensor
triangulated structure.
Since Theorem 4.4 gives a necessary condition for singular equivalences, we can generate
many pairs of rings which are not singularly equivalent. Let us start with the following
lemma.
Lemma 4.15. Let R be a local complete intersection ring with only an isolated singularity and r > 1 an integer. Then the ring R[[u]]/(ur ) is a local complete intersection
ring which is locally a hypersurface on the punctured spectrum, and Sing(R[[u]]/(ur )) is
homeomorphic to Spec R.
Proof. Of course T := R[[u]]/(u^r ) is a local complete intersection ring. The natural inclusion R → T induces a homeomorphism f : Spec T −→ Spec R. Then one can easily check that P = (f (P ), u)T for any P ∈ Spec T and T_P ≅ R_{f (P )}[[u]]/(u^r ). Therefore, T is locally a hypersurface on the punctured spectrum and Sing T = Spec T .
Corollary 4.16. Let R and S be local complete intersection rings which have only isolated
singularities. Assume that Spec R and Spec S are not homeomorphic. Then for any
integers r, s > 1, one has
Dsg (R[[u]]/(u^r )) ≇ Dsg (S[[v]]/(v^s )).
In particular, Dsg (R ∗ R) ≇ Dsg (S ∗ S). Here R ∗ R denotes the trivial extension ring of a commutative ring R.
Proof. From the above lemma, we obtain:
(1) R[[u]]/(u^r ) and S[[v]]/(v^s ) satisfy the condition (a) in Theorem 4.4,
(2) Sing R[[u]]/(u^r ) ≅ Spec R and Sing S[[v]]/(v^s ) ≅ Spec S are not homeomorphic.
Thus, we conclude Dsg (R[[u]]/(u^r )) ≇ Dsg (S[[v]]/(v^s )) by Theorem 4.4. The second statement follows from the isomorphism R ∗ R ≅ R[[u]]/(u^2 ).
The following corollary says that a Knörrer-type equivalence fails over a non-regular
ring.
Corollary 4.17. Let S be a regular local ring. Assume that S/(f ) has an isolated singularity. Then one has
Dsg (S[[u]]/(f, u^2 )) ≇ Dsg (S[[u, v, w]]/(f + vw, u^2 )).
Proof. Sing S[[u]]/(f, u^2 ) ≅ Spec S/(f ) and Sing S[[u, v, w]]/(f + vw, u^2 ) ≅ Spec S[[v, w]]/(f + vw) have different dimensions and hence are not homeomorphic.
In the last part of this paper, we show that singular equivalences localize.
Lemma 4.18. Let R be a d-dimensional Gorenstein local ring and p a prime ideal of R.
Then the full subcategory Xp := {M ∈ Dsg (R) | Mp ≅ 0 in Dsg (Rp )} is thick and there is a triangle equivalence
Dsg (R)/Xp ≃ Dsg (Rp ).
Proof. By using the triangle equivalence Dsg (R) ≃ CM(R), it suffices to show the triangle equivalence
CM(R)/Xp ≃ CM(Rp ),
where Xp := {M ∈ CM(R) | Mp ≅ 0 in CM(Rp )}.
Note that the localization functor Lp : CM(R) → CM(Rp ), M 7→ Mp is triangulated.
Since Xp = Ker Lp , Xp is a thick subcategory of CM(R) and Lp induces a triangulated
functor Lp : CM(R)/Xp → CM(Rp ). Thus, we have only to verify that Lp is dense and
fully faithful.
(i): Lp is dense.
Let U be an Rp -module. Take a finite free presentation Rp^n −δ→ Rp^m → U → 0 of U. Then δ can be viewed as an m × n matrix (αij ) with entries in Rp . Write αij = aij /s for some aij ∈ R and s ∈ R \ p. Then the cokernel M := Coker((aij ) : R^n → R^m ) is a finitely generated R-module and Mp ≅ U. Since Mp is a maximal Cohen-Macaulay Rp -module, we obtain isomorphisms
(Ω_R^{−d} Ω_R^d M)p ≅ Ω_{Rp}^{−d} Ω_{Rp}^d Mp ≅ Mp ≅ U
in CM(Rp ). This shows that the functor Lp is dense.
(ii): Lp is faithful.
Let α : M → N be a morphism in CM(R)/Xp . Then α is given by a fraction f /s
of morphisms f : M → Z and s : N → Z in CM(R) such that the mapping cone
C(s) of s belongs to Xp . Assume Lp (α) = Lp (s)−1 Lp (f ) = (sp )−1 fp = 0. Then fp = 0 in
HomRp (Mp , Zp ). From the isomorphism HomR (M, Z)p ≅ HomRp (Mp , Zp ), there is a ∈ R \ p such that af = 0 in HomR (M, Z). Since a : Zp → Zp is an isomorphism, the mapping cone of the morphism a : Z → Z in CM(R) belongs to Xp . Thus, α = f /s = (af )/(as) = 0 in
CM(R)/Xp . This shows that Lp is faithful.
(iii): Lp is full.
Let g : Mp → Np be a morphism in CM(Rp ) where M, N ∈ CM(R). By the isomorphism
HomR (M, N)p ≅ HomRp (Mp , Np ), there is a morphism f : M → N in CM(R) and a ∈ R \ p
such that g = fp /a. Since the mapping cone of a : N → N is in Xp , we obtain a morphism
f /a : M → N in CM(R)/Xp and Lp (f /a) = fp /a = g. This shows that Lp is full.
Corollary 4.19. Let R and S be complete intersection rings which are locally hypersurfaces on the punctured spectra. If R and S are singularly equivalent, then there is a
homeomorphism ϕ : Sing R → Sing S such that Rp and Sϕ(p) are singularly equivalent for
any p ∈ Sing R.
Proof. As in Lemma 4.18, we may consider the category CM(R).
Let F : CM(R) → CM(S) be a triangle equivalence. Take a homeomorphism ϕ :
Sing R → Sing S given in Proposition 2.14 and Theorem 2.15. Then by construction, it
satisfies
\overline{\{ϕ(p)\}} = ⋃_{M ∈ CM(R), SuppR (M ) ⊆ V(p)} SuppS F (M)
for each p ∈ Sing R. Moreover, the following diagram is commutative:
Th_{T(CM(R))} (CM(R))  −−F̃−→  Th_{T(CM(S))} (CM(S))
        | fSuppR                        | fSuppS
        ↓                               ↓
   Nesc(Sing R)         −−ϕ̃−→     Nesc(Sing S),
where the maps F̃ and ϕ̃ are defined by F̃ (X ) := {N ∈ CM(S) | N ≅ F (M) for some M ∈ X } and ϕ̃(W ) := ϕ(W ), respectively.
Let p be an element of Sing R. Set Wp := {q ∈ Sing R | q ⊈ p}, which is a specialization-closed subset of Sing R. We establish two claims.
Claim 1. gSuppR (Wp ) = Xp .
Proof of Claim 1. Let M ∈ Xp . Since Mp = 0 in CM(Rp ), one has p ∉ SuppR (M). Thus,
SuppR (M) ⊆ Wp and hence M ∈ gSuppR (Wp ).
Next, take M ∈ gSuppR (Wp ). Then SuppR (M) ⊆ Wp means that p does not belong to
SuppR (M). Therefore, Mp = 0 in CM(Rp ) and hence M ∈ Xp .
Claim 2. ϕ(Wp ) = Wϕ(p) := {q ∈ Sing S | q ⊈ ϕ(p)}.
Proof of Claim 2. One can easily check that ϕ is an order isomorphism with respect to the
inclusion relations. Since Sing R \ Wp has a unique maximal element p, ϕ(Sing R \ Wp ) =
Sing S \ ϕ(Wp ) also has a unique maximal element ϕ(p). This shows ϕ(Wp ) = Wϕ(p) .
From the above two claims, we obtain
F̃ (Xp ) = F̃ (gSuppR (Wp )) = gSuppS (ϕ̃(Wp )) = gSuppS (Wϕ(p) ) = Xϕ(p) ,
where the second equality comes from the above commutative diagram and the last equality is shown by the same proof as Claim 1. Consequently, the triangle equivalence F
induces triangle equivalences:
CM(Rp ) ≃ CM(R)/Xp ≃ CM(S)/Xϕ(p) ≃ CM(Sϕ(p) ).
Acknowledgments. The author is grateful to his supervisor Ryo Takahashi for his support and many helpful comments.
References
[AB] L. L. Avramov; R.-O. Buchweitz, Support varieties and cohomology over complete intersections, Invent. Math. 142 (2000), no. 2, 285–318.
[AIL] L. L. Avramov; S. B. Iyengar; J. Lipman, Reflexivity and rigidity for complexes I. Commutative rings, Algebra Number Theory 4 (2010), no. 1, 47–86.
[Bal02] P. Balmer, Presheaves of triangulated categories and reconstruction of schemes, Math. Ann.
324 (2002), no. 3, 557–580.
[Bal05] P. Balmer, The spectrum of prime ideals in tensor triangulated categories, J. Reine Angew.
Math. 588 (2005), 149–168.
[BM] H. Bass; M. P. Murthy, Grothendieck groups and Picard groups of abelian group rings, Ann.
of Math. 86 (1967), 16–73.
[Ben] D. J. Benson, Representations and cohomology II: Cohomology of groups and modules, Cambridge
Stud. Adv. Math. 31, Cambridge University Press (1991).
[BIK] D. J. Benson; S. B. Iyengar; H. Krause, Stratifying modular representations of finite groups,
Ann. of Math. 174 (2011), 1643-1684.
[BCR] D. J. Benson; J. F. Carlson; J. Rickard, Thick subcategories of the stable module category,
Fund. Math. 153 (1997), no. 1, 59–80.
[Buc] R.-O. Buchweitz, Maximal Cohen-Macaulay modules and Tate-cohomology over Gorenstein
rings, unpublished manuscript (1986), http://hdl.handle.net/1807/16682.
[CI] J. F. Carlson; S. B. Iyengar, Thick subcategories of the bounded derived category of a finite
group, Trans. Amer. Math. Soc. 367 (2015), no. 4, 2703–2717.
[CDT] O. Celikbas; H. Dao; R. Takahashi, Modules that detect finite homological dimensions,
Kyoto J. Math. 54 (2014), no. 2, 295–310.
[Che] X.-W. Chen, The singularity category of an algebra with radical square zero, Doc. Math. 16
(2011), 921–936.
[DT] H. Dao; R. Takahashi, The radius of a subcategory of modules, Algebra Number Theory 8 (2014),
no. 1, 141–172.
[Hap] D. Happel, Triangulated categories in the representation theory of finite dimensional algebras,
London Math. Soc. Lecture Note Series 119, Cambridge University Press (1988).
[HPS] M. Hovey; J. H. Palmieri; N. P. Strickland, Axiomatic stable homotopy theory, Mem.
Amer. Math. Soc. 128 (1997), no. 610.
[IW] O. Iyama; M. Wemyss, Singular derived categories of Q-factorial terminalizations and maximal
modification algebras, Adv. Math. 261 (2014), 85–121.
[KS] H. Krause; G. Stevenson, A note on thick subcategories of stable derived categories, Nagoya
Math. J. 212 (2013), 87–96.
[Lin] M. Linckelmann, Stable equivalences of Morita type for selfinjective algebras and p-groups, Math.
Zeit. 223 (1996), 87–100.
[Mor] K. Morita, Duality of modules and its applications to the theory of rings with minimum condition,
Sci. Rep. Tokyo Kyoiku Daigaku, Sect. A 6 (1958), 85–142.
[Muk] S. Mukai, Duality between D(X) and D(X̂) with its application to Picard sheaves, Nagoya Math.
J. 81 (1981), 153–175.
[NT] S. Nasseh; R. Takahashi, Local rings with quasi-decomposable maximal ideal, preprint,
arXiv:1704.00719.
[Orl97] D. Orlov, Equivalences of derived categories and K3 surfaces, J. Math. Sci. 84 (1997), 1361–
1381.
[Orl04] D. Orlov, Triangulated categories of singularities and D-branes in Landau-Ginzburg model,
Proc. Steklov Inst. Math. 246 (2004), no. 3, 227–248.
[Qui] D. Quillen, The spectrum of an equivariant cohomology ring I, Ann. Math. 94 (1971), 549–572.
[Ric] J. Rickard, Morita theory for derived categories, J. London Math. Soc. 39 (1989), 436–456.
[Ste] G. Stevenson, Subcategories of singularity categories via tensor actions, Compos. Math. 150
(2014), no. 2, 229–272.
[Tak] R. Takahashi, Classifying thick subcategories of the stable category of Cohen–Macaulay modules,
Adv. Math. 225 (2010), no. 4, 2076–2116.
[Tho] R. W. Thomason, The classification of triangulated subcategories, Compos. Math. 105 (1997),
no. 1 , 1–27.
[Yos] Y. Yoshino, Cohen-Macaulay modules over Cohen-Macaulay rings, London Mathematical Society
Lecture Note Series, Cambridge University Press, Cambridge, 1990.
[Yu] X. Yu, The triangular spectrum of matrix factorizations is the singular locus, Proc. Amer. Math.
Soc. 144 (2016), no. 8, 3283–3290.
Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya,
Aichi 464-8602, Japan
E-mail address: [email protected]
MILP and Max-Clique based heuristics for the Eternity
II puzzle
Fabio Salassaa , Wim Vancroonenburgb , Tony Wautersb,∗,
Federico Della Crocea , Greet Vanden Bergheb
a
Politecnico di Torino, DIGEP, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
b
KU Leuven, Department of Computer Science, CODeS & iMinds-ITEC,
Gebroeders De Smetstraat 1, 9000 Gent, Belgium
Abstract
The present paper considers a hybrid local search approach to the Eternity
II puzzle and to unsigned, rectangular, edge matching puzzles in general.
Both an original mixed-integer linear programming (MILP) formulation and
a novel Max-Clique formulation are presented for this NP-hard problem. Although the presented formulations remain computationally intractable for
medium and large sized instances, they can serve as the basis for developing heuristic decompositions and very large scale neighbourhoods. As a side
product of the Max-Clique formulation, new hard-to-solve instances are published for the academic research community. Two reasonably well performing
MILP-based constructive methods are presented and used for determining
the initial solution of a multi-neighbourhood local search approach. Experimental results confirm that this local search can further improve the results
obtained by the constructive heuristics and is quite competitive with the
state of the art procedures.
Keywords: Edge matching puzzle, hybrid approach, local search
1. Introduction
The Eternity II puzzle (EII) is a commercial edge matching puzzle in which
256 square tiles with four coloured edges must be arranged on a 16 × 16 grid
∗
Corresponding author
Email address: [email protected] (Tony Wauters)
such that all tile edges are matched. In addition, a complete solution requires
that the ‘grey’ patterns, which appear only on a subset of the tiles, should be
matched to the outer edges of the grid. An illustration of a complete solution
for a small size puzzle, 5 × 5, is provided in Figure 1.
Figure 1: Solution to an Eternity II-like edge matching puzzle of size 5 × 5 (Image generated with the Eternity II editor, http://sourceforge.net/projects/eternityii/, accessed on
24 January 2014)
The EII puzzle was created by Christopher Monckton and released by the
toy distributor Tomy UK Ltd. in July 2007. Along with the puzzle release,
a large cash prize of 2 million USD was announced to be awarded to the
first person who could solve the puzzle. As can be expected, this competition attracted considerable attention. Many efforts were made to tackle this
challenging problem, yielding interesting approaches and results. However,
no complete solution has ever been generated. Meanwhile, the final scrutiny
date for the cash price, 31 December 2010, has passed, leaving the large
money prize unclaimed.
The EII puzzle belongs to the more general class of Edge Matching Puzzles,
which have been shown to be NP-complete [5]. Many approaches to Edge
Matching Puzzles are now available in the literature. Constraint programming approaches [2, 11] have been developed, in addition to metaheuristics
[4, 11, 13], backtracking [12] and evolutionary methods [10]. Other methods
translate the problem into a SAT formulation and then solve it with SAT
solvers [1, 7]. An extensive literature overview on the topic is provided in
[14], while [8] provides a survey on the complexity of other puzzles.
The present paper introduces a novel Mixed-Integer Linear Programming
(MILP) model and a novel Max-Clique based formulation for EII-like puzzles
of size n × n. Both formulations serve as components of heuristic decompositions used in a multi-neighbourhood local search approach. The remainder
of the paper is structured as follows. Section 2 presents the MILP and MaxClique formulations. In Section 3, several hybrid heuristic approaches are
introduced. Computational results are presented in Section 4. Final conclusions are drawn in Section 5.
2. Problem formulations
2.1. Mixed integer linear programming formulation
A novel mixed-integer linear programming model was developed for the EII
puzzle problem. The following notation will be used. The puzzle consists of
an n × n square onto which n² tiles need to be placed. The index t = 1, . . . , n²
is used to refer to tiles. The indices r = 1 . . . n, c = 1 . . . n denote the rows,
resp. columns, of the puzzle board. The index α = 0 . . . 3 refers to the rotation of the tile, i.e. α = 0 means not rotated, α = 1 means rotated clockwise
over 90◦ , etc. The coefficient CTt,α,l (resp. CB, CL, CR) is equal to 1 if tile
t has colour l at its top (resp. bottom, left, right) position when rotated by α.
The decision variables of the MILP model are defined as follows:
x_{t,r,c,α} = 1 if tile t is placed in row r, column c with rotation α, and 0 otherwise;
h_{r,c} = 1 if the right edge of position (r, c) is unmatched, and 0 otherwise;
v_{r,c} = 1 if the bottom edge of position (r, c) is unmatched, and 0 otherwise.
The model is then defined as follows:
Min   Σ_{r=1}^{n} Σ_{c=1}^{n−1} h_{r,c} + Σ_{r=1}^{n−1} Σ_{c=1}^{n} v_{r,c}                                           (1)

s.t.

Σ_{r=1}^{n} Σ_{c=1}^{n} Σ_{α=0}^{3} x_{t,r,c,α} = 1        ∀t = 1, . . . , n²                                          (2)

Σ_{t=1}^{n²} Σ_{α=0}^{3} x_{t,r,c,α} = 1                   ∀r = 1, . . . , n, c = 1, . . . , n                         (3)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CR_{t,α,l} x_{t,r,c,α} − Σ_{t=1}^{n²} Σ_{α=0}^{3} CL_{t,α,l} x_{t,r,c+1,α} ≤ h_{r,c}
                                                           ∀r = 1, . . . , n, c = 1, . . . , n − 1, l = 1, . . . , L   (4)

−Σ_{t=1}^{n²} Σ_{α=0}^{3} CR_{t,α,l} x_{t,r,c,α} + Σ_{t=1}^{n²} Σ_{α=0}^{3} CL_{t,α,l} x_{t,r,c+1,α} ≤ h_{r,c}
                                                           ∀r = 1, . . . , n, c = 1, . . . , n − 1, l = 1, . . . , L   (5)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CB_{t,α,l} x_{t,r,c,α} − Σ_{t=1}^{n²} Σ_{α=0}^{3} CT_{t,α,l} x_{t,r+1,c,α} ≤ v_{r,c}
                                                           ∀r = 1, . . . , n − 1, c = 1, . . . , n, l = 1, . . . , L   (6)

−Σ_{t=1}^{n²} Σ_{α=0}^{3} CB_{t,α,l} x_{t,r,c,α} + Σ_{t=1}^{n²} Σ_{α=0}^{3} CT_{t,α,l} x_{t,r+1,c,α} ≤ v_{r,c}
                                                           ∀r = 1, . . . , n − 1, c = 1, . . . , n, l = 1, . . . , L   (7)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CT_{t,α,0} x_{t,1,c,α} = 1        ∀c = 1, . . . , n                                           (8)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CB_{t,α,0} x_{t,n,c,α} = 1        ∀c = 1, . . . , n                                           (9)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CL_{t,α,0} x_{t,r,1,α} = 1        ∀r = 1, . . . , n                                           (10)

Σ_{t=1}^{n²} Σ_{α=0}^{3} CR_{t,α,0} x_{t,r,n,α} = 1        ∀r = 1, . . . , n                                           (11)

x_{t,r,c,α} ∈ {0, 1}        ∀t = 1, . . . , n², r = 1, . . . , n, c = 1, . . . , n, α = 0, . . . , 3                    (12)

0 ≤ h_{r,c} ≤ 1             ∀r = 1, . . . , n, c = 1, . . . , n − 1                                                    (13)

0 ≤ v_{r,c} ≤ 1             ∀r = 1, . . . , n − 1, c = 1, . . . , n                                                    (14)
The objective function (Expression 1) minimises the number of unmatched
edges in the inner region of the puzzle. Constraints (2) indicate that each
tile must be assigned to exactly one position, with one rotation. Constraints
(3) require that exactly one tile must be assigned to a position. The edge
constraints (4) and (5) force the hr,c variables to take on the value 1 if the
tiles on positions (r, c) and (r, c+1) are unmatched. Similarly, constraints (6)
and (7) do the same for the vertical edge variables vr,c . Finally, constraints
(8) - (11) ensure that the border edges are matched to the gray frame (colour
l = 0).
We point out that constraining the objective function to zero (i.e. no unmatched edges allowed) turns the model into a feasibility problem where
every feasible solution is also optimal. However, preliminary testing showed
that the latter model is only relevant for very small size problem instances.
If the MILP solver needs to be stopped prematurely on the feasibility model,
no solution is returned.
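To make the structure of the model concrete, the following is a minimal sketch of how (1)–(14) could be assembled with the open-source PuLP modelling library. It is not the authors' implementation, and the data layout (CT, CB, CL and CR as dictionaries mapping (t, α, l) to 0/1, with colour 0 the grey frame colour and L the number of inner colours) is an assumption made for illustration only.

    # Sketch of model (1)-(14) with PuLP (not the authors' code).
    import pulp

    def build_model(n, L, CT, CB, CL, CR):
        T, R, C, A = range(1, n * n + 1), range(1, n + 1), range(1, n + 1), range(4)
        prob = pulp.LpProblem("eternity_II", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (T, R, C, A), cat="Binary")
        h = pulp.LpVariable.dicts("h", (R, range(1, n)), lowBound=0, upBound=1)
        v = pulp.LpVariable.dicts("v", (range(1, n), C), lowBound=0, upBound=1)
        # (1) minimise the number of unmatched inner edges
        prob += (pulp.lpSum(h[r][c] for r in R for c in range(1, n))
                 + pulp.lpSum(v[r][c] for r in range(1, n) for c in C))
        for t in T:   # (2) each tile is placed exactly once
            prob += pulp.lpSum(x[t][r][c][a] for r in R for c in C for a in A) == 1
        for r in R:   # (3) each cell holds exactly one tile
            for c in C:
                prob += pulp.lpSum(x[t][r][c][a] for t in T for a in A) == 1
        for l in range(1, L + 1):   # (4)-(7) mismatch indicators per colour
            for r in R:
                for c in range(1, n):
                    right = pulp.lpSum(CR[t, a, l] * x[t][r][c][a] for t in T for a in A)
                    left = pulp.lpSum(CL[t, a, l] * x[t][r][c + 1][a] for t in T for a in A)
                    prob += right - left <= h[r][c]
                    prob += left - right <= h[r][c]
            for r in range(1, n):
                for c in C:
                    bottom = pulp.lpSum(CB[t, a, l] * x[t][r][c][a] for t in T for a in A)
                    top = pulp.lpSum(CT[t, a, l] * x[t][r + 1][c][a] for t in T for a in A)
                    prob += bottom - top <= v[r][c]
                    prob += top - bottom <= v[r][c]
        for c in C:   # (8)-(11) grey edges must face the frame
            prob += pulp.lpSum(CT[t, a, 0] * x[t][1][c][a] for t in T for a in A) == 1
            prob += pulp.lpSum(CB[t, a, 0] * x[t][n][c][a] for t in T for a in A) == 1
        for r in R:
            prob += pulp.lpSum(CL[t, a, 0] * x[t][r][1][a] for t in T for a in A) == 1
            prob += pulp.lpSum(CR[t, a, 0] * x[t][r][n][a] for t in T for a in A) == 1
        return prob, x, h, v

Calling prob.solve() with any solver supported by PuLP (CBC by default, or CPLEX as used in the paper) then returns the optimal placement for small boards.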
2.2. Clique formulation
The EII puzzle itself is a decision problem and can be modelled as (i.e.
it reduces to) the well known decision version of the clique problem [3] as
follows. Given a parameter k and an undirected graph G = (V, E), the clique
problem calls for finding a subset of pairwise adjacent nodes, called a clique,
with a cardinality greater than or equal to k. Let the nodes of the graph
correspond to variables xt,r,c,α from the formulation introduced in Section 2.1.
Each node thus represents a tile in a given position on the puzzle and with
a given rotation α. The nodes are connected iff there is no conflict between
the nodes in the puzzle. Possible causes of conflicts are:
• mismatched colours on adjacent positions
• same tile assigned to different positions
• same tile assigned to the same position with different rotations
• different tiles assigned to the same position.
The objective is to find a clique of size n2 , where n is the size of the puzzle.
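The construction can be sketched in a few lines of Python (again, not the authors' code). The rotation convention and the handling of the grey frame, where placements whose grey edges do not line up with the border are simply discarded instead of being connected by conflict edges, are illustrative assumptions; tiles are given as (top, right, bottom, left) colour tuples with colour 0 the grey frame colour.

    from itertools import product

    def rotate(tile, alpha):
        # clockwise rotation by alpha quarter turns (one possible convention)
        return tile[-alpha:] + tile[:-alpha] if alpha else tile

    def build_compatibility_graph(tiles, n):
        nodes = []
        for (t, tile), r, c, a in product(enumerate(tiles), range(n), range(n), range(4)):
            top, right, bottom, left = rotate(tile, a)
            # keep only placements whose grey edges lie exactly on the frame
            if (top == 0) != (r == 0) or (bottom == 0) != (r == n - 1):
                continue
            if (left == 0) != (c == 0) or (right == 0) != (c == n - 1):
                continue
            nodes.append((t, r, c, a, (top, right, bottom, left)))
        edges = set()
        for i, (t1, r1, c1, a1, s1) in enumerate(nodes):
            for j in range(i + 1, len(nodes)):
                t2, r2, c2, a2, s2 = nodes[j]
                if t1 == t2 or (r1, c1) == (r2, c2):
                    continue  # same tile used twice, or two tiles on one position
                if (r1, c1 + 1) == (r2, c2) and s1[1] != s2[3]:
                    continue  # horizontal neighbours with mismatching colours
                if (r1 + 1, c1) == (r2, c2) and s1[2] != s2[0]:
                    continue  # vertical neighbours with mismatching colours
                if (r2, c2 + 1) == (r1, c1) and s2[1] != s1[3]:
                    continue
                if (r2 + 1, c2) == (r1, c1) and s2[2] != s1[0]:
                    continue
                edges.add((i, j))
        return nodes, edges

A clique of size n² in this graph is exactly a fully matched solution; for the small boards of Table 1 the resulting graph can be handed to any general-purpose maximum clique code.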
Puzzle size                      3x3        4x4        5x5         6x6          7x7           8x8
Optimal solution                  12         24         40          60           84           112
Clique
  Number of nodes                 30        138        478        1290         2910          5770
  Number of edges                291       6707      86184      674512      3620565      14751304
  Graph density                66.9%      70.9%      75.6%       81.1%        85.5%         88.6%
  q = 1,000,000     E (T)   12 (0.2)   24 (0.3)   40 (0.8)    57 (1.4)     78 (2.5)     102 (4.8)
  q = 10,000,000    E (T)   12 (1.9)   24 (3.8)   40 (7.6)   60 (13.5)    80 (22.4)    106 (35.7)
  q = 50,000,000    E (T)   12 (9.3)  24 (19.3)  40 (37.8)   60 (67.0)   82 (110.5)   106 (172.4)
MILP
  Number of variables            336       1048       2540        5244         9688         16496
  Number of constraints          126        288        630        1056         1638          2624
  1 thread, 3600s   E (T)   12 (0.1)   24 (0.1)  40 (16.7)  60 (243.5)  81 (3600.0)  103 (3600.0)
  4 threads, 3600s  E (T)   12 (0.1)   24 (0.1)   40 (1.2)  60 (158.5)  81 (3600.0)  103 (3600.0)

Table 1: Results obtained with the maximum clique formulation solved with the algorithm by [6], and the MILP formulation solved with CPLEX, on small size edge matching puzzle instances.
2.3. Comparison of the MILP model and the Clique model
The applicability of the MILP model and the Clique model is investigated in what follows. Initial testing was performed on a set of small puzzle
instances, ranging from 3 × 3 up to 8 × 8 (refer to Section 4 for more information on these instances). The MILP model was implemented using
CPLEX 12.6. A state of the art heuristic was used for solving the maximum clique problem [6], kindly provided by its authors. The heuristic has
only one parameter, i.e. the number of selections q. The computing time
of the algorithm is linear with respect to q. We tested the heuristic with
q ∈ {1, 000, 000; 10, 000, 000; 50, 000, 000}. Both the MILP model and the
Max Clique heuristic were tested on a modern desktop pc1 .
Table 1 shows the results obtained with the max clique formulation and
the MILP formulation. For each instance, we report for the clique formulation: the number of nodes, the number of edges, the optimal solution (namely
the max. number of matching edges), the density, the best number of matching edges E and the average computing time T (in seconds) for 10 runs. The
number of variables and constraints is reported for the MILP formulation.
Solutions attaining the value in the ‘Optimal solution’ row of Table 1 are optimal. The results show that instances
up to size 6 × 6 can be easily solved using a state of the art maximum clique
1 Intel Core i7-2600 CPU @ 3.40 GHz
algorithm. Instances of size 7 × 7 could not be solved completely, even when
the algorithm was executed with higher values of q and more runs. The
MILP is also able to solve up to size 6 × 6. However, the clique formulation
is significantly faster from size 6 × 6 upwards.
Note that the edge matching puzzles correspond to large, difficult clique instances, for which current state-of-the-art max clique solvers are not able to find the optimal solution. We provide the max clique instances corresponding to the 3 × 3 to 9 × 9 puzzles to the academic community2 . Larger size
instances are hard to manage. The 10 × 10 graph file, for example, is larger
than 1 GB.
3. Solution Approaches
Both the MILP model and the clique formulation presented in the previous
section proved to be computationally intractable for medium sized instances.
The size appears to be restricted to 7 × 7 and 8 × 8 when the execution time
is limited to one day. The true EII puzzle instance is still far beyond the
grasp of these models. However, these models can serve as the basis for some
well performing heuristics, presented in the following paragraphs.
3.1. MILP-based greedy heuristic
A MILP-based greedy constructive heuristic has been developed for the problem studied here. The heuristic is based on subproblem optimisation. The
puzzle is divided into regions, e.g. by considering individual rows/columns or
rectangular regions. Regions are then consecutively constructed by employing a variant of the MILP model presented in Section 2.
First we introduce the notion of a partial solution S∗ : T∗ → R∗ , in which a subset T∗ of the tiles T = {1, . . . , n²} has been assigned to a subset R∗ of the positions R = {(r, c) | r = 1, . . . , n, c = 1, . . . , n}. Given a partial solution S∗ , model (1)–(14) can be modified such that it only considers the positions in a region R′ ⊆ R \ R∗ , and it only aims to assign tiles T′ ⊆ T \ T∗ (i.e. tiles that have not been assigned elsewhere). In addition, we restrict R′ to a rectangular region, denoted by (r′_min , c′_min ) × (r′_max , c′_max ) (i.e. the min/max positions of the region).
2 The instances can be downloaded from https://dl.dropbox.com/u/24916303/Graph_Pieces.zip. A generator for these instances is available upon request from the authors.
Figure 2 illustrates how model (1)–(14) can be modified to solve a region
(1, 4) × (4, 8), given a partial solution on (1, 1) × (4, 4). In this example, it is
required to select 16 of the available 64 − 16 = 48 tiles (16 tiles are already
assigned to region (1, 1) × (4, 4)) in such a way that the unmatched edges are
minimised. Hence, in order to consider region (1, 4) × (4, 8), the objective
function is modified as follows
Min   Σ_{r=1}^{4} Σ_{c=4}^{7} h_{r,c} + Σ_{r=1}^{3} Σ_{c=4}^{8} v_{r,c}      (15)
Only 16 of the remaining 48 tiles must be selected and assigned to the region
and therefore constraints (2) are modified as follows. Note that the inequality
indicates that not all tiles will be selected.
Σ_{r=1}^{4} Σ_{c=4}^{8} Σ_{α=0}^{3} x_{t,r,c,α} ≤ 1      ∀t ∈ T \ T∗      (16)
Similarly, constraints (3–14) are also suitably modified in order to take into
account the specific region to be considered. Note that the edge constraints
forcing the values of the hr,c and vr,c variables also hold for rows and columns
matching the boundaries of previously solved regions. This enables building
a solution with only a few unmatched edges between region boundaries.
This partial optimisation model can be applied to solve all regions sequentially,
after which region R2 , disjoint from region R1 , is optimised, and so on. The
variables corresponding with the region are then optimally assigned by the
MILP solver. Algorithm 1 presents pseudocode of this approach.
Algorithm 1 Greedy heuristic
Require: R = {R1 , R2 , R3 , . . . , RK }      ▷ A decomposition of R into K regions Ri
  T0∗ ← ∅, R0∗ ← ∅, S0∗ ← ∅      ▷ The initial partial solution S∗ has no tiles assigned
  for i = 1, . . . , K do
    Si∗ , Ti∗ , Ri∗ ← Apply MILP model to region Ri , given S∗_{i−1} , T∗_{i−1} and R∗_{i−1}      ▷ Get a new partial solution Si∗ , with tiles assigned to Ri
  end for
  return SK∗
For each puzzle’s size, differently sized subsets of tiles have been tested
to assess the quality of the approach. Preliminary tests of regions varying
[Figure 2 about here: the left half of an 8 × 8 board, region (1, 1) × (4, 4), shows the partial solution; the right half is the empty region whose positions carry the variables x_{t,r,c,α} (rows 1–4, columns 5–8), together with the edge variables h_{r,c} and v_{r,c} on and across the region boundary.]
Figure 2: Illustration of how model (1)–(14) can be modified such that it only considers the positions in a region (1, 4) × (4, 8), given a partial solution S∗ on region (1, 1) × (4, 4).
from 2 by 2 tiles (size 4) to 16 by 2 tiles (size 32) have been performed on
the EII puzzle instance. This preliminary analysis revealed that the CPU
time required at each iteration of the greedy heuristic limits the subset size
to 32 tiles. This roughly corresponds to 32500 MILP variables for the first
region of the real EII puzzle. Clearly, an increased number of tiles leads to
better results. However, more CPU time is needed to compute the optimal
solution, limiting the use in any hybrid framework.
3.2. MILP-based backtracking constructive heuristic
A backtracking version of the greedy heuristic has also been developed. The
main idea, namely building a complete solution by constructing optimal regions, is the same as for the greedy heuristic. The backtracking version,
however, restricts the optimal value of each subproblem to zero. All tiles in
the region should match both internally and with respect to the tiles outside
the region. Whenever a subproblem is determined to be infeasible (i.e. no
completely edge matching region can be constructed), the procedure backtracks to the previous region in order to find a new assignment in that region.
This may afterwards enable constructing a feasible assignment in the next
region. If not, then the process is repeated until the backtracks are sufficient
to find a complete solution.
Model (1)–(14), suitably modified, is again used to build partial solutions.
Let Ri be the current region considered by the procedure and Si∗ the related partial solution once the corresponding MILP model is solved. Whenever the lower bound of the MILP model related to region Ri is detected to be greater than zero, optimisation of region Ri is stopped. Instead, the previous region Ri−1 is reconsidered in order to obtain a new partial solution S′∗_{i−1} , again with value 0. Let X∗_{i−1} be the set of variables x_{t,r,c,α} having value 1 in solution S∗_{i−1} . The previous partial solution S∗_{i−1} must be cut off when searching for solution S′∗_{i−1} . The following new constraint is added to the model:

Σ_{t,r,c,α : x_{t,r,c,α} ∈ X∗_{i−1}} x_{t,r,c,α} ≤ |X∗_{i−1}| − 1      (17)
The rationale is to force at least one of the variables of the set X∗_{i−1} to be equal to
zero. If no solution of the previous region Ri−1 can lead to a zero lower bound
in the current region Ri , the procedure backtracks further and searches for a
new solution for region Ri−2 (and so on).
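As an illustration of constraint (17), the sketch below (ours, not the authors' implementation) shows how such a no-good cut could be added with the PuLP modelling library; the variable indexing and the set `X_prev`, playing the role of X*_{i−1}, are assumptions for the example.

```python
import pulp

# toy setup: binary placement variables x[t, r, c, a] (tile, row, column, rotation)
keys = [(t, r, c, a) for t in range(4) for r in range(2) for c in range(2) for a in range(4)]
model = pulp.LpProblem("region_backtrack", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", keys, cat="Binary")

# X_prev plays the role of X*_{i-1}: variables at value 1 in the previous partial solution
X_prev = [(0, 0, 0, 0), (1, 0, 1, 2), (2, 1, 0, 1), (3, 1, 1, 3)]

# constraint (17): at least one of these assignments must change in the new solution
model += pulp.lpSum(x[k] for k in X_prev) <= len(X_prev) - 1
```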
Due to the enumerative nature, this procedure can lead to incomplete solutions despite long computation times. We decided to limit the backtracking procedure to a fixed time limit, after which the greedy heuristic continues until a complete solution is generated. This backtracking heuristic is sketched in Algorithm 2, in which a recursive method (BacktrackingHeuristic) attempts to solve the current region Ri, given partial solution S*_{i−1} obtained in the previous region Ri−1. If the lower bound lb of the current region is greater than 0, the method backtracks to the previous level. However, if the lower bound is still 0 (and a perfectly matched assignment is found), the heuristic attempts to solve the next region. This will continue calling recursively until the puzzle is solved, or shown infeasible given the current assignments in S*_{i−1}. In the latter case, the current partial solution S*_{i−1} will be excluded and a new partial solution S'*_{i−1} will be constructed, different from S*_{i−1} and any other previously excluded partial solution.
If a timeout is reached, the method will continue with the best partial
solution and solve the remaining regions with the greedy heuristic, discussed
in the previous section.
3.3. A multi-neighbourhood local search approach
A multi-neighbourhood local search approach has been developed to improve
the solutions generated by the constructive heuristics (or any random solution). The key idea is to test, after an initial, complete solution is generated
by the heuristics, whether a controlled-size neighbourhood can still improve
the current solution. This local search method is a Steepest Descent search
that tries to improve a solution with the following neighbourhoods: Border
Optimisation, Region Optimisation, Tile Assignment and Tiles Swap and
Rotation. We refer to Figure 3 for an illustration of the regions considered
by these neighbourhoods.
The Border Optimisation (BO) neighbourhood only considers placing
tiles in the border, while all the tiles in the inner part are fixed. The decomposition tries to find the optimal border in terms of matching edges,
also considering the fixed tiles on the adjacent inner part. Correspondingly,
model (1)–(14) is modified in such a way that the inner tile/positions variables are fixed to their current value. Only the border tile/positions variables
can change value. This subproblem corresponds to a one-dimensional edgematching problem. Preliminary computational tests indicated that the related MILP model could always be solved. Solutions for the largest instances,
Algorithm 2 BacktrackingHeuristic
Require: R = {R1, R2, R3, ..., RK}    . A decomposition of R in K regions Ri
Require: i    . Current recursive level (i = 1, ..., K)
Require: S*_{i−1} : T*_{i−1} ↦ R*_{i−1}    . Partial solution of the previous level (S*_0 is the initial empty partial solution)
  excludedsolutions ← ∅    . Partial solutions not leading to feasible solutions
  while not timeout do
    S*_i ← OptimizeRegion(S*_{i−1}, Ri, excludedsolutions)
    if lb > 0 then
      return S*_{i−1}    . No feasible solution in current level (backtrack)
    else
      S*_{i+1} ← BacktrackingHeuristic(R, i + 1, S*_i)
      if S*_{i+1} = S*_i then    . S*_i does not lead to a feasible solution
        excludedsolutions ← excludedsolutions ∪ S*_i
      else
        S* ← S*_{i+1}    . S*_{i+1} is the complete solution
        return S*
      end if
    end if
  end while
  S* ← Greedy(S*_i, R*_i)
  return S*
such as the original EII puzzle, can be generated within little computation
time. When the BO neighborhood is considered, the corresponding MILP
model is solved, returning a solution at least as good as the current solution
and consisting of an optimal border with respect to the (n − 2) × (n − 2)
inner region.
The Region Optimisation (RO) neighbourhood relates to the optimisation of a smaller region inside the puzzle and only considers the tiles of this
region in the puzzle. Correspondingly, given the current solution, model (1)–
(14) is suitably modified in such a way that the tile/position variables outside
the region are fixed to their current value. Only the region’s tile/position variables can change value. The RO neighborhood can also be tackled by means
of the Max-Clique formulation by generating a graph only containing nodes
corresponding to tile/position assignments in the specified region. However,
only feasible tile/position assignments should be considered and nodes conflicting with assignments adjacent to (but outside) the region should not be
added to the graph. We recall that the purpose of the model is to find complete assignments, that is, without any unmatched edges. However, given the
tiles in the considered region, it may not be feasible to find such a solution.
In this case, holes are left in the region to which the remaining, unassigned
tiles should be assigned. The related MILP region model is solved where all
assigned tile/position variables are fixed to the value determined by the Max-Clique solver. When the RO neighbourhood is considered, the local search procedure samples regions of fixed size in the current solution under consideration. For small sizes, the Max-Clique model (solved heuristically) is faster than the MILP model. Therefore, the RO neighbourhood is always addressed
by means of the Max-Clique formulation where the MILP formulation is only
used for completing the solution whenever holes are left in the region.
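The sketch below (our illustration, not the authors' code) indicates how such a region clique problem could be set up with networkx; `compatible` is a hypothetical predicate that must check that two tile/position/rotation assignments use different tiles and cells and match colours on any shared edge.

```python
import itertools
import networkx as nx
from networkx.algorithms import approximation as approx

def solve_region_by_clique(candidates, compatible):
    """Max-Clique view of the RO neighbourhood (sketch).

    `candidates` are the feasible (tile, row, col, rotation) assignments inside the
    region; `compatible(a, b)` is a placeholder returning True iff the two
    assignments can coexist in a perfectly matched region.
    """
    G = nx.Graph()
    G.add_nodes_from(range(len(candidates)))
    for i, j in itertools.combinations(range(len(candidates)), 2):
        if compatible(candidates[i], candidates[j]):
            G.add_edge(i, j)
    clique = approx.max_clique(G)            # heuristic maximum clique
    return [candidates[i] for i in clique]   # partial assignment; holes go to the MILP
```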
In the Tile Assignment (TA) neighbourhood, k tiles are removed from
non-adjacent positions (diagonally adjacent is allowed) and optimally reinserted, thereby minimising the number of unmatched edges. The related subproblem corresponds to a pure bipartite weighted matching problem, which
is optimally solvable by e.g. the Hungarian Algorithm [9]. The TA neighbourhood was first introduced by Schaus and Deville [11] who called it a very
large neighbourhood. Wauters et al. [14] developed a probabilistic version of
the TA neighbourhood that sets a higher probability to selecting tiles with
many unmatched edges. The latter TA variant was applied in the present
paper. The TA neighbourhood treats inner and border moves separately: it is prohibited to reassign border pieces to the inner region and vice versa.
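A minimal sketch of the resulting assignment subproblem, using SciPy's implementation of the Hungarian method; the cost function is a hypothetical placeholder for the number of unmatched edges of a tile placed at a position, given the tiles that stay fixed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reinsert_tiles(removed_tiles, free_positions, mismatch_cost):
    """TA move (sketch): optimally re-assign k removed tiles to k free positions.

    `mismatch_cost(tile, position)` is a placeholder returning the number of
    unmatched edges (over the best rotation) of `tile` placed at `position`.
    """
    cost = np.array([[mismatch_cost(t, p) for p in free_positions]
                     for t in removed_tiles])
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    pairs = [(removed_tiles[r], free_positions[c]) for r, c in zip(rows, cols)]
    return pairs, cost[rows, cols].sum()              # assignment and its total mismatch
```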
An extension to the TA neighbourhood is also tested. In particular, a “checkers” configuration of selected tiles is studied, i.e. all tiles on the board that are diagonally adjacent. We denote this extension Black and White (BW).
The local search procedure iterates in this neighbourhood, iteratively changing between “black” and “white” positions and solving the related bipartite
weighted matching problem until no more improvements are found.
Finally, the Tiles Swap and Rotation (TSR) neighbourhood is a standard
local search swap operator, in this case swapping the assignment of two tiles,
trying all possible rotations as well. The local search procedure exhaustively
searches the neighbourhood until a local optimum is reached.
4. Computational Results
This section provides computational results obtained by the multi-neighbourhood
local search approach on the Eternity II puzzle as well as the 10×10, 12×12,
14 × 14 and 16 × 16 instances that were used in the META 2010 EII contest3 .
The latter instances serve as an interesting test set for comparison, due to
the availability of some results from the contest. In addition, to the best
of our knowledge, the complete solutions of these instances are not publicly
available. The 3 × 3 to 9 × 9 instances used in Section 2 also originate from
this set.
All tests were performed on a 40-core Intel Nehalem cluster with 120 GB of RAM, with each core running at 3.46 GHz with 8 MB cache. Computational resources were provided by DAUIN's HPC Initiative.4 This cluster was used to
solve different instances/runs in parallel in order to reduce the total time
required to run all tests. Each individual test was run on a single processing
core, thus no parallelism was employed in the algorithms. All MILP models
are solved using CPLEX 12.4.
3. 3rd International Conference on Metaheuristics and Nature Inspired Computing, Djerba Island, Tunisia, October 27th–31st 2010 – Eternity 2 contest: http://www2.lifl.fr/META10/pmwiki.php?n=Main.Contest.
4. For more details see http://www.dauin-hpc.polito.it.
Figure 3: Illustration of the regions involved in the neighbourhood operations: (a) optimizing the border (BO); (b) optimizing a rectangular region (RO); (c) optimizing non-adjacent tile assignments (TA); (d) optimizing diagonally adjacent tile assignments in a checkers fashion (BW); (e) swapping two tiles and possibly rotating them (TSR).
Parameter Name | Value
TA.K     | 16 tiles
TA.N     | 1000 it
Clique.W | 6 cols
Clique.H | 6 rows
Clique.N | 10 it

Table 2: Parameter Settings
Table 2 summarizes the parameter settings of the multi-neighbourhood local
search approach. The local search procedure starts either from a random
solution or from a solution obtained by the constructive heuristics. The
algorithm cycles through the proposed neighbourhoods in the following sequence: TA (for T A.N iterations with sample size T A.K), BO (for one iteration), BW (till local optimum), TSR (till local optimum) and finally RO
by means of the Max-Clique formulation (for Clique.N iterations with rectangular sample size Clique.W × Clique.H). This sequence was determined
experimentally, though the difference in performance between sequences was
very limited. At the end of the multi-neighbourhood step, the final solution
is a local minimum with respect to all considered neighbourhoods.
Table 3 shows the results for twenty runs of the MILP-based greedy heuristic
and the MILP-based backtracking heuristic for different region sizes on all
the problem instances. A timeout of 1200 seconds was set for the backtracking heuristic for the 10 × 10 instance, 1800 seconds for the 12 × 12 instance,
2400 seconds for the 14×14 instance and 3600 seconds for the 16×16 and EII
instances. The greedy heuristic (executed by itself, or after the backtracking
heuristic) is executed until all regions are solved. The table also shows the
results of both constructive methods after subsequent optimisation by the
local search heuristic.
In general, both constructive heuristics generate better results when larger
regions are used. This clearly affects the CPU time needed to compute optimal solutions for each region. By comparing the results of the two heuristics
(without the local search phase), it seems that the backtracking procedure
does not strongly dominate (on maximum, average and minimum values) the
greedy one, while consuming all the available time. This dominance tends to
be more evident for small puzzles and region sizes, while for larger instances
with solutions generated by larger regions, the gap becomes smaller.
In almost all cases, the local search procedure manages to improve the results
of the constructive heuristics by several units, indicating that these initial solutions are not local optima with respect to the considered neighbourhoods.
We conclude that many neighbourhoods in a complex structure are effective
for improving these greedy constructive solutions. Table 4 shows the performance of the local search procedure starting from a random initial solution of
poor quality. The procedure can achieve good quality results for the 10 × 10
instance, but not for larger instances. This can easily be related to the size
of the RO neighborhood. As it is quite large with respect to the puzzle size
in the 10 × 10 instance, it is able to optimize a large part of the puzzle. However, this ratio becomes smaller and is thus less effective for larger instances.
Finally, Table 5 compares the best published results with the results obtained
by the hybrid local search procedure. The CPU times refer to the considered
time limits. Table 5 also reports a large test of the procedure, where the
best performing configuration (backtracking+LS) was run 100 times within
a doubled execution time limit. Larger execution times (using CPLEX 12.4
as ILP solver) do not induce further improvements of the results. We note
that some of the entries of Table 5 are missing. Many of the approaches only
deal with a subset of the considered instances. Only three studies [4, 13, 14]
report results for the 10 × 10 to 16 × 16 instances that were tested in this
paper. Some approaches [11, 10, 12] were only applied to the real EII game
puzzle.
The algorithm reported in [11] was executed on a CPU Intel Xeon(TM)
2.80GHz, with computation time 24 hours. The best score over 20 runs
equals 458.
[10] obtained a best score of 371. No indication was provided on the computer and the required CPU time.
The algorithm of [4] was run on a PC Pentium Core 2 Quad (Q6600), 2.4
GHz, with 8 GB of RAM. It considered EII style problems (but not the real
puzzle) with sizes 10 × 10, 12 × 12, 14 × 14 and 16 × 16. The corresponding
time limits were 1200, 1800, 2400 and 3600 seconds respectively and the entries of Table 5 report the best solution obtained over 30 runs.
The algorithm of [13] addressed the same instances with the same time limits and number of runs as [4]. It was tested on a personal computer with a 1.8 GHz CPU and 1 GB RAM. The tests were performed on an Intel Core 2 Duo @ 3 GHz with 4 GB of RAM.
The algorithm of [14] was tested on all the instances from [4] and [13] and
also on the EII real game puzzle. The entry reports the best result obtained
over 30 runs with a time limit of 3600 seconds for EII.
Finally, the algorithm of [12] was tested on the EII real game puzzle only
running on a grid computing system over a period of several weeks/months
not explicitly indicated by the authors.
The results show that the algorithm is competitive with the state of the
art, obtaining top results for the 10 × 10 instance in a similar time frame
as the other algorithms. Most interesting, the initial solution constructed
by the greedy and backtracking heuristics is already of high quality, leaving
only a limited gap from the optimal solution. Therefore, we expect that these
methods may serve as the basis for reaching new top results. The best result for the official EII puzzle instance, 467, obtained using a slipping-tile, scan-row backtracking algorithm [12], is still out of our current reach. However, that algorithm was highly tailored to the EII puzzle instances, used precomputed sequences and was run over the course of several weeks/months (see http://www.shortestpath.se/eii/eii details.html). A direct comparison with the approach presented here is therefore partially misleading. Among the other existing approaches, only [14] proves slightly superior to ours. However, our approach should become more competitive along with the expected
performance improvement of MILP solvers over the years. Clearly, solving
larger subregions in both the constructive heuristics (greedy and backtracking) will lead to better initial solutions. In addition, the effectiveness of the
MIP-based local search neighbourhoods is expected to improve when larger
regions can be solved. If performance improvements allow ILP solvers to
address instances of size 8 × 8 or even 9 × 9 in a reasonable amount of time,
it may safely be assumed that the proposed approach will lead to improved
results competitive with the other state of the art approaches.
5. Conclusions
The present work introduced a hybrid approach to the Eternity II puzzle.
A MILP formulation is related to the puzzle’s optimisation version, where
the total number of unmatched edges should be minimised. It is shown that
the Eternity II puzzle can be modelled as a clique problem, providing, as a
byproduct of this work, new hard instances of the maximum clique problem
to the community. Preliminary testing made clear that these models cannot handle large instances (such as the original EII puzzle), as they quickly become computationally intractable. Therefore, these models
were used as the basis for heuristic decompositions, which could then be used
in a hybrid approach.
A greedy and a backtracking constructive heuristic have been designed, which
strongly rely on the capability of optimally solving a specific region of the
puzzle. Within a reasonable time limit, high quality solutions can be generated using these heuristics. A multi-neighbourhood local search approach
has also been proposed. By applying a set of different neighbourhoods, the
local search procedure manages to improve upon the initial solutions generated by the constructive heuristics and reaches solutions competitive to the
best available results.
These results confirm that a novel and clever use of mathematical models and solvers/heuristics is effective for large problems, which cannot be solved all at once by a single MILP solver. We believe that hybridizing local search approaches and mathematical programming techniques in a matheuristic context is the key to breaking the intractability of hard problems such as the EII puzzle.
References
[1] Ansótegui C, Béjar R, Fernàndez C, Mateu C. Edge matching puzzles as
hard SAT/CSP benchmarks. In: CP ’08 Proceedings of the 14th international conference on Principles and Practice of Constraint Programming,
560-565, 2008.
[2] Benoist T, Bourreau E. Fast global filtering for Eternity II. Constraint
Programming Letters, 3, 35-50, 2008.
[3] Bomze IM, Budinich M, Pardalos PM, Pelillo M. The maximum clique problem. In: Du DZ, Pardalos PM (eds) Handbook of combinatorial optimization. Kluwer Academic, Dordrecht, 174, 1999.
[4] Coelho I, Coelho B, Coelho V, Haddad M, Souza M, Ochi L. A general variable neighborhood search approach for the resolution of the Eternity II puzzle. In: Proceedings of the 3rd International Conference on Metaheuristics and Nature Inspired Computing, META'10. 2010, http://www2.lifl.fr/META10/proceedings//meta20100 submission 157.pdf.
[5] Demaine ED, Demaine ML. Jigsaw puzzles, edge matching, and polyomino packing: Connections and complexity. Graphs and Combinatorics, 23, 195-208, 2007.
[6] Grosso A, Locatelli M, Pullan W. Simple ingredients leading to very efficient heuristics for the maximum clique problem. Journal of Heuristics,
14, 587-612, 2008.
[7] Heule MJH. Solving edge-matching problems with satisfiability solvers.
In: Proceedings of the Second International Workshop on Logic and
Search (LaSh 2008), 88-102, 2008.
[8] Kendall G, Parkes A, Spoerer K. A survey of NP-Complete puzzles.
International Computer Games Association Journal, 31, 13-34, 2008.
[9] Kuhn HW. The Hungarian Method for the assignment problem. Naval
Research Logistics Quarterly, 2, 83-97, 1955.
[10] Muñoz J, Gutierrez G, Sanchis A. Evolutionary genetic algorithms in
a constraint satisfaction problem: Puzzle Eternity II. In: Cabestany
J, Sandoval F, Prieto A, Corchado J, editors. Bio-Inspired Systems:
Computational and Ambient Intelligence; LNCS 5517, 720-727, 2009.
[11] Schaus P, Deville Y. Hybridization of CP and VLNS for Eternity II. In: JFPC'08 Quatrième Journées Francophones de Programmation par Contraintes. 2008, http://www.info.ucl.ac.be/∼yde/Papers/JFPC2008 Eternity en.pdf.
[12] Verhaard L. http://www.shortestpath.se/eii/index.html; 2008.
[13] Wang WS, Chiang TC. Solving eternity-ii puzzles with a tabu
search algorithm. In: Proceedings of the 3rd International Conference on Metaheuristics and Nature Inspired Computing. 2010,
http://www2.lifl.fr/META10/proceedings//meta20100 submission 161.pdf.
[14] Wauters T, Vancroonenburg W, Vanden Berghe G. A guide-and-observe
hyper-heuristic approach to the Eternity II puzzle. Journal of Mathematical Modelling and Algorithms, 11, 217-233, 2012.
INSTANCE | START | Region Size | MAX | AVG | MIN | Time Avg. (s)
10x10 | greedy           | 1 x 10 | 165 | 161.10 | 158 | 4.29
10x10 | greedy+LS        | 1 x 10 | 168 | 164.90 | 161 | 1204.29
10x10 | greedy           | 2 x 10 | 170 | 166.35 | 164 | 25.93
10x10 | greedy+LS        | 2 x 10 | 170 | 167.05 | 164 | 1225.93
10x10 | backtracking     | 1 x 10 | 169 | 165.65 | 162 | 1200.24
10x10 | backtracking+LS  | 1 x 10 | 172 | 167.75 | 165 | 2400.24
10x10 | backtracking     | 2 x 10 | 170 | 167.65 | 164 | 1207.06
10x10 | backtracking+LS  | 2 x 10 | 171 | 168.10 | 165 | 2407.06
12x12 | greedy           | 1 x 12 | 245 | 241.20 | 239 | 6.83
12x12 | greedy+LS        | 1 x 12 | 247 | 244.00 | 240 | 1806.83
12x12 | greedy           | 2 x 12 | 249 | 247.00 | 244 | 101.02
12x12 | greedy+LS        | 2 x 12 | 250 | 247.50 | 245 | 1901.02
12x12 | backtracking     | 1 x 12 | 248 | 245.35 | 242 | 1801.51
12x12 | backtracking+LS  | 1 x 12 | 250 | 247.75 | 246 | 3601.51
12x12 | backtracking     | 2 x 12 | 249 | 247.60 | 246 | 1868.85
12x12 | backtracking+LS  | 2 x 12 | 252 | 248.45 | 246 | 3668.85
14x14 | greedy           | 1 x 14 | 338 | 333.00 | 329 | 10.43
14x14 | greedy+LS        | 1 x 14 | 339 | 335.56 | 332 | 2410.43
14x14 | greedy           | 2 x 14 | 344 | 340.50 | 338 | 2335.42
14x14 | greedy+LS        | 2 x 14 | 344 | 340.80 | 338 | 4735.42
14x14 | backtracking     | 1 x 14 | 342 | 338.25 | 334 | 2401.36
14x14 | backtracking+LS  | 1 x 14 | 344 | 340.69 | 336 | 4801.36
14x14 | backtracking     | 2 x 14 | 344 | 342.17 | 340 | 4664.65
14x14 | backtracking+LS  | 2 x 14 | 344 | 342.33 | 340 | 7064.65
16x16 | greedy           | 1 x 16 | 448 | 444.05 | 440 | 24.08
16x16 | greedy+LS        | 1 x 16 | 449 | 446.20 | 443 | 3624.08
16x16 | greedy           | 2 x 16 | 454 | 451.38 | 448 | 11188.03
16x16 | greedy+LS        | 2 x 16 | 454 | 451.88 | 448 | 14788.03
16x16 | backtracking     | 1 x 16 | 453 | 448.80 | 444 | 3605.68
16x16 | backtracking+LS  | 1 x 16 | 454 | 451.53 | 449 | 7205.68
16x16 | backtracking     | 2 x 16 | 457 | 453.69 | 451 | 13722.86
16x16 | backtracking+LS  | 2 x 16 | 457 | 454.00 | 451 | 17322.86
EII-instance | greedy          | 1 x 16 | 449 | 443.75 | 440 | 21.85
EII-instance | greedy+LS       | 1 x 16 | 450 | 446.50 | 442 | 3621.85
EII-instance | greedy          | 2 x 16 | 456 | 451.75 | 450 | 13835.04
EII-instance | greedy+LS       | 2 x 16 | 457 | 451.95 | 450 | 17435.04
EII-instance | backtracking    | 1 x 16 | 453 | 449.00 | 446 | 3605.08
EII-instance | backtracking+LS | 1 x 16 | 454 | 451.35 | 447 | 7205.08
EII-instance | backtracking    | 2 x 16 | 457 | 452.80 | 448 | 15294.52
EII-instance | backtracking+LS | 2 x 16 | 457 | 453.15 | 449 | 18894.52

Table 3: Results for the MILP-based greedy and backtracking heuristics with and without local search, using different region sizes.
Instance | ObjMax | ObjAvg | ObjMin | Optimum
10 × 10 | 167 | 163.2 | 158 | 180
12 × 12 | 238 | 231.1 | 223 | 264
14 × 14 | 312 | 299.2 | 292 | 364
16 × 16 | 398 | 384.3 | 367 | 480
EII-instance | 391 | 382 | 372 | 480

Table 4: Results for the local search procedure with optimal neighbourhoods starting from a random solution.
Method | 10 × 10 | 12 × 12 | 14 × 14 | 16 × 16 | EII-instance
Present paper (20 runs)  | 172 (20 min) | 252 (30 min) | 344 (40 min) | 457 (180 min) | 457 (180 min)
Present paper (100 runs) | 172 (40 min) | 252 (60 min) | 347 (80 min) | 457 (360 min) | 458 (360 min)
Muñoz et al. [10]        | -            | -            | -            | -             | 371 (-)
Wang and Chiang [13]     | 163 (20 min) | 234 (30 min) | 318 (40 min) | 418 (60 min)  | -
Coelho et al. [4]        | 167 (20 min) | 241 (30 min) | 325 (40 min) | 425 (60 min)  | -
Schaus and Deville [11]  | -            | -            | -            | -             | 458 (1440 min)
Wauters et al. [14]      | 172 (20 min) | 254 (30 min) | 348 (40 min) | 460 (60 min)  | 461 (60 min)
Verhaard [12]            | -            | -            | -            | -             | 467 (weeks/months)
Optimum                  | 180          | 264          | 364          | 480           | 480

Table 5: Comparison of the best results to other approaches available in the literature. (Execution times presented within parenthesis)
Mapping Images to Scene Graphs with Permutation-Invariant Structured
Prediction
Roei Herzig * 1 Moshiko Raboh * 1 Gal Chechik 2 3 Jonathan Berant 1 Amir Globerson 1
arXiv:1802.05451v1 [stat.ML] 15 Feb 2018
Abstract
Structured prediction is concerned with predicting
multiple inter-dependent labels simultaneously.
Classical methods like CRF achieve this by maximizing a score function over the set of possible
label assignments. Recent extensions use neural
networks to either implement the score function
or to perform the maximization. The current paper takes an alternative approach, using a neural network to generate the structured output directly, without going through a score function. We take an axiomatic perspective to derive the desired properties and invariances of such a network to certain input permutations, presenting a structural characterization
that is provably both necessary and sufficient. We
then discuss graph-permutation invariant (GPI)
architectures that satisfy this characterization and
explain how they can be used for deep structured
prediction. We evaluate our approach on the challenging problem of inferring a scene graph from
an image, namely, predicting entities and their
relations in the image. We obtain state-of-the-art
results on the challenging Visual Genome benchmark, outperforming all recent approaches.
1. Introduction
Structured prediction addresses the problem of classification when the label space contains multiple inter-dependent
labels. For example, in semantic segmentation of an image, each pixel is assigned a label, while considering the
labels of nearby pixels. A similar problem is the task of
recognizing multiple entities and their relations in an image, where recognizing one entity affects recognition of
the others. Structured prediction has attracted considerable
attention because it applies to many learning problems and
poses unique theoretical and applied challenges (e.g., see
Taskar et al., 2004; Chen et al., 2015; Belanger et al., 2017).
* Equal contribution. 1 Tel-Aviv University, Israel. 2 Google Brain, CA, 94043. 3 Gonda Brain Research Institute, Bar-Ilan University, Ramat-Gan 52900, Israel. Correspondence to: Roei Herzig <[email protected]>, Moshiko Raboh <[email protected]>, Gal Chechik <[email protected]>, Jonathan Berant <[email protected]>, Amir Globerson <[email protected]>.
Typically, structured prediction models define a score function s(x, y) that quantifies how well a label assignment y
is compatible, or consistent, with an input x. In this setup,
the inference task amounts to finding the label that maximizes the compatibility score y ∗ = arg maxy s(x, y). This
score-based approach separates a scoring component – implemented by a parametric model, from an optimization
component – aimed at finding a label that maximizes that
score. Unfortunately, for a general scoring function s(·),
the space of possible label assignments grows exponentially
with input size. For instance, the set of possible pixel label
assignments is too large even for small images. Thus, inferring the label assignment that maximizes a scoring function
is computationally hard in the general case.
An alternative approach to scored-based methods is to map
an input x to a structured output y with a “black box” neural
network, without explicitly defining a score function. This
raises a natural question: what properties and invariances
must be satisfied by such a network? We take this axiomatic
approach and argue that one important property is invariance to a particular type of input permutation. We then
prove that this invariance is equivalent to imposing certain
structural constraints on the architecture of the network, and
describe architectures that satisfy these constraints, significantly extending the expressive power of current structured
prediction approaches. We argue that respecting permutation invariance is important, as otherwise the model would
have to spend capacity on learning this invariance at training
time. Conceptually, our approach is motivated by recent
work on DeepSets (Zaheer et al., 2017), which asked a
similar question for black-box functions on sets.
To evaluate our approach, we tackle the challenging task
of mapping an image to a scene graph, which describes
the entities in the image and their relations. We describe
a model that satisfies the permutation invariance property,
and show that it achieves state-of-the-art results on the competitive Visual Genome benchmark (Krishna et al., 2017),
demonstrating the power of our new design principle.
In summary, the novel contributions of this paper are: First,
we derive sufficient and necessary conditions for a deep
structured prediction architecture. Second, we improve the
state-of-the-art with this approach in a challenging pixel-to-graph problem on a large dataset of complex visual scenes.
2. Structured Prediction
Scored-based methods in structured prediction define a
score function s(x, y)1 that reflects the degree to which y
is compatible with x, and infer a label by solving y ∗ =
arg maxy s(x, y) (e.g., see Lafferty et al., 2001; Taskar
et al., 2004; Meshi et al., 2010; Chen et al., 2015; Belanger et al., 2017). Most score functions previously used
decompose as a sum over simpler functions, s(x, y) = Σ_i f_i(x, y), where solving max_y f_i(x, y) can be performed efficiently.² This local maximization forms the basic building block of algorithms for approximately maximizing
s(x, y). One way to achieve this is to restrict fi (x, y) to
depend only on a small subset of the y variables.
The renewed interest in deep learning led to efforts to integrate deep networks with structured prediction, including
modeling the fi functions as deep networks. In this context,
the most widely-used score functions are singleton f (yi , x)
and pairwise fij (yi , yj , x). Initial work used a two-stage
architecture, learning local scores independently of the structured prediction goal (Chen et al., 2014; Farabet et al., 2013).
Later works considered end-to-end architectures where the
inference algorithm is part of the computation graph (Chen
et al., 2015; Pei et al., 2015; Schwing & Urtasun, 2015;
Zheng et al., 2015). These studies used standard inference
algorithms, such as loopy belief propagation, mean field
methods and gradient descent (Belanger et al., 2017).
Score-based methods provide several advantages. First, they
allow intuitive specification of local dependencies between
labels (like pairwise dependencies) and how these translate
to global dependencies. Second, when the score function
is linear in its parameters (s(x, y; w) is linear in w), the
learning problem has natural convex surrogates (e.g., logloss in CRF), making learning efficient. Third, inference in
large label spaces is often possible via exact combinatorial
algorithms or empirically accurate approximations.
However, with the advent of deep scoring functions
s(x, y; w), learning is no longer convex. Thus, it is worthwhile to rethink the architecture of structured prediction
models, and consider models that map inputs x to outputs y
directly without an explicit score function. We want these
models to enjoy the expressivity and predictive power of
neural networks, while maintaining the ability to specify
local dependencies between labels in a flexible manner. In
the next section, we present such an approach and consider
a natural question: what should be the properties of such a
deep neural network used for structured prediction.
¹ The term energy function is also used.
² More precisely, many message passing algorithms require that functions f_i(x, y) + Σ_k δ_k(y_k) can be maximized efficiently.
3. Permutation Invariant Structured
Prediction
We begin with some notation, focusing on structures that
consists of pairwise interactions, as these are simpler in
terms of notation, and sufficient for describing the structure
in many problems.
We denote a structured label with n entries by y =
[y1 , . . . , yn ]. In a score-based approach, the score is defined via a set of singleton scores fi (yi , x) and pairwise
scores fij (yi , yj , x), where the overall score s(x, y) is the
sum of these singleton and pair scores. For brevity, we also
denote fij = fij (yi , yj , x) and fi = fi (yi , x). An inference algorithm takes as input the set of local scores fi , fij
and outputs the assignment maximizing s(x, y). We can
therefore abstractly view an inference algorithm as a blackbox that takes as input a set of node- and edge-dependent
inputs (i.e., the local scores fi , fij ) and returns a label y,
even without an explicit score function s(x, y). While numerous inference algorithms exist for this setup, including
belief propagation (BP) and mean field, here we aim to develop a framework for a deep learning labeling algorithm
(we avoid the term “inference” since the algorithm does not
explicitly maximize a score function). Such an algorithm
will be a black-box with the f functions as input and the
labels y1 , . . . , yn as output. We next ask what architecture
such an algorithm should have.
We follow with several definitions. A graph labeling function F : (V, E) → Y is a function whose input is an ordered
set of node features V = [z 1 , . . . , z n ] and ordered set of
edge features E = [z 1,2 . . . , z i,j , . . . , z n,n−1 ]. For example, the z i ’s can be the array of values fi (yi , x), and the
z i,j ’s can be the table of values fi,j (yi , yj , x) . For simplicity, assume z i ∈ Rd and z i,j ∈ Re . The output of F is a set
of labels y = [y1 , . . . , yn ], which can be thought of as labeling the nodes. Thus, inference algorithms like BP are graph
labeling functions, since they take f as input and output a
set of labels. However, graph labeling functions need not
correspond to any inference algorithm (i.e., an algorithm
that maximizes a score function).
A natural requirement is that the algorithm produces
the same result when given the same score function.
For example, consider a label space containing three
variables y1, y2, y3, and assume that the inference algorithm takes as input z = (z_1, z_2, z_3, z_12, z_13, z_23) = (f_1, f_2, f_3, f_12, f_13, f_23), and outputs a label y = (y*_1, y*_2, y*_3). When the same algorithm is given an input that is permuted in a consistent way, z' = (f_2, f_1, f_3, f_21, f_23, f_13), this defines exactly the same score function as the first scenario. Hence, we would expect it to output the same label, only permuted, namely, it should output y = (y*_2, y*_1, y*_3). Most inference algorithms, including
BP and mean field, satisfy this symmetry requirement by de-
3.1. Characterizing Permutation Invariance
Figure 1. Graph permutation invariance and structured prediction. A graph labeling function F is graph permutation invariant (GPI) if permuting the names of nodes maintains the output.
sign. Here we design a deep learning black-box, hence need
to guarantee invariance to input permutations. A black-box
that does not satisfy this invariance has to waste capacity on
learning it at training time.
In what follows we use z to denote the joint set of node
and edge features. z can be thought of as a container with
n + n(n − 1) = n2 elements.
We next consider what happens to a graph labeling function,
when graph variables are permuted by a permutation σ.
Importantly, the edges in this case are also permuted in a
way that is consistent with the node permutation σ.
Definition 1. Let z be a set of node and edge features.
Given a permutation σ of {1, . . . , n}, denote σ(z) to be a
new set of node and edge features that are given by:
[σ(z)]_i = z_{σ(i)},   [σ(z)]_{i,j} = z_{σ(i),σ(j)}.   (1)
σ(z) has the same elements as in z, but the node elements
are permuted according to σ, and the edge elements are
permuted accordingly. In what follows, we use the notation
σ([y1 , . . . , yn ]) = [yσ(1) , . . . , yσ(n) ], namely, σ applied to
a set of labels yields the same labels, only permuted by σ.
Next comes our key definition of a function F whose output
is invariant to permutations of the input graph.
Definition 2. A graph labeling function F is said to be
graph-permutation invariant (GPI), if for all permutations σ of {1, . . . , n} and for all z it satisfies:
F(σ(z)) = σ(F(z)).   (2)
Figure 1 illustrates the desired invariance. The above property says that as long as the input to F describes the same
node and edge properties, the same labeling will be output.
This is indeed a property we would like any such F to have,
and we thus turn to characterizing a necessary and sufficient
structure for achieving it.
Motivated by the above discussion, we ask: what structure
is necessary and sufficient to guarantee that F is graphpermutation invariant? Note that a function F takes as input
an ordered set z. Therefore its output on z could certainly
differ from its output on σ(z). To achieve permutation invariance, F should intuitively contain certain symmetries.
For example, one permutation invariant architecture is to define yi = g(z i ) for any function g, but this characterization
is too restrictive to cover all permutation invariant functions.
The next Theorem provides a complete characterization,
while Figure 2 shows the corresponding architecture.
Theorem 1. Let F be a graph labeling function. Then
F is graph-permutation invariant if and only if there exist
functions α, ρ, φ such that for all k = 1, . . . , n:
[F(z)]_k = ρ(z_k, Σ_{i=1}^{n} α(z_i, Σ_{j≠i} φ(z_i, z_{i,j}, z_j))),   (3)

where φ : R^{2d+e} → R^L, α : R^{d+L} → R^W and ρ : R^{W+d} → R.
Proof. First, we show that any F satisfying the conditions of Theorem 1 is GPI. Namely, for any permutation
σ, [F(σ(z))]_k = [F(z)]_{σ(k)}. To see this, write [F(σ(z))]_k using Eq. 3 and Definition 1 as:

ρ(z_{σ(k)}, Σ_i α(z_{σ(i)}, Σ_{j≠i} φ(z_{σ(i)}, z_{σ(i),σ(j)}, z_{σ(j)}))).
The second argument of ρ above is clearly invariant under σ,
because the sum considers an index i and all other indices
j, hence the same elements are covered under permutation.
The expression therefore equals

ρ(z_{σ(k)}, Σ_i α(z_i, Σ_{j≠i} φ(z_i, z_{i,j}, z_j))) = [F(z)]_{σ(k)},
where the equality follows from Eq. 3. We thus proved that
Eq. 3 implies graph permutation invariance.
Next, we prove that any black-box graph-permutation invariant function can be expressed as in Eq. 3. Namely, we
show how to define φ, α and ρ that can implement any permutation invariant function F. The key idea is to construct
φ, α such that the second argument of ρ in Eq. 3 contains
all the information about the graph features z, including the
edges they originated from . Then, the function ρ consists
of an application of the black box F to this representation,
followed by extracting the label yk .
To simplify notation assume that edge features are scalar
(e = 1). The extension to the vector case is simple, but
involves more indexing. We also assume that z k uniquely
identifies the node (i.e., no two nodes share the same node
Figure 2. A schematic representation of the GPI architecture in
Theorem 1. Singleton features z i are omitted for simplicity. First,
the features z i,j are processed element-wise by φ. Next, they
are summed to create a vector si , which is concatenated with z i .
Third, a representation of the entire graph is created by applying α
n times and summing the created vector. The graph representation
is then finally processed by ρ together with z k .
feature), which can be achieved by adding the index as another feature of z k . Finally, we assume that F is a function
only of the pairwise features z i,j . This can be achieved by
adding singleton features into the pairwise ones.
Let H be a hash function with L buckets mapping node
features z i to an index (bucket). Assume that H is perfect
(this can be achieved for a large enough L). Define φ to
map the pairwise features to a vector of size L. Let 1 [j]
be a one-hot vector of dimension RL , with one in the j th
coordinate. Recall that we consider scalar z i,j so that φ is
indeed in RL , and define φ as:
φ(z_i, z_{i,j}, z_j) = 1[H(z_j)] z_{i,j}   (4)

i.e., φ “stores” z_{i,j} in the unique bucket for node j.
Let s_i = Σ_{j≠i} φ(z_i, z_{i,j}, z_j) be the second argument of α in Eq. 3 (s_i ∈ R^L). Then, since all z_j are distinct, s_i stores
all the pairwise features for neighbors of i in unique positions within its L coordinates. Since si (H(z k )) contains
the feature zi,k whereas sj (H(z k )) contains the feature
z j,k , we cannot simply sum the si , since we would lose
the information of which edges the features originated from.
Instead, we define α to map si to RL×L such that each
feature is mapped to a distinct location. Formally:
α(z_i, s_i) = 1[H(z_i)] s_i^T.   (5)

α outputs a matrix that is all zeros except for the features corresponding to node i, which are stored in row H(z_i). The matrix M = Σ_i α(z_i, s_i) (namely, the second argument of ρ in Eq. 3) is a matrix with all the edge features in the
graph including the graph structure.
Figure 3. Illustration of the proof construction for Theorem 1. Here
H is a hash function of size L = 5 such that H(1) = 1, H(3) =
2, H(2) = 4, G is a three-node input graph, and z i,j ∈ R are
the pairwise features (in purple) of G. (a) φ is applied to each
z i,j . Each application yields a vector in R5 . The three dark yellow
columns correspond to φ(z 1,1 ),φ(z 1,2 ) and φ(z 1,3 ). Then, all
vectors φ(z i,j ) are summed over j to obtain three si vectors. (b)
α’s (blue matrices) are an outer product between 1 [H(z i )] and si
(see Eq. 5) resulting in a matrix of zeros except one row. The dark
blue matrix corresponds for α(z 1 , s1 ). (c) All α’s are summed to
a 5 × 5 matrix, isomorphic to the original z i,j matrix.
To complete the construction we set ρ to have the same
outcome as F. We first discard rows and columns in M
that do not correspond to original nodes (reducing M to
dimension n × n). Then, we use the reduced matrix as the
input z to the given F. Assume for simplicity that M does
not need to be contracted.3 Let the output of F on M be
y = y1 , . . . , yn . Then we set ρ(z k , M ) = y H(zk ) . Since
F is invariant to permutations this indeed returns the output
of F on the original input.
General Graphs So far, we discussed complete graphs,
where all edges correspond to valid feature pairs. Many
graphs however may be sparse and have certain structures.
For example, an n-variable chain graph in sequence labeling has only n − 1 edges. For such sparse graphs, the input
to F would not be all z i,j pairs but rather only features
corresponding to valid edges of the graph, and we are only
interested in invariances that preserve the graph structure,
namely, the automorphisms of the graph. Thus, the desired
invariance is that σ(F(z)) = F(σ(z)) only for automorphisms of the graph. It is easy to see that Theorem 1 holds in this case, if one replaces the sum Σ_{j≠i} with Σ_{j∈N(i)}, where N(i) are the neighbors of node i in the graph.
³ This merely introduces another indexing step.
4. Deep Graph Prediction
Theorem 1 provides the general requirements for designing an architecture for structured prediction. For a given
problem, one has to choose a specific architecture and parameterization for α, φ, ρ.
For instance, it is interesting to consider how an algorithm
like belief propagation (BP) can be implemented in our
framework. Following the proof of Theorem 1, one would
use φ, α to aggregate features, and then ρ would apply BP to
these features. Our architecture is of course more general by
construction. For example, it could use φ and α to “sketch”
the input graph, such that labeling can be performed on a
reduced representation.
We now survey certain architectures consistent with Theorem 1 and discuss their expressive power.
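Before turning to specific choices, the following minimal numpy sketch (ours, with arbitrary untrained stand-ins for the component functions) instantiates Eq. 3 and checks the graph-permutation invariance of Definition 2.

```python
import numpy as np

def gpi_layer(Z_nodes, Z_edges, phi, alpha, rho):
    """Eq. 3: [F(z)]_k = rho(z_k, sum_i alpha(z_i, sum_{j != i} phi(z_i, z_ij, z_j)))."""
    n = Z_nodes.shape[0]
    S = np.stack([sum(phi(Z_nodes[i], Z_edges[i, j], Z_nodes[j])
                      for j in range(n) if j != i) for i in range(n)])
    g = sum(alpha(Z_nodes[i], S[i]) for i in range(n))   # graph representation
    return np.stack([rho(Z_nodes[k], g) for k in range(n)])

# toy component functions (untrained stand-ins for small neural networks)
d, e, n = 4, 3, 5
rng = np.random.default_rng(0)
Z_nodes, Z_edges = rng.normal(size=(n, d)), rng.normal(size=(n, n, e))
phi = lambda zi, zij, zj: np.tanh(np.concatenate([zi, zij, zj]))   # R^{2d+e} -> R^L
alpha = lambda zi, si: np.tanh(np.concatenate([zi, si]))           # R^{d+L} -> R^W
rho = lambda zk, g: np.concatenate([zk, g]).sum(keepdims=True)     # R^{W+d} -> R

# check Definition 2: F(sigma(z)) = sigma(F(z)) for a random permutation sigma
perm = rng.permutation(n)
out = gpi_layer(Z_nodes, Z_edges, phi, alpha, rho)
out_perm = gpi_layer(Z_nodes[perm], Z_edges[np.ix_(perm, perm)], phi, alpha, rho)
print(np.allclose(out_perm, out[perm]))  # True
```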
Introducing Attention. Attention is a powerful architectural component in deep learning (Bahdanau et al., 2015),
but most inference algorithms do not use attention. We now
show how attention can be introduced in our framework.
Intuitively, attention means that instead of aggregating features of neighbors, a node i weighs neighbors based on their
relevance. For example, the label of an entity in an image may depend more strongly on entities that are spatially
closer. We now implement attention for the architecture of
Eq. 3. Formally, we learn attention weights for the neighbors j of a node i, which scale the features z i,j of that
neighbor. We can also learn different attention weights for
individual features of each neighbor in a similar way.
Let wi,j ∈ R be an attention mask specifying the weight
that node i gives to node j:
w_{i,j}(z_i, z_{i,j}, z_j) = e^{β(z_i, z_{i,j}, z_j)} / Σ_t e^{β(z_i, z_{i,t}, z_t)},   (6)
where β can be any scalar-valued function of its arguments
(e.g., a dot product of z i and z j as in standard attention
models). To introduce attention we wish α ∈ R^e to have the form of a weighting w_{i,j} over neighboring feature vectors z_{i,j}, namely, α = Σ_{j≠i} w_{i,j} z_{i,j}. To achieve this form we extend φ by a single entry, defining φ ∈ R^{e+1} (namely, we set L = e + 1) as φ_{1:e}(z_i, z_{i,j}, z_j) = e^{β(z_i, z_{i,j}, z_j)} z_{i,j} (here φ_{1:e} are the first e elements of φ) and φ_{e+1}(z_i, z_{i,j}, z_j) = e^{β(z_i, z_{i,j}, z_j)}. We keep the definition s_i = Σ_{j≠i} φ(z_i, z_{i,j}, z_j). Next, we define α = s_{i,1:e} / s_{i,e+1} and substitute s_i and φ to obtain the desired form as attention weights w_{i,j} over neighboring feature vectors z_{i,j}:

α(z_i, s_i) = s_{i,1:e} / s_{i,e+1} = Σ_{j≠i} e^{β(z_i, z_{i,j}, z_j)} z_{i,j} / Σ_{j≠i} e^{β(z_i, z_{i,j}, z_j)} = Σ_{j≠i} w_{i,j} z_{i,j}.

A similar approach can be applied over α and ρ to model attention over the outputs of α as well (graph nodes).
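As a small illustration (ours, not the authors' code), the attention-weighted aggregation above can be written directly; `beta` is any scalar scoring function, e.g. a dot product of the two node features.

```python
import numpy as np

def attention_aggregate(i, Z_nodes, Z_edges, beta):
    """alpha with attention (sketch): sum_{j != i} w_ij * z_ij, with w_ij as in Eq. 6."""
    n = Z_nodes.shape[0]
    idx = [j for j in range(n) if j != i]
    scores = np.array([beta(Z_nodes[i], Z_edges[i, j], Z_nodes[j]) for j in idx])
    w = np.exp(scores - scores.max())
    w = w / w.sum()                                    # attention weights w_ij
    return sum(wj * Z_edges[i, j] for wj, j in zip(w, idx))

# example beta: a simple dot product between the two node features
beta = lambda zi, zij, zj: float(zi @ zj)
rng = np.random.default_rng(0)
Z_nodes, Z_edges = rng.normal(size=(5, 4)), rng.normal(size=(5, 5, 3))
print(attention_aggregate(0, Z_nodes, Z_edges, beta))
```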
Figure 4. An image (top) and its scene graph (bottom) from the
Visual Genome dataset (Krishna et al., 2017). The scene graph
captures the entities in the image (nodes, blue circles) and their
pairwise relations (edges, red circles). Example relationships in this graph include ⟨hat, on, dog⟩ and ⟨dog, on, motorcycle⟩.
Using RNNs as Components. Theorem 1 allows arbitrary functions for φ, α and ρ, except for their input dimensionality. Specifically, these functions can involve highly expressive recursive computation, simulate existing message
passing algorithms, and new algorithms that are learned
from data. This can of course be extended to more elaborate
structures like LSTMs (Hochreiter & Schmidhuber, 1997)
and Neural Turing Machines (Graves et al., 2014), which
we leave for future work.
Theorem 1 suggests that any function in the form of F is
graph permutation invariant. It is easy to show that composing two functions that are GPI, is also GPI. Therefore,
we can run F iteratively by providing the output of one
step of F as part of the input to the next step and maintain
graph-permutation invariance. This results in a recurrent
architecture, which we will employ in the next section to obtain state-of-the-art performance on scene graph prediction.
5. Application - Scene Graph Classification
We demonstrate the benefits of our axiomatic approach in
the task of inferring scene graphs from images. In this problem, the input is an image annotated with a set of rectangles
that bound entities in the image, known as bounding boxes.
The goal is to label each bounding box with the correct
entity category, and every pair of entities with their relation,
such that they form a coherent graph, known as a scene
graph. In a scene graph, nodes correspond to bounding
boxes labeled with the entity category and edges correspond
to relations among entities, which could be spatial (“on”) or
functional (“wearing”). Thus, in an image with 5 bounding
boxes there are 5 + 5 × 4 = 25 output variables.
This concept is illustrated in Figure 4, showing an image of a dog on a motorcycle (top) and the corresponding scene
graph below. The pink box in the image is labeled “motorcycle” and the white box is labeled “dog”. These two
boxes correspond to two nodes (light blue circles in Figure 4
bottom), and their relation “on” corresponds to an edge (red
circle labeled “on”). While scene graphs are typically very
sparse, one can view a scene graph as complete if each pair
of unrelated entities is connected by a ‘null’ edge. A scene
graph can be represented as a collection of triplets, each
with a relation and two entities, like dog, on, motorcycle .
5.1. Model
Our model has two components: A Label Predictor (LP)
that takes as input an image with bounding boxes and outputs a distribution over labels for each entity and relation.
Then, a Scene Graph Predictor (SGP) that takes all label
distributions and predicts more consistent label distributions
jointly for all entities and relations.
Label Prediction. The LP module (Figure 5) receives an
input image x and a set of bounding boxes bb1 , . . . , bbn ,
corresponding to image entities (as in Figure 4). LP outputs
a set of entity label probabilities p(y_i^ent | bb_i) for each box i from a pre-defined set of candidate entity labels P, and a set of relation probabilities p(y_{i,j}^rel | bb_i, bb_j) from another pre-defined set of relation labels R. These unary and pairwise
potentials are later fed to the SGP module.
To predict entity labels, we used ResNet50 (He et al., 2016a;b), taking as input a patch cropped from the full image according to the i-th bounding box (Figure 5). We used a second ResNet50 to predict relations, using a 5-channel tensor input: three channels for the RGB image, and two channels for binary masks for the subject entity and object entity bounding boxes (Figure 5). The image patch provided to this network was cropped such that it covers both the subject entity and the object entity. Providing the two binary masks breaks the symmetry of the subject entity and object entity and allows the network to discriminate between triplets like ⟨man, wearing, shirt⟩ and ⟨shirt, on, man⟩.
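A hedged sketch of such a 5-channel relation classifier is given below; it is our illustration only, and the exact architecture, weights and training details are assumptions rather than the released model.

```python
import torch.nn as nn
from torchvision.models import resnet50

def make_relation_net(n_relations):
    """ResNet50 backbone whose first convolution accepts 5 input channels:
    RGB + binary subject mask + binary object mask (sketch, assumptions only)."""
    net = resnet50()  # randomly initialised backbone
    net.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, n_relations)
    return net
```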
Scene Graph Prediction. While the LP module described above is trivially GPI, because the output variables y_i^ent, y_{i,j}^rel are predicted independently, constructing a GPI
architecture for a Scene Graph Predictor is harder. We now
outline this construction. Entity classification in this module
is GPI following Theorem 1, where z i are features for every
bounding box and z i,j are features for box pairs. To classify
relations, we added a function ρrelation that reuses the GPI
representation created during entity classification. Because
the input to ρrelation is a GPI representation, it is easy to show
that our entire network is GPI.
Let z_i be the concatenation of z_i^features and z_i^spatial, where z_i^features is the current label probability for entity i (logits
Figure 5. The Label Predictor. (a) Entity recognition network:
A network takes an image patch cropped based on a bounding
box, and outputs classification probabilities per label. (b) Relation
recognition network: A network takes an input tensor, containing
the RGB image in the first 3 channels, and two binary masks for
the subject and object entities in the remaining two channels.
before the final softmax layer) and z_i^spatial is i's bounding box, given as (x, y, width, height). In addition, for z_{i,j} we used the confidences for relation i, j (logits before the final softmax layer). In each step of SGP we apply the function F, which receives all entity features z_i and all relation features z_{i,j}, and outputs updated confidences
for entities and relations. Because composing GPI functions is GPI, our SGP module is GPI. We now describe our
implementation of the three components of F: φ, α and ρ.
(1) φ is a network with two FC-layers. It receives (a) subject
features z i (b) relations features z i,j (c) entity features z j
and outputs a vector of size 500. Next, for each entity
i, we aggregate φ(z i , z i,j , z j ) into si using the attention
mechanism described in Section 4. To calculate the weights
wi,j , we implement β(·) (Eq. 6) with a FC layer that receives
the same input as φ and outputs a scalar.
(2) α is a two FC-layer network, receiving entity features
z i and context features si . The outputs of α are aggregated
with a similar attention mechanism over entities, resulting
in a vector g ∈ R500 representing the entire graph.
(3) ρ, consists of ρentity , which classifies entities, and ρrelation ,
which classifies relations. ρentity is a three FC-layer network
of size 500. It receives z i , si and g as input, and outputs a
vector qi with one scalar per entity class. Unlike Theorem
1, we allow ρ direct access to si , which maintains the GPI
property, and improved learning in practice. The final output
confidence is a linear interpolation of the current confidence
z_i^features and the new confidence q_i, controlled by a learned forget gate, i.e., the output is q_i + forget · z_i^features. ρ_relation,
the relation classifier, is analogous to the entity classifier,
receiving as input z i , z j , the relation features z i,j , and the
graph representation g.
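A hedged PyTorch sketch of the entity head with the forget gate follows; layer sizes and the gate parameterisation are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class EntityHead(nn.Module):
    """rho_entity (sketch): 3 FC layers over (z_i, s_i, g), plus a learned forget gate."""
    def __init__(self, d_node, d_ctx, d_graph, n_classes, hidden=500):
        super().__init__()
        d_in = d_node + d_ctx + d_graph
        self.mlp = nn.Sequential(
            nn.Linear(d_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )
        self.forget = nn.Linear(d_in, 1)

    def forward(self, z_i, s_i, g, z_features):
        h = torch.cat([z_i, s_i, g], dim=-1)
        q = self.mlp(h)                      # new confidences q_i
        f = torch.sigmoid(self.forget(h))    # learned forget gate in [0, 1]
        return q + f * z_features            # q_i + forget * z_i^features
```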
We also explored concatenating word embeddings of the
most probable entity class to z_i. Word vectors were learned with GloVe (Pennington et al., 2014) from the ground-truth
captions of Visual Genome (Krishna et al., 2017).
5.2. Experimental Setup
Dataset. We evaluated our approach on the Visual
Genome (VG) dataset (Krishna et al., 2017). VG consists
of 108,077 images annotated with bounding boxes, entities
and relations. Distribution over entity classes and relations
is long-tailed with a total of 75,729 unique entity classes
and 40,480 unique relations. To allow an apples-to-apples comparison with previous studies of this dataset (Xu et al., 2017;
Newell & Deng, 2017; Zellers et al., 2017), we used the
same preprocessed data, including the train and test splits,
as provided by (Xu et al., 2017). This dataset had on average
12 entities and 7 relations per image. For evaluation, we
used the same 150 entity categories and 50 relations as in
(Xu et al., 2017; Newell & Deng, 2017; Zellers et al., 2017).
To tune hyper-parameters, we also split the training data
into two by randomly selecting 5K examples, resulting in a
final 70K/5K/32K split for train/validation/test sets.
Training Procedure. We trained all networks using
Adam (Kingma & Ba, 2014); Input images were resized
to 224x224 to conform with the R ES N ET architecture. We
first trained the LP module, and then trained the SGP module using the best LP model.
In what follows, all the particular chosen values were tuned
on the validation set. For LP, we trained the relation network
with cross-entropy loss and a positive-to-negative ratio of
1:3 (where ‘positive’ refers to a labeled relation and ‘negative’ to unlabeled), and performed early-stopping after 90
epochs. We chose a batch size of 64, and also used data
augmentation techniques such as translation and rotation to
further improve the results. The loss function for the SGP
was the sum of cross entropy losses over all entities and
relations in the image. In the loss, we penalized entities 4
times more strongly than relations, and penalized negative
relations 10 times more weakly than positive relations. We
used batch size 100 and early-stopped after 120 epochs. The
recurrent application of F was performed for 2 steps.
Evaluation. Xu et al. (2017) defined three different subtasks when inferring scene graphs, and we focus on two:
(1) SGCls: Given ground-truth bounding boxes for entities,
predict all entity categories and relations categories.
(2) PredCls: Given bounding boxes annotated with entity
labels, predict all relations. Following (Lu et al., 2016), we
used Recall@K as the evaluation metric. It measures the
fraction of correct ground-truth triplets that appear within
the K most confident triplets proposed by the model.
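For concreteness, a minimal sketch of this metric over predicted ⟨subject, relation, object⟩ triplets (our illustration, not the evaluation code used in prior work):

```python
import numpy as np

def recall_at_k(pred_triplets, pred_scores, gt_triplets, k=50):
    """Fraction of ground-truth triplets found among the model's top-k predictions."""
    order = np.argsort(pred_scores)[::-1][:k]
    top_k = {tuple(pred_triplets[i]) for i in order}
    gt = {tuple(t) for t in gt_triplets}
    return len(top_k & gt) / max(len(gt), 1)
```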
Two evaluation protocols are used in the literature, which
differ by whether they enforce graph constraints over model
predictions. The first protocol requires that the top-K
triplets assign one consistent class per entity and relation.
This rules out putting more than one triplet for a pair of
bounding boxes. It also rules out inconsistent assignment,
like a bounding box that is labeled as one entity in one
triplet, and as another entity in another triplet. The second
evaluation protocol does not enforce any such constraints.
Models and baselines. We compare four variants of our
GPI approach with the reported results of four baselines that
are currently the state-of-the-art on various scene graph subtasks. All models use the same data split and pre-processing
as (Xu et al., 2017):
1. (Lu et al., 2016): This work leverages word embeddings to fine-tune the likelihood of predicted relations.
2. (Xu et al., 2017): This model passes messages between entities and relations, and iteratively refines the feature map used for prediction.
3. (Newell & Deng, 2017): The Pixel2Graph model uses associative embeddings (Newell et al., 2017) to produce a full graph from the image.
4. (Zellers et al., 2017): The NeuralMotif method encodes global context for capturing high-order motifs in scene graphs.
5. GPI: NoAttention: Our GPI model, but with no attention mechanism. Instead, following Theorem 1, we simply sum the features.
6. GPI: NeighborAttention: Our GPI model, using attention over neighbors as described in Section 5.1.
7. GPI: MultiAttention: Our GPI model, except that we learn different attention weights per feature.
8. GPI: Linguistic: Same as GPI: MultiAttention, but also concatenating the word embedding vector for the most probable entity label (see Sec. 5.1).
5.3. Results
Table 1 lists recall@50 and recall@100 for four variants of our approach compared with three baselines, evaluating with graph constraints. The GPI approach performs well, and Linguistic outperforms all baselines for both PredCls and SGCls. Table 2 provides a similar comparison when evaluating without graph constraints; again Linguistic performs best. More details are provided in the supplemental material.
Figure 6 illustrates the model behavior. Predicting isolated
labels with LP (column (c)) mislabels several entities, but
these are corrected after joint prediction (column (d)). Column (e) shows that the system learned to attend more to
nearby entities (the window and building are closer to the
tree), and column (f) shows that stronger attention is learned
for the classes bird, presumably because it is usually more
informative than common classes like tree.
Figure 6. (a) An input image with bounding boxes from VG. (b) The ground-truth scene graph. (c) The LP fails to recognize some entities (building and tree) and relations (in front of instead of looking at). (d) GPI: Linguistic fixes most incorrect LP predictions. (e) Window is the most significant neighbor of tree. (f) The entity bird receives substantial attention, while tree and building are less informative.
Table 1. Test set results for graph-constrained evaluation

                      SGCls            PredCls
                      R@50    R@100    R@50    R@100
  (Lu et al., 2016)   11.8    14.1     35.0    27.9
  (Xu et al., 2017)   21.7    24.4     44.8    53.0
  NeuralMotifs        31.3    32.1     65.8    68.0
  No Attention        28.9    31.0     63.0    65.2
  Neighbor Atten.     30.6    32.4     63.7    66.5
  Multi Attention     32.3    34.1     65.4    67.8
  Linguistic          32.1    34.1     66.0    68.3
Table 2. Test set results for unconstrained evaluation

                      SGCls            PredCls
                      R@50    R@100    R@50    R@100
  Pixel2Graph         26.5    30.0     68.0    75.2
  No Attention        35.2    42.1     70.3    75.1
  Neighbor Atten.     37.6    43.2     73.7    79.2
  Multi Attention     39.0    44.5     74.5    81.0
  Linguistic          39.2    44.9     75.3    82.4
6. Related Work
There has been significant recent interest in extending deep
learning to structured prediction. Much of this work has
been on semantic segmentation, where convolutional networks (Shelhamer et al., 2017) became a standard approach
for obtaining “singleton scores” and various approaches
were proposed for adding structure on top. Most of these
approaches used variants of message passing algorithms,
unrolled into a computation graph (Xu et al., 2017). Some
studies parameterized parts of the message passing algorithm and learned its parameters (Lin et al., 2015). Recently,
gradient descent has also been used for maximizing score
functions (Belanger et al., 2017; Gygli et al., 2017).
An alternative approach for deep structured prediction is via
greedy decoding, where one label is inferred at a time, based
on previous labels. This has been popular in sequence-based
applications like dependency parsing (Chen & Manning,
2014). These works rely on the sequential structure of the input, where BiLSTMs can be effectively applied.

Table 3. Recall@5 of PredCls for the 20 top relations ranked by their frequency, as in (Xu et al., 2017)

  Relation        Lu et al. (2016)   Xu et al. (2017)   Linguistic
  ON              99.71              99.25              99.4
  HAS             98.03              97.25              98.9
  IN              80.38              88.30              96.2
  OF              82.47              96.75              98.2
  WEARING         98.47              98.23              99.5
  NEAR            85.16              96.81              95.0
  WITH            31.85              88.10              94.2
  ABOVE           49.19              79.73              83.7
  HOLDING         61.50              80.67              95.6
  BEHIND          79.35              92.32              90.6
  UNDER           28.64              52.73              83.2
  SITTING ON      31.74              50.17              90.5
  IN FRONT OF     26.09              59.63              74.7
  ATTACHED TO     8.45               29.58              77.2
  AT              54.08              70.41              81.1
  HANGING FROM    0.0                0.0                74.7
  OVER            9.26               0.0                54.9
  FOR             12.20              31.71              43.4
  RIDING          72.43              89.72              96.2
The concept of architectural invariance was recently proposed in DeepSets (Zaheer et al., 2017). The invariance
we consider is much less restrictive (i.e., we do not need to
be invariant to all permutations of singleton and pairwise
features, just those consistent with a graph re-labeling), and
hence results in a substantially different set of architectures.
Extracting scene graphs from images provides a semantic
representation that can later be used for reasoning, question
answering, and image retrieval (Johnson et al., 2015; Lu
et al., 2016; Raposo et al., 2017). It is at the forefront of
machine vision research, integrating challenges like object
detection, action recognition and detection of human-object
interactions (Liao et al., 2016; Plummer et al., 2017).
7. Conclusion
We presented a deep learning approach to structured prediction, which constrains the architecture to be invariant to
structurally identical inputs. As in score-based methods,
our approach relies on pairwise features, capable of describing inter-label correlations, and thus inheriting the intuitive
aspect of score-based approaches. However, instead of maximizing a score function (which leads to computationally hard inference), we directly produce an output that is invariant to equivalent representations of the pairwise terms.
The axiomatic approach can be extended in many ways.
For image labeling, geometric invariances (shift or rotation)
may be desired. In other cases, invariance to feature permutations may be desirable. We leave the derivation of the
corresponding architectures to future work. Finally, there
may be cases where the invariant structure is unknown and
should be discovered from data, which is related to work
on lifting graphical models (Bui et al., 2013). It would be
interesting to explore algorithms that discover and use such
symmetries for deep structured prediction.
References
Bahdanau, D., Cho, K., and Bengio, Y. Neural machine
translation by jointly learning to align and translate. In
International Conference on Learning Representations
(ICLR), 2015.
Belanger, David, Yang, Bishan, and McCallum, Andrew.
End-to-end learning for structured prediction energy networks. In Precup, Doina and Teh, Yee Whye (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 429–439. PMLR, 2017.
Bui, Hung Hai, Huynh, Tuyen N., and Riedel, Sebastian.
Automorphism groups of graphical models and lifted
variational inference. In Proceedings of the TwentyNinth Conference on Uncertainty in Artificial Intelligence, UAI’13, pp. 132–141, Arlington, Virginia, United
States, 2013. AUAI Press. URL http://dl.acm.
org/citation.cfm?id=3023638.3023652.
Chen, Danqi and Manning, Christopher. A fast and accurate
dependency parser using neural networks. In Proceedings
of the 2014 conference on empirical methods in natural
language processing (EMNLP), pp. 740–750, 2014.
Chen, Liang Chieh, Papandreou, George, Kokkinos, Iasonas,
Murphy, Kevin, and Yuille, Alan L. Semantic image
segmentation with deep convolutional nets and fully connected CRFs. In Proceedings of the Second International
Conference on Learning Representations, 2014.
Chen, Liang Chieh, Schwing, Alexander G, Yuille, Alan L,
and Urtasun, Raquel. Learning deep structured models.
In Proc. ICML, 2015.
Farabet, Clement, Couprie, Camille, Najman, Laurent, and
LeCun, Yann. Learning hierarchical features for scene
labeling. IEEE transactions on pattern analysis and machine intelligence, 35(8):1915–1929, 2013.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural
turing machines. arXiv preprint arXiv:1410.5401, 2014.
Gygli, Michael, Norouzi, Mohammad, and Angelova,
Anelia. Deep value networks learn to evaluate and iteratively refine structured outputs. In Precup, Doina and
Teh, Yee Whye (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of
Proceedings of Machine Learning Research, pp. 1341–
1351, International Convention Centre, Sydney, Australia,
2017. PMLR.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun,
Jian. Deep residual learning for image recognition. In
2016 IEEE Conference on Computer Vision and Pattern
Recognition, CVPR 2016, Las Vegas, NV, USA, June 2730, 2016, pp. 770–778, 2016a.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun,
Jian. Identity mappings in deep residual networks. In
ECCV, volume 9908 of Lecture Notes in Computer Science, pp. 630–645. Springer, 2016b.
Hochreiter, S. and Schmidhuber, J. Long short-term memory.
Neural Computation, 9(8):1735–1780, 1997.
Johnson, Justin, Krishna, Ranjay, Stark, Michael, Li, LiJia, Shamma, David A., Bernstein, Michael S., and Li,
Fei-Fei. Image retrieval using scene graphs. In IEEE
Conference on Computer Vision and Pattern Recognition,
CVPR 2015,, pp. 3668–3678, 2015.
Kingma, Diederik P. and Ba, Jimmy. Adam: A method for
stochastic optimization. arXiv preprint arXiv: 1412.6980,
abs/1412.6980, 2014. URL http://arxiv.org/
abs/1412.6980.
Krishna, Ranjay, Zhu, Yuke, Groth, Oliver, Johnson, Justin,
Hata, Kenji, Kravitz, Joshua, Chen, Stephanie, Kalantidis, Yannis, Li, Li-Jia, Shamma, David A, et al. Visual
genome: Connecting language and vision using crowdsourced dense image annotations. International Journal
of Computer Vision, 123(1):32–73, 2017.
Lafferty, J., McCallum, A., and Pereira, F. Conditional
random fields: Probabilistic models for segmenting and
labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pp. 282–289,
2001.
Liao, Wentong, Yang, Michael Ying, Ackermann, Hanno,
and Rosenhahn, Bodo. On support relations and semantic
scene graphs. arXiv preprint arXiv:1609.05834, 2016.
Lin, Guosheng, Shen, Chunhua, Reid, Ian, and van den
Hengel, Anton. Deeply learning the messages in message
passing inference. In Advances in Neural Information
Processing Systems, pp. 361–369, 2015.
Lu, Cewu, Krishna, Ranjay, Bernstein, Michael S., and
Li, Fei-Fei. Visual relationship detection with language
priors. In European Conference on Computer Vision, pp.
852–869, 2016.
Meshi, O., Sontag, D., Jaakkola, T., and Globerson, A.
Learning efficiently with approximate inference via dual
losses. In Proceedings of the 27th International Conference on Machine Learning, pp. 783–790, New York, NY,
USA, 2010. ACM.
Newell, Alejandro and Deng, Jia. Pixels to graphs by associative embedding. In Advances in Neural Information
Processing Systems 30 (to appear), pp. 1172–1180. Curran Associates, Inc., 2017.
Newell, Alejandro, Huang, Zhiao, and Deng, Jia. Associative embedding: End-to-end learning for joint detection
and grouping. In Advances in Neural Information Processing Systems 30, pp. 2274–2284. Curran Associates,
Inc., 2017.
Pei, Wenzhe, Ge, Tao, and Chang, Baobao. An effective
neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the
Association for Computational Linguistics, pp. 313–322,
2015.
Pennington, Jeffrey, Socher, Richard, and Manning, Christopher D. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014. URL http:
//www.aclweb.org/anthology/D14-1162.
Plummer, Bryan A., Mallya, Arun, Cervantes, Christopher M., Hockenmaier, Julia, and Lazebnik, Svetlana.
Phrase localization and visual relationship detection with
comprehensive image-language cues. In ICCV, 2017.
Raposo, David, Santoro, Adam, Barrett, David, Pascanu,
Razvan, Lillicrap, Timothy, and Battaglia, Peter. Discovering objects and their relations from entangled scene
representations. arXiv preprint arXiv:1702.05068, 2017.
Schwing, Alexander G and Urtasun, Raquel. Fully connected deep structured networks. ArXiv e-prints, 2015.
Shelhamer, Evan, Long, Jonathan, and Darrell, Trevor. Fully
convolutional networks for semantic segmentation. IEEE
Conference on Computer Vision and Pattern Recognition,
CVPR 2015,, 39(4):640–651, 2017.
Taskar, B., Guestrin, C., and Koller, D. Max margin Markov
networks. In Thrun, S., Saul, L., and Schölkopf, B. (eds.),
Advances in Neural Information Processing Systems 16,
pp. 25–32. MIT Press, Cambridge, MA, 2004.
Xu, D., Zhu, Y., Choy, C. B., and Fei-Fei, L. Scene Graph
Generation by Iterative Message Passing. In The IEEE
Conference on Computer Vision and Pattern Recognition.
2017.
Zaheer, Manzil, Kottur, Satwik, Ravanbakhsh, Siamak, Poczos, Barnabas, Salakhutdinov, Ruslan R, and Smola,
Alexander J. Deep sets. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and
Garnett, R. (eds.), Advances in Neural Information Processing Systems 30, pp. 3394–3404. Curran Associates,
Inc., 2017.
Zellers, Rowan, Yatskar, Mark, Thomson, Sam, and
Choi, Yejin. Neural motifs: Scene graph parsing
with global context. arXiv preprint arXiv:1711.06640,
abs/1711.06640, 2017. URL http://arxiv.org/
abs/1711.06640.
Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes,
Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong,
Huang, Chang, and Torr, Philip HS. Conditional random
fields as recurrent neural networks. In Proceedings of the
IEEE International Conference on Computer Vision, pp.
1529–1537, 2015.
| 1 |
Improved Space-efficient Linear Time
Algorithms for Some Classical Graph
Problems
Sankardeep Chakraborty1, Seungbum Jo2, and Srinivasa Rao Satti3

arXiv:1712.03349v1 [cs.DS] 9 Dec 2017

1 The Institute of Mathematical Sciences, HBNI, Chennai, India. [email protected]
2 University of Siegen, Siegen, Germany. [email protected]
3 Seoul National University, Seoul, South Korea. [email protected]
We provide space-efficient linear time algorithms for computing bridges, topological sorting, and strongly connected components improving on several recent results
of Elmasry et al. [STACS’15], Banerjee et al. [COCOON’16] and Chakraborty
et al. [ISAAC’16]. En route, we also provide another DFS implementation with a
weaker input graph representation assumption, without compromising on the time
and space bounds of the earlier results of Banerjee et al. [COCOON’16] and Kammer et al. [MFCS’16].
1 Introduction
Since the early days of designing graph algorithms, researchers have developed several approaches for testing whether a given undirected (or directed) graph G = (V, E) with n vertices
and m edges is biconnected and/or 2-edge connected (or, in the directed case, strongly connected), and for finding the cut
vertices and/or bridges of G. All of these methods use depth-first search (DFS) as the backbone to design the main algorithm. The classical linear time algorithms due to Tarjan [11, 12]
compute the so-called “low-point” values (which are defined in terms of a DFS-tree of G) for
every vertex v, and check some conditions on them to determine whether G has the desired
property. There are other linear time algorithms as well for these problems (see [10] and all
the references therein). All of these classical algorithms take O(m + n) time and O(n) words
(our model of computation is the standard word RAM model with word size w = Ω(lg n) bits)
of space. Our aim is to improve the space bounds of these algorithms without increasing the
running time.
1.1 Motivation and Related Work
Motivated mainly by the “big data” phenomenon among others, recently there has been a surge
of interest in improving the space complexity of the fundamental linear time graph algorithms
by paying little or no penalty in the running time i.e., reducing the working space of the classical
graph algorithms (which generally take O(n lg n) bits) to o(n lg n) bits without compromising
on time. Towards this, Elmasry et al. [7] gave, among others, an implementation for DFS
taking O(m + n) time and O(n lg lg n) bits of space. For sparse graphs (when m = O(n)), the
space bound was improved further to O(n) bits keeping the same linear time in [1]. Banerjee
et al. [1] gave, among others, a space efficient implementation for performing BFS using just
2n + o(n) bits of space and linear time, improving upon the result of [7]. Such algorithms for
a few other graph problems also have been considered recently [2, 3, 4, 6, 9].

  Time       Space (in bits)   DFS      Testing biconnectivity      Testing 2-edge connectivity   Topological   Testing strong
                                        & reporting cut vertices    & reporting bridges           sort          connectivity
  O(n + m)   O(n lg n)         [5]      [11]                        [12]                          [5]           [11]
  O(n + m)   O(n + m)          [1, 9]   [1]                         [1]                           This paper    This paper
  O(n + m)   O(n lg(m/n))      [3]      [3]                         [3]                           This paper    This paper
  O(n + m)   O(n lg lg n)      [7]      [9]                         This paper                    [7]           [7]

Table 1: Summary of our results.
1.2 Our Results
We assume that the input graph G, which is represented using adjacency array [1, 3, 7, 9], i.e.,
G is represented by an array of length |V | where the i-th entry stores a pointer to an array
that stores all the neighbors of the i-th vertex, is given in a read-only memory with a limited
read-write working memory, and write-only output. We count space in terms of the number of
bits in workspace used by the algorithms. Our main goal here is to improve the space bounds
of some of the classical and fundamental graph algorithms. We summarize all our main results
in Table 1. In this paper, we essentially complete the full spectrum of results regarding the
space bounds for these problems while keeping the running time linear, by providing or improving the
missing or existing algorithms in the recent space-efficient graph algorithm literature. Due to
lack of space, we provide only sketches of our proofs.
2 Testing 2-Edge Connectivity and Finding Bridges
In an undirected graph G, a bridge is an edge that when removed (without removing the
vertices) from a graph creates more components than previously in the graph. A (connected)
graph with at least two vertices is 2-edge-connected if and only if it has no bridge. Let T denote
the DFS tree of G. Following Kammer et al. [9], we call a tree edge (u, v) of T with u being
the parent of v full marked if there is a back edge from a descendant of v to a strict ancestor
of u, half marked if it is not full marked and there exists a back edge from a descendant of
v to u, and unmarked, otherwise. They use this definition to prove the following: (i) every
vertex u (except the root r) is a cut vertex exactly if at least one of the edges from u to one
of its children is either an unmarked edge or a half marked edge, and (ii) root r is a cut vertex
exactly if it has at least two children in T . Based on the above characterization, they gave
O(m + n) time and O(n lg lg n) bits algorithm to test/report if G has any cut vertex. Our main
observation is that we can give a similar characterization for bridges in G, and essentially using
a similar implementation, we can also obtain O(m + n) time and O(n lg lg n) bits algorithms
for testing 2-edge connectivity and reporting bridges of G. We start with the following lemma.
Lemma 1. A tree edge e = (u, v) in T is a bridge of G if and only if it is unmarked.
Proof sketch: If e is unmarked, then no descendant of v reaches u or any strict ancestor of
u, so deleting e would disconnect the graph; thus e has to be a bridge. In the other
direction, it is easy to see that if e is a bridge, it has to be an unmarked edge.
Now we state our theorem below.
Theorem 2. Given an undirected graph G, in O(m + n) time and O(n lg lg n) bits of space we
can determine whether G is 2-edge connected. If G is not 2-edge connected, then in the same
amount of time and space, we can compute and output all the bridges of G.
Proof sketch: Using Lemma 1 and an implementation similar to that in Section 3.2 of Kammer et al. [9] (based on stack compression and related tools), with a few modifications,
we can prove the theorem.
Note that the space bound of Theorem 2 improves the results of [1] and [3] for sufficiently
dense graphs (when m = ω(n lg lg n) and m = ω(n lg^{O(1)} n) respectively) while keeping the
same linear runtime (see Table 1).
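For reference, the characterization of Lemma 1 mirrors the classical low-point criterion: a tree edge (u, v) is a bridge exactly if no vertex in v's subtree has a back edge to u or above. The sketch below is the textbook O(n lg n)-bit algorithm [12], not the O(n lg lg n)-bit implementation of Theorem 2, and it assumes a simple undirected graph given as adjacency lists.

```python
def bridges(adj):
    """Classical low-point DFS for bridges; adj[v] is the list of neighbours of v."""
    n = len(adj)
    disc = [-1] * n          # discovery times
    low = [0] * n            # low-point values
    out = []
    timer = 0

    def dfs(root):
        nonlocal timer
        # Iterative DFS to avoid recursion limits; stack holds (vertex, parent, neighbour iterator).
        disc[root] = low[root] = timer; timer += 1
        stack = [(root, -1, iter(adj[root]))]
        while stack:
            v, parent, it = stack[-1]
            advanced = False
            for w in it:
                if disc[w] == -1:                      # tree edge: descend
                    disc[w] = low[w] = timer; timer += 1
                    stack.append((w, v, iter(adj[w])))
                    advanced = True
                    break
                elif w != parent:                      # back edge (simple graph assumed)
                    low[v] = min(low[v], disc[w])
            if not advanced:                           # finished v: propagate low-point
                stack.pop()
                if stack:
                    pv = stack[-1][0]
                    low[pv] = min(low[pv], low[v])
                    if low[v] > disc[pv]:              # no back edge from v's subtree above pv
                        out.append((pv, v))

    for v in range(n):
        if disc[v] == -1:
            dfs(v)
    return out

# Example: in the path 0-1-2 both edges are bridges.
print(bridges([[1], [0, 2], [1]]))   # [(1, 2), (0, 1)]
```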
3 DFS without Cross Pointers
Banerjee et al. [1] and subsequently Kammer et al. [9] gave O(m + n) bits and O(m + n) time
implementations of DFS improving on the bounds of [7] for sparse graphs. But both of these
DFS implementations assume that the input graph is represented using the adjacency array
along with cross pointers i.e., for undirected graphs, every neighbour v in the adjacency array
of a vertex u stores a pointer to the position of vertex u in the adjacency array of v. See [7] for
detailed definitions for directed graphs. We emphasize that this input assumption can double
the space usage, compared to the raw adjacency array in the worst case. In what follows, we
provide the proof sketch of a DFS implementation taking the same time and space bounds as
that of [1, 9] but without using the cross pointers. Our main theorem is as follows.
Theorem 3. Given a directed or undirected graph G, represented as adjacency array, we can
perform DFS traversal of G using O(m + n) bits and O(m + n) time.
Proof sketch: We essentially modify the proof of [1] which uses a bitvector A of length
O(m + n) having a one-to-one mapping with the unary encoding of the degree sequence to mark
the tree edges, and subsequently uses cross pointers to find the parent of any vertex during
backtracking as well as starting with next unvisited vertex after backtracking. We note that we
can represent the parents of all the vertices in another bitvector P of length O(m + n) (parallel
to A). Now to perform backtracking efficiently, we could use the constant time append only
structure (also with constant time rank/select) of Grossi et al. [8] along with the P array. With
these modifications, we could get rid of cross pointers without compromising on the running
time and space bound of the earlier algorithms.
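At a high level, the bitvectors A and P can be thought of as running parallel to the slots of the adjacency array, as in the illustrative sketch below. Plain Python lists and a linear scan stand in for the succinct rank/select structures of Grossi et al. [8], and setting the parent mark via an index lookup stands in for what the actual pointer-free algorithm achieves without such a scan; this is a didactic picture, not the space-efficient implementation.

```python
from itertools import accumulate

class AdjacencyArrayDFS:
    """Illustrates marking tree edges (A) and parents (P) in bitvectors laid out
    parallel to the unary-encoded degree sequence of an undirected adjacency array."""

    def __init__(self, adj):
        self.adj = adj
        # offset[v] = index of v's first slot; slot offset[v] + i stores neighbour adj[v][i].
        self.offset = [0] + list(accumulate(len(a) for a in adj))
        total = self.offset[-1]
        self.A = [0] * total   # A[slot] = 1 iff that slot holds a tree edge to a child
        self.P = [0] * total   # P[slot] = 1 iff that slot holds the owner's DFS parent

    def dfs(self, root=0):
        visited = [False] * len(self.adj)
        visited[root] = True
        stack = [(root, iter(enumerate(self.adj[root])))]
        order = [root]
        while stack:
            v, it = stack[-1]
            for i, w in it:
                if not visited[w]:
                    visited[w] = True
                    order.append(w)
                    self.A[self.offset[v] + i] = 1      # (v, w) becomes a tree edge
                    # Mark, inside w's slots, the position that stores v (its parent).
                    # The real algorithm sets this bit without the linear index scan.
                    j = self.adj[w].index(v)
                    self.P[self.offset[w] + j] = 1
                    stack.append((w, iter(enumerate(self.adj[w]))))
                    break
            else:
                stack.pop()
        return order

    def parent(self, v):
        """Recover v's parent from P alone (O(1) with rank/select in the real structure)."""
        for s in range(self.offset[v], self.offset[v + 1]):
            if self.P[s]:
                return self.adj[v][s - self.offset[v]]
        return None

# Example: path 0-1-2; parent(2) is recovered from P without cross pointers.
g = AdjacencyArrayDFS([[1], [0, 2], [1]])
print(g.dfs(0), g.parent(2))   # [0, 1, 2] 1
```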
4 Testing Strong Connectivity and Topological Sorting
Towards giving improved space efficient algorithms for strong connectivity (SC) and topological
sorting (TS), we first improve Lemma 4.1 of [7] which says the following: if DFS of a directed
graph G takes T (n, m) time and S(n, m) space, then we can output the vertices of G in
reverse postorder of the DFS tree T of G taking O(T (n, m)) time and O(S(n, m) + n lg lg n)
space. Combining this lemma with the classical algorithms for SC and TS [5] they obtained
O(n lg lg n) bits and O(m + n) time algorithms for both these problems. We improve these by
showing the following,
Theorem 4. If DFS of a directed graph G takes T (n, m) time and S(n, m) space, then the
vertices of G can be output in reverse postorder with respect to a DFS forest of G taking
O(T (n, m)) time and O(S(n, m) + m + n) space. As a result, we can also solve SC and TS in
O(m + n) time using O(n + m) bits of space.
Proof sketch: We use the DFS algorithm of Theorem 3 to first mark all the tree edges in
the array A. Now we start with the rightmost leaf vertex of the DFS tree and use rank/select
operations [8] on A and P (as defined in the proof of Theorem 3) carefully to traverse the tree
in the reverse direction (along with standard DFS backtracking, etc.) to generate the reverse postorder
sequence. Now using this as the backbone of the classical algorithms, we obtain O(m + n)-bit
and O(n + m)-time algorithms for SC and TS.
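For intuition, a plain-memory sketch of how a reverse-postorder stream drives topological sorting and a Kosaraju-style strong-connectivity test is given below; the space-efficient algorithms of Theorems 4 and 5 replace the explicit arrays used here with the bitvector machinery above.

```python
def reverse_postorder(adj):
    """Vertices of a digraph in reverse postorder of a DFS forest (iterative DFS)."""
    n = len(adj)
    visited = [False] * n
    post = []
    for s in range(n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                post.append(v)
    return post[::-1]        # reverse postorder = a topological order when the input is a DAG

def is_strongly_connected(adj):
    """Kosaraju-style check: DFS in the reversed graph from the first vertex of the
    reverse postorder reaches every vertex iff the graph is strongly connected."""
    n = len(adj)
    if n == 0:
        return True
    order = reverse_postorder(adj)
    radj = [[] for _ in range(n)]
    for v in range(n):
        for w in adj[v]:
            radj[w].append(v)
    seen = {order[0]}
    stack = [order[0]]
    while stack:
        v = stack.pop()
        for w in radj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

print(reverse_postorder([[1, 2], [3], [3], []]))   # e.g. [0, 2, 1, 3]
print(is_strongly_connected([[1], [2], [0]]))      # True: a directed 3-cycle
```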
Theorem 4 improves the result of [7] for sparse (when m = O(n)) graphs. Now if we use the
DFS algorithm of Chakraborty et al. [3] and modify it suitably to perform the traversal of the
DFS tree in reverse, we obtain the following result.
Theorem 5. If DFS of a directed graph G takes T (n, m) time and S(n, m) space, then the
vertices of G can be output in reverse postorder with respect to a DFS forest of G taking
O(T (n, m)) time and O(S(n, m) + n lg(m/n)) space. As a result, we can also solve SC and TS
using O(m + n) time and O(n lg(m/n)) bits.
References
[1] N. Banerjee, S. Chakraborty, and V. Raman. Improved space efficient algorithms for
BFS, DFS and applications. In 22nd COCOON, volume 9797, pages 119–130. Springer,
LNCS, 2016.
[2] N. Banerjee, S. Chakraborty, V. Raman, S. Roy, and S. Saurabh. Time-space tradeoffs for
dynamic programming in trees and bounded treewidth graphs. In 21st COCOON, volume
9198, pages 349–360. springer, LNCS, 2015.
[3] S. Chakraborty, V. Raman, and S. R. Satti. Biconnectivity, chain decomposition and stnumbering using O(n) bits. In 27th ISAAC, volume 64 of LIPIcs, pages 22:1–22:13. Schloss
Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
[4] S. Chakraborty and S. R. Satti. Space-efficient algorithms for Maximum Cardinality Search,
Stack BFS, Queue BFS and applications. In 23rd COCOON 2017, Hong Kong, China,
August 3-5, 2017, pages 87–98, 2017.
[5] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms.
[6] S. Chakraborty, V. Raman, and S. R. Satti. Biconnectivity, st-numbering and other applications of DFS using O(n) bits. J. Comput. Syst. Sci., 90:63–79, 2017.
[7] A. Elmasry, T. Hagerup, and F. Kammer. Space-efficient basic graph algorithms. In 32nd
STACS, pages 288–301, 2015.
[8] R. Grossi and G. Ottaviano. The wavelet trie: maintaining an indexed sequence of strings
in compressed space. In 31st PODS, pages 203–214, 2012.
[9] F. Kammer, D. Kratsch, and M. Laudahn. Space-efficient biconnected components and
recognition of outerplanar graphs. In 41st MFCS, 2016.
[10] J. M. Schmidt. A simple test on 2-vertex- and 2-edge-connectivity. Inf. Process. Lett.,
113(7):241–244, 2013.
[11] R. E. Tarjan. Depth-first search and linear graph algorithms. SICOMP, 1(2):146–160,
1972.
[12] R. E. Tarjan. A note on finding the bridges of a graph. Inf. Pro. Lett., 2(6):160–161, 1974.
| 8 |
arXiv:1704.01458v1 [math.ST] 5 Apr 2017
β-mixing and moments properties of a non-stationary
copula-based Markov process
Fabio Gobbi
Sabrina Mulinacci
University of Bologna - Department of Statistics
April 6, 2017
Abstract
This paper provides conditions under which a non-stationary copula-based Markov process
is β-mixing. We introduce, as a particular case, a convolution-based gaussian Markov process
which generalizes the standard random walk allowing the increments to be dependent.
JEL classification: C22,C10
Mathematics Subject Classification (2010): 62M10, 62H20
Keywords: Markov process, copula, β-mixing, gaussian process.
1 Introduction
In this paper we analyze the temporal dependence properties satisfied by a discrete-time non-stationary Markov process. Temporal dependence is relevant since it permits one to verify how well
theoretical models explain the temporal persistence observed in financial data. Moreover, it is also a
useful tool to establish large sample properties of estimators for dynamic models. In particular,
in this paper we analyze the β-mixing property and we give sufficient conditions that ensure this
property be satisfied.
In the copula approach to univariate time series modelling, the finite dimensional distributions
are generated by copulas. Darsow et al. (1992) provide necessary and sufficient conditions for a
copula-based time series to be a Markov process. Recent literature on this topic has mainly focused
on the stationary case. Chen and Fan (2006) introduce a copula-based strictly stationary first order
Markov process generated from (G0 (·), C(·, ·, α0 )) where G0 (·) is the invariant distribution of Yt
and C(·, ·, α0 ) is the parametric copula for (Yt−1 , Yt ). The authors show that the β-mixing temporal dependence measure is purely determined by the properties of copulas and present sufficient
conditions to ensure that processes (Yt )t based on gaussian and EFGM copulas are geometric β-mixing. Beare (2010) shows that all Markov models generated via symmetric copulas with
positive and square integrable densities are geometric β-mixing. Many commonly used bivariate
copulas without tail dependence such as gaussian, EFGM and Frank copulas satisfy this condition.
Chen et al. (2009) show that Clayton, Gumbel and Student’s t copula based Markov models are
geometrically ergodic which is a stronger condition than the geometric β-mixing.
In this paper we focus on Markov processes where some dependence between each state variable
and increment is allowed and modeled through a time-invariant copula. In particular, we introduce
a gaussian Markov process, which is non-stationary and generalizes the classical gaussian random
walk, and we study related moment properties and provide conditions under which the process is
β-mixing.
The paper is organized as follows. Section 2 presents a general result on the β-mixing properties
satisfied by non-stationary Markov processes. Section 3 restricts the study to the gaussian case.
Section 4 concludes.
2 Copula-based Markov processes and β-mixing properties
Throughout the paper Y = (Yt )t∈Z is a discrete time Markov process. Thanks to the seminal
paper of Darsow et al. (1992), the markovianity of a stochastic process can be characterized
through a specific requirement that the copulas, representing the dependence structure of the
finite dimensional distributions induced by the stochastic process (for a detailed discussion on
copulas see Nelsen (2006), Joe (1997), Cherubini et al. (2012) and Durante and Sempi (2015)),
must satisfy. In particular, in Darsow et al (1992) it is proved that the Chapman-Kolmogorov
equations for transition probabilities are equivalent to the requirement that, if Ci,j is the copula
associated to the vector (Yi , Yj ), then
Cs,t(u, v) = (Cs,r ∗ Cr,t)(u, v) = ∫_0^1 (∂/∂w) Cs,r(u, w) · (∂/∂w) Cr,t(w, v) dw,   ∀ s < r < t.
As a consequence, since Y is a discrete times Markov process, if we assume that the set of bivariate
copulas Ct,t+1 (representing the dependence structure of the stochastic process at two adjacent
times) is given for t ∈ Z and k > 0, then necessarily (we remind that the ∗-operator is associative)
Ct,t+k (u, v) = Ct,t+k−1 ∗ Ct+k−1,t+k (u, v) = Ct,t+1 ∗ Ct+1,t+2 ∗ · · · ∗ Ct+k−1,t+k (u, v).
(1)
Notice that, in the stationary case considered in Beare (2010), Ct,t+1 = C for all t ∈ Z, therefore
all bivariate copulas Ct,t+k are functions of the copula C and of the lag k and not of the time t. In
this paper we extend the study to the more general non-stationary case. In particular we analyze
the temporal dependence problem with a special attention to mixing properties.
The notion of β-mixing was introduced by Volkonskii and Rozanov (1959 and 1961) and was
attributed there to Kolmogorov. Given a (not necessarily stationary) sequence of random variables
Y = (Yt)t∈Z, let F_t^l be the σ-field F_t^l = σ(Ys, t ≤ s ≤ l) with −∞ ≤ t ≤ l ≤ +∞, and let

β̃(F_{−∞}^{t}, F_{t+k}^{+∞}) = sup_{{Ai},{Bj}} (1/2) Σ_{i=1}^{I} Σ_{j=1}^{J} |P(Ai ∩ Bj) − P(Ai)P(Bj)|,   (2)

where the supremum is taken over all finite partitions {A1, . . . , AI} and {B1, . . . , BJ} of Ω such that Ai ∈ F_{−∞}^{t} for each i and Bj ∈ F_{t+k}^{+∞} for each j. Define the following dependence coefficient

βk = sup_{t∈Z} β̃(F_{−∞}^{t}, F_{t+k}^{+∞}).
We say that the sequence (Yt )t∈Z is β−mixing (or absolutely regular) if βk → 0 as k → +∞.
In the next theorem we give conditions on the set of copulas Ct,t+1, t ∈ Z, in order to guarantee that
the resulting Markov process is β-mixing. These conditions are based on specific requirements on
the maximal correlation coefficients of the copulas Ct,t+1 . We remind that the maximal correlation
η of a copula C is given by

η = sup_{f,g} ∫_0^1 ∫_0^1 f(x) g(y) C(dx, dy),

where f, g ∈ L²([0, 1]), ∫_0^1 f(x) dx = ∫_0^1 g(x) dx = 0 and ∫_0^1 f²(x) dx = ∫_0^1 g²(x) dx = 1, and we refer
to Beare (2010) and Rényi (1959) for more details.
Theorem 2.1. Let Y = (Yt )t∈Z be a Markov process. Let Ct,t+1 be the copula associated to
the vector (Yt , Yt+1 ) for t ∈ Z that we assume to be absolutely continuous, with symmetric and
square-integrable density ct,t+1 so that (ct,t+1 )t∈Z is uniformly bounded in L2 ([0, 1]).
If the maximal correlation coefficients ηt of Ct,t+1 satisfy
η̂ = sup_{t∈Z} ηt < 1,   (3)
then Y is β-mixing.
Proof. The proof follows that of Theorem 3.1 in Beare (2010), who proves a similar result for stationary copula-based Markov processes. First of all, since the stochastic process is Markovian, (2) can be rewritten in terms of the cumulative distribution functions of (Yt, Yt+k), Yt and Yt+k (Ft,t+k, Ft and Ft+k, respectively) and the total variation norm ‖ · ‖_TV (see Bradley, 2007) and then, applying Sklar’s theorem, we can write

β̃(F_{−∞}^{t}, F_{t+k}^{+∞}) = (1/2) ‖Ft,t+k(x, y) − Ft(x)Ft+k(y)‖_TV
= (1/2) ‖Ct,t+k(Ft(x), Ft+k(y)) − Ft(x)Ft+k(y)‖_TV
≤ (1/2) ‖Ct,t+k(u, v) − uv‖_TV.

From (1) it follows that all bivariate copulas of type Ct,t+k for t ∈ Z and k ≥ 1 are absolutely continuous: let us denote their density as ct,t+k. Then

β̃(F_{−∞}^{t}, F_{t+k}^{+∞}) ≤ (1/2) ‖ct,t+k(u, v) − 1‖_{L¹} ≤ (1/2) ‖ct,t+k(u, v) − 1‖_{L²}

and

βk ≤ (1/2) sup_{t∈Z} ‖ct,t+k(u, v) − 1‖_{L²}.

Since ct,t+1 is a symmetric square-integrable joint density with uniform margins, it admits the following series expansion in terms of a complete orthonormal sequence (φi)i≥1 in L²[0, 1],

ct,t+1(u, v) = 1 + Σ_{i=1}^{+∞} λi,t φi(u) φi(v),

where the eigenvalues (λi,t)i form a square-summable sequence of nonnegative real numbers: notice that, as proved in Lancaster (1958),

max_{i≥1} λi,t = ηt.   (4)

Applying (1), we get

ct,t+k(u, v) = 1 + Σ_{i=1}^{+∞} ( Π_{j=0}^{k−1} λi,t+j ) φi(u) φi(v).

Then, using (4) and (3), we get

‖ct,t+k(u, v) − 1‖_{L²} = ‖ Σ_{i=1}^{+∞} ( Π_{j=0}^{k−1} λi,t+j ) φi(u) φi(v) ‖_{L²}
= [ Σ_{i=1}^{+∞} Π_{j=0}^{k−1} λ²_{i,t+j} ]^{1/2}
= [ Σ_{i=1}^{+∞} λ²_{i,t} Π_{j=1}^{k−1} λ²_{i,t+j} ]^{1/2}
≤ [ Σ_{i=1}^{+∞} λ²_{i,t} Π_{j=1}^{k−1} η²_{t+j} ]^{1/2}
≤ η̂^{k−1} [ Σ_{i=1}^{+∞} λ²_{i,t} ]^{1/2} = η̂^{k−1} ‖ct,t+1(u, v) − 1‖_{L²}.   (5)

Therefore

βk ≤ (1/2) η̂^{k−1} sup_{t∈Z} ‖ct,t+1(u, v) − 1‖_{L²},

which, since (ct,t+1)t∈Z is uniformly bounded in L²([0, 1]), tends to zero as k → +∞.
3 A gaussian convolution-based Markov process
From now on, we assume that the Markov process Y is obtained through
Yt = Yt−1 + ξt,   Y0 = 0,   (6)
where (ξt )t≥1 is a sequence of identically distributed random variables such that ξt is dependent
on Yt−1 for each t. The dependence structure is modelled by a time-invariant copula function C.
The process defined in (6) is not stationary. However, we can determine the distribution of
Yt for each t thanks to the C-convolution operator (denoted by ∗^C), introduced in Cherubini et
al. (2011) as a tool to recover the distribution of the sum of two dependent random variables.
As shown in Cherubini et al. (2011, 2012 and 2015) the C-convolution technique may be used
in the construction of dependent increments stochastic processes like (6). More precisely, if Ft−1
is the cumulative distribution function of Yt−1 and Ht that of ξt , we may recover the cumulative
distribution function of Yt iterating the C-convolution for all t
F_t(y_t) = (F_{t−1} ∗^C H_t)(y_t) = ∫_0^1 D_1C(w, H_t(y_t − F_{t−1}^{−1}(w))) dw,   t ≥ 2,   (7)
while the copula associated to (Yt−1, Yt) is

C_{t−1,t}(u, v) = ∫_0^u D_1C(w, H_t(F_t^{−1}(v) − F_{t−1}^{−1}(w))) dw,   t ≥ 2,   (8)

where D_1C(u, v) = ∂C(u, v)/∂u.
Equations (7) and (8) provide the ingredients to construct discrete times Markov processes
according to Darsow et al. (1992).
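For illustration, (7) can be evaluated numerically when C is a gaussian copula with parameter ρ, for which the conditional copula is D_1C(w, z) = Φ((Φ^{−1}(z) − ρΦ^{−1}(w))/√(1 − ρ²)). The midpoint rule, the grid size and the standard normal choices below are assumptions of this sketch, not part of the model.

```python
import numpy as np
from scipy.stats import norm

def c_convolution_cdf(F_prev_inv, H_cdf, rho, y, n_grid=2000):
    """Numerical evaluation of (7): F_t(y) = int_0^1 D1C(w, H_t(y - F_{t-1}^{-1}(w))) dw
    for a gaussian copula with parameter rho, via the midpoint rule on a w-grid."""
    w = (np.arange(n_grid) + 0.5) / n_grid
    z = H_cdf(y - F_prev_inv(w))
    # Conditional gaussian copula: Phi((Phi^{-1}(z) - rho * Phi^{-1}(w)) / sqrt(1 - rho^2))
    d1c = norm.cdf((norm.ppf(z) - rho * norm.ppf(w)) / np.sqrt(1.0 - rho**2))
    return d1c.mean()

# Example: one step of (6) with Y_1 = xi_1 ~ N(0, 1), xi_2 ~ N(0, 1) and rho = -0.5.
F1_inv = norm.ppf            # F_1^{-1}
H = norm.cdf                 # H_2
print(c_convolution_cdf(F1_inv, H, rho=-0.5, y=0.0))   # close to 0.5 by symmetry
```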
Our model (6) is a modified version of a random walk process where the independence
assumption for the innovations (ξt )t≥1 is no longer required: however, its weakness is that in most
cases the distribution function cannot be expressed in closed form and it may be evaluated only
numerically.
From now on we assume that innovations (ξt )t≥1 are gaussian identically distributed with zero
mean and standard deviation σξ and that the copula between Yt−1 and ξt is a (stationary) gaussian
copula with constant parameter ρ for all t. This way, the distribution of Yt is gaussian for all t
and, more specifically, in section 4.3.1 of Cherubini et al. (2016) it is shown that
Yt ∼ N(0, Vt²),   (9)

where

Vt² = Var(Yt) = V1² + (t − 1) σξ² + 2ρ σξ Σ_{i=1}^{t−1} Vt−i,   t ≥ 2,   (10)
where V1² = σξ² since by assumption Y1 = ξ1. Moreover, the copula between Yt and Yt+1 is gaussian with parameter

τt,t+1 = (Vt + ρσξ) / Vt+1,   t ≥ 2,

since E[Yt Yt+1] = Vt² + ρ Vt σξ.
The limiting behavior of the standard deviation Vt has also been analyzed in Cherubini et al. (2016), where it is proved that

lim_{t→+∞} Vt = −σξ/(2ρ)  if ρ ∈ (−1, 0),   and   lim_{t→+∞} Vt = +∞  otherwise.

Notice that only in the case of negative correlation with the increments does the standard deviation of the levels not explode: in the following we will restrict the analysis to the case ρ ∈ (−1, 0).
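A short numerical check of recursion (10) and of the limit −σξ/(2ρ) for ρ ∈ (−1, 0) is sketched below; the parameter values are arbitrary.

```python
import numpy as np

def variance_path(sigma_xi, rho, T):
    """Standard deviations V_t of Y_t from recursion (10), with V_1 = sigma_xi."""
    V = [sigma_xi]
    for t in range(2, T + 1):
        # V_t^2 = V_1^2 + (t - 1) sigma_xi^2 + 2 rho sigma_xi (V_1 + ... + V_{t-1})
        V2 = sigma_xi**2 + (t - 1) * sigma_xi**2 + 2 * rho * sigma_xi * sum(V)
        V.append(np.sqrt(V2))
    return np.array(V)

sigma_xi, rho = 1.0, -0.4
V = variance_path(sigma_xi, rho, 200)
print(V[-1], -sigma_xi / (2 * rho))   # V_t approaches -sigma_xi / (2 rho) = 1.25
```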
3.1 Moments and autocorrelation function
In this subsection we study the behavior of moments and autocorrelation functions of the process
(Yt )t≥1 when t → +∞. It is worth recalling that in the standard random walk model the
k-th order autocorrelation function of (Yt )t≥1 tends to 1 as t → +∞, for each lag k. In our more
general setting, this is no longer true. The limit of the k-th order autocorrelation function of
(Yt )t≥1 is a function of k and ρ as the following proposition shows.
Proposition 3.1. Let ρ ∈ (−1, 0). The k-th order autocorrelation function of (Yt )t≥1 tends to
(1 − 2ρ2 )k for any k ≥ 1 as t → +∞.
Proof. As proved in section 4.3.1 in Cherubini et al. (2016), using the fact that the ∗-product
of two gaussian copulas has a parameter given by the product of the parameters of the copulas
involved in the ∗-product, we have that the copula between Yt and Yt+k is gaussian with parameter
τt,t+k = Π_{s=0}^{k−1} (Vt+s + ρσξ) / Vt+s+1.

Therefore, since as t → +∞ and for any s ≥ 1

(Vt+s + ρσξ) / Vt+s+1 → (−σξ/(2ρ) + ρσξ) / (−σξ/(2ρ)) = 1 − 2ρ²,   (11)
we easily get the result.
On the other hand, the innovations (ξt )t≥1 are no longer serially independent as in the random
walk case and the k-th order autocorrelation function approaches to a limit which again depends
on ρ and k.
Proposition 3.2. Let ρ ∈ (−1, 0). The k-th order autocorrelation function of (ξt )t≥1 tends to
−ρ2 (1 − 2ρ2 )k−1 for any k ≥ 1 as t → +∞.
Proof. We compute first the autocovariance of order k, with k ≥ 1, E[ξt ξt+k ]. We have
E[ξt ξt+k ] = E[(Yt − Yt−1 )(Yt+k − Yt+k−1 )] =
= E[Yt Yt+k ] − E[Yt Yt+k−1 ] − E[Yt−1 Yt+k ] + E[Yt−1 Yt+k−1 ] =
= τt,t+k Vt Vt+k − τt,t+k−1 Vt Vt+k−1 − τt−1,t+k Vt−1 Vt+k + τt−1,t+k−1 Vt−1 Vt+k−1 .
Since for any fixed k ≥ 1, τt,t+k → (1 − 2ρ²)^k and Vt → −σξ/(2ρ) as t → +∞, we get

E[ξt ξt+k] → (σξ²/(4ρ²)) [ (1 − 2ρ²)^k − (1 − 2ρ²)^{k−1} − (1 − 2ρ²)^{k+1} + (1 − 2ρ²)^k ] = −ρ² σξ² (1 − 2ρ²)^{k−1}

as t → +∞.
Moreover, it is immediate to obtain the statement of the proposition, since as t → +∞, corr(ξt, ξt+k) → −ρ² (1 − 2ρ²)^{k−1}.
3.2 β-mixing properties
In our gaussian framework ct,t+1 is the density of a gaussian copula for which it is well known
that the maximal correlation coefficient is equal to the absolute value of the simple correlation
coefficient (see Lancaster, 1957). Therefore, according to the notation of Theorem 2.1, for each t,
ηt = |τt,t+1 |. The following results, which is an application of Theorem 2.1, holds
Corollary 3.1. The Markov process defined by (6) with ξt ∼ N (0, σξ ) and ρ ∈ (−1, 0) is β-mixing.
Proof. Firstly notice that
|τt,t+1| < 1,   ∀t.

In fact this is equivalent to (Vt + ρσξ)² < V_{t+1}², which is always verified since ρ² < 1 by assumption. Thanks to (11), since |1 − 2ρ²| < 1, we have that |τt,t+1| = ηt is bounded by a constant smaller than 1, that is, (3) is satisfied. Furthermore, it is not hard to prove that for any t

‖ct,t+1(u, v) − 1‖²_{L²} = τ_{t,t+1}² / (1 − τ_{t,t+1}²) ≤ η̂² / (1 − η̂²).
Thus Theorem 2.1 applies.
4 Concluding remarks
In this paper we provide conditions under which a non-stationary copula-based Markov process is β-mixing. Our results represent a generalization of those in Beare (2010), where the author considers
the stationary case. Our analysis is focused on the particular case of a gaussian Markov process
with dependent increments that represents a generalization of the standard gaussian random walk.
In this particular non-stationary setting it is proved that the k-th order autocorrelation function
of the process does not converge to 1, as in the random walk case, but to a quantity that depends
on the lag and the correlation between the state variable and the innovation, which is assumed to
be time-invariant. Additionally, it is proved that the process satisfies the conditions required to be
β-mixing.
References
[1] Beare B. (2010): ”Copulas and temporal dependence”, Econometrica, 78(1), 395-410.
[2] Bradley R.C. (2007): Introduction to strong mixing conditions, vols. 1-3. Kendrick Press. herber
City.
[3] Chen X., Fan Y. (2006): ”Estimation of Copula-Based Semiparametric Time Series Models”,
Journal of Econometrics, 130, 307335.
[4] Chen X., Wu W.B., Yi Y. (2009): ”Efficient estimation of copula-based semiparametric Markov
models”, The Annals of Statistics, 37(6B) 42144253.
[5] Cherubini U., Gobbi F., Mulinacci S., (2016) ”Convolution Copula Econometrics”, SpringerBriefs in Statistics.
[6] Cherubini U., Gobbi F., Mulinacci S., Romagnoli S. (2012): Dynamic Copula Methods in
Finance, John Wiley & Sons.
[7] Cherubini, U., Mulinacci S., Romagnoli S. (2011) ”A Copula-based Model of Speculative Price
Dynamics in Discrete Time”, Journal of Multivariate Analysis, 102, 1047-1063.
[8] Darsow W.F. - Nguyen B. - Olsen E.T.(1992): ”Copulas and Markov Processes”, Illinois Journal of Mathematics, 36, 600-642.
[9] Durante F., Sempi C. (2015) Principles of copula theory, Boca Raton: Chapman and Hall/CRC
[10] Joe H. (1997): Multivariate Models and Dependence Concepts, Chapman & Hall, London
[11] Lancaster,H.O. (1957) ”Some properties of the bivariate normal distribution considered in the
form of a contingency table”, Biometrika, 44, 289-292.
[12] Lancaster H.O. (1958): ”The structure of bivariate distributions”, Annals of Mathematical
Statistics, 29, 719-736.
[13] Nelsen R.(2006): An Introduction to Copulas, Springer
[14] Renyi A. (1959): ”On measures of dependence”, Acta Mathematica Academiae Scientiarum
Hungaricae, 10, 441-451.
[15] Volkonskii V.A., Rozanov Yu.A. (1959): ”Some limit theorems for random functions I”, Theor.
Probab. Appl., 4, 178-197.
[16] Volkonskii V.A., Rozanov Yu.A. (1961): ”Some limit theorems for random functions II”,
Theor. Probab. Appl., 6, 186-198.
| 10 |
Pragmatic-Pedagogic Value Alignment
arXiv:1707.06354v2 [cs.AI] 5 Feb 2018
Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu,
Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry,
Thomas L. Griffiths, and Anca D. Dragan
Abstract As intelligent systems gain autonomy and capability, it becomes vital to
ensure that their objectives match those of their human users; this is known as the
value-alignment problem. In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and
adapting to their users’ objectives as they go. We argue that a meaningful solution to
value alignment must combine multi-agent decision theory with rich mathematical
models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. We present a solution to the cooperative inverse reinforcement
learning (CIRL) dynamic game based on well-established cognitive models of decision making and theory of mind. The solution captures a key reciprocity relation: the
human will not plan her actions in isolation, but rather reason pedagogically about
how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. To our knowledge, this work constitutes the
first formal analysis of value alignment grounded in empirically validated cognitive
models.
Key words: Value Alignment, Human-Robot Interaction, Dynamic Game Theory
1 Introduction
The accelerating progress in artificial intelligence (AI) and robotics is bound to have
a substantial impact in society, simultaneously unlocking new potential in augmenting and transcending human capabilities while also posing significant challenges to
safe and effective human-robot interaction. In the short term, integrating robotic systems into human-dominated environments will require them to assess the intentions
All authors are with the University of California, Berkeley.
e-mail: {jfisac,mgates,jhamrick,changliu,dhm,malayandi,dhruvmalik,
shankar_sastry,tom_griffiths,anca}@berkeley.edu
and preferences of their users in order to assist them effectively, while avoiding failures due to poor coordination. In the long term, ensuring that advanced and highly
autonomous AI systems will be beneficial to individuals and society will hinge on
their ability to correctly assimilate human values and objectives [1]. We envision
the short- and long-term challenges as being inherently coupled, and predict that
improving the ability of robots to understand and coordinate with their human users
will inform solutions to the general AI value-alignment problem.
Successful value alignment requires moving from typical single-agent AI formulations to robots that account for a second agent—the human—who determines
what the objective is. In other words, value alignment is fundamentally a multi-agent
problem. Cooperative Inverse Reinforcement Learning (CIRL) formulates value
alignment as a two-player game in which a human and a robot share a common
reward function, but only the human has knowledge of this reward [2]. In practice,
solving a CIRL game requires more than multi-agent decision theory: we are not
dealing with any multi-agent system, but with a human-robot system. This poses
a unique challenge in that humans do not behave like idealized rational agents [3].
However, humans do excel at social interaction and are extremely perceptive of the
mental states of others [4, 5]. They will naturally project mental states such as beliefs and intentions onto their robotic collaborators, becoming invaluable allies in
our robots’ quest for value alignment.
In the coming decades, tackling the value-alignment problem will be crucial to
building collaborative robots that know what their human users want. In this paper,
we show that value alignment is possible not just in theory, but also in practice. We
introduce a solution for CIRL based on a model of the human agent that is grounded
in cognitive science findings regarding human decision making [6] and pedagogical
reasoning [7]. Our solution leverages two closely related insights to facilitate value
alignment. First, to the extent that improving their collaborator’s understanding of
their goals may be conducive to success, people will tend to behave pedagogically,
deliberately choosing their actions to be informative about these goals. Second, the
robot should anticipate this pedagogical reasoning in interpreting the actions of its
human users, akin to how a pragmatic listener interprets a speaker’s utterance in
natural language. Jointly, pedagogical actions and pragmatic interpretations enable
stronger and faster inferences among people [7]. Our result suggests that it is possible for robots to partake in this naturally-emerging equilibrium, ultimately becoming
more perceptive and competent collaborators.
2 Solving Value Alignment using Cognitive Models
2.1 Cooperative Inverse Reinforcement Learning (CIRL)
Cooperative Inverse Reinforcement Learning (CIRL) [2] formalizes value alignment
as a two-player game, which we briefly present here. Consider two agents, a human
H and a robot R, engaged in a dynamic collaborative task involving a (possibly
infinite) sequence of steps. The goal of both agents is to achieve the best possible
outcome according to some objective θ ∈ Θ . However, this objective is only known
to H. In order to contribute to the objective, R will need to make inferences about
θ from the actions of H (an Inverse Reinforcement Learning (IRL) problem), and
H will have an incentive to behave informatively so that R becomes more helpful,
hence the term cooperative IRL.
Formally, a CIRL game is a dynamic (Markov) game of two players (H and R),
described by a tuple ⟨S, {AH , AR }, T, {Θ , r}, P0 , γ⟩, where S is the set of possible
states of the world; AH , AR are the sets of actions available to H and R respectively;
T : S × S × AH × AR → [0, 1] is a discrete transition measure¹ over the next state, conditioned on the previous state and the actions of H and R: T(s′|s, aH , aR ); Θ is the
set of possible objectives; r : S × AH × AR × Θ → R is a cumulative reward function assigning a real value to every tuple of state and actions for a given objective:
r(s, aH , aR ; θ ); P0 : S × Θ → [0, 1] is a probability measure on the initial state and
the objective; γ ∈ [0, 1] is a geometric time discount factor making future rewards
gradually less valuable.
2.2 Pragmatic Robots for Pedagogic Humans
Asymmetric information structures in games (even static ones) generally induce an
infinite hierarchy of beliefs: our robot will need to maintain a Bayesian belief over
the human’s objectives to decide on its actions. To reason about the robot’s decisions, the human would in principle need to maintain a belief on the robot’s belief,
which will in turn inform her decisions, thereby requiring the robot to maintain a
belief on the human’s belief about its own belief, and so on [8]. In [2], it was shown
that an optimal pair of strategies can be found for any CIRL game by solving a
partially observed Markov decision process (POMDP). This avoids this bottomless
recursion as long as both agents are rational and can coordinate perfectly before the
start of the game.
Unfortunately, when dealing with human agents, rationality and prior coordination are nontrivial assumptions. Finding an equivalent tractability result for more
realistic human models is therefore crucial in using the CIRL formulation to solve
real-world value-alignment problems. We discover the key insight in cognitive studies of human pedagogical reasoning [7], in which a teacher chooses actions or utterances to influence the beliefs of a learner who is aware of the teacher’s intention.
The teacher can then exploit the fact that the learner can interpret utterances pragmatically. Infinite recursion is averted by finding a fixed-point relation between the
teacher’s best utterance and the learner’s best interpretation, exploiting a common
modeling assumption in Bayesian theory of mind: the learner models the teacher
as a noisily rational decision maker [9], who will be likelier to choose utterances
1
Note that the theoretical formulation is easily extended to arbitrary measurable sets; we limit our
analysis to finite state and objective sets for computational tractability and clarity of exposition.
causing the learner to place a high posterior belief on the correct hypothesis, given
the learner’s current belief. While in reality, the teacher cannot exactly compute the
learner’s belief, the model supposes that she estimates it (from the learner’s previous responses to her utterances), then introduces noise in her decisions to capture
estimation inaccuracies. This framework can predict complex behaviors observed
in human teaching-learning interactions, in which pedagogical utterances and pragmatic interpretations permit efficient communication [7].
We adopt an analogous modeling framework to that in [7] for value alignment,
with a critical difference: the ultimate objective of the human is not to explicitly
improve the robot’s understanding of the true objective, but to optimize the team’s
expected performance towards this objective. Pedagogic behavior thus emerges implicitly to the extent that a well-informed robot becomes a better collaborator.
2.3 Pragmatic-Pedagogic Equilibrium Solution to CIRL
The robot does not have access to the true objective θ , but rather estimates a belief
bR over θ . We assume that this belief on θ can be expressed parametrically (this
is always true if Θ is a finite set), and define ∆Θ to be the corresponding (finite-dimensional) parameter space, denoting R’s belief by bR ∈ ∆Θ. While in reality the
human cannot directly observe bR , we assume, as in [7], that she can compute it or
infer it from the robot’s behavior (and model estimation inaccuracies as noise in her
policy). We can then let Q : S × ∆Θ × AH × AR × Θ → R represent the state-action
value function of the CIRL game for a given objective θ , which we are seeking to
compute: if θ ∈ Θ is the true objective known to H, then Q(s, bR , aH , aR ; θ ) represents the best performance the team can expect to achieve if H chooses aH and R
chooses aR from state s, with R’s current belief being bR .
In order to solve for Q, we seek to establish an appropriate dynamic programming relation for the game, given a well-defined information structure and a model
of the human’s decision making. Since it is typically possible for people to predict
a robot’s next action if they see its beginning [10], we assume that H can observe
aR at each turn before committing to aH . A well-established model of human decision making in psychology and econometrics is the Luce choice rule, which models
people’s decisions probabilistically, making high-utility choices more likely than
those with lower utility [9]. In particular, we employ a common case of the Luce
choice rule, the Boltzmann (or soft-max) noisy rationality model [6], in which the
probability of a choice decays exponentially as its utility decreases in comparison
to competing options. The relevant utility metric in our case is the sought Q (which
captures H’s best expected outcome for each of her available actions aH ). Therefore
the probability that H will choose action aH has the form
πH(aH | s, bR , aR ; θ) ∝ exp( β Q(s, bR , aH , aR ; θ) ),   (1)
where β > 0 is termed the rationality coefficient of H and quantifies the concentration of H’s choices around the optimum; as β → ∞, H becomes a perfect rational
agent, while, as β → 0, H becomes indifferent to Q. The above expression can be
interpreted by R as the likelihood of action aH given a particular θ . The evolution
of R’s belief bR is then given (deterministically) by the Bayesian update
b′R(θ | s, bR , aR , aH ) ∝ πH(aH | s, bR , aR ; θ) bR(θ),   (2)
Jointly, (1) and (2) define a fixed-point equation analogous to the one in [7],
which states how R should pragmatically update bR based on a noisily rational
pedagogic aH . This amounts to a deterministic transition function for R’s belief,
b′R = fb(s, bR , aH , aR ). Crucially, however, the fixed-point relation derived here involves Q itself, which we have yet to compute.
Unlike H, R is modeled as a rational agent; however, not knowing the true θ , the
best R can do is to maximize2 the expectation of Q based on its current belief3 bR :
π*R(s, bR ) := arg max_{aR} Σ_{aH , θ} Q(s, bR , aH , aR ; θ) · πH(aH | s, bR , aR ; θ) bR(θ).   (3)
Combining (2) with the state transition measure T (s0 |s, aH , aR ), we can define the
Bellman equation for H under the noisily rational policy πH for any given θ ∈ Θ :
Q(s, bR , aH , aR ; θ) = r(s, aH , aR ; θ) + E_{s′, a′H}[ γ · Q(s′, b′R , a′H , π*R(s′, b′R ); θ) ],   (4)

where s′ ∼ T(s′|s, aH , aR ); b′R = fb(s, bR , aH , aR ); a′H ∼ πH(aH | s′, b′R , π*R(s′, b′R ); θ).
Note that H’s next action a′H implicitly depends on R’s action at the next turn.
Substituting (1-3) into (4), we obtain the sought dynamic programming relation
for the CIRL problem under a noisily rational-pedagogic human and a pragmatic
robot. The human is pedagogic because she takes actions according to (1), which
takes into account how her actions will influence the robot’s belief about the objective. The robot is pragmatic because it assumes the human is actively aware of how
her actions convey the objective, and interprets them accordingly.
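To make (1)–(3) concrete, the following sketch evaluates the Boltzmann-pedagogic human policy, the pragmatic belief update, and the robot's best response for a fixed Q-table on a small finite game. Dropping the belief argument from Q (and hence the fixed point discussed below), as well as all array shapes and parameter values, are simplifying assumptions of this illustration.

```python
import numpy as np

def human_policy(Q, s, a_R, theta, beta=5.0):
    """Eq. (1): Boltzmann-pedagogic human policy over a_H for a given objective theta.
    Q[s, a_H, a_R, theta] is a fixed table; the full solution also conditions Q on b_R."""
    logits = beta * Q[s, :, a_R, theta]
    p = np.exp(logits - logits.max())          # numerically stable softmax
    return p / p.sum()

def pragmatic_update(Q, s, b_R, a_R, a_H, beta=5.0):
    """Eq. (2): robot's Bayesian update, interpreting a_H as a pedagogic choice."""
    lik = np.array([human_policy(Q, s, a_R, th, beta)[a_H] for th in range(len(b_R))])
    post = lik * b_R
    return post / post.sum()

def robot_action(Q, s, b_R, beta=5.0):
    """Eq. (3): robot action maximizing expected Q under its belief and the
    anticipated human policy."""
    n_aR, n_th = Q.shape[2], Q.shape[3]
    values = np.zeros(n_aR)
    for a_R in range(n_aR):
        for th in range(n_th):
            pi_H = human_policy(Q, s, a_R, th, beta)
            values[a_R] += b_R[th] * np.dot(pi_H, Q[s, :, a_R, th])
    return int(np.argmax(values))

# Tiny example: 1 state, 2 human actions, 2 robot actions, 2 objectives.
Q = np.zeros((1, 2, 2, 2))
Q[0, :, :, 0] = [[1.0, 0.0], [0.0, 0.0]]   # objective 0 rewards (a_H = 0, a_R = 0)
Q[0, :, :, 1] = [[0.0, 0.0], [0.0, 1.0]]   # objective 1 rewards (a_H = 1, a_R = 1)
b = np.array([0.5, 0.5])
b_after = pragmatic_update(Q, s=0, b_R=b, a_R=0, a_H=0)
print(b_after, robot_action(Q, s=0, b_R=b_after))   # belief shifts toward objective 0
```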
The resulting problem is similar to a POMDP (in this case formulated in belief-state MDP form), with the important difference that the belief transition depends on
the value function itself. In spite of this complication, the problem can be solved in
backward time through dynamic programming: each Bellman update will be based
on a pragmatic-pedagogic fixed point that encodes an equilibrium between the Q
function (and therefore H’s policy for choosing her action) and the belief transition
(that is, R’s rule for interpreting H’s actions). Evidence in [7] suggests that people
are proficient at finding such equilibria, even though uniqueness is not guaranteed
in general; study of disambiguation is an open research direction.
2
We assume for simplicity that the optimum is unique or a well-defined disambiguation rule exists.
Note that this does not imply certainty equivalence, nor do we assume separation of estimation
and control: R is fully reasoning about how its actions and those of H may affect its future beliefs.
3
3 A Proof-of-Concept
We introduce the benchmark domain ChefWorld, a household collaboration setting
in which a human H seeks to prepare a meal with the help of an intelligent robotic
manipulator R. There are multiple possible meals that H may want to prepare using
the available ingredients, and R does not know beforehand which one she has chosen
(we assume H cannot or will not tell R explicitly). The team obtains a reward only
if H’s intended recipe is successfully cooked. If H is aware of R’s uncertainty, she
should take actions that give R actionable information, particularly the information
that she expects will allow R to be as helpful as possible as the task progresses.
Our problem has 3 ingredients, each with 2 or 3 states: spinach (absent, chopped),
tomatoes (absent, chopped, puréed), and bread (absent, sliced, toasted). Recipes correspond to (joint) target states for the food. Soup requires the tomatoes to be chopped
then puréed, the bread to be sliced then toasted, and no spinach. Salad requires the
spinach and tomatoes to be chopped, and the bread to be sliced then toasted. H and
R can slice or chop any of the foods, while only R can purée tomatoes or toast bread.
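For illustration, the ChefWorld food state space admits a compact encoding. The sketch below (Python; the names and the recipe encoding are ours, chosen to mirror the description above rather than taken from the paper's implementation) enumerates the ingredient states and the two recipe goals used in Fig. 1.

```python
from itertools import product

# Ingredient state spaces as described above.
SPINACH = ("absent", "chopped")
TOMATO  = ("absent", "chopped", "pureed")
BREAD   = ("absent", "sliced", "toasted")

STATES = list(product(SPINACH, TOMATO, BREAD))  # 2 * 3 * 3 = 18 joint food states

# Recipes correspond to (joint) target states; reward is obtained only on an exact match.
RECIPES = {
    "soup":  ("absent",  "pureed", "toasted"),   # no spinach, pureed tomato, toasted bread
    "salad": ("chopped", "chopped", "toasted"),  # chopped spinach and tomato, toasted bread
}

def reward(food_state, theta):
    """Team reward: 1 iff H's intended recipe theta has been cooked."""
    return 1.0 if food_state == RECIPES[theta] else 0.0
```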
A simple scenario with the above two recipes is solved using discretized belief-state value iteration and presented as an illustrative example in Fig. 1. R has a wrong
initial belief about H’s intended recipe. Under standard IRL, H fails to communicate
her recipe. But if R is pragmatic and H is pedagogic, H is able to change R’s belief
and they successfully collaborate to make the meal.
In addition, we computed the solution to games with 4 recipes through a modification of POMDP value iteration (Table 1). In the pragmatic-pedagogic CIRL equilibrium with β = 5, H and R successfully cook the correct recipe 97% of the time,
whereas under the standard IRL framework (with H acting as an expert disregarding
R’s inferences) they only succeed 46% of the time—less than half as often.
Fig. 1 Simple collaborative scenario with 2 possible objectives. The human H wants soup but the
robot R initially believes her goal is salad. Even under a full POMDP formulation, if R reasons “literally” about H’s actions using standard IRL (assuming H behaves as if R knew the true objective),
it fails to infer the correct objective. Conversely, under the pragmatic-pedagogic CIRL equilibrium,
R views H as incentivized to choose pedagogic actions that will fix R’s belief when needed. Under
the pragmatic interpretation, H’s wait action in turn 2 (instead of adding spinach, which would be
preferred by a pedagogic H wanting salad) indicates H wants soup. While H’s actions are the same
under both solutions, only the pragmatic R achieves value alignment and completes the recipe.
        Boltzmann (β = 1)   Boltzmann (β = 2.5)   Boltzmann (β = 5)   Rational
IRL     0.2351              0.3783                0.4555              0.7083
CIRL    0.2916              0.7026                0.9727              1.0000
Table 1 A comparison of the expected value (or equivalently here, the probability of success)
achieved by CIRL and IRL on the ChefWorld domain with four recipes when the robot begins
with a uniform belief over the set of recipes. We ran each algorithm across different models of the
human’s behavior, namely a rational model and a Boltzmann-rational model with various values of
β (a higher β corresponds to a more rational human). When the human is highly irrational (β = 1),
both CIRL and IRL unsurprisingly perform rather poorly. However, as the human becomes less
noisy (β = 2.5, β = 5), CIRL outperforms IRL by a significant margin; in fact, the pragmatic-pedagogic CIRL strategy with a Boltzmann-rational human performs comparably (β = 2.5) or
even substantially outperforms (β = 5) the IRL result when the human is perfectly rational.
4 Discussion
We have presented here an analysis of the AI value alignment problem that incorporates a well-established model of human decision making and theory of mind
into the game-theoretic framework of cooperative inverse reinforcement learning
(CIRL). Using this analysis, we derive a Bellman backup that allows solving the
dynamic game through dynamic programming. At every instant, the backup rule is
based on a pragmatic-pedagogic equilibrium between the robot and the human: the
robot is uncertain about the objective and therefore incentivized to learn it from the
human, whereas the human has an incentive to help the robot infer the objective so
that it can become more helpful.
We note that this type of pragmatic-pedagogic equilibrium, recently studied in
the cognitive science literature for human teaching and learning [7], may not be
unique in general: there may exist two actions for H and two corresponding interpretations for R leading to different fixed points. For example, H could press a blue
or a red button which R could then interpret as asking it to pick up a blue or a red
object. Although we might feel that blue-blue/red-red is a more intuitive pairing,
blue-red/red-blue is valid as well: that is, if H thinks that R will interpret pressing
the blue button as asking for the red object then she will certainly be incentivized to
press blue when she wants red; and in this case R’s policy should consistently be to
pick up the red object upon H’s press of the blue button. When multiple conventions
are possible, human beings tend to naturally disambiguate between them, converging on salient equilibria or “focal points” [11]. Accounting for this phenomenon is
likely to be instrumental for developing competent human-centered robots.
On the other hand, it is important to point out that, although they are computationally simpler than more general multi-agent planning problems, POMDPs are
still PSPACE-complete [12], so reducing pragmatic-pedagogic equilibrium computation to solving a modified POMDP falls short of rendering the problem tractable in
general. However, finding a POMDP-like Bellman backup does open the door to efficient CIRL solution methods that leverage and benefit from the extensive research
on practical algorithms for approximate planning in large POMDPs [13].
We find the results in this work promising for two reasons. First, they provide
insight into how CIRL games can be not only theoretically formulated but also practically solved. Second, they demonstrate, for the first time, formal solutions to value
alignment that depart from the ideal assumption of a rational human agent and instead benefit from modern studies of human cognition. We predict that developing
efficient solution approaches and incorporating more realistic human models will
constitute important and fruitful research directions for value alignment.
Acknowledgements This work is supported by ONR under the Embedded Humans MURI
(N00014-13-1-0341), by AFOSR under Implicit Communication (16RT0676), and by the Center
for Human-Compatible AI.
References
[1] D. Amodei, J. Steinhardt, D. Mané, and P. Christiano. "Concrete Problems in AI Safety". arXiv preprint (2017).
[2] D. Hadfield-Menell, A. Dragan, P. Abbeel, and S. Russell. "Cooperative Inverse Reinforcement Learning". NIPS (2016).
[3] A. Tversky and D. Kahneman. "Judgment under Uncertainty: Heuristics and Biases". Science 185.4157 (1974).
[4] F. Heider and M. Simmel. "An Experimental Study of Apparent Behavior". The American Journal of Psychology 57.2 (1944).
[5] A. N. Meltzoff. "Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children." Dev. Psych. 31.5 (1995).
[6] C. L. Baker and J. B. Tenenbaum. "Modeling Human Plan Recognition Using Bayesian Theory of Mind". Plan, Activity, and Intent Recognition. 2014.
[7] P. Shafto, N. D. Goodman, and T. L. Griffiths. "A rational account of pedagogical reasoning: Teaching by, and learning from, examples". Cog. Psych. 71 (2014).
[8] S. Zamir. "Bayesian games: Games with incomplete information". Computational Complexity: Theory, Techniques, and Applications (2012).
[9] R. D. Luce. Individual Choice Behavior: a Theoretical Analysis. John Wiley and Sons, 1959.
[10] A. D. Dragan and S. Srinivasa. "Integrating human observer inferences into robot motion planning". Autonomous Robots (2014).
[11] T. C. Schelling. The strategy of conflict. Harvard University Press, 1960.
[12] M. Mundhenk, J. Goldsmith, C. Lusena, and E. Allender. "Complexity of Finite-horizon Markov Decision Process Problems". J. ACM 47.4 (2000).
[13] D. Silver and J. Veness. "Monte-Carlo Planning in Large POMDPs". NIPS. 2010.
| 2 |
arXiv:1606.09481v1 [cs.DS] 30 Jun 2016
Generating massive complex networks
with hyperbolic geometry faster in practice
Moritz von Looz, Karlsruhe Institute of Technology (KIT), Germany. Email: [email protected]
Mustafa Safa Özdayi, Istanbul Technical University, Turkey. Email: [email protected]
Sören Laue, Friedrich Schiller University Jena, Germany. Email: [email protected]
Henning Meyerhenke, Karlsruhe Institute of Technology (KIT), Germany. Email: [email protected]
Abstract—Generative network models play an important role
in algorithm development, scaling studies, network analysis,
and realistic system benchmarks for graph data sets. The
commonly used graph-based benchmark model R-MAT has
some drawbacks concerning realism and the scaling behavior
of network properties. A complex network model gaining
considerable popularity builds random hyperbolic graphs, generated by distributing points within a disk in the hyperbolic
plane and then adding edges between points whose hyperbolic
distance is below a threshold. We present in this paper a
fast generation algorithm for such graphs. Our experiments
show that our new generator achieves speedup factors of 3-60
over the best previous implementation. One billion edges can
now be generated in under one minute on a shared-memory
workstation. Furthermore, we present a dynamic extension to
model gradual network change, while preserving at each step
the point position probabilities.
1. Introduction
Relational data of complex relationships often take the
form of complex networks, graphs with heterogeneous and
often hierarchical structure, low diameter, high clustering,
and a heavy-tailed degree distribution. Examples include
social networks, the graph of hyperlinks between websites,
protein interaction networks, and infrastructure routing networks on the autonomous system level [1].
Frequently found properties in generative models for complex networks are non-negligible clustering (ratio of triangles to triads), a community structure, and a heavy-tailed degree distribution [2], such as a power law.
Benchmarks developed to evaluate a system with respect
to floating point operations do not represent the requirements
of graph algorithms, especially with heterogeneous datasets
such as complex networks. The Graph500 benchmark [3]
addresses this gap; it is the most widely-used graph benchmark in high-performance computing. It uses the Recursive
Matrix (R-MAT) [4] model to generate synthetic networks as
benchmark instances. Graphs from this model are efficiently
computable, but suffer from drawbacks in terms of realism.
For example, even with fixed parameters, the clustering
coefficient shrinks with graph size, while the number of
connected components increases, which is problematic for
scaling studies [5].
An interesting model without this problem is that of random hyperbolic graphs (RHG), a family of geometric graphs
in the hyperbolic plane. Krioukov et al. [6] introduced
this graph model and showed how the structure of complex networks naturally develops from the properties of
hyperbolic geometry. To generate a RHG, one randomly
samples node positions in a hyperbolic disk, then connects
two nodes with an edge with a probability depending on
their hyperbolic distance. In a special case of this model,
an edge between two nodes is added exactly if their distance is below a threshold. This subset of RHG, sometimes
called threshold random hyperbolic graphs, is well-analyzed
theoretically [7], [8], [9] and could be considered as unitdisk graphs in hyperbolic space. The resulting graphs show
a power-law degree distribution with adjustable exponent,
provably high clustering [8], and small diameter [9].
Motivation, outline, and contribution. A fast generator
implementation that scales to large graph sizes and provides
sufficient realism is necessary to create meaningful graph
benchmark instances in acceptable time. While our previous
work [10] was able to improve the quadratic time complexity
of the pairwise probing approach [11] for threshold RHGs,
it still has superlinear time complexity. We therefore provide
a faster generation algorithm in this paper for threshold
random hyperbolic graphs (Section 3), using a new spatial
data structure. The key idea is to divide the relevant part of
the hyperbolic plane into ring-shaped slabs and use these
to bound the coordinates of possible neighbors in each
slab. As our experiments (Section 4) show, a network with
10 million vertices and 1G edges can be generated with
our shared-memory parallel implementation in under one
minute, yielding a speedup factor of up to 60 over the best
previous implementation [10]. For a graph with n nodes and m edges, the measurements suggest an O(n log n + m) time complexity, but we do not have a proof for this.
While an algorithm with optimal expected linear time complexity has been suggested in a theoretical paper [12], our present work provides the fastest implementation to date. The generator code is publicly available in our network analysis toolkit NetworKit [13].

2. Related Work

Generative Models. Due to the growing interest in complex networks, numerous generators for them exist. For a comprehensive overview, which would be outside the scope of this paper, we refer the interested reader to Goldenberg's survey [14]. None of the models is suitable for all use cases. As mentioned above, the Recursive Matrix (R-MAT) [4] model has received particular attention in the HPC community due to its use in the Graph500 benchmark [3].

RHG Generation Algorithms. Previous generators for random hyperbolic graphs exist, both for the general and the special case. Aldecoa et al. [11] present a generator for the general case with quadratic time complexity, calculating distances and sampling edges for all Θ(n²) node pairs. Von Looz et al. [10] use polar quadtrees to generate threshold RHGs with a time complexity of O((n^{3/2} + m) log n) (with high probability). Recently, von Looz and Meyerhenke [17] have extended this approach to generate general RHGs with the same time complexity. Bringmann et al. [12] propose Geometric Inhomogeneous Random Graphs as a generalization of RHGs and describe a generation algorithm with expected linear time complexity. To our knowledge no implementation of this algorithm is available.

Hyperbolic Geometry. Hyperbolic space is one of the three isotropic spaces, the other two being the (more common) Euclidean space and spherical space. In contrast to the flat Euclidean geometry and the positively curved spherical geometry, hyperbolic geometry has negative curvature [15]. Among other interesting properties, hyperbolic geometry shows an exponential expansion of space: While the area of a Euclidean circle grows quadratically with the circle radius, the area of a circle on the hyperbolic plane grows exponentially with its radius. In balanced trees, the number of nodes at a certain distance from the root also grows exponentially with said distance, leading to the suggestion that hierarchical complex networks with tree-like structures might be easily embeddable in hyperbolic space [6]. Indeed, Boguñá et al. [16] demonstrate the connection between hyperbolic geometry and complex networks by embedding the autonomous system internet graph in the hyperbolic plane and enabling locally greedy routing.

As a generative model, Krioukov et al. [6] introduced random hyperbolic graphs in 2010. To generate a graph, points are first distributed randomly within a disk DR of radius R in the hyperbolic plane. The probability density functions for the point distributions are given in polar coordinates: the angular coordinate φ is distributed uniformly over [0, 2π], and the radial coordinate r is given by [6, Eq. (17)]:

    f(r) = α sinh(αr) / (cosh(αR) − 1).    (1)

The parameter α governs node dispersion, which determines the power-law exponent of the resulting degree distribution. After sampling point positions, edges are then added to each node pair (u, v) with a probability given in [6, Eq. (41)], depending on their hyperbolic distance and parametrized by a temperature T ≥ 0:

    f(x) = 1 / (e^{(1/T)·(x−R)/2} + 1).    (2)

For α ≥ 1/2, the resulting degree distribution follows a power law with exponent γ := 2α + 1 [6, Eq. (29)]. Given two points in polar coordinates p = (φp, rp), q = (φq, rq) on the hyperbolic plane, the distance between them is given by the hyperbolic law of cosines:

    cosh dist(p, q) = cosh rp cosh rq − sinh rp sinh rq cos |φp − φq|.    (3)

As mentioned briefly in Section 1, an important special case is T = 0, where an edge is added to a node pair exactly if the hyperbolic distance between the points is below a threshold. This graph family is sometimes called threshold random hyperbolic graphs, hyperbolic unit-disk graphs or (slightly confusingly) just random hyperbolic graphs. While we consider hyperbolic unit-disk graphs to be more precise, we stick with threshold random hyperbolic graphs to avoid name proliferation. Many theoretical results are for this special case [9].

3. Algorithm

Our main idea is to partition the hyperbolic plane into concentric ring-shaped slabs (Section 3.1) and use them to limit the number of necessary distance calculations during edge creation (Algorithm 1). Point positions are sampled, sorted by their angular coordinates and stored in the appropriate slab as determined by their radial coordinates. To gather the neighborhood of a point v, we then iterate over all slabs and examine possible neighbors within them. Since each slab limits the radial coordinates of points it contains, we can use Eq. (3) to also bound the angular coordinates of possible neighbors in each slab, thus reducing the number of comparisons and running time.

3.1. Data Structure

Let C = {c0, c1, ..., cmax} be a set of log n ordered radial boundaries, with c0 = 0 and cmax = R. We then define a slab Si as the area enclosed by ci and ci+1. A point p = (φp, rp) is contained in slab Si exactly if ci ≤ rp < ci+1. Since slabs are ring-shaped, they partition the hyperbolic disk DR:

    DR = ∪_{i=0}^{log n} Si.
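Before turning to the choice of boundaries, the two model primitives used throughout, the radial density of Eq. (1) and the distance of Eq. (3), can be sketched as follows. This is our own Python illustration (not the paper's C++ code); the radial coordinate is sampled by inverting the CDF of Eq. (1), which is a standard technique and not necessarily the implementation used in NetworKit.

```python
import math
import random

def sample_point(alpha, R):
    """Sample one point in polar coordinates.

    The angle is uniform on [0, 2*pi); the radius is drawn from the density
    f(r) = alpha*sinh(alpha*r)/(cosh(alpha*R)-1) (Eq. 1) by inverting its CDF
    F(r) = (cosh(alpha*r)-1)/(cosh(alpha*R)-1).
    """
    phi = random.uniform(0.0, 2.0 * math.pi)
    u = random.random()
    r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
    return phi, r

def hyperbolic_distance(p, q):
    """Distance by the hyperbolic law of cosines (Eq. 3)."""
    phi_p, r_p = p
    phi_q, r_q = q
    d_phi = math.pi - abs(math.pi - abs(phi_p - phi_q))  # angle difference in [0, pi]
    arg = (math.cosh(r_p) * math.cosh(r_q)
           - math.sinh(r_p) * math.sinh(r_q) * math.cos(d_phi))
    return math.acosh(max(arg, 1.0))  # clamp against rounding slightly below 1

# In the threshold model (T = 0), two nodes u, v are adjacent iff
# hyperbolic_distance(u, v) <= R.
```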
The choice of radial boundaries is an important tuning parameter. After experimenting with different divisions, we settled on a geometric sequence with ratio p = 0.9. The relationship between successive boundary values is then: ci+1 − ci = p · (ci − ci−1). From c0 = 0 and cmax = R, we derive the value of c1:

    ∑_{k=0}^{log n − 1} c1 p^k = R  ⇔  c1 (1 − p^{log n}) / (1 − p) = R  ⇔  c1 = (1 − p) R / (1 − p^{log n}).    (4)

The remaining values follow geometrically.

Figure 1 shows an example of a graph in the hyperbolic plane, together with slab Si. The neighbors of the bold blue vertex v are those within a hyperbolic circle of radius R (0.2R in this visualization), marked by the blue egg-shaped area. When considering nodes in Si as possible neighbors of v, the algorithm only needs to examine nodes whose angular coordinate is between φmin and φmax.

Figure 1: Graph in hyperbolic geometry with unit-disk neighborhood. Neighbors of the bold blue vertex are in the hyperbolic circle, marked in blue.

Algorithm 1: Graph Generation
  Input: number of vertices n, average degree k, power-law exponent γ
  Output: G = (V, E)
 1: α = (γ − 1)/2;
 2: R = getTargetRadius(n, k, α);
 3: V = n vertices;
 4: C = {c0, c1, ..., cmax}, set of log n ordered radial coordinates, with c0 = 0 and cmax = R;
 5: B = {b0, b1, ..., bmax}, set of log n empty sets;
 6: for vertex v ∈ V do in parallel
 7:     draw φ[v] from U[0, 2π);
 8:     draw r[v] with density f(r) = α sinh(αr)/(cosh(αR) − 1);
 9:     insert (φ[v], r[v]) in suitable bi so that ci ≤ r[v] ≤ ci+1;
10: end
11: for b ∈ B do in parallel
12:     sort points in b by their angular coordinates;
13: end
14: for vertex v ∈ V do in parallel
15:     for band bi ∈ B, where ci+1 > r[v] do
16:         minφ, maxφ = getMinMaxPhi(φ[v], r[v], ci, ci+1, R);
17:         for vertex w ∈ bi, where minφ ≤ φ[w] ≤ maxφ do
18:             if distH(v, w) ≤ R then
19:                 add (v, w) to E;
20:             end
21:         end
22:     end
23: end
24: return G;
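For concreteness, the radial boundaries of Eq. (4), as used in Line 4 of Algorithm 1, could be computed as in the following sketch (Python; our own illustration with hypothetical names, and with the unspecified logarithm base taken to be the natural logarithm).

```python
import math

def radial_boundaries(n, R, p=0.9):
    """Geometric sequence of slab boundaries c_0 = 0, ..., c_max = R (Eq. 4)."""
    num_slabs = max(1, int(math.log(n)))          # the paper uses log n slabs (base unspecified)
    c1 = (1.0 - p) * R / (1.0 - p ** num_slabs)   # first gap, from Eq. (4)
    boundaries = [0.0]
    gap = c1
    for _ in range(num_slabs):
        boundaries.append(boundaries[-1] + gap)
        gap *= p                                   # successive gaps shrink by the factor p
    boundaries[-1] = R                             # guard against floating-point drift
    return boundaries
```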
3.2. Generation Algorithm

Algorithm. Algorithm 1 shows the generation of G = (V, E) with average degree k and power-law exponent γ. First, the radius R of the hyperbolic disk is calculated according to the desired graph size and density (Line 2).

getTargetRadius. This function is unchanged from our previous work [10]. For given values of n, α and R, an approximation of the expected average degree k is given by [6, Eq. (22)], using the notation ξ = (α/ζ)/(α/ζ − 1/2):

    k = (2/π) ξ² n · e^{−ζR/2} + (2/π) ξ² n    (5)
        · e^{−αR} ( α (R/2) ( (π/4)(ζ/α)² − (π − 1)(ζ/α) + (π − 2) ) − 1 )    (6)

The value of ζ can be fixed while retaining all degrees of freedom in the model [7]; we thus assume ζ = 1. We then use binary search with fixed n, α and desired k to find an R that gives us a close approximation of the desired average degree k. Note that the above equation is only an approximation and might give wrong results for extreme values. Our implementation could easily be adapted to skip this step and accept the commonly used [18] parameter C, with R = 2 ln n + C, or even accept R directly. For increased usability, we accept the average degree k as a parameter in the default version.
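The binary search described above might be realized as in the following sketch (Python; our own illustration, where expected_degree stands for an implementation of the approximation (5)-(6) and the search bounds are our own assumptions).

```python
import math

def get_target_radius(n, k, alpha, expected_degree, lo=1.0, hi=None, iters=100):
    """Bisection on the disk radius R so that expected_degree(n, alpha, R),
    assumed to implement the approximation (5)-(6) and to be decreasing in R,
    matches the desired average degree k."""
    if hi is None:
        hi = 2.0 * math.log(n) + 20.0   # generous upper bound, cf. R = 2 ln n + C
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_degree(n, alpha, mid) > k:
            lo = mid                     # degree still too high: grow the disk
        else:
            hi = mid
    return 0.5 * (lo + hi)
```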
Vertex Positions and Bands. After settling the disk boundary, the radial boundaries ci are calculated (Line 4) as defined above; the disk DR is thus partitioned into log n slabs. For each slab Si, a set bi stores the vertices located in the area of Si. These sets bi are initially empty (Line 5). The vertex positions are then sampled randomly in polar coordinates (Lines 7 and 8) and stored in the corresponding set, i.e., vertex v is put into set bi iff ci ≤ r[v] < ci+1 (Line 9). Within each set, vertices are sorted with respect to their angular coordinates (Lines 11 to 13).

getMinMaxPhi. The neighbors of a given vertex v = (φv, rv) are those whose hyperbolic distance to v is at most R. Let bi be the slab between ci and ci+1, and u = (φu, ru) ∈ bi a neighbor of v in bi. Since u is in bi, ru is between ci and ci+1. With the hyperbolic law of cosines, we can conclude:

    cosh R ≥ cosh rv cosh ci − sinh rv sinh ci cos |φu − φv|    (7)
    ⇔ cosh rv cosh ci − cosh R ≤ sinh rv sinh ci cos |φu − φv|    (8)
    ⇔ cos |φu − φv| ≥ (cosh rv cosh ci − cosh R) / (sinh rv sinh ci)    (9)
    ⇔ |φu − φv| ≤ cos⁻¹( (cosh rv cosh ci − cosh R) / (sinh rv sinh ci) )    (10)

To gather the neighborhood of a vertex v = (φv, rv), we iterate over all slabs Si and compute for each slab how far the angular coordinate φq of a possible neighbor in bi can deviate from φv (Line 16). We call the vertices in bi whose angular coordinates are within these bounds the neighbor candidates for v in bi.
Since points are sorted according to their angular coordinates, we can quickly find the leftmost and rightmost neighbor candidate in each slab using binary search. We then only need to check each neighbor candidate (Line 17), compute its hyperbolic distance to v and add an edge if this distance is below R (Lines 18 and 19). Since edges can be found from both ends, we only need to iterate over slabs in one direction; we choose outward in our implementation (Line 15). The process is repeated for every vertex v (Line 14).
Not surprisingly, the running time of Algorithm 1 is dominated by the range queries (Lines 14-23). Our experiments in Section 4 suggest a running time of O(n log n + m) for the complete algorithm. This should be seen as an empirical observation; we leave a mathematical proof for future work.
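A direct implementation of the bound (10) and of the per-slab candidate scan could look as follows. This is our own Python sketch: the slab is represented as a plain list of points sorted by angle, and the wrap-around at 2π is handled by a linear filter rather than by the binary search used in the actual implementation.

```python
import math

def get_min_max_phi(phi_v, r_v, c_i, R):
    """Angular deviation bound of Eq. (10) for neighbors in a slab with
    inner radius c_i; returns the full circle if the bound is vacuous."""
    if r_v <= 0.0 or c_i <= 0.0:
        return phi_v - math.pi, phi_v + math.pi
    a = (math.cosh(r_v) * math.cosh(c_i) - math.cosh(R)) \
        / (math.sinh(r_v) * math.sinh(c_i))
    if a <= -1.0:
        return phi_v - math.pi, phi_v + math.pi   # every angle qualifies
    if a >= 1.0:
        return phi_v, phi_v                       # no angle can qualify
    dphi = math.acos(a)
    return phi_v - dphi, phi_v + dphi

def candidates_in_slab(slab, phi_v, r_v, c_i, R):
    """Neighbor candidates of v in one slab; 'slab' is a list of (phi, r) pairs."""
    lo, hi = get_min_max_phi(phi_v, r_v, c_i, R)
    half_width = (hi - lo) / 2.0
    two_pi = 2.0 * math.pi
    return [(phi, r) for (phi, r) in slab
            if min((phi - phi_v) % two_pi, (phi_v - phi) % two_pi) <= half_width]
```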
3.3. Dynamic Model

To model gradual change in networks, we design and implement a dynamic version with node movement. While deleting nodes or inserting them at random positions is a suitable dynamic behavior for modeling internet infrastructure with sudden site failures or additions, change in, e.g., social networks happens more gradually.
A suitable node movement model needs to be consistent: After moving a node, the network may change, but properties should stay the same in expectation. Since the properties emerge from the node positions, the probability distribution of node positions needs to be preserved. In our implementation, movement happens in discrete time steps. We choose the movement to be directed: If a node i moves in a certain direction at time t, it will move in the same direction at t + 1, except if the new position would be outside the hyperbolic disk DR. In this case, the movement is inverted and the node "bounces" off the boundary. The different probability densities in the center of the disk and the outer regions can be translated into movement speed: A node is less likely to be in the center; thus it needs to spend less time there while traversing it, resulting in a higher speed. We implement this movement in two phases: In the initialization, step values τφ and τr are assigned to each node according to the desired movement. Each movement step of a node then consists of a rotation and a radial movement. The rotation step is a straightforward addition of angular coordinates: rotated(φ, r, τφ) = (φ + τφ) mod 2π. The radial movement is described in Algorithm 2 and a visualization is shown in Figure 2.

Figure 2: For each movement step, radial coordinates are mapped into the interval [1, sinh(αR)), where the coordinate distribution is uniform. Adding τr and transforming the coordinates back results in correctly scaled movements.

Algorithm 2: Radial movement in dynamic model
  Input: r, τr, R, α
  Output: rnew
  1) x = sinh(r · α);
  2) y = x + τr;
  3) z = asinh(y)/α;
  4) return z

If the new node position would be outside the boundary (r > R) or below the origin (r < 0), the movement is reflected and τr set to −τr.

Theorem 1. Let fr,φ((pr, pφ)) be the probability density of point positions, given in polar coordinates, and let move((pr, pφ)) be a movement step. Then the node movement preserves the angular and radial distributions: fr,φ(move((pr, pφ))) = fr,φ((pr, pφ)).

Proof. Since the distributions of angular and radial coordinates are independent, we consider them separately: fr,φ(pr, pφ) = fr(pr) · fφ(pφ).
As introduced in Eq. (1), the radial coordinate r is sampled from a distribution with density α sinh(αr)/(cosh(αR) − 1). We introduce random variables X, Y, Z for each step in Algorithm 2, each denoted with the upper-case letter of its equivalent. An additional random variable Q denotes the pre-movement radial coordinate. The other variables are defined as X = sinh(Q · α), Y = X + τr and Z = asinh(Y)/α. Let fQ, fX, fY and fZ denote the density functions of these variables:

    fQ(r) = α sinh(αr) / (cosh(αR) − 1)    (11)
    fX(r) = fQ(asinh(r)/α) = αr / (cosh(αR) − 1)    (12)
    fY(r) = fX(r − τr) = (αr − τr) / (cosh(αR) − 1)    (13)
    fZ(r) = fY(sinh(r · α)) = (α sinh(αr) − τr) / (cosh(αR) − 1)    (14)
          = fQ(r) − τr / (cosh(αR) − 1)    (15)

The distributions of Q and Z only differ in the constant addition of τr/(cosh(αR) − 1). Every (cosh(αR) − 1)/τr steps, the radial movement reaches a limit (0 or R) and is reflected, causing τr to be multiplied with −1. On average, τr is thus zero and FQ(r) = FZ(r).
A similar argument works for the rotational step: While the rotational direction is unchanged, the change in coordinates is balanced by the addition or subtraction of 2π whenever the interval [0, 2π) is left, leading to an average of zero in terms of change.
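A full movement step (rotation plus Algorithm 2, with the reflection rule above) might look as follows in Python. This is our own sketch, and the boundary handling is a simplified reading of the description above rather than the exact behavior of the implementation.

```python
import math

def move(phi, r, tau_phi, tau_r, R, alpha):
    """One dynamic-model step: rotate by tau_phi, then move radially (Alg. 2)."""
    # Rotation: plain addition of angular coordinates, wrapped into [0, 2*pi).
    phi_new = (phi + tau_phi) % (2.0 * math.pi)

    # Radial movement in the transformed coordinate x = sinh(alpha * r),
    # where the radial distribution is uniform (cf. Figure 2).
    x = math.sinh(alpha * r) + tau_r
    upper = math.sinh(alpha * R)
    if x < 0.0 or x > upper:            # step would leave the disk: reflect
        x = min(max(x, 0.0), upper)     # stop at the boundary that was hit
        tau_r = -tau_r                  # invert the radial direction
    r_new = math.asinh(x) / alpha
    return phi_new, r_new, tau_r
```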
4. Experimental Evaluation

Setup. The generation algorithm is implemented in C++11 and parallelized with OpenMP. Running time measurements were made on a server with 256 GB RAM and 2x8 Intel Xeon E5-2680 cores at 2.7 GHz. With hyperthreading enabled, we use up to 32 threads. For memory allocations, we use the lock-free malloc implementation of Intel's Threading Building Blocks library. Our code is included in the network analysis toolkit NetworKit [13].
To compare performance, we generate graphs with 10^5, 10^6 and 10^7 nodes and average degrees between 1 and 64, both with the algorithm presented in this work and the implementation of von Looz et al. [10].
To validate the distribution of generated graphs, we compare our implementation with the implementation of Aldecoa et al. [11]. We generate graphs with 10^4 nodes each for a combination of parameters and calculate several network analytic characteristics, averaging over 100 runs. For the dynamic model, we measure the time required for a movement step and again compare the distributions of network analytic properties.

Running Time. Figure 3 shows the running times to generate graphs with 10^5 to 10^7 nodes and 2 · 10^5 to 128 · 10^7 edges. The speedup over the previously fastest implementation [10] increases with graph size and sparsity, reaching up to 60 for graphs with 10^7 nodes and ≈ 4 · 10^7 edges. Very roughly, the experimental running times fit a complexity of O(n log n + m). While the running times of the faster generator appear to grow more steeply with increasing edge count, this is an artifact of the logarithmic plot: The same constant increase is relatively larger compared to a smaller running time, and thus appears larger in the logarithmic drawing.

Figure 3: Comparison of running times to generate networks with 10^4-10^7 vertices, α = 1 and varying k. Circles represent running times of our implementation, diamonds the running times of the implementation of [10]. Our running times are fitted with the equation T(n, m) = (7.07 · n log10 n + 2.23 · m + 891) · 10^−8 seconds.

The scaling behavior for 1 to 32 threads on 16 cores is shown in Figure 4. Considering edge sampling alone, it shows strong scaling up to the number of physical cores, with a speedup of 13.48 for 16 threads. With hyperthreading, the speedup increases to 18.38. Combining the edge lists later on into the NetworKit graph data structure, however, requires coordination and proves to be a bottleneck in parallel. If only edge lists are required, this final step can be omitted – as done for example in the Graph500 benchmark.

Figure 4: Speedup curves for n = 10^7, k = 6, γ = 3 on a machine with 16 physical cores (marked with a vertical line) and hyperthreading. Averaged over 10 runs.

Distribution of Generated Graphs. The average degree assortativity, degeneracy, clustering coefficient and size and diameter of largest components of our generator and the one by Aldecoa et al. [11] are shown in Plots 5 and 6 in Appendix A. Averaged over 100 runs, the network analytic properties show a very close match between the distributions of the two generation algorithms.

Dynamic Model. Our implementation allows updating a graph without rebuilding it from scratch. Moving up to 12% of nodes and updating an existing graph is still faster than a new static generation. The distribution of generated graphs is indistinguishable from the static model (Appendix B).
5. Conclusions

We have provided the fastest implementation so far to generate massive complex networks based on threshold random hyperbolic graphs. The running time improvement is particularly large for graphs with realistic densities.
We have also presented a model extension to cover gradual node movement and have proved its consistency regarding the probability densities of vertex positions.
Both the static and the dynamic model can serve as complex network generators with reasonable realism and fast generation times even for massive networks.

Acknowledgements. This work is partially supported by German Research Foundation (DFG) grant ME 3619/3-1 (FINCA) and grant GI-711/5-1, both within the Priority Programme 1736 Algorithms for Big Data.

References
[1] M. Newman, Networks: An Introduction. Oxford University Press, 2010.
[2] D. Chakrabarti and C. Faloutsos, "Graph mining: Laws, generators, and algorithms," ACM Computing Surveys (CSUR), vol. 38, no. 1, p. 2, 2006.
[3] D. A. Bader, J. Berry, S. Kahan, R. Murphy, E. J. Riedy, and J. Willcock, "Graph 500 benchmark 1 ("search"), version 1.1," Graph 500, Tech. Rep., 2010.
[4] D. Chakrabarti, Y. Zhan, and C. Faloutsos, "R-MAT: A recursive model for graph mining," in Proc. 4th SIAM Intl. Conf. on Data Mining (SDM). Orlando, FL: SIAM, Apr. 2004.
[5] T. G. Kolda, A. Pinar, T. Plantenga, and C. Seshadhri, "A scalable generative graph model with community structure," SIAM J. Scientific Computing, vol. 36, no. 5, pp. C424–C452, Sep 2014.
[6] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguñá, "Hyperbolic geometry of complex networks," Physical Review E, vol. 82, no. 3, p. 036106, Sep 2010. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.82.036106
[7] M. Bode, N. Fountoulakis, and T. Müller, "The probability that the hyperbolic random graph is connected," Random Structures and Algorithms, 2016, to appear. Preprint available at http://www.staff.science.uu.nl/~muell001/Papers/BFM.pdf.
[8] L. Gugelmann, K. Panagiotou, and U. Peter, "Random hyperbolic graphs: Degree sequence and clustering (extended abstract)," in Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Proceedings, Part II, ser. Lecture Notes in Computer Science, A. Czumaj, K. Mehlhorn, A. M. Pitts, and R. Wattenhofer, Eds., vol. 7392. Springer, 2012, pp. 573–585. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-31585-5_51
[9] A. Bonato, "A survey of models of the web graph," in Combinatorial and Algorithmic Aspects of Networking, ser. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2005, vol. 3405, pp. 159–172. [Online]. Available: http://dx.doi.org/10.1007/11527954_16
[10] M. von Looz, R. Prutkin, and H. Meyerhenke, "Generating random hyperbolic graphs in subquadratic time," in ISAAC 2015: Proc. 26th Int'l Symp. on Algorithms and Computation, 2015.
[11] R. Aldecoa, C. Orsini, and D. Krioukov, "Hyperbolic graph generator," Computer Physics Communications, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0010465515002088
[12] K. Bringmann, R. Keusch, and J. Lengler, "Geometric inhomogeneous random graphs," arXiv preprint arXiv:1511.00576, 2015.
[13] C. L. Staudt, A. Sazonovs, and H. Meyerhenke, "NetworKit: A tool suite for large-scale complex network analysis," Network Science, 2015, to appear.
[14] A. Goldenberg, A. X. Zheng, S. E. Fienberg, and E. M. Airoldi, "A survey of statistical network models," Foundations and Trends in Machine Learning, vol. 2, no. 2, pp. 129–233, 2010.
[15] J. W. Anderson, Hyperbolic Geometry, 2nd ed., ser. Springer Undergraduate Mathematics Series. Berlin: Springer, 2005.
[16] M. Boguñá, F. Papadopoulos, and D. Krioukov, "Sustaining the internet with hyperbolic mapping," Nature Communications, no. 62, September 2010. [Online]. Available: http://www.nature.com/ncomms/journal/v1/n6/abs/ncomms1063.html
[17] M. von Looz and H. Meyerhenke, "Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently," arXiv preprint arXiv:1509.01990, Sep. 2015.
[18] M. Kiwi and D. Mitsche, "A bound for the diameter of random hyperbolic graphs," in 2015 Proceedings of the Twelfth Workshop on Analytic Algorithmics and Combinatorics (ANALCO). SIAM, Jan 2015, pp. 26–39. [Online]. Available: http://epubs.siam.org/doi/abs/10.1137/1.9781611973761.3
Appendix A. Comparison with Previous Implementation [11]

Figure 5: Comparison of degree assortativity and degeneracy for the implementation of [11] (left) and our implementation (right). Degree assortativity describes whether vertices have neighbors of similar degree. A value near 1 signifies subgraphs with equal degree, a value of −1 star-like structures. k-Cores, in turn, are a generalization of connected components and result from iteratively peeling away vertices of degree k and assigning to each vertex the core number of the innermost core it is contained in. Degeneracy refers to the largest core number. Values are averaged over 100 runs.

Figure 6: Comparison of clustering coefficients, size of largest component and diameter of largest components for the implementation of [11] (left) and our implementation (right). Values are averaged over 100 runs.

(The panels of Figures 5 and 6 plot degree assortativity, degeneracy (max core number), clustering coefficient, and size and diameter of the largest component against k or γ, for γ ∈ {2.2, 4.6, 7.0} and k ∈ {4, 32, 256}.)
Appendix B. Consistency of Dynamic Model

Figure 7: Comparison of degree assortativity and degeneracy for graphs with 10^4 nodes, before and after one movement step. All nodes were moved, with τφ ∈ (−1, 1) and τr ∈ (−10, 1) sampled randomly. Distributions of graphs after node movement are shown left, before node movement right. Values are averaged over 100 runs.

Figure 8: Comparison of clustering coefficients, size of largest component and diameter of largest components for graphs with 10^4 nodes, before and after one movement step. All nodes were moved, with τφ ∈ (−1, 1) and τr ∈ (−10, 1) sampled randomly. Distributions of graphs after node movement are shown left, before node movement right. Values are averaged over 100 runs.

(The panels of Figures 7 and 8 show the same quantities as in Appendix A, plotted against k or γ, for γ ∈ {2.2, 4.6, 7.0} and k ∈ {4, 32, 256}.)
| 8 |
The (1|1)-Centroid Problem on the Plane
Concerning Distance Constraints ⋆
arXiv:1608.03680v1 [cs.CG] 12 Aug 2016
Hung-I Yu, Tien-Ching Lin, and D. T. Lee
Institute of Information Science, Academia Sinica,
Nankang, Taipei 115, Taiwan,
{herbert,kero,dtlee}@iis.sinica.edu.tw
Abstract. In 1982, Drezner proposed the (1|1)-centroid problem on the
plane, in which two players, called the leader and the follower, open
facilities to provide service to customers in a competitive manner. The
leader opens the first facility, and the follower opens the second. Each
customer will patronize the facility closest to him (ties broken in favor
of the first one), thereby decides the market share of the two facilities.
The goal is to find the best position for the leader’s facility so that its
market share is maximized. The best algorithm of this problem is an
O(n2 log n)-time parametric search approach, which searches over the
space of market share values.
In the same paper, Drezner also proposed a general version of (1|1)centroid problem by introducing a minimal distance constraint R, such
that the follower’s facility is not allowed to be located within a distance
R from the leader’s. He proposed an O(n5 log n)-time algorithm for this
general version by identifying O(n4 ) points as the candidates of the optimal solution and checking the market share for each of them. In this
paper, we develop a new parametric search approach searching over the
O(n4 ) candidate points, and present an O(n2 log n)-time algorithm for
the general version, thereby close the O(n3 ) gap between the two bounds.
Keywords: competitive facility, Euclidean plane, parametric search
1
Introduction
In 1929, economist Hotelling introduced the first competitive location problem
in his seminal paper [13]. Since then, the subject of competitive facility location
has been extensively studied by researchers in the fields of spatial economics,
social and political sciences, and operations research, and spawned hundreds of
contributions in the literature. The interested reader is referred to the following
survey papers [3,7,8,9,11,12,17,19].
Hakimi [10] and Drezner [5] individually proposed a series of competitive
location problems in a leader-follower framework. The framework is briefly described as follows. There are n customers in the market, and each is endowed
⋆
Research supported under Grants No. MOST 103-2221-E-005-042 and 103-2221-E-005-043.
with a certain buying power. Two players, called the leader and the follower,
sequentially open facilities to attract the buying power of customers. At first,
the leader opens his p facilities, and then the follower opens another r facilities. Each customer will patronize the closest facility with all buying power (ties
broken in favor of the leader’s ones), thereby decides the market share of the
two players. Since both players ask for market share maximization, two competitive facility location problems are defined under this framework. Given that
the leader locates his p facilities at the set Xp of p points, the follower wants to
locate his r facilities in order to attract the most buying power, which is called
the (r|Xp )-medianoid problem. On the other hand, knowing that the follower
will react with maximization strategy, the leader wants to locate his p facilities
in order to retain the most buying power against the competition, which is called
the (r|p)-centroid problem.
Drezner [5] first proposed to study the two competitive facility location problems on the Euclidean plane. Since then, many related results [4,5,6,11,14] have
been obtained for different values of r and p. Due to page limit, here we introduce
only previous results about the case r = p = 1. For the (1|X1 )-medianoid problem, Drezner [5] showed that there exists an optimal solution arbitrarily close to
X1 , and solved the problem in O(n log n) time by sweeping technique. Later, Lee
and Wu [14] obtained an Ω(n log n) lower bound for the (1|X1 )-medianoid problem, and thus proved the optimality of the above result. For the (1|1)-centroid
problem, Drezner [5] developed a parametric search based approach that searches
over the space of O(n2 ) possible market share values, along with an O(n4 )-time
test procedure constructing and solving a linear program of O(n2 ) constraints,
thereby gave an O(n4 log n)-time algorithm. Then, by improving the test procedure via Megiddo’s result [16] for solving linear programs, Hakimi [11] reduced
the time complexity to O(n2 log n).
In [5], Drezner also proposed a more general setting for the leader-follower
framework by introducing a minimal distance constraint R ≥ 0 into the (1|X1 )medianoid problem and the (1|1)-centroid problem, such that the follower’s facility is not allowed to be located within a distance R from the leader’s. The
augmented problems are respectively called the (1|X1 )R -medianoid problem
and (1|1)R -centroid problem in this paper. Drezner showed that the (1|X1 )R medianoid problem can also be solved in O(n log n) time by using nearly the
same proof and technique as for the (1|X1 )-medianoid problem. However, for
the (1|1)R -centroid problem, he argued that it is hard to generalize the approach for the (1|1)-centroid problem to solve this general version, due to the
change of problem properties. Then, he gave an O(n5 log n)-time algorithm by
identifying O(n4 ) candidate points on the plane, which contain at least one optimal solution, and performing medianoid computation on each of them. So far,
the O(n3 ) bound gap between the two centroid problems remains unclosed.
In this paper, we propose an O(n2 log n)-time algorithm for the (1|1)R-centroid problem on the Euclidean plane, thereby closing the gap that has lasted for decades.
Instead of searching over market share values, we develop a new approach based
on the parametric search technique by searching over the O(n4 ) candidate points
mentioned in [5]. This is made possible by making a critical observation on the
distribution of optimal solutions for the (1|X1 )R -medianoid problem given X1 ,
which provides us a useful tool to prune candidate points with respect to X1 . We
then extend the usage of this tool to design a key procedure to prune candidates
with respect to a given vertical line.
The rest of this paper is organized as follows. Section 2 gives formal problem
definitions and describes previous results in [5,11]. In Section 3, we make the observation on the (1|X1 )R -medianoid problem, and make use of it to find a “local”
centroid on a given line. This result is then extended as a new pruning procedure
with respect to any given line in Section 4, and utilized in our parametric search
approach for the (1|1)R -centroid problem. Finally, in Section 5, we give some
concluding remarks.
2 Notations and Preliminary Results
Let V = {v1 , v2 , · · · , vn } be a set of n points on the Euclidean plane R2 , as
the representatives of the n customers. Each point vi ∈ V is assigned with a
positive weight w(vi ), representing its buying power. To simplify the algorithm
description, we assume that the points in V are in general position, that is, no
three points are collinear and no two points share a common x or y-coordinate.
Let d(u, w) denote the Euclidean distance between any two points u, w ∈ R2. For any set Z of points on the plane, we define W(Z) = Σ{w(v) | v ∈ V ∩ Z}.
Suppose that the leader has located his facility at X1 = {x}, which is shortened
as x for simplicity. Due to the minimal distance constraint R mentioned in [5],
any point y ′ ∈ R2 with d(y ′ , x) < R is infeasible to be the follower’s choice. If
the follower locates his facility at some feasible point y, the set of customers
patronizing y instead of x is defined as V (y|x) = {v ∈ V |d(v, y) < d(v, x)}, with
their total buying power W (y|x) = W (V (y|x)). Then, the largest market share
that the follower can capture is denoted by the function
W∗(x) = max_{y ∈ R2, d(y,x) ≥ R} W(y|x),
which is called the weight loss of x. Given a point x ∈ R2 , the (1|x)R -medianoid
problem is to find a (1|x)R -medianoid, which denotes a feasible point y ∗ ∈ R2
maximizing the weight loss of x.
In contrast, the leader tries to minimize the weight loss of his own facility by
finding a point x∗ ∈ R2 such that
W ∗ (x∗ ) ≤ W ∗ (x)
for any point x ∈ R2 . The (1|1)R -centroid problem is to find a (1|1)R -centroid,
which denotes a point x∗ minimizing its weight loss. Note that, when R = 0, the
two problems degenerate to the (1|x)-medianoid and (1|1)-centroid problems.
2.1 Previous approaches
In this subsection, we briefly review previous results for the (1|x)R -medianoid,
(1|1)-centroid, and (1|1)R -centroid problems in [5,11], so as to derive some basic
properties essential to our approach.
Let L be an arbitrary line, which partitions the Euclidean plane into two half-planes. For any point y ∉ L, we define H(L, y) as the closed half-plane including
y, and H − (L, y) as the open half-plane including y (but not L). For any two
distinct points x, y ∈ R2 , let B(y|x) denote the perpendicular bisector of the line
segment from x to y.
Given an arbitrary point x ∈ R2 , we first describe the algorithm for finding a
(1|x)R -medianoid in [5]. Let y be an arbitrary point other than x, and y ′ be some
point on the open line segment from y to x. We can see that H − (B(y|x), y) ⊂
H − (B(y ′ |x), y ′ ), which implies the fact that W (y ′ |x) = W (H − (B(y ′ |x), y ′ )) ≥
W(H−(B(y|x), y)) = W(y|x). This shows that moving y toward x does not diminish its weight capture, and the lemma follows.
Lemma 1 [5] There exists a (1|x)R -medianoid in {y | y ∈ R2 , d(x, y) = R}.
For any point z ∈ R2 , let CR (x) and Cγ (x) be the circles centered at z with
radii R and γ = R/2, respectively. By Lemma 1, finding a (1|x)R -medianoid
can be reduced to searching a point y on CR (x) maximizing W (y|x). Since the
perpendicular bisector B(y|x) of each point y on CR (x) is a tangent line to the
circle Cγ (x), the searching of y on CR (x) is equivalent to finding a tangent line
to Cγ (x) that partitions the most weight from x. The latter problem can be
solved in O(n log n) time as follows. For each v ∈ V outside Cγ (x), we calculate
its two tangent lines to Cγ (x). Then, by sorting these tangent lines according to
the polar angles of their corresponding tangent points with respect to x, we can
use the angle sweeping technique to check how much weight they partition.
Theorem 1 [5] Given a point x ∈ R2 , the (1|x)R -medianoid problem can be
solved in O(n log n) time.
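One possible realization of the tangent-line sweep behind Theorem 1 is sketched below in Python (our own illustration, not code from [5]; names such as weight_loss are ours). It uses the equivalent view that a customer v with d(v, x) > γ is captured by the follower's point y(θ) on CR(x) exactly when the angle between v − x and the direction θ is smaller than arccos(γ/d(v, x)); each customer therefore contributes an open arc of angles, and W∗(x) is the maximum total weight covered by these arcs.

```python
import math

def weight_loss(x, customers, R):
    """Largest weight the follower can capture against a leader at x
    (the value W*(x) of the (1|x)_R-medianoid), via an angular sweep.

    customers: list of ((px, py), weight). Ties go to the leader, so a
    customer at distance <= gamma = R/2 from x can never be captured.
    """
    gamma = R / 2.0
    two_pi = 2.0 * math.pi
    events = []  # (angle, delta): an arc of capturing directions opens (+w) / closes (-w)
    for (px, py), w in customers:
        d = math.hypot(px - x[0], py - x[1])
        if d <= gamma:
            continue
        mid = math.atan2(py - x[1], px - x[0]) % two_pi
        half = math.acos(gamma / d)   # v is captured by y(theta) iff |theta - mid| < half
        lo, hi = (mid - half) % two_pi, (mid + half) % two_pi
        if lo <= hi:
            events += [(lo, +w), (hi, -w)]
        else:                          # arc wraps around 0: split it into two pieces
            events += [(lo, +w), (two_pi, -w), (0.0, +w), (hi, -w)]
    events.sort()                      # closing events sort before opening ones at equal angles
    best = current = 0.0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best
```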
Next, we describe the algorithm of the (1|1)R -centroid problem in [5]. Let S
be a subset of V . We define C(S) to be the set of all circles Cγ (v), v ∈ S, and
CH(C(S)) to be the convex hull of these circles. It is easy to see the following.
Lemma 2 [5] Let S be a subset of V . For any point x ∈ R2 , W ∗ (x) ≥ W (S) if
x is outside CH(C(S)).
For any positive number W0 , let I(W0 ) be the intersection of all convex hulls
CH(C(S)), where S ⊆ V and W (S) ≥ W0 . We have the lemma below.
Lemma 3 [5] Let W0 be a positive real number. For any point x ∈ R2 , W ∗ (x) <
W0 if and only if x ∈ I(W0 ).
Proof. Consider first the case that x ∈ I(W0 ). By definition, x intersects with
every CH(C(S)) of subset S ⊆ V with W (S) ≥ W0 . Let S ′ ⊆ V be any of such
subsets. Since x ∈ CH(C(S ′ )), for any point y feasible to x, there must exist a
point v ∈ S such that v ∈
/ H − (B(y|x), y), implying that no feasible point y can
acquire all buying power from customers of S ′ . It follows that no feasible point
y can acquire buying power larger than or equal to W0 , i.e., W ∗ (x) < W0 .
If x ∉ I(W0), there must exist a subset S ⊆ V with W(S) ≥ W0, such that x ∉ CH(C(S)). By Lemma 2, W∗(x) ≥ W(S) ≥ W0. ⊓⊔
Drezner [5] argued that the set of all (1|1)R -centroids is equivalent to some
intersection I(W0 ) for smallest possible W0 . We slightly strengthen his argument
below. Let W = {W (y|x) | x, y ∈ R2 , d(x, y) ≥ R}. The following lemma can be
obtained.
Lemma 4. Let W0∗ be the smallest number in W such that I(W0∗ ) is not null.
A point x is a (1|1)R -centroid if and only if x ∈ I(W0∗ ).
Proof. Let WOP T be the weight loss of some (1|1)R -centroid x∗ . We first show
that I(W0 ) is null for any W0 ≤ WOP T . Suppose to the contrary that it is not
null and there exists a point x′ in I(W0 ). By Lemma 3, W ∗ (x′ ) < W0 ≤ WOP T ,
which contradicts the optimality of x∗ . Moreover, since I(W0∗ ) is not null, we
have that W0∗ > WOP T .
We now show that a point x is a (1|1)R -centroid if and only if x ∈ I(W0∗ ).
If x is a (1|1)R -centroid, we have that W ∗ (x) = WOP T < W0∗ . By Lemma
3, x ∈ I(W0∗ ). On the other hand, if x is not a (1|1)R -centroid, we have that
W ∗ (x) > WOP T . Since by definition W ∗ (x) ∈ W, we can see that W ∗ (x) ≥ W0∗ .
Thus, again by Lemma 3, x ∉ I(W0∗). ⊓⊔
Although it is hard to compute I(W0∗ ) itself, we can find its vertices as
solutions to the (1|1)R -centroid problem. Let T be the set of outer tangent lines
of all pairs of circles in C(V ). For any subset S ⊆ V , the boundary of CH(C(S))
is formed by segments of lines in T and arcs of circles in C(V ). Since I(W0 )
is an intersection of such convex hulls, its vertices must fall within the set of
intersection points between lines in T , between circles in C(V ), and between one
line in T and one circle in C(V ). Let T × T , C(V ) × C(V ), and T × C(V ) denote
the three sets of intersection points, respectively. We have the lemma below.
Lemma 5 [5] There exists a (1|1)R -centroid in T × T , C(V ) × C(V ), and T ×
C(V ).
Obviously, there are at most O(n4 ) intersection points, which can be viewed
as the candidates of being (1|1)R -centroids. Drezner thus gave an algorithm by
evaluating the weight loss of each candidate by Theorem 1.
Theorem 2 [5] The (1|1)R -centroid problem can be solved in O(n5 log n) time.
We remark that, when R = 0, CH(C(S)) for any S ⊆ V degenerates to a
convex polygon, so does I(W0 ) for any given W0 , if not null. Drezner [5] proved
that in this case I(W0 ) is equivalent to the intersection of all half-planes H with
W (H) ≥ W0 . Thus, whether I(W0 ) is null can be determined by constructing
and solving a linear program of O(n2 ) constraints, which takes O(n2 ) time by
Megiddo’s result [16]. Since |W| = O(n2 ), by Lemma 4, the (1|1)-centroid problem can be solved in O(n2 log n) time [11], by applying parametric search over
W for W0∗ . Unfortunately, it is hard to generalize this idea to the case R > 0,
motivating us to develop a different approach.
3 Local (1|1)R-Centroid within a Line
In this section, we analyze the properties of (1|x)R -medianoids of a given point
x in Subsection 3.1, and derive a procedure that prunes candidate points with
respect to x. Applying this procedure, we study a restricted version of the (1|1)R centroid problem in Subsection 3.2, in which the leader’s choice is limited to
a given non-horizontal line L, and obtain an O(n log2 n)-time algorithm. The
algorithm is then extended as the basis of the test procedure for the parametric
search approach in Section 4.
3.1
Pruning with Respect to a Point
Given a point x ∈ R2 and an angle θ between 0 and 2π, let y(θ|x) be the
point on CR (x) with polar angle θ with respect to x.1 We define M A(x) =
{θ | W (y(θ|x)|x) = W ∗ (x), 0 ≤ θ < 2π}, that is, the set of angles θ maximizing
W (y(θ|x)|x) (see Figure 1). It can be observed that, for any θ ∈ M A(x) and
sufficiently small ǫ, both θ + ǫ and θ − ǫ belong to M A(x), because each v ∈
V (y(θ|x)|x) does not intersect B(y(θ|x)|x) by definition. This implies that angles
in M A(x) form open angle interval(s) of non-zero length.
To simplify the terms, let W (θ|x) = W (y(θ|x)|x) and B(θ|x) = B(y(θ|x)|x)
in the remaining of this section. Also, let F (θ|x) be the line passing through x
and parallel to B(θ|x). The following lemma provides the basis for pruning.
Lemma 6 Let x ∈ R2 be an arbitrary point, and θ be an angle in M A(x). For
any point x′ ∉ H−(F(θ|x), y(θ|x)), W∗(x′) ≥ W∗(x).
Proof. Since x′ ∉ H−(F(θ|x), y(θ|x)) and y(θ|x) ∈ H−(F(θ|x), y(θ|x)), by the
definition of bisectors, the distance between F (θ|x′ ) and B(θ|x) is no less than
R/2, which implies that H − (B(θ|x), y(θ|x)) ⊆ H − (B(θ|x′ ), y(θ|x′ )). Therefore,
we can derive the following inequality
W ∗ (x′ ) ≥ W (θ|x′ )
= W (H − (B(θ|x′ ), y(θ|x′ ))
≥ W (H − (B(θ|x), y(θ|x))
= W (θ|x)
= W ∗ (x),
which completes the proof. ⊓⊔

¹ We assume that a polar angle is measured counterclockwise from the positive x-axis.

Fig. 1. The black arcs represent the intervals of angles in M A(x), whereas the open circles represent the open ends of these intervals.
This lemma tells us that, given a point x and an angle θ ∈ M A(x), all points
not in H − (F (θ|x), y(θ|x)) can be ignored while finding (1|1)R -centroids, as their
weight losses are no less than that of x. By this lemma, we can also prove that
the weight loss function is convex along any line on the plane, as shown below.
Lemma 7 Let x1 , x2 be two arbitrary distinct points on a given line L. For any
point x ∈ x1 x2 \{x1 , x2 }, W ∗ (x) ≤ max{W ∗ (x1 ), W ∗ (x2 )}.
Proof. Suppose by contradiction that W ∗ (x) > W ∗ (x1 ) and W ∗ (x) > W ∗ (x2 )
for some point x ∈ x1 x2 \{x1 , x2 }. Since W ∗ (x) > W ∗ (x1 ), by Lemma 6 there
exists an angle θ ∈ M A(x) such that x1 is included in H − (F (θ|x), y(θ|x)).
However, since x ∈ x1x2\{x1, x2}, x1 and x2 lie on different sides of F(θ|x).
It follows that x2 is outside H − (F (θ|x), y(θ|x)) and W ∗ (x2 ) ≥ W ∗ (x) by Lemma
6, which contradicts the assumption. Thus, the lemma holds.
⊓
⊔
We further investigate the distribution of angles in M A(x). Let CA(x) be
the minimal angle interval covering all angles in M A(x) (see Figure 2(a)), and
δ(CA(x)) be its angle span in radians. As mentioned before, M A(x) consists of
open angle interval(s) of non-zero length, which implies that CA(x) is an open
interval and δ(CA(x)) > 0. Moreover, we can derive the following.
Lemma 8 If δ(CA(x)) > π, x is a (1|1)R -centroid.
Proof. We prove this lemma by showing that W ∗ (x′ ) ≥ W ∗ (x) ∀ x′ 6= x. Let
x′ ∈ R2 be an arbitrary point other than x, and θ′ be its polar angle with
respect to x. Obviously, any angle θ satisfying x′ ∈ H − (F (θ|x), y(θ|x)) is in the
open interval (θ′ − π/2, θ′ + π/2), the angle span of which is equal to π. Since
Fig. 2. (a) CA(x); (b) W edge(x).
δ(CA(x)) > π, by its definition there exists an angle θ ∈ M A(x) such that
x′ ∉ H − (F (θ|x), y(θ|x)). Thus, by Lemma 6, we have W ∗ (x′ ) ≥ W ∗ (x), which proves the lemma. ⊓⊔
We call a point x satisfying Lemma 8 a strong (1|1)R -centroid, since its
discovery gives an immediate solution to the (1|1)R -centroid problem. Note that
there are problem instances in which no strong (1|1)R -centroids exist.
Suppose that δ(CA(x)) ≤ π for some point x ∈ R2 . Let W edge(x) denote the
wedge of x, defined as the intersection of the two half-planes H(F (θb |x), y(θb |x))
and H(F (θe |x), y(θe |x)), where θb and θe are the beginning and ending angles of
CA(x), respectively. As illustrated in Figure 2(b), W edge(x) is the infinite region
lying between two half-lines extending from x (including x and the two half-lines). The half-lines defined by F (θe |x) and F (θb |x) are called its boundaries,
and the counterclockwise (CCW) angle between the two boundaries is denoted
by δ(W edge(x)). Since 0 < δ(CA(x)) ≤ π, we have that W edge(x) ≠ ∅ and
0 ≤ δ(W edge(x)) < π.
It should be emphasized that W edge(x) is a computational byproduct of
CA(x) when x is not a strong (1|1)R -centroid. In other words, not every point
has its wedge. Therefore, we make the following assumption (or restriction) in
order to avoid the misuse of W edge(x).
Assumption 1 Whenever W edge(x) is mentioned, the point x has been found
not to be a strong (1|1)R -centroid, either by computation or by properties. Equivalently, δ(CA(x)) ≤ π.
The following essential lemma makes W edge(x) our main tool for prune-and-search. (Note that its proof cannot be trivially derived from Lemma 6, since by
definition θb and θe do not belong to the open intervals CA(x) and M A(x).)
Lemma 9 Let x ∈ R2 be an arbitrary point. For any point x′ ∉ W edge(x),
W ∗ (x′ ) ≥ W ∗ (x).
Proof. By symmetry, suppose that x′ ∉ H(F (θb |x), y(θb |x)). We can further divide the position of x′ into two cases, (1) x′ ∈ H(F (θe |x), y(θe |x)) and (2) x′ ∉ H(F (θe |x), y(θe |x)).
Consider case (1). The two assumptions ensure that there exists an angle θ′ ∈ (θb , θe ], such that F (θ′ |x) passes through x′ . Obviously, any angle θ′′ ∈ (θb , θ′ ) satisfies that x′ ∉ H(F (θ′′ |x), y(θ′′ |x)). By the definition of CA(x), there must exist an angle θb′ ∈ (θb , θ′ ) infinitely close to θb , such that θb′ belongs to M A(x). Thus, by Lemma 6, we have that W ∗ (x′ ) ≥ W ∗ (x).
In case (2), for any angle θ′′ ∈ M A(x), we have that x′ ∉ H(F (θ′′ |x), y(θ′′ |x)), since θ′′ is in (θb , θe ). Again, W ∗ (x′ ) ≥ W ∗ (x) by Lemma 6. ⊓⊔
Finally, we consider the computation of W edge(x).
Lemma 10 Given a point x ∈ R2 , M A(x), CA(x), and W edge(x) can be computed in O(n log n) time.
Proof. By Theorem 1, we first compute W ∗ (x) and those ordered tangent lines
in O(n log n) time. Then, by performing angle sweeping around Cγ (x), we can
identify in O(n) time those open intervals of angles θ with W (θ|x) = W ∗ (x), of
which M A(x) consists. Again by sweeping around Cγ (x), CA(x) can be obtained
from M A(x) in O(n) time. Now, if we find x to be a strong (1|1)R -centroid by
checking δ(CA(x)), the (1|1)R -centroid problem is solved and the algorithm can
be terminated. Otherwise, W edge(x) can be constructed in O(1) time. ⊓⊔
3.2 Searching on a Line
Although computing wedges can be used to prune candidate points, it does not
serve as a stable prune-and-search tool, since wedges of different points have
indefinite angle intervals and spans. However, Assumption 1 makes it work fine
with lines. Here we show how to use the wedges to compute a local optimal point
on a given line, i.e. a point x with W ∗ (x) ≤ W ∗ (x′ ) for any point x′ on the line.
Let L be an arbitrary line, which is assumed to be non-horizontal for ease
of discussion. For any point x on L, we can compute W edge(x) and make use
of it for pruning purposes by defining its direction with respect to L. Since
δ(W edge(x)) < π by definition, there are only three categories of directions
according to the intersection of W edge(x) and L:
Upward – the intersection is the half-line of L above and including x;
Downward – the intersection is the half-line of L below and including x;
Sideward – the intersection is x itself.
If W edge(x) is sideward, x is a local optimal point on L, since by Lemma 9 W ∗ (x) ≤ W ∗ (x′ ) for all x′ ∈ L. Otherwise, W edge(x) is either upward or downward, and the points on the opposite half of L can be pruned by Lemma 9. This shows that computing wedges acts as a predictable pruning tool on L.
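This pruning behaviour can be summarized by the following schematic, a minimal sketch in Python. The oracle wedge_direction(x), returning 'upward', 'downward', or 'sideward', stands in for the O(n log n) computation of Lemma 10; the candidate points are assumed to be given from top to bottom on L, and all names are illustrative placeholders rather than routines defined in the paper.

def local_search_on_line(candidates, wedge_direction):
    """candidates: points on L sorted in decreasing y-coordinate."""
    lo, hi = 0, len(candidates) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        d = wedge_direction(candidates[mid])
        if d == 'sideward':
            return [candidates[mid]]         # local optimal point found (Lemma 9)
        if d == 'upward':                     # the half of L below mid is pruned
            hi = mid
        else:                                 # 'downward': the half of L above mid is pruned
            lo = mid
    return [candidates[lo], candidates[hi]]   # at most two survivors to compare directly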
Next, we list the sets of breakpoints on L among which a local optimal point is located.
Recall that T is the set of outer tangent lines of all pairs of circles in C(V ).
We define the T -breakpoints as the set L × T of intersection points between L
and lines in T , and the C-breakpoints as the set L × C(V ) of intersection points
between L and circles in C(V ). We have the following lemmas for breakpoints.
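Before turning to those lemmas, note that the C-breakpoints themselves are straightforward to compute; the sketch below intersects L (given by two distinct points) with each circle of C(V) and sorts the results in decreasing y-coordinate. The chosen representations and names are assumptions made only for illustration.

import math

def line_circle_intersections(p, q, center, radius):
    """Intersection points of the line through p and q with a circle (0, 1, or 2 points)."""
    (px, py), (qx, qy), (cx, cy) = p, q, center
    dx, dy = qx - px, qy - py
    fx, fy = px - cx, py - cy
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return [(px + t * dx, py + t * dy) for t in ((-b - root) / (2 * a), (-b + root) / (2 * a))]

def c_breakpoints(p, q, circles):
    """circles: iterable of (center, radius) pairs describing C(V)."""
    pts = []
    for center, radius in circles:
        pts.extend(line_circle_intersections(p, q, center, radius))
    return sorted(pts, key=lambda pt: -pt[1])   # decreasing y-coordinates, as used in the text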
Lemma 11 Let x1 , x2 be two distinct points on L. If W ∗ (x1 ) > W ∗ (x2 ), there
exists at least one breakpoint on the segment x1 x2 \{x1 }.
Proof. Let θ be an arbitrary angle in M A(x1 ) and S be the subset of V located in
the half-plane H − (B(θ|x1 ), y(θ|x1 )). By definition, x1 is outside the convex hull
CH(C(S)) and W ∗ (x1 ) = W (S). On the other hand, since W ∗ (x2 ) < W ∗ (x1 ) =
W (S) by assumption, we have that x2 is inside CH(C(S)) by Lemma 2. Thus,
the segment x1 x2 \{x1 } intersects with the boundary of CH(C(S)). Since the
boundary of CH(C(S)) consists of segments of lines in T and arcs of circles in
C(V ), the intersection point is either a T -breakpoint or a C-breakpoint, which proves the lemma. ⊓⊔
Lemma 12 There exists a local optimal point x∗L which is also a breakpoint.
Proof. Let x∗L be a local optimal point such that W ∗ (x′ ) > W ∗ (x∗L ) for some
point x′ adjacent to x∗L on L. Note that, if no such local optimal point exists,
every point on L must have the same weight loss and be local optimal, and the
lemma holds trivially. If such x∗L and x′ exist, by Lemma 11 there is a breakpoint
on x′ x∗L \{x′ }, which is x∗L itself. Thus, the lemma holds. ⊓⊔
We remark that outer tangent lines parallel to L are exceptional cases while
considering breakpoints. For any line T ∈ T that is parallel to L, either T
does not intersect with L or they just coincide. In either case, T is irrelevant
to the finding of local optimal points, and should not be counted for defining
T -breakpoints.
Now, by Lemma 12, if we have all breakpoints on L sorted in the decreasing
order of their y-coordinates, a local optimal point can be found by performing
binary search using wedges. Obviously, such sorted sequence can be obtained in
O(n2 log n) time, since |L × T | = O(n2 ) and |L × C(V )| = O(n). However, in
order to speed up the computation of local optimal points on multiple lines, we instead propose an O(n2 log n)-time preprocessing, so that a local optimal
point on any given line can be computed in O(n log2 n) time.
The preprocessing itself is very simple. For each point v ∈ V , we compute a
sequence P (v), consisting of points in V \{v} sorted in increasing order of their
polar angles with respect to v. The computation for all v ∈ V takes O(n2 log n)
time in total. Besides, all outer tangent lines in T are computed in O(n2 ) time.
We will show that, for any given line L, O(n) sorted sequences can be obtained
from these pre-computed sequences in O(n log n) time, which can be used to
replace a sorted sequence of all T -breakpoints in the process of binary search.
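A minimal sketch of this preprocessing is given below. It uses atan2 for brevity, which is an implementation choice of the sketch rather than something prescribed by the paper, and the data layout is an assumption.

import math

def angular_sequences(points):
    """points: list of (x, y). Returns P, where P[i] lists the indices of the other
    points sorted by increasing polar angle with respect to points[i]."""
    P = []
    for i, (xi, yi) in enumerate(points):
        others = [j for j in range(len(points)) if j != i]
        others.sort(key=lambda j: math.atan2(points[j][1] - yi, points[j][0] - xi) % (2 * math.pi))
        P.append(others)
    return P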
For any two points v ∈ V and z ∈ R2 , let T r (z|v) be the outer tangent line
of Cγ (v) and Cγ (z) to the right of the line from v to z. Similarly, let T l (z|v)
be the outer tangent line to the left. (See Figure 3.) Moreover, let trL (z|v) and
tlL (z|v) be the points at which T r (z|v) and T l (z|v) intersect with L, respectively.
We partition T into O(n) sets T r (v) = {T r (vi |v)|vi ∈ V \{v}} and T l (v) =
{T l (vi |v)|vi ∈ V \{v}} for v ∈ V , and consider their corresponding T -breakpoints
independently. By symmetry, we only discuss the case about L × T r (v).
Fig. 3. Outer tangent lines of v.
Lemma 13 For each v ∈ V , we can compute O(1) sequences of T -breakpoints
on L, which satisfy the following conditions:
(a) Each sequence is of length O(n) and can be obtained in O(log n) time.
(b) Breakpoints in each sequence are sorted in decreasing y-coordinates.
(c) The union of breakpoints in all sequences forms L × T r (v).
Proof. Without loss of generality, suppose that v is either strictly to the right
of L or on L. Note that each point vi ∈ V \{v} corresponds to exactly one outer
tangent line T r (vi |v), and thereby to exactly one breakpoint trL (vi |v). Such a one-to-one correspondence can be evaluated in O(1) time. Therefore, equivalently we are
computing sequences of points in V \{v}, instead of breakpoints.
In the following, we consider two cases about the relative position between
L and Cγ (v), (1) L intersects with Cγ (v) at zero or one point, (2) L intersects
with Cγ (v) at two points.
Case (1): Let θL be the angle of the upward direction along L. See Figure 4(a).
We classify the points in V \{v} by their polar angles with respect to v. Let
P1 (v) denote the sequence of those points with polar angles in the interval
(θL , θL + π) and sorted in CCW order. Similarly, let P2 (v) be the sequence of
points with polar angles in (θL + π, θL ) and sorted in CCW order. Obviously,
P1 (v) and P2 (v) together satisfy condition (c). (Note that points with polar
angles θL and θL + π are ignored, since they correspond to outer tangent
lines parallel to L.)
By general position assumption, we can observe that, for any two distinct
points vi , vj in P1 (v), trL (vi |v) is strictly above trL (vj |v) if and only if vi precedes vj in P1 (v). Thus, the ordering of points in P1 (v) implicitly describes
an ordering of their corresponding breakpoints in decreasing y-coordinates.
Similarly, the ordering in P2 (v) implies an ordering of corresponding breakpoints in decreasing y-coordinates. It follows that both P1 (v) and P2 (v)
satisfy condition (b).
As for condition (a), both P1 (v) and P2 (v) are of length O(n) by definition.
Also, since we have pre-computed the sequence P (v) as all points in V \{v}
Fig. 4. Two subcases about how Cγ (v) intersects L: (a) no intersection; (b) two intersection points.
sorted in CCW order, P1 (v) and P2 (v) can be implicitly represented as concatenations of subsequences of P (v). This can be done in O(log n) time by
searching in P (v) the foremost elements with polar angles larger than θL
and θL + π, respectively.
Case (2): Suppose that the two intersection points between L and Cγ (v) are c1
and c2 , where c1 is above c2 . Let θ1 = θ1′ + π/2 and θ2 = θ2′ + π/2, in which
θ1′ and θ2′ are respectively the polar angles of c1 and c2 with respect to v. See
Figure 4(b). By assumption, we have that θL < θ1′ < θL + π/2 < θ2′ < θL + π,
which implies that θ1 ∈ (θL , θL + π) and θ2 ∈ (θL + π, θL ).
We divide the points in V \{v} into four sequences P1 (v), P2 (v), P3 (v), and
P4 (v) by their polar angles with respect to v. P1 (v) consists of points with
polar angles in (θL , θ1 ), P2 (v) in [θ1 , θL + π), P3 (v) in (θL + π, θ2 ], and P4 (v)
in (θ2 , θL ), all sorted in CCW order. It follows that the four sequences satisfy
condition (c).
Conditions (a) and (b) hold for P1 (v) and P4 (v) by a similar discussion as
above. However, for any two distinct points vi , vj in P2 (v), we can observe
that trL (vi |v) is strictly below trL (vj |v) if and only if vi precedes vj in P2 (v).
Similarly, the argument holds for P3 (v). Thus, what satisfy condition (b)
are actually the reverse sequences of P2 (v) and P3 (v), which can also be
obtained in O(log n) time, satisfying condition (a).
⊓⊔
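The O(log n) extraction of such subsequences can be pictured as follows, assuming that for each v the sorted polar angles inducing the ordering of P(v) are stored in an array. The helper returns index ranges into P(v) for an open, possibly wrapping, angle interval; it is an illustrative sketch, not the paper's exact bookkeeping.

from bisect import bisect_left, bisect_right

def subsequence_ranges(angles_v, lo_angle, hi_angle):
    """Index ranges of entries whose polar angle lies in the open interval
    (lo_angle, hi_angle); angles are assumed to be in [0, 2*pi) and sorted,
    and the interval may wrap around 2*pi."""
    if lo_angle <= hi_angle:
        return [(bisect_right(angles_v, lo_angle), bisect_left(angles_v, hi_angle))]
    # wrapping interval: (lo_angle, 2*pi) followed by [0, hi_angle)
    return [(bisect_right(angles_v, lo_angle), len(angles_v)),
            (0, bisect_left(angles_v, hi_angle))]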
By Lemma 13.(c), searching in L × T r (v) is equivalent to searching in the
O(1) sequences of breakpoints, which can be computed more efficiently than
the obvious way. Besides, we can also obtain a symmetrical lemma constructing
sequences for L × T l (v). In the following, we show how to perform a binary
search within these sequences.
Lemma 14 With an O(n2 log n)-time preprocessing, given an arbitrary line L,
a local optimal point x∗L can be computed in O(n log2 n) time.
Proof. By Lemma 12, the searching of x∗L can be done within L × T and L ×
C(V ). L × T can be further divided into L × T r (v) and L × T l (v) for each
v ∈ V . By Lemma 13, these 2n sets can be replaced by O(n) sorted sequences
of breakpoints on L. Besides, L × C(V ) consists of no more than 2n breakpoints,
which can be computed and arranged into a sorted sequence in decreasing ycoordinates. Therefore, we can construct N0 = O(n) sequences P1 , P2 , · · · , PN0
of breakpoints, each of length O(n) and sorted in decreasing y-coordinates.
The searching in the N0 sorted sequences is done by performing parametric
search for parallel binary searches, introduced in [1]. The technique we used here
is similar to the algorithm in [1], but uses a different weighting scheme. For each
sorted sequence Pj , 1 ≤ j ≤ N0 , we first obtain its middle element xj , and
associate xj with a weight mj equal to the number of elements in Pj . Then, we
compute the weighted median [18] of the N0 middle elements, defined as the element x such that Σ{mj | xj is above x} ≥ Σj mj /2 and Σ{mj | xj is below x} ≥ Σj mj /2. Finally, we apply Lemma 10 on the point x. If x is a strong (1|1)R -centroid, of course it is local optimal. If not, Assumption 1 holds and W edge(x)
can be computed. If W edge(x) is sideward, a local optimal point x∗L = x is
directly found. Otherwise, W edge(x) is either upward or downward, and thus
all breakpoints on the opposite half can be pruned by Lemma 9. The pruning
makes a portion of the sequences, which together possess over half of the total breakpoints by the definition of weighted median, lose at least a quarter of their elements. Hence, at least one-eighth of the breakpoints are pruned. By repeating the above process,
we can find x∗L in at most O(log n) iterations.
The time complexity for finding x∗L is analyzed as follows. By Lemma 13,
constructing sorted sequences for L × T r (v) and L × T l (v) for all v ∈ V takes
O(n log n) time. Computing and sorting L × C(V ) also takes O(n log n) time.
There are at most O(log n) iterations of the pruning process. At each iteration, the N0 middle elements and their weighted median x can be obtained in
O(N0 ) = O(n) time by the linear-time weighted selection algorithm [18]. Then,
the computation of W edge(x) takes O(n log n) time by Lemma 10. Finally, the
pruning of those sequences can be done in O(n) time. In summary, the searching of x∗L requires O(n log n) + O(log n) × O(n log n) = O(n log2 n) time. ⊓⊔
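One iteration of the pruning in the proof above can be sketched as follows. The oracle prune_above(x) abstracts Lemmas 9 and 10, returning True when the part of L above x is pruned; the weighted median is computed by sorting for brevity instead of the linear-time selection of [18], and the filtering is written naively, so the sketch does not reproduce the stated time bound.

def prune_iteration(sequences, prune_above):
    """sequences: lists of breakpoints (x, y), each sorted in decreasing y-coordinate."""
    middles = [(seq[len(seq) // 2], len(seq)) for seq in sequences if seq]
    if not middles:
        return sequences
    middles.sort(key=lambda t: -t[0][1])        # by decreasing y of the middle element
    total = sum(w for _, w in middles)
    acc, median = 0, middles[-1][0]
    for point, w in middles:                     # weighted median by y-coordinate
        acc += w
        if 2 * acc >= total:
            median = point
            break
    if prune_above(median):                      # discard breakpoints above the median
        return [[p for p in seq if p[1] <= median[1]] for seq in sequences]
    return [[p for p in seq if p[1] >= median[1]] for seq in sequences]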
We remark that, by Lemma 14, it is easy to obtain an intermediate result for
the (1|1)R -centroid problem on the plane. By Lemma 5, there exists a (1|1)R -centroid in T × T , T × C(V ), and C(V ) × C(V ). By applying Lemma 14 to the
O(n2 ) lines in T , the local optimum among the intersection points in T × T and
T × C(V ) can be obtained in O(n3 log2 n) time. By applying Theorem 1 on the
O(n2 ) intersection points in C(V )× C(V ), the local optimum among them can be
obtained in O(n3 log n) time. Thus, we can find a (1|1)R -centroid in O(n3 log2 n)
time, a nearly O(n2 ) improvement over the O(n5 log n) bound in [5].
4 (1|1)R -Centroid on the Plane
In this section, we study the (1|1)R -centroid problem and propose an improved
algorithm of time complexity O(n2 log n). This algorithm is as efficient as the
best-so-far algorithm for the (1|1)-centroid problem, but based on a completely
different approach.
In Subsection 4.1, we extend the algorithm of Lemma 14 to develop a procedure allowing us to prune candidate points with respect to a given vertical
line. Then, in Subsection 4.2, we show how to compute a (1|1)R -centroid in
O(n2 log n) time based on this newly-developed pruning procedure.
4.1 Pruning with Respect to a Vertical Line
Let L be an arbitrary vertical line on the plane. We call the half-plane strictly
to the left of L the left plane of L and the one strictly to its right the right plane
of L. A sideward wedge of some point on L is said to be rightward (leftward )
if it intersects the right (left) plane of L. We can observe that, if there is some
point x ∈ L such that W edge(x) is rightward, every point x′ on the left plane of
L can be pruned, since W ∗ (x′ ) ≥ W ∗ (x) by Lemma 9. Similarly, if W edge(x) is
leftward, points on the right plane of L can be pruned. Although the power of
wedges is not fully exerted in this way, pruning via vertical lines and sideward
wedges is superior to pruning directly via wedges, due to its predictable pruning regions.
Therefore, in this subsection we describe how to design a procedure that
enables us to prune either the left or the right plane of a given vertical line L.
As mentioned above, the key point is the searching of sideward wedges on L.
It is achieved by carrying out three conditional phases. In the first phase, we
try to find some proper breakpoints with sideward wedges. If that fails, we pick
some representative point in the second phase and check its wedge to determine
whether or not sideward wedges exist. Finally, in case of their nonexistence,
we show that a functional alternative, called the pseudo wedge, can be computed, which still allows us to prune the left or right plane of L. In the following,
we develop a series of lemmas to demonstrate the details of the three phases.
Property 1 Given a point x ∈ L, for each possible direction of W edge(x), the
corresponding CA(x) satisfies the following conditions:
Upward – CA(x) ⊆ (0, π),
Downward – CA(x) ⊆ (π, 2π),
Rightward – 0 ∈ CA(x),
Leftward – π ∈ CA(x).
Proof. When W edge(x) is upward, by definition the beginning angle θb and the
ending angle θe of CA(x) must satisfy that both half-planes H(F (θb |x), y(θb |x))
and H(F (θe |x), y(θe |x)) include the half-line of L above x. It follows that 0 ≤
θb , θe ≤ π, and thus CA(x) ⊆ (0, π). (Recall that θb , θe ∉ CA(x).) The case that
W edge(x) is downward can be proved in a symmetric way.
When W edge(x) is rightward, we can see that H(F (θb |x), y(θb |x)) must not
contain the half-line of L above x, and thus π < θb < 2π. By similar arguments,
0 < θe < π. Therefore, since CA(x) covers the angles counterclockwise from θb to θe , it must include the angle 0. The case that W edge(x) is leftward can be proved symmetrically. ⊓⊔
Lemma 15 Let x1 , x2 be two points on L, where x1 is strictly above x2 . For
any angle 0 ≤ θ ≤ π, W (θ|x1 ) ≤ W (θ|x2 ). Symmetrically, for π ≤ θ ≤ 2π,
W (θ|x2 ) ≤ W (θ|x1 ).
Proof. For any angle 0 ≤ θ ≤ π, we can observe that H − (B(θ|x1 ), y(θ|x1 )) ⊂
H − (B(θ|x2 ), y(θ|x2 )), since x1 is strictly above x2 . It follows that W (θ|x1 ) ≤
W (θ|x2 ). The second claim also holds by symmetric arguments. ⊓⊔
Lemma 16 Let x be an arbitrary point on L. If W edge(x) is either upward or
downward, for any point x′ ∈ L\W edge(x), W edge(x′ ) has the same direction
as W edge(x).
Proof. By symmetry, we prove that, if W edge(x) is upward, W edge(x′ ) is also
upward for every x′ ∈ L strictly below x. By Property 1, the fact that W edge(x)
is upward means that CA(x) ⊂ (0, π) and thus M A(x) ⊂ (0, π). Let x′ be a
point on L strictly below x. By Lemma 15, we have that W (θ|x′ ) ≥ W (θ|x) for
0 < θ < π and W (θ|x′ ) ≤ W (θ|x) for π ≤ θ ≤ 2π. It follows that M A(x′ ) ⊂ (0, π)
and CA(x′ ) ⊂ (0, π), so W edge(x′ ) is upward as well. ⊓⊔
Following from this lemma, if there exist two arbitrary points x1 and x2 on
L with their wedges downward and upward, respectively, we can derive that x1
must be strictly above x2 , and that points with sideward wedges or even strong
(1|1)R -centroids can be located only between x1 and x2 . Thus, we can find sideward
wedges between some specified downward and upward wedges. Let xD be the
lowermost breakpoint on L with its wedge downward, xU the uppermost breakpoint on L with its wedge upward, and GDU the open segment xD xU \{xD , xU }.
(For ease of discussion, we assume that both xD and xU exist on L, and show
how to resolve this assumption later by constructing a bounding box.) Again, xD
is strictly above xU . Also, we have the following corollary by their definitions.
Corollary 17 If there exist breakpoints in the segment GDU , for any such breakpoint x, either x is a strong (1|1)R -centroid or W edge(x) is sideward.
Given xD and xU , the first phase can thus be done by checking whether
there exist breakpoints in GDU and picking one of them if any exist. Supposing
that the picked one is not a strong (1|1)R -centroid, a sideward wedge is found
by Corollary 17 and can be used for pruning. Notice that, when there are two
or more such breakpoints, one may question whether their wedges are of the
same direction, as different directions result in inconsistent pruning results. The
following lemma answers the question in the affirmative.
Lemma 18 Let x1 , x2 be two distinct points on L, where x1 is strictly above x2
and none of them is a strong (1|1)R -centroid. If W edge(x1 ) and W edge(x2 ) are
both sideward, they are either both rightward or both leftward.
Proof. We prove this lemma by contradiction. By symmetry, suppose the case
that W edge(x1 ) is rightward and W edge(x2 ) is leftward. This case can be further
divided into two subcases by whether or not CA(x1 ) and CA(x2 ) intersect.
Fig. 5. CH(C(S)) intersects L between x1 and x2 .
Consider first that CA(x1 ) does not intersect CA(x2 ). Because W edge(x1 )
is rightward, 0 ∈ CA(x1 ) by Property 1. Thus, there exists an angle θ, 0 <
θ < π, such that θ ∈ M A(x1 ). Since x1 is strictly above x2 , by Lemma 15
we have that W ∗ (x1 ) = W (θ|x1 ) ≤ W (θ|x2 ) ≤ W ∗ (x2 ). Furthermore, since
W edge(x2 ) is leftward, we can see that x1 ∉ W edge(x2 ) and therefore W ∗ (x1 ) ≥ W ∗ (x2 ) by Lemma 9. It follows that W (θ|x2 ) = W ∗ (x2 ) and thus θ ∈ M A(x2 ).
By definition, M A(x1 ) ⊆ CA(x1 ) and M A(x2 ) ⊆ CA(x2 ), which implies that
CA(x1 ) and CA(x2 ) intersect at θ, contradicting the subcase assumption.
When CA(x1 ) intersects CA(x2 ), their intersection must be completely included in either (0, π) or (π, 2π) due to Assumption 1. By symmetry, we assume the latter subcase. Using similar arguments as above, we can find an angle θ′ , where 0 < θ′ < π, such that θ′ ∈ M A(x1 ) and θ′ ∈ M A(x2 ). This is a contradiction, since θ′ ∉ (π, 2π).
Since neither subcase holds, the lemma is proved. ⊓⊔
The second phase deals with the case that no breakpoint exists between xD
and xU by determining the wedge direction of an arbitrary inner point in GDU .
We begin with several auxiliary lemmas.
Lemma 19 Let x1 , x2 be two distinct points on L such that W ∗ (x1 ) = W ∗ (x2 )
and x1 is strictly above x2 . There exists at least one breakpoint in the segment
(a) x1 x2 \{x2 }, if M A(x2 ) intersects (0, π) but M A(x1 ) does not,
(b) x1 x2 \{x1 }, if M A(x1 ) intersects (π, 2π) but M A(x2 ) does not.
Proof. By symmetry, we only show the correctness of condition (a). From its
assumption,
there exists an angle θ, where 0 < θ < π, such that θ ∈ M A(x2 ). Let
T
S = V H − (B(θ|x2 ), y(θ|x2 )). By definition, we have that W (S) = W (θ|x2 ) =
W ∗ (x2 ) = W ∗ (x1 ) and CH(C(S)) ⊂ H − (F (θ|x2 ), y(θ|x2 )), which implies that
CH(C(S)) is strictly above F (θ|x2 ). (See Figure 5.)
We first claim that CH(C(S)) intersects L. If not, there must exist an angle
θ′ , where 0 < θ′ < π, such that CH(C(S)) ⊂ H − (F (θ′ |x1 ), y(θ′ |x1 )), that is,
S ⊂ H − (B(θ′ |x1 ), y(θ′ |x1 )). By definition, W ∗ (x1 ) ≥ W (θ′ |x1 ) ≥ W (S). Since
W ∗ (x1 ) = W (S), W (θ′ |x1 ) = W ∗ (x1 ) and thus θ′ ∈ M A(x1 ), which contradicts
the condition that M A(x1 ) does not intersect (0, π). Thus, the claim holds.
When CH(C(S)) intersects L, x1 locates either inside or outside CH(C(S)).
Since x2 locates outside CH(C(S)), in the former case the boundary of CH(C(S))
intersects x1 x2 \{x2 } and forms a breakpoint, thereby proves condition (a). On
the other hand, if x1 is outside CH(C(S)), again there exists an angle θ′′ such
that CH(C(S)) ⊂ H − (F (θ′′ |x1 ), y(θ′′ |x1 )). By similar arguments, we can show
that θ′′ ∈ M A(x1 ). By assumption, θ′′ must belong to (π, 2π), which implies that
CH(C(S)) is strictly below F (θ′′ |x1 ). Since CH(C(S)) is strictly above F (θ|x2 )
as mentioned, any intersection point between CH(C(S)) and L must be inner to x1 x2 . Therefore, the lemma holds. ⊓⊔
Lemma 20 Let G be a line segment connecting two consecutive breakpoints on
L. For any two distinct points x1 , x2 inner to G, W ∗ (x1 ) = W ∗ (x2 ).
Proof. Suppose to the contrary that W ∗ (x1 ) ≠ W ∗ (x2 ). By Lemma 11, there exists at least one breakpoint in x1 x2 , which contradicts the definition of G. Thus, the lemma holds. ⊓⊔
Lemma 21 When there is no breakpoint between xD and xU , any two distinct
points x1 , x2 in GDU have the same wedge direction, if they are not strong (1|1)R -centroids.
Proof. Suppose by contradiction that the directions of their wedges are different.
By Lemmas 16 and 18, there are only two possible cases.
(1) W edge(x1 ) is downward, and W edge(x2 ) is either sideward or upward.
(2) W edge(x1 ) is sideward, and W edge(x2 ) is upward.
In the following, we show that both cases do not hold.
Case (1): Because W edge(x1 ) is downward, we have that CA(x1 ) ⊆ (π, 2π) by
Property 1 and thus M A(x1 ) does not intersect (0, π). On the other hand,
whether W edge(x2 ) is sideward or upward, we can see that CA(x2 ) and
M A(x2 ) intersect (0, π) by again Property 1. Since W ∗ (x1 ) = W ∗ (x2 ) by
Lemma 20, the status of the two points satisfies the condition (a) of Lemma
19, so at least one breakpoint exists between x1 and x2 . By definitions of x1
and x2 , this breakpoint is inner to GDU , which contradicts the assumption.
Therefore, Case (1) does not hold.
Case (2): The proof of Case (2) is symmetric to that of Case (1). The condition
(b) of Lemma 19 can be applied similarly to show the existence of at least
one breakpoint between x1 and x2 , again a contradiction.
Combining the above discussions, we conclude that the wedges of x1 and x2 are of the same direction, which completes the proof of this lemma. ⊓⊔
This lemma enables us to pick an arbitrary point in GDU , e.g., the bisector
point xB of xD and xU , as the representative of all inner points in GDU . If xB
is not a strong (1|1)R -centroid and W edge(xB ) is sideward, the second phase
finishes with a sideward wedge found. Otherwise, if W edge(xB ) is downward or
upward, we can derive the following and have to invoke the third phase.
Lemma 22 If there is no breakpoint between xD and xU and W edge(xB ) is not
sideward, there exist neither strong (1|1)R -centroids nor points with sideward
wedges on L.
Proof. By Lemma 16, this lemma holds for points not in GDU . Without loss of
generality, suppose that W edge(xB ) is downward. For all points in GDU above
xB , the lemma holds by again Lemma 16.
Consider an arbitrary point x ∈ xB xU \{xB , xU }. We first show that x is not
a strong (1|1)R -centroid. Suppose to the contrary that x really is. By definition,
we have that δ(CA(x)) > π, and thus CA(x) and M A(x) intersect (0, π). On
the other hand, CA(xB ) and M A(xB ) do not intersect (0, π) due to downward
W edge(xB ) and Property 1. Since W ∗ (xB ) = W ∗ (x) by Lemma 20, applying
the condition (a) of Lemma 19 to xB and x shows that at least one breakpoint
exists between them, which contradicts the no-breakpoint assumption. Now that
x is not a strong (1|1)R -centroid, it must have a downward wedge, as xB does
by Lemma 21. Therefore, the lemma holds for all points on L. ⊓⊔
When L satisfies Lemma 22, it consists of only points with downward or upward wedges, and is said to be non-leaning. Obviously, our pruning strategy via
sideward wedges cannot be applied to such non-leaning lines. The third phase overcomes this obstacle by constructing a functional alternative to sideward wedges,
called the pseudo wedge, on either xD or xU , so that pruning with respect to L
is still achievable. Again, we start with auxiliary lemmas.
Lemma 23 If L is non-leaning, the following statements hold:
(a) W ∗ (xD ) ≠ W ∗ (xU ),
(b) W ∗ (x) = max{W ∗ (xD ), W ∗ (xU )} for all points x ∈ GDU .
Proof. We prove the correctness of statement (a) by contradiction, and suppose
that W ∗ (xD ) = W ∗ (xU ). Besides, the fact that L is non-leaning implies that
no breakpoint exists in GDU . By Lemmas 22 and 21, the wedges of all points
in GDU are of the same direction, either downward or upward. Suppose the
downward case by symmetry, and pick an arbitrary point in GDU , say, xB .
Since W edge(xB ) is downward, we have that M A(xB ) does not intersect (0, π).
In contrast, by definition W edge(xU ) is upward, so CA(xU ) and M A(xU ) are
included in (0, π). Because xB is strictly above xU and W ∗ (xD ) = W ∗ (xU ),
according to the condition (a) of Lemma 19, there exists at least one breakpoint
in xB xU \{xU }, which is a contradiction. Therefore, statement (a) holds.
The proof of statement (b) is also done by contradiction. By symmetry, assume that W ∗ (xD ) > W ∗ (xU ) in statement (a). Consider an arbitrary point
x ∈ GDU . By Lemma 7, we have that W ∗ (x) ≤ max{W ∗ (xD ), W ∗ (xU )} =
W ∗ (xD ). Suppose that the equality does not hold. Then, by Lemma 11, at least
one breakpoint exists in the segment xD x\{xD }, contradicting the no-breakpoint
fact. Thus, W ∗ (x) = W ∗ (xD ) and statement (b) holds. ⊓⊔
Let W1 = max{W ∗ (xD ), W ∗ (xU )}. We are going to define the pseudo wedge
on either xU or xD , depending on which one has the smaller weight loss. We
consider first the case that W ∗ (xD ) > W ∗ (xU ), and obtain the following.
Lemma 24 If L is non-leaning and W ∗ (xD ) > W ∗ (xU ), there exists one angle
θ for xU , where π ≤ θ ≤ 2π, such that W (H(B(θ|xU ), y(θ|xU ))) ≥ W1 .
Proof. We first show that there exists at least one subset S ⊆ V with W (S) = W1 , such that xU lies on the upper boundary of CH(C(S)). Let x be the point
strictly above but arbitrarily close to xU on L. By Lemma 23, W ∗ (x) = W1 ,
hence W ∗ (x) > W ∗ (xU ) by case assumption. It follows that xU ∈ W edge(x)
by Lemma 9 and W edge(x) must be downward. By Property 1, we have that
CA(x) ⊆ (π, 2π). Thus, there exists an angle θ′ ∈ M A(x), where π < θ′ < 2π,
such that W (H − (B(θ′ |x), y(θ′ |x))) = W (θ′ |x) = W1 .
Let S = V ∩ H − (B(θ′ |x), y(θ′ |x)). Since W ∗ (xU ) < W1 = W (S), xU is
inside CH(C(S)) by Lemma 2. Conversely, by the definition of S, x is outside
the convex hull CH(C(S)). It implies that xU is the topmost intersection point
between CH(C(S)) and L, hence on the upper boundary of CH(C(S)). (It is
possible that xU locates at the leftmost or the rightmost point of CH(C(S)).)
The claimed angle θ is obtained as follows. Since xU is a boundary point of
CH(C(S)), there exists a line F passing through xU and tangent to CH(C(S)).
Let θ be the angle satisfying that F (θ|xU ) = F , π ≤ θ ≤ 2π, and CH(C(S)) ⊂
H(F (θ|xU ), y(θ|xU )). Obviously, we have that S ⊂ H(B(θ|xU ), y(θ|xU )) and
thus W (H(B(θ|xU ), y(θ|xU ))) ≥ W (S) = W1 . ⊓⊔
Let θU be an arbitrary angle satisfying the conditions of Lemma 24. We apply
the line F (θU |xU ) for trimming the region of W edge(xU ), so that a sideward
wedge can be obtained. Let P W (xU ), called the pseudo wedge of xU , denote
the intersection of W edge(xU ) and H(F (θU |xU ), y(θU |xU )). Deriving from the
three facts that W edge(xU ) is upward, δ(W edge(xU )) < π, and π ≤ θU ≤ 2π,
we can observe that either P W (xU ) is xU itself, or it intersects only one of the
right and left plane of L. In the two circumstances, P W (xU ) is said to be null
or sideward, respectively. The pseudo wedge has similar functionality as wedges,
as shown in the following corollary.
Corollary 25 For any point x′ ∉ P W (xU ), W ∗ (x′ ) ≥ W ∗ (xU ).
Proof. If x′ ∉ W edge(xU ), the corollary directly holds by Lemma 9. Otherwise, we have that x′ ∉ H(F (θ|xU ), y(θ|xU )) and thus H(B(θ|x′ ), y(θ|x′ )) contains H(B(θ|xU ), y(θ|xU )). Then, by Lemma 24, W ∗ (x′ ) ≥ W (H(B(θ|x′ ), y(θ|x′ ))) ≥ W (H(B(θ|xU ), y(θ|xU ))) ≥ W1 , which completes the proof. ⊓⊔
By this corollary, if P W (xU ) is found to be sideward, points on the opposite half-plane with respect to L can be pruned. If P W (xU ) is null, xU becomes another kind of strong (1|1)R -centroid, in the sense that it is also an immediate solution to the (1|1)R -centroid problem. Without confusion, we call xU a
conditional (1|1)R -centroid in the latter case.
On the other hand, considering the reverse case that W ∗ (xD ) < W ∗ (xU ), we
can also obtain an angle θD and a pseudo wedge P W (xD ) for xD by symmetric arguments. Then, either P W (xD ) is sideward and the opposite side of L can be pruned, or xD itself is a conditional (1|1)R -centroid. Thus, the third phase
solves the problem of the nonexistence of sideward wedges.
Recall that the three phases of searching for sideward wedges are based on the
existence of xD and xU on L, which was not guaranteed before. Here we show
that, by constructing appropriate border lines, we can guarantee the existence
of xD and xU while searching between these border lines. The bounding box is
defined as the smallest axis-aligned rectangle that encloses all circles in C(V ).
Obviously, any point x outside the box satisfies W ∗ (x) = W (V ) and cannot be a (1|1)R -centroid. Thus, given a vertical line not intersecting the box,
the half-plane to be pruned is trivially decided. Moreover, let Ttop and Tbtm
be two arbitrary horizontal lines strictly above and below the bounding box,
respectively. We can obtain the following.
Lemma 26 Let L be an arbitrary vertical line intersecting the bounding box,
and x′D and x′U denote its intersection points with Ttop and Tbtm , respectively.
W edge(x′D ) is downward and W edge(x′U ) is upward.
Proof. Consider the case about W edge(x′D ). As described above, we know the
fact that W ∗ (x′D ) = W (V ). Let θ be an arbitrary angle with 0 ≤ θ ≤ π. We can observe that H − (F (θ|x′D ), y(θ|x′D )) cannot contain all circles in C(V ), that is, V ⊄ H − (B(θ|x′D ), y(θ|x′D )). This implies that W (θ|x′D ) < W ∗ (x′D ) and θ ∉ M A(x′D ). Therefore, we have that M A(x′D ) ⊂ (π, 2π) and W edge(x′D ) is
downward by Property 1. By similar arguments, we can show that W edge(x′U )
is upward. Thus, the lemma holds. ⊓⊔
According to this lemma, by inserting Ttop and Tbtm into T , the existence
of xD and xU is enforced for any vertical line intersecting the bounding box.
Besides, it is easy to see that the insertion does not affect the correctness of
all lemmas developed so far.
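The bounding-box preprocessing amounts to a single pass over C(V), as in the sketch below; circles are given as (center, radius) pairs, and the margin and all names are illustrative assumptions rather than quantities fixed by the paper.

def bounding_box(circles, margin=1.0):
    """Smallest axis-aligned rectangle enclosing all circles, plus the y-coordinates
    of two horizontal border lines strictly above and below it (T_top and T_btm)."""
    x_lo = min(cx - r for (cx, cy), r in circles)
    x_hi = max(cx + r for (cx, cy), r in circles)
    y_lo = min(cy - r for (cx, cy), r in circles)
    y_hi = max(cy + r for (cx, cy), r in circles)
    box = (x_lo, x_hi, y_lo, y_hi)
    t_top = y_hi + margin        # horizontal line y = t_top, strictly above the box
    t_btm = y_lo - margin        # horizontal line y = t_btm, strictly below the box
    return box, t_top, t_btm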
Summarizing the above discussion, the whole picture of our desired pruning
procedure can be described as follows. In the beginning, we perform a preprocessing to obtain the bounding box and then add Ttop and Tbtm into T . Now,
given a vertical line L, whether to prune its left or right plane can be determined
by the following steps.
1. If L does not intersect the bounding box, prune the half-plane not containing
the box.
2. Compute xD and xU on L.
3. Find a sideward wedge or pseudo wedge via the three aforementioned phases.
(Terminate whenever a strong or conditional (1|1)R -centroid is found.)
(a) If breakpoints exist between xD and xU , pick any of them and check it.
(b) If no such breakpoint, decide whether L is non-leaning by checking xB .
(c) If L is non-leaning, compute P W (xU ) or P W (xD ) depending on which
of xU and xD has smaller weight loss.
4. Prune the right or left plane of L according to the direction of the sideward
wedge or pseudo wedge.
The correctness of this procedure follows from the developed lemmas. Any
vertical line not intersecting the bounding box is trivially dealt with in Step 1,
due to the property of the box. When L intersects the box, by Lemma 26, xD and
xU can certainly be found in Step 2. The three sub-steps of Step 3 correspond
to the three searching phases. When L is not non-leaning, a sideward wedge is
found, either at some breakpoint between xD and xU in Step 3(a) by Corollary
17, or at xB in Step 3(b) by Lemma 21. Otherwise, according to Lemma 24
or its symmetric version, a pseudo wedge can be built in Step 3(c) for xU or
xD , respectively. Finally in Step 4, whether to prune the left or right plane of
L can be determined via the just-found sideward wedge or pseudo wedge, by
respectively Lemma 9 or Corollary 25.
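Putting the four steps together, the procedure can be summarized schematically as follows, with the heavy geometric subroutines left abstract: find_xD_xU stands for Step 2 (the modified search of Lemma 14) and sideward_or_pseudo_wedge for the three phases of Step 3. All identifiers are illustrative placeholders, not routines defined in the paper.

def prune_side_of_vertical_line(L_x, box, find_xD_xU, sideward_or_pseudo_wedge):
    """Return 'left' or 'right' (the half-plane to prune), or ('centroid', point)."""
    x_lo, x_hi, _, _ = box
    if L_x < x_lo:                   # Step 1: L lies to the left of the bounding box
        return 'left'
    if L_x > x_hi:                   # Step 1: L lies to the right of the bounding box
        return 'right'
    xD, xU = find_xD_xU(L_x)         # Step 2
    kind, payload = sideward_or_pseudo_wedge(L_x, xD, xU)   # Step 3 (three phases)
    if kind == 'centroid':           # strong or conditional (1|1)_R-centroid found
        return ('centroid', payload)
    # Step 4: a leftward (pseudo) wedge prunes the right plane, and vice versa.
    return 'right' if payload == 'leftward' else 'left'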
The time complexity of this procedure is analyzed as follows. The preprocessing for computing the bounding box trivially takes O(n) time. In Step 1,
any vertical line not intersecting the box can be identified and dealt with in
O(1) time. Finding xD and xU in Step 2 requires the help of the binary-search
algorithm developed in Subsection 3.2. Although the algorithm is designed to find a local
optimal point, we can easily observe that slightly modifying its objective makes
it applicable to this purpose without changing its time complexity. Thus, Step
2 can be done in O(n log2 n) time by Lemma 14.
In Step 3(a), all breakpoints between xD and xU can be found in O(n log n)
time as follows. As done in Lemma 14, we first list all breakpoints on L by O(n)
sorted sequences of length O(n), which takes O(n log n) time. Then, by performing binary search with the y-coordinates of xD and xU , we can find within each
sequence the breakpoints between them in O(log n) time. In Step 3(a) or 3(b),
checking a picked point x is done by computing CA(x), that requires O(n log n)
time by Lemma 10. To compute the pseudo wedge in Step 3(c), the angle θU
satisfying Lemma 24, or symmetrically θD , can be computed in O(n log n) time
by the sweeping technique used in Lemma 10. Thus, P W (xU ) or P W (xD ) can be computed in O(n log n) time. Finally, the pruning decision in Step 4 takes O(1) time.
Summarizing the above, these steps require O(n log2 n) time in total. Since the
invocation of Lemma 14 needs an additional O(n2 log n)-time preprocessing, we
have the following result.
Lemma 27 With an O(n2 log n)-time preprocessing, whether to prune the right
or left plane of a given vertical line L can be determined in O(n log2 n) time.
4.2 Searching on the Euclidean Plane
In this subsection, we come back to the (1|1)R -centroid problem. Recall that,
by Lemma 5, at least one (1|1)R -centroid can be found in the three sets of
intersection points T × T , C(V ) × T , and C(V ) × C(V ), which together consist of
O(n4 ) points. Let L denote the set of all vertical lines passing through these
O(n4 ) intersection points. By definition, there exists a vertical line L∗ ∈ L such
that its local optimal point is a (1|1)R -centroid. Conceptually, with the help
of Lemma 27, L∗ can be derived by applying a prune-and-search approach to L: pick the vertical line L from L with the median x-coordinate, determine by Lemma 27 whether the right or left plane of L should be pruned, discard lines of L in the pruned half-plane, and repeat the above until two vertical lines are left. Obviously,
it costs too much if this approach is carried out by explicitly generating and
sorting the O(n4 ) lines. However, by separately dealing with each of the three
sets, we can implicitly maintain sorted sequences of these lines and apply the
prune-and-search approach.
Let LT , LM , and LC be the sets of all vertical lines passing through the
intersection points in T × T , C(V ) × T , and C(V ) × C(V ), respectively. A local
optimal line of LT is a vertical line L∗t such that its local optimal point has
weight loss no larger than those of points in T × T . The local optimal lines L∗m
and L∗c can be similarly defined for LM and LC , respectively. We will adopt
different prune-and-search techniques to find the local optimal lines in the three
sets, as shown in the following lemmas.
Lemma 28 A local optimal line L∗t of LT can be found in O(n2 log n) time.
Proof. Let N1 = |T |. By definition, there are (N1 )2 intersection points in T × T
and (N1 )2 vertical lines in LT . For efficiently searching within these vertical lines,
we apply the ingenious idea of parametric search via parallel sorting algorithms,
proposed by Megiddo [15].
Consider two arbitrary lines Tg , Th ∈ T . If they are not parallel, let tgh
be their intersection point and Lgh be the vertical line passing through tgh .
Suppose that Tg is above Th in the left plane of Lgh . If applying Lemma 27 to
Lgh prunes its right plane, Tg is above Th in the remaining left plane. On the other hand, if the left plane of Lgh is pruned, Th is above Tg in the remaining right plane. Therefore, Lgh can be treated as a “comparison” between Tg and
Th , in the sense that applying Lemma 27 to Lgh determines their ordering in
the remained half-plane. It also decides the ordering of their intersection points
with the undetermined local optimal line L∗t , since the pruning ensures that a
local optimal line stays in the remaining half-plane.
It follows that, by resolving comparisons, the process of pruning vertical lines
in LT to find L∗t can be reduced to the problem of determining the ordering of
the intersection points of the N1 lines with L∗t , or say, the sorting of these intersection points on L∗t . While resolving comparisons during the sorting process,
we can simultaneously maintain the remaining half-plane by two vertical lines
as its boundaries. Thus, after resolving all comparisons in LT , one of the two
boundaries must be a local optimal line. As we know, the most efficient way to
obtain the ordering is to apply some optimal sorting algorithm AS , which needs
to resolve only O(N1 log N1 ) comparisons, instead of (N1 )2 comparisons. Since
resolving each comparison takes O(n log2 n) time by Lemma 27, the sorting is
done in O(n log2 n) × O(N1 log2 N1 ) = O(n3 log3 n) time, so is the finding of L∗t .
However, Megiddo [15] observed that, when multiple comparisons can be indirectly resolved in a batch, simulating parallel sorting algorithms in a sequential
way naturally provides the scheme for batching comparisons, thereby outperform
the case of applying AS . Let AP be an arbitrary cost-optimal parallel sorting
algorithm that runs in O(log n) steps on O(n) processors, e.g., the parallel merge
sort in [2]. Using AP to sort the N1 lines of T on L∗t takes O(log N1 ) parallel
steps. At each parallel step, there are k = O(N1 ) comparisons L1 , L2 , · · · , Lk
to be resolved. We select the one with median x-coordinate among them, which
is supposed to be some Li . If applying Lemma 27 to Li prunes its left plane,
for each comparison Lj to the left of Li , the ordering of the corresponding lines
of Lj in the remaining right plane of Li is directly known. Thus, the O(k/2)
comparisons to the left of Li are indirectly resolved in O(k/2) time. If otherwise
the right plane of Li is pruned, the O(k/2) comparisons to its right are resolved
in O(k/2) time. By repeating this process of selecting medians and pruning on
the remaining elements O(log k) times, all k comparisons can be resolved, which
takes O(n log2 n) × O(log k) + O(k + k/2 + k/4 + · · · ) = O(n log3 n) + O(N1 ) =
O(n2 ) time. Therefore, going through O(log N1 ) parallel steps of AP requires
O(n2 log N1 ) = O(n2 log n) time, which determines the ordering of the lines of T on L∗t and also computes a local optimal line L∗t . ⊓⊔
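The batch resolution of one parallel step can be sketched as follows. Each comparison is identified with the x-coordinate of its vertical line Lgh, and prune_left(x) abstracts Lemma 27, returning True when the left plane of that vertical line is pruned; representations and names are assumptions made for illustration.

def resolve_batch(comparison_xs, prune_left):
    """Resolve a batch of comparisons with O(log k) calls to the pruning oracle.
    Returns a dict mapping each comparison's x-coordinate to the side of the
    (unknown) local optimal line it lies on ('left' or 'right')."""
    resolved = {}
    remaining = sorted(comparison_xs)
    while remaining:
        m = len(remaining) // 2
        mid = remaining[m]
        if prune_left(mid):
            # The left plane of the median line is pruned, so every comparison at or
            # to the left of it is resolved indirectly.
            for x in remaining[: m + 1]:
                resolved[x] = 'left'
            remaining = remaining[m + 1 :]
        else:
            for x in remaining[m:]:
                resolved[x] = 'right'
            remaining = remaining[:m]
    return resolved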
Lemma 29 A local optimal line L∗m of LM can be found in O(n2 log n) time.
Proof. To deal with the set LM , we use ideas similar to those in the proofs of Lemmas 13 and 14 in order to divide C(V ) × T into sorted sequences of points. Given
a fixed circle C = Cγ (u0 ) for some point u0 ∈ V , we show that the intersection
points in C × T r (v) and C × T l (v) for each v ∈ V can be grouped into O(1)
sequences of length O(n), which are sorted in increasing x-coordinates. Summarizing over all circles in C(V ), there will be in total O(n2 ) sequences of length
O(n), each of which maps to a sequence of O(n) vertical lines sorted in increasing
x-coordinates. Then, finding a local optimal line L∗m can be done by performing prune-and-search on the O(n2 ) sequences of vertical lines via parallel binary
searches. The details of these steps are described as follows.
First we discuss the way of grouping intersection points in C × T r (v)
and C ×T l (v) for a fixed point v ∈ V , so that each of them can be represented by
O(1) subsequences of P (v). By symmetry, only C × T r (v) is considered. Similar
to Lemma 13, we are actually computing sequences of points in V \{v} corresponding to these intersection points. For each vi ∈ V \{v}, the outer tangent line
T r (vi |v) may intersect C at two, one, or zero point. Let trC,1 (vi |v) and trC,2 (vi |v)
denote the first and second points, respectively, at which T r (vi |v) intersects C
along the direction from v to vi . Note that, when T r (vi |v) intersects C at less
than two points, trC,2 (vi |v) or both of them will be null.
In the following, we consider the sequence computation under two cases about
the relationship between u0 and v, (1) u0 = v, and (2) u0 ≠ v.
Fig. 6. Two subcases about how Cγ (v) intersects C: (a) no intersection; (b) two intersection points.
Case (1): Since v coincides with u0 , C × T r (v) is just the set of tangent points
trC,1 (vi |v) for all vi ∈ V \{v}. It is easy to see that the angular sorted sequence P (v) directly corresponds to a sorted sequence of these n − 1 tangent
points in CCW order. P (v) can be further partitioned into two sub-sequences
P1 (v) and P2 (v), which consist of points in V \{v} with polar angles (with
respect to v) in the intervals [0, π) and [π, 2π), respectively. Since they are
sorted in CCW order, we have that intersection points corresponding to
P2 (v) and to the reverse of P1 (v) are sorted in increasing x-coordinates,
as we required. Obviously, P1 (v) and P2 (v) are of length O(n) and can be
obtained in O(log n) time.
Case (2): Suppose without loss of generality that v lies in the lower left quadrant with respect to u0 , and let θ0 be the polar angle of u0 with respect
to v. This case can be further divided into two subcases by whether or not
Cγ (v) intersects C at less than two points.
Consider first the subcase that they intersect at none or one point (see Figure
6(a).) Let θ3 and θ4 be the angles such that T r (y(θ3 |v)|v) and T r (y(θ4 |v)|v)
are inner tangent to Cγ (v) and C, where θ3 ≤ θ4 (Note that θ3 = θ4 only
when the two circles intersect at one point.) For each vi ∈ V \{v}, T r (vi |v)
does not intersect C, if the polar angle of vi with respect to v is neither
in [θ0 , θ3 ] nor in [θ4 , θ0 + π]. We can implicitly obtain from P (v) two subsequences P3 (v) and P4 (v), consisting of points with polar angles in [θ0 , θ3 ] and
in [θ4 , θ0 + π], respectively. It can be observed that the sequence of points
vi listed in P3 (v) corresponds to a sequence of intersection points trC,1 (vi |v)
listed in clockwise (CW) order on C and, moreover, a sequence of trC,2 (vi |v)
listed in CCW order on C. Symmetrically, the sequence of points vj in P4 (v)
corresponds to a sequence of trC,1 (vj |v) in CCW order and a sequence of
trC,2 (vj |v) in CW order. The four implicit sequences of intersection points
on C can be further partitioned by a horizontal line Lh passing through
its center u0 , so that the resulting sequences are naturally sorted in either
increasing or decreasing x-coordinates. Therefore, we can implicitly obtain
at most eight sorted sequences of length O(n) in place of C × T r (v), by
appropriately partitioning P (v) in O(log n) time.
Consider that Cγ (v) intersects Cγ (u0 ) at two points c5 and c6 , where c5
is to the upper right of c6 (see Figure 6(b).) Let θ5 and θ6 be the angles
such that T r (y(θ5 |v)|v) and T r (y(θ6 |v)|v) are tangent to Cγ (v) at c5 and
c6 , respectively. Again, P (v) can be implicitly partitioned into three subsequences P5 (v), P6 (v), and P7 (v), which consist of points with polar angles in
[θ0 , θ5 ), [θ5 , θ6 ), and [θ6 , θ0 + π], respectively. By similar observations, P5 (v)
corresponds to two sequences of intersection points listed in CW and CCW
order, respectively, and P7 (v) corresponds to two sequences listed in CCW
and CW order, respectively. However, the sequence of points vi in P6 (v) corresponds to the sequences of trC,1 (vi |v) and trC,2 (vi |v) listed in both CCW
order. These sequences can also be partitioned by Lh into sequences sorted in
x-coordinates. It follows that we can implicitly obtain at most twelve sorted
sequences of length O(n) in place of C × T r (v) in O(log n) time.
According to the above discussion, for any two points u, v ∈ V , Cγ (u)×T r (v)
and Cγ (u) × T l (v) can be divided into O(1) sequences in O(log n) time, each
of which consists of O(n) intersection points on Cγ (u) sorted in increasing x-coordinates. Thus, C(V ) × T can be re-organized as O(n2 ) sorted sequences of
length O(n) in O(n2 log n) time, which correspond to O(n2 ) sorted sequences
of O(n) vertical lines. Now, we can perform parametric search for parallel binary searches on these sequences of vertical lines, by techniques similar to those used in
Lemma 14. For each of the O(n2 ) sequences, its middle element is first obtained
and assigned a weight equal to the sequence length in O(1) time. Then,
the weighted median L of these O(n2 ) elements is computed in O(n2 ) time
[18]. By applying Lemma 27 to L in O(n log2 n) time, at least one-eighths of
total elements can be pruned from these sequences, taking another O(n2 ) time.
Therefore, a single iteration of pruning requires O(n2 ) time. After O(log n) such
iterations, a local optimal line L∗m can be found in total O(n2 log n) time, which proves the lemma. ⊓⊔
Lemma 30 A local optimal line L∗c of LC can be found in O(n2 log n) time.
Proof. There are at most O(n2 ) points in C(V )×C(V ). Thus, LC can be obtained
and sorted according to x-coordinates in O(n2 log n) time. Then, by simply performing binary search with Lemma 27, a local optimal line L∗c can be easily found in O(log n) iterations of pruning, which require O(n log3 n) time in total. In summary, the computation takes O(n2 log n) time, and the lemma holds. ⊓⊔
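The search of Lemma 30 is a plain binary search over the sorted x-coordinates of LC, driven again by the pruning oracle of Lemma 27 (abstracted below as prune_left); a minimal sketch, with all names being illustrative assumptions:

def local_optimal_line(sorted_xs, prune_left):
    """sorted_xs: x-coordinates of the vertical lines of L_C in increasing order."""
    lo, hi = 0, len(sorted_xs) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prune_left(sorted_xs[mid]):   # left plane pruned: a local optimal line is at or right of mid
            lo = mid
        else:                            # right plane pruned: it is at or left of mid
            hi = mid
    return sorted_xs[lo], sorted_xs[hi]  # two candidate lines remain for a final direct check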
By definition, L∗ can be found among L∗t , L∗m , and L∗c , which can be computed in O(n2 log n) time by Lemmas 28, 29, and 30, respectively. Then, a (1|1)R -centroid can be computed as the local optimal point of L∗ in O(n log2 n) time by
Lemma 14. Combining with the O(n2 log n)-time preprocessing for computing
the angular sorted sequences P (v) and the bounding box enclosing C(V ), we
have the following theorem.
Theorem 3 The (1|1)R -centroid problem can be solved in O(n2 log n) time.
5 Concluding Remarks
In this paper, we revisited the (1|1)-centroid problem on the Euclidean plane
under the consideration of minimal distance constraint between facilities, and
proposed an O(n2 log n)-time algorithm, which closes the bound gap between this problem and its unconstrained version. Starting from a critical observation on the medianoid solutions, we developed a pruning tool that leaves an indefinite region after pruning, and made use of it via a multi-level structured parametric search approach, which is quite different from the previous approach in [5,11].
Considering distance constraints between facilities in various competitive facility location models is both of theoretical interest and of practical importance. However, similar constraints are rarely seen in the literature. A good starting point would be to introduce the constraint between the facilities of different players in the (r|Xp )-medianoid and (r|p)-centroid problems, and perhaps even between the facilities of the same player.
References
1. R. Cole, “Slowing down sorting networks to obtain faster sorting algorithms,” Journal of the ACM, vol. 34, no. 1, pp. 200–208, 1987.
2. R. Cole, “Parallel merge sort,” SIAM Journal on Computing, vol. 17, no. 4, pp.
770–785, 1988.
3. A. Dasci, “Conditional Location Problems on Networks and in the Plane,” In: H.
A. Eiselt, V. Marianov (eds.), Foundations of Location Analysis, Springer, New
York, pp. 179–206, 2011.
4. I. Davydov, Y. Kochetov, and A. Plyasunov, “On the complexity of the (r|p)-centroid problem in the plane,” TOP, vol. 22, no. 2, pp. 614–623, 2013.
5. Z. Drezner, “Competitive location strategies for two facilities,” Regional Science
and Urban Economics, vol 12, no. 4, pp. 485–493, 1982.
6. Z. Drezner, and Z. Eitan, “Competitive location in the plane,” Annals of Operations
Research, vol. 40, no. 1, pp. 173–193, 1992.
7. H. A. Eiselt and G. Laporte, “Sequential location problems,” European Journal of
Operational Research, vol. 96, pp. 217–231, 1997.
8. H. A. Eiselt, G. Laporte, and J.-F. Thisse, “Competitive location models: a framework and bibliography,” Transportation Science, vol. 27, pp. 44–54, 1993.
9. H. A. Eiselt, V. Marianov, and T. Drezner, “Competitive Location Models,” In: G. Laporte, S. Nickel, F. Saldanha da Gama (eds.), Location
Science, Springer International Publishing, pp. 365–398, 2015.
10. S. L. Hakimi, “On locating new facilities in a competitive environment,” European
Journal of Operational Research, vol. 12, no. 1, pp. 29–35, 1983.
11. S. L. Hakimi, “Locations with spatial interactions: competitive locations and
games,” In: P.B. Mirchandani, R.L. Francis (eds), Discrete location theory, Wiley, New York, pp. 439–478, 1990.
12. P. Hansen, J.-F. Thisse, and R. W. Wendell, “Equilibrium analysis for voting and
competitive location problems,” In: P. B. Mirchandani, R. L. Francis (eds),
Discrete location theory, Wiley, New York, pp 479–501, 1990.
13. H. Hotelling, “Stability in competition,” Economic Journal, vol. 39, 41–57, 1929.
14. D. T. Lee, Y. F. Wu, “Geometric complexity of some location problems,” Algorithmica, vol. 1, no. 1, pp. 193–211, 1986.
15. N. Megiddo, “Applying parallel computation algorithms in the design of serial
algorithms,” Journal of the ACM, vol. 30, no. 4, pp. 852–865, 1983.
16. N. Megiddo, “Linear-time algorithms for linear programming in R3 and related
problems,” SIAM Journal on Computing, vol. 12, no. 4, pp. 759–776, 1983.
17. F. Plastria, “Static competitive facility location: an overview of optimisation approaches.” European Journal of Operational Research, vol. 129, no. 3, pp. 461–470,
2001.
18. A. Reiser, “A linear selection algorithm for sets of elements with weights,” Information Processing Letters, vol. 7, no. 3, pp. 159–162, 1978.
19. D. R. Santos-Peñate, R. Suárez-Vega, and P. Dorta-González, “The leader–follower
location model,” Networks and Spatial Economics, vol. 7, no. 1, pp. 45–61, 2007.
HYBRID FUEL CELLS POWER FOR LONG DURATION
ROBOT MISSIONS IN FIELD ENVIRONMENTS
Jekanthan Thangavelautham1, Danielle Gallardo2,
Daniel Strawser1, Steven Dubowsky1
1 Mechanical Engineering Department, Massachusetts Institute of Technology
77 Massachusetts Ave., Cambridge, MA, 02139,
{jekan, dstrawse, dubowsky}@mit.edu
2 Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer
Polytechnic Institute, 110th 8th Jonsson Engineering Center, Troy New York, 12180
Mobile robots are often needed for long duration missions. These include search and rescue, sentry, repair, surveillance and entertainment. Current power supply technology limits walking and climbing robots in many such missions. Internal combustion engines have high noise and emit toxic exhaust, while rechargeable batteries have low energy densities and high rates of self-discharge. In theory, fuel cells do not have such limitations. In particular, Proton Exchange Membrane (PEM) fuel cells can provide very high energy densities, and are clean and quiet. However, PEM fuel cells are found to be unreliable due to performance degradation. This can be mitigated by protecting the fuel cell in a fuel-cell battery hybrid configuration, using filtering electronics that ensure the fuel cell is isolated from electrical noise and a battery to isolate it from power surges. Simulation results are presented for a HOAP 2 humanoid robot that suggest a fuel cell powered hybrid power supply is superior to conventional batteries.
I. Introduction
Mobile robots, including walking robots, are needed to perform long duration
missions that are difficult, dangerous and tedious. These include search and
rescue, repair, entertainment, sentry and surveillance applications [1, 2].
Continuous operation of these robots, lasting days and weeks (not hours), would
be ideal for these applications. Typical power demands for field robots will vary
significantly during a mission, often with high peak power demands. These field
systems often have constraints on their mass, volume and noise.
Current power supply technology is a key limiting factor for long duration field
robotic applications. Internal combustion engines can provide high power for
long durations but produce toxic exhaust, noise and strong thermal signatures,
making them inappropriate for many important applications. Current
rechargeable batteries have very low energy densities and high rates of
self-discharge, requiring systems to stop and recharge every few hours, making them
ineffective for continuous long duration missions. Hence, there is a significant
need for a power supply that can provide the high total energy required for long
duration missions and that is quiet and clean.
Figure 1: (Left) Boston Dynamics Big Dog, a four-legged supply
robot. (Right) Robonaut 2 repair robot.
II. Fuel Cell Power for Mobile Robots
Fuel cells are high energy sources of power that have been suggested for robots
[7, 8]. They are a promising alternative mobile source of power and have the
potential to overcome limitations of current batteries and internal combustion
engines. They are simple electrochemical devices that convert chemical energy
into electricity (Figure 2). Unlike a battery, fuel cells require a constant supply
of fuel and oxidant to produce electricity. Proton Exchange Membrane (PEM)
fuel cells are particularly attractive for robotics. These devices consist of simple
solid state components sandwiched together as shown in Figure 2. They
combine hydrogen fuel and oxygen (from breathing air), through the most
energy releasing reaction known, to produce electricity and water. It has been
demonstrated that PEM fuel cells can reach 65-70% or higher operating efficiencies
at room temperature and produce clean water exhaust [9].
III. The Challenges of Fuel Cells for Robots
While PEM fuel cells are simple and attractive in theory, they have three
fundamental problems for practical robotics applications. These problems are
storage of hydrogen fuel, long-life reliability of the fuel cells, and low power.
Hydrogen fuel, due to its high energy content and low density, is difficult to store.
In our research, we have developed simple, innovative hydrogen storage
technologies that promise energy storage densities better than the best batteries
of today [5].
Figure 2: A PEM Fuel Cell Consumes Reactants, Hydrogen and Oxygen
to Produce Electricity, Water and Heat.
Second, PEM fuel cells have been found to be unreliable [3]. Our studies of
PEM fuel cells show that they are delicate and unreliable due to degradation of
their components, resulting in short lives and premature failure [4]. However,
our physical models and experiments suggest that PEM fuel cells controlled to
operate within a narrow operating range can be made robust, with long lives of
3-5 years or more and high operating efficiencies [4]. Among the factors known
to degrade fuel cells are high operating voltages and electrical noise. As discussed
below, mobile field robots operating in unstructured environments are subject to
very substantial variation in power demand, which, without proper control, can
result in fuel cell degradation that shortens their lives. A solution to this problem
is discussed in the section below.
A third problem with fuel cells is that while they are high energy devices, they
have relatively low power. This is a problem for robotics, where typical power
requirements can vary substantially over a mission, with low-power rest periods
and short bursts at peak power. These varying power demands are known to
stress the fuel cells, resulting in short lives. A solution is to use fuel cells in a
hybrid system for mobile robots that maintains the fuel cell at optimal operating
conditions to maximize life and efficiency, protecting it from external electrical
load variations and noise, and meeting peak power requirements using a battery
(see Figure 3).
Fuel cell hybrid systems have been used to meet rapid, transient power
demands in large and stationary applications, and in robotics to meet power
surges [7, 8]. However, these hybrid system designs have not considered the
effects of fuel cell degradation.
Figure 3: Proposed Fuel Cell-Battery Hybrid Power Supply for
Robots.
IV. Research
The research presented here is focused on developing a hybrid system design
concept for mobile robots with energy densities that exceed the best battery
technology. The hybrid system is designed to meet the required peak power
demands and to isolate the fuel cell from the degrading stresses of high and low
frequency noise generated by the conditioning circuits required for battery
management. Physical models are used to simulate expected conditions and
control systems are developed to demonstrate the concept. It is shown that the
results are a vast improvement over conventional batteries in terms of life,
efficiency, energy density and power density.
V. Case Study: Power for Humanoid Walking Robot
Here a hybrid fuel cell power supply for a HOAP-2 humanoid walking robot
(Figure 4) developed by Fujitsu is presented. The HOAP-2 is a 7.8 kg robot,
with a maximum rated power of 250 W. It contains a 1.2 kg Nickel Metal
Hydride rechargeable battery pack by default. The system contains 25 servo
actuators, 6 for each leg and 5 for each arm, 2 for the head and 1 one for the
waist. The robot has onboard computer equivalent to a PC-104 Pentium III
system, a vision system consisting of 2 CCD cameras, onboard accelerometers,
gyroscope and pressure sensors on each feet.
Figure 4: (Right) Fujitsu's HOAP-2 Robot. (Left) Cyberbotics Webots™ Model of the HOAP-2.
A simulation model of the HOAP-2 from Cyberbotics Webots™ [6] is used for our
power demand calculations. The robot system consists of three different
subsystems for the power calculations, namely the electro-mechanical system, the
computer and sensors, and the power system. For the electro-mechanical system
the simulator model provides the mechanical power output of the servo motors.
The servo motors are assumed to have a 50% electrical to mechanical efficiency.
The computer and sensor system is assumed to be always powered and to consume
40 W based on the HOAP-2 specifications. Below, the power demand profile of the
robot's walking behavior is shown (Figure 5). For these scenarios, alternative power
sources are compared with the default nickel metal hydride battery pack that
weighs 1.2 kg.
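The power bookkeeping described above can be written down directly. The following minimal Python sketch is illustrative only (it is not the authors' simulation code) and assumes just the two figures quoted in the text, the 50% servo efficiency and the constant 40 W computer-and-sensor load; the 30 W mechanical-power input in the usage line is a hypothetical example value.

```python
# Illustrative sketch of the robot's electrical power demand model.
SERVO_EFFICIENCY = 0.50        # assumed electrical-to-mechanical servo efficiency
COMPUTER_SENSORS_W = 40.0      # always-on computer + sensor load (HOAP-2 spec)

def electrical_demand_w(mechanical_power_w: float) -> float:
    """Total electrical draw for a given mechanical output of the servos."""
    return mechanical_power_w / SERVO_EFFICIENCY + COMPUTER_SENSORS_W

# Example: an instant where the simulator reports 30 W of mechanical power
print(electrical_demand_w(30.0))   # -> 100.0 W
```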
VI. Fuel Cell Hybrid System
The fuel cell hybrid system consists of a fuel cell stack that provides a steady
power source and a rechargeable lithium ion A123 Nanophosphate™ battery that
meets peak power demands. The fuel cells within the stack are operated at a
constant operating voltage of 0.8 V, providing a 65% operating efficiency. Our
research into the degradation of fuel cells, based on models and experimental
results, shows that increased operating voltage exponentially decreases the life
of the fuel cell [4]. Operating the fuel cell at a constant voltage of 0.8 V or less
ensures long life, while providing sufficiently high operating efficiency. The
fuel cell trickle charges the battery during idle times, ensuring the battery is fully
charged to meet power peaks. The Nanophosphate™ battery handles peak
demands and tolerates deep discharges better than conventional lithium ion
batteries. Ensuring the battery is nearly fully charged maximizes its life. The
battery for the hybrid system is sized based on its specific power density to meet
the maximum possible power requirements of the robot.
An oscillation suppression circuit interfaces the fuel cell to the power management
system, which consists of power switching circuitry and a DC-DC converter. The
interface circuit effectively extracts the energy from the fuel cell and transfers it
into the battery. The oscillation suppression circuit prevents any voltage
oscillation from the electrical circuits, particularly the DC-DC converter, from
being seen by the fuel cell. This ensures the fuel cell operates at a steady operating
voltage, without any electrical load oscillations.
Figure 5: Power Demand of a HOAP-2 Robot, walking
at 0.06 m/s.
VII. Hybrid System Sizing
Based on the power demand profiles (Figure 5), the fuel cell will provide a
constant steady source of power of 45 W, while the power peaks will be
handled by the battery. For a 45 W steady supply of power, a fuel cell stack
weighing 150 g will be required. For the 250 W peak required by the robot, a
135 g lithium ion Nanophosphate™ battery is required (with a specific power of
1,850 W/kg for APR18650 cells). Another 115 g is allocated for the mass of the
power electronics and other items, leaving 800 g for the lithium hydride fuel
supply with an energy density of 4,950 Wh/kg.
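These mass figures can be cross-checked with a short calculation. The sketch below is illustrative rather than the authors' sizing tool; the 1.2 kg total mass budget (chosen to match the stock NiMH pack) is an assumption inferred from the quoted numbers, while the other constants are taken directly from the text.

```python
# Illustrative sizing of the fuel cell-battery hybrid supply for the HOAP-2.
steady_power_w        = 45.0     # continuous load carried by the fuel cell
peak_power_w          = 250.0    # robot peak, carried by the battery
battery_specific_w_kg = 1850.0   # A123 Nanophosphate APR18650 specific power
fuel_cell_stack_kg    = 0.150    # quoted stack mass for a 45 W supply
electronics_kg        = 0.115    # power electronics and other items
total_budget_kg       = 1.200    # assumed: same mass as the default NiMH pack
fuel_energy_wh_kg     = 4950.0   # lithium hydride fuel supply energy density

battery_kg = peak_power_w / battery_specific_w_kg                        # ~0.135 kg
fuel_kg    = total_budget_kg - (fuel_cell_stack_kg + battery_kg + electronics_kg)
run_time_h = fuel_kg * fuel_energy_wh_kg / steady_power_w

print(f"battery mass: {battery_kg*1000:.0f} g")   # ~135 g
print(f"fuel mass:    {fuel_kg*1000:.0f} g")      # ~800 g
print(f"run time:     {run_time_h:.0f} h")        # ~88 h, as listed in Table 1
```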
VIII. Power System Comparison
Four power supply configurations are compared: a nickel metal hydride battery,
a lithium ion battery, a fuel cell system and a fuel cell hybrid system (see Table 1).
Table 1: Power Supply Comparison for HOAP-2 Humanoid Robot

Power Supply        FC Stack Mass   FC Fuel Mass   Energy Density   System Life   Run-Time
NiMH Battery        --              --             40 Wh/kg         0.3 year      3 hours
Li Ion Battery      --              --             120 Wh/kg        1 year        9 hours
Fuel Cell           300 g           900 g          4950 Wh/kg       5 days        99 hours
Fuel Cell Hybrid    150 g           800 g          4950 Wh/kg       3 years       88 hours
Nickel metal hydride and lithium ion batteries have the lowest energy densities
and thus provide short run-times before requiring recharging. The system life
of the batteries is computed based on an expected lifetime of 1,000
charge/discharge cycles multiplied by the run-time hours. A direct fuel cell
system has the longest run-time. However, the life of that system, based on our
degradation models, is expected to be just 5 days, making this option impractical.
The fuel cell hybrid system offers a good trade-off between run-time and system life.
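The run-time and battery-life entries of Table 1 follow from the same data. The sketch below is a back-of-the-envelope check under the stated assumptions (1,000 charge/discharge cycles per battery, a 45 W average load, and the fuel masses listed in the table), not a lifetime model.

```python
# Reproducing the run-time and battery system-life entries of Table 1.
HOURS_PER_YEAR = 24 * 365
AVERAGE_LOAD_W = 45.0
FUEL_ENERGY_WH_KG = 4950.0

batteries = {"NiMH Battery": 3.0, "Li Ion Battery": 9.0}   # run-time per charge [h]
for name, run_h in batteries.items():
    life_years = run_h * 1000 / HOURS_PER_YEAR             # 1,000 cycles assumed
    print(f"{name:16s} run-time {run_h:4.0f} h, system life ~{life_years:.1f} years")
    # -> ~0.3 years (NiMH) and ~1.0 year (Li ion), as in Table 1

fuel_cells = {"Fuel Cell": 0.900, "Fuel Cell Hybrid": 0.800}   # fuel mass [kg]
for name, fuel_kg in fuel_cells.items():
    run_h = fuel_kg * FUEL_ENERGY_WH_KG / AVERAGE_LOAD_W
    print(f"{name:16s} run-time {run_h:4.0f} h")               # -> 99 h and 88 h
```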
IX. Summary and Conclusions
Based on these results, the fuel cell hybrid system concept offers high energy
density and long life, and meets the required peak power demands. The key to
our hybrid system concept is effective control and design, where the fuel cell and
battery are optimally sized to minimize stress on the fuel cell, while enabling the
battery to meet power demand peaks. By minimizing stresses on the fuel cell, the
system can be operated for a long life at high operating efficiencies.
X. References
1. Hägele, M., "Contribution to World Robotics 2005," Technical Report, European Robotics Network, 2005.
2. Asada, H., et al., "A Roadmap to US Robotics: From Internet to Robotics," Technical Report, Computing Community Consortium/Computing Research Association, 2009.
3. Rubio, M. A., Urquia, A., and Dormida, S., "Diagnosis of Performance Degradation Phenomena in PEM Fuel Cells," International Journal of Hydrogen Energy, pp. 1-5, 2009.
4. Thangavelautham, J., Dubowsky, S., "On the Catalytic Degradation in Fuel Cell Power Supplies for Long-Life Mobile Field Sensors," Fuel Cells, vol. 13, no. 2, pp. 1-24, 2013.
5. Thangavelautham, J., Strawser, D., Dubowsky, S., "Lithium hydride powered PEM fuel cells for long-duration small mobile robotic missions," IEEE International Conference on Robotics and Automation, pp. 1-8, 2012.
6. Michel, O. / Cyberbotics Ltd., "Webots™: Professional Mobile Robot Simulation," International Journal of Advanced Robotic Systems, vol. 1, no. 1, pp. 39-42, 2004.
7. Kesner, S. B., Plante, J. S., Boston, P., Fabian, T., Dubowsky, S., "Mobility and Power Feasibility of a Microbot Team System for Extraterrestrial Cave Exploration," Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, April 2007.
8. Joh, H., et al., "A direct methanol fuel cell system to power a humanoid robot," Journal of Power Sources, vol. 195, no. 1, pp. 293-298, 2010.
9. Barbir, F., PEM Fuel Cells: Theory and Practice, Academic Press, 2005.
Using a hierarchy of Domain Specific Languages in
complex software systems design
V. S. Lugovsky <[email protected]>
arXiv:cs/0409016v1 [cs.PL] 9 Sep 2004
February 1, 2008
Abstract
A new design methodology is introduced, with some examples on building Domain
Specific Languages hierarchy on top of Scheme.
1 Introduction
Programs that write programs that write programs (...) Too complicated? A hackers technique which can not be applied to the “real world” problems? This is exactly how IT industry specialists think about metaprogramming. And this is a completely wrong notion!
Metaprogramming is the only known way to reduce the complexity significantly. In some areas “programs that write programs” are accepted by the industry due to the enormous level of complexity of the corresponding handwritten code — regular expressions, lexers and parsers generators to name a few, and code wizards and templates in popular “integrated development environments” are also widely used. But this does not help at all in the overall methodology recognition. The industry’s most beloved and widely buzzworded language, Java, does not have even such a rudimentary preprocessor as C does. Very few C++ programmers have an idea on how to use the templates, they just utilize STL without any understanding of the true source of the power. Even in the enlightened world of Lisp programming the misunderstanding is surprisingly wide: almost all of Lisp dialects and Scheme implementations have problems with macros (not so many people are using them), and even the current Scheme standard R5RS contains only hygienic macros that can hardly be recognized as “true” macros as they hide an access to the host language.
This situation looks like a paradox. On the one hand, industry uses the metaprogramming ideas and tools, and it is easy to imagine how it would suffer without them. On the other hand, industry does not want to hear anything related to the metaprogramming. It does not want people inventing new programming languages — plenty of industry coders barely use only one language and IT managers believe without any reason that they can not be taught to use more [6].
Industry prefers to “re–invent a wheel” and to express any sort of complexity in the form of libraries for static, steady languages. For some strange reason learning complicated libraries for a language which barely fits problem domain needs is preferred to learning a small new language specifically designed for it.
In this paper I am trying to advocate the metaprogramming approach as the major design methodology for complex systems. Sounds like another one “silver bullet” invention? There were many methodologies claiming to solve all possible problems of the mankind — RUP, eXtreme programming, etc. Why do we need another one? Simply because the previous approaches did not succeed. They were too tied to particular programming technologies (mostly — OOP varieties), which are definitely not “silver bullets”. Metaprogramming
methodology is different, it strongly encourages the use of all possible programming technologies and to invent the “impossible” ones.

2 Domain specific languages
Below I am providing an outline of the proposed methodology.
Any problem domain can be best expressed
using a language (mathematical, programming,
natural, ...) specially designed for it. In most
cases there should be one entity in a language
for every entity in a problem domain. For example, if a problem domain is the recognition
of syntax constructions in a characters stream,
the Domain Specific Language should contain
characters and characters sets as a primary entity and automata constructions for expressing
syntax. That is enough — regular expressions
language is designed. It is hard to believe that
somebody will ever invent anything better for
this purpose than this “most optimal” DSL.
If a problem domain is already specified as
an algebra, we even do not have to design the
DSL: it will be this algebra itself, galvanised
with any underlying computational semantics
— this is the way SQL was born. If a problem domain is 3D graphics, linear algebra and
stereometry should be used. All the languages
and data formats dedicated to 3D contain subsets of these formal theories.
As it is stated in [1],
“The object of a DSL-based software architecture is to minimise the
semantic distance between the system’s specification and its implementation.”
3 Core language
For any problem it is convenient to have a language that best fits it. There already exist specialized languages for some common problems.
But what to do if none is available? The answer is trivial: implement it. Implementation of domain specific languages is not very tricky. An approach I will describe here is based on metaprogramming techniques. It requires a so called Core Language, on top of which we will build a hierarchy of our domain specific languages. The Core Language should possess the following properties:
• True macros. That is, we must have an
access to a complete programming language (preferably the same as a host
language, or a different one) inside the
macro definitions. Macros should be real
programs which can do anything that the
programs written in the host language
can do. Macros are producing the code
in the host language, in the form of text
or directly as an abstract syntax tree.
• True runtime eval. Programs that are
generated in the runtime should be evaluated. This can be a different language
than the host language, or, better, the
same one.
• Turing-completeness. This should be a
real programming language, equivalent in
its expressive power to the “general purpose” languages.
• Simplicity. It is an extensible core and
should not contain any unnecessary complexity that can be later added by a user
who really needs it.
• Comprehensive and easy to use data
types system. If a type system is well
suited for expressing any possible abstract syntax trees, the language fits this
requirement.
On top of the Core Language we have to
build functionality that will be needed to implement programming languages. It is lexing,
parsing, intermediate languages that fit well
computational models different from the model
of the Core Language (e.g., if the core language
is imperative or an eager functional, we will
need a graph reduction engine to implement
lazy functional DSLs, or a term unification engine to implement logical languages and a stack
machine if we have to go to lower levels). The
Core Language enriched with this “Swiss army
knife” for programming languages development
then becomes a major tool for any project.
4 New methodology
The development process must fit in the following chain:
• divide the problem into sub–problems,
possibly using some object oriented design techniques, or whatever fits better.
• formalize each sub–problem.
5 Scheme example
A good example of a practical Core Language is
Scheme (with addition of Common Lisp–style
macros). It uses S–expressions as an AST,
and S–expressions composition is very natural. S–expressions are good enough to represent any possible AST (for example, XML is
naturally represented as SXML). It provides a
true runtime eval hosting the same language
as in compile time. There exist some practical
and efficient Scheme implementations which
provide performance acceptable for most tasks,
good FFI, and, thus, integration with legacy libraries.
• implement the Domain Specific Language after this formalization, using the
Let us start with adding the functionality
Core Language and other DSL with the
described
above to Scheme. First of all we will
same semantics.
need parsing — not all of our team members
• solve the problem using the best possible are fond of parentheses, so we have to implelanguage.
ment many complicated syntaxes. The most
This way any project will grow into a tree (hier- natural way for a functional programming lanarchy) of domain specific languages. Any lan- guage is to implement a set of parsing combinaguage is a subset or a superset of another lan- tors for building recursive descendant parsers
guage in the hierarchy (or, may be, combina- (mostly LL(1), but it is not such a fixed limit as
tion of several languages), and the amount of LALR(1) for Yacc–like automata generators).
coding for a new language if we already have
a deep and comprehensive hierarchy is quite
small.
A development team working within this
methodology should consist of at least one specialist who maintains this hierarchy, an architect who formalizes problems, and a number of
coders who specialize in particular problem domains, they even may not be programmers at
all — they just have to know well their domains
and operate them in terms that are as close as
possible to the native problem domain terminology. For example, HTML designer will be
happy operating HTML–like tags for his templates (that is why JSP custom tags are so popular); mathematician will find a language modelled after the standard mathematical notation
intuitive — for this reason Wolfram Mathematica is so popular among non-programmers;
game script writer will operate a language expressing characters, their properties and action
rules — stating, not programming. This list can
be continued infinitely.
Of course we will use metaprogramming
wherever possible. All the parsers should be
functions which consume a list of tokens (e.g.
characters) as an input and return the result in
the following form:
((RESULT anyresult) unparsed-input-rest)
or
((FAIL reason) input)
To access the parsing result we will provide
the following macros:
(define-macro (success? r )
‘(not (eq? (caar ,r ) ’FAIL)))
And if we are sure that we have some result, we will use the following macro to extract
it (otherwise, this will return a fail message):
(define-macro (result r )
‘(cdar ,r ))
The last definition looks surprisingly comIn any case, we can access the rest of the
pact, thanks to the pselect macro. From this
stream after the parsing pass:
stage the power of metaprogramming becomes
(define-macro (rest r )
more and more obvious.
‘(cdr ,r ))
Just as a reference, we will show here the
These macros could also be implemented as definition of a choice combinator:
functions. But all the macros are available in (define-macro (pOR0 p1 p2 )
the context of macro definitions while functions ‘(λ (l)
are not.
(let ((r1 (,p1 l)))
Almost all of the parsers should fail on the
(if (success? r1 )
end of the input, so the following safeguard
r1
macro will be extremely useful:
(,p2 l)))))
(define-macro (parser p)
‘(λ (l)
(if (∅? l) ’((FAIL "EMPTY"))
(,p l))))
And its nested version is obvious:
(define-macro (pOR p1 . p~o )
0
‘(pselect pOR ,p1 ,@p~o ))
We will skip the rest of the combinators
Now this game becomes more interesting. definitions and just show what we gained after
Here is a very handy macro that nests a se- all. For example, now to define a floating point
quence of applications into the form of (m p1 number recognizer, we can use this definition:
(m p2 . . . (m px pn ))):
(define parse-num
(p+
(define-macro (pselect m p1 . p~o )
(pOR (pcsx-or (#\− #\+))
(if (∅? p~o ) p1
parse-any)
(let ((p2 (car p~o ))
(pMANY pdigit)
(px (cdr p~o )))
OR
(p
(p+ (pcharx #\.)
‘(,m ,p1 (pselect ,m ,p2 ,@px )))))
(pMANY pdigit))
Sequence parsing combinator with two arparse-any)))
guments can be declared as follows:
It looks like BNF, but still too Schemish.
(define-macro (p+0 p1 p2 )
This is already a Domain Specific Language on
‘(λ (l)
top of Scheme, but it does not conform to the
(let ((r1 (,p1 l)))
perfectionist requirement. However, we can use
(if (success? r1 )
this still not perfect parsing engine to imple(let ((r2 (,p2 (rest r1 ))))
ment an intermediate regular expressions lan(if (success? r2 )
guage as a macro. Omitting the definitions, we
(cons (cons ’RESULT
will show the previous recognizer implemented
(append
in a new way:
(result r1 )
(define parse-num
(result r2 )))
(regexp
(rest r2 ))
((#\− / #\+) / parse-any) +
(cons (list ’FAIL "p+" (car r2 )) l)))
(pdigit ∗) +
r1 ))))
(("." + (pdigit ∗)) /
And it will be immediately turned into the
parse-any)))
sequence parsing combinator with an arbitrary
This new Domain Specific Language can be
number of arguments:
used in many ways. For example, we can build
(define-macro (p+ p1 . p~o )
‘(pselect p+0 ,p1 ,@p~o ))
a simple infix pre–calculator for constants:
(define-macro (exp1 . v )
(defparsers
(letrec
((epr
(let
((body
(regexp
(num :−> $ 0) /
(lst -> (aprs epr )))))
(regexp
((body + (SCM psym +) + epr )
:−> (list (+ $ 0 $ 2))) /
((body + (SCM psym −) + epr )
:−> (list (− $ 0 $ 2))) /
((body + (SCM psym ∗) + epr )
:−> (list (∗ $ 0 $ 2))) /
((body + (SCM psym / ) + epr )
:−> (list (/ $ 0 $ 2))) / body
))))
(car (result (epr v ))))))
only languages with a computational model
which is close to the model of Scheme (eager dynamically typed functional languages
with imperative features), but any possible
languages, providing small intermediate DSLs
which simulate alternative computational models. For those who need very lowlevel power it
is possible to produce an intermediate code in
C language (for example, the Bigloo Scheme [3]
implementation allows to include C code when
compiling through C backend). For implementing complicated runtime models it is easy to
produce an intermediate Forth–like DSL on top
of Scheme and then use both Scheme and Forth
metaprogramming powers.
6
Alternatives
To make the picture complete, it is necesAnd then, wherever we want to calculate a
sary to mention other possible choices for the
numerical constant in the compilation time, we
Core Language. The popular programming
may use the exp1 macro:
language, C++, could become such a Core
(exp1 5 + ((10 / 2)−(1 / 5)))
Language relatively easily. It has a TuringThis language does not look like Scheme complete macro system, unfortunately, featurany more. And we can go even further, im- ing the language different from the host lanplementing a Pascal (or Rlisp)–like language guage (so only one stage preprocessing is poson top of Scheme, using just the same regexp sible). It lacks a good type system, but it could
macro to describe both a lexer and a parser, be simulated on top of the existing lowlevel feaand then to compile the resulting code to the tures. There exist some implementations of
the recursive descendant parsing combinators
underlying Scheme.
for C++ (e.g., Boost Spirit library [2]), implementation of the functional programming (e.g.,
(pasqualish
Boost Lambda [2]), and even Lisp compilers on
"
top of the C++ template system. The runtime
function fac(x)
evaluation is available in different ways: using
begin
pluggable scripting languages other than C++,
if (x > 0) then
using the C++ interpreter [14]. An interesting
x*fac(x - 1)
approach is described in [13].
else 1;
Another choice is Forth. It is a powerful
end
metalanguage,
but the core language remains
")
too lowlevel and unsafe. Forth is often the only
No more parenthesis that frighten non–Lisp choice available for the embedded systems with
programmers so much! Now even Pascal pro- limited resources.
It is worth mentioning modern experimengrammers can use Scheme.
The code samples above demonstrate some tal extensions for strictly typed functional lanof the techniques available in this approach. guages: Template Haskell [12] and MetaOCaml
The complete implementation can be down- [11]. Both of them conform well to all of
loaded from [4]. It is possible to produce not the Core Language requirements. Objective
Caml also provides one–stage metaprogram- metaprogramming too.
ming using a sophisticated preprocessing engine CamlP4. And OCaml is quite good for
Conclusion
implementing interpreters using the closure– 7
based technique. Some examples can be found
The idea of metaprogramming is not something
in [7].
esoteric. Metaprogramming is used widely by
No doubt that Common Lisp would also commercial programmers, they just do not rebe a very good platform since it shares almost alize it. The methodology proposed in this paall the features with Scheme with exception of per is an attempt of uncovering all the hidsimplicity. The killing feature of Common Lisp den power of the metaprogramming techniques
is advanced runtime compilation in some of the available.
major implementations (CMU CL [8] and its
The Scheme example presented above is
descendant SBCL [9] are good examples), and part of the working project, which already
the defmacro is guaranteed to be working in all proved the supremacy of this approach. A subthe implementations available, which is a great set of the Domain Specific Languages hierarchy
advantage over Scheme.
designed for the WWW data acquiring project
For relatively small projects Tcl [10] would is shown on the Fig. 1.
be a good choice. Its computational model
The subject discussed requires future reis based on rewrites (and primary data struc- search and practical approbation, whose final
tures are just the strings of text), which ren- result may be a completely formalized, matheders an extremely powerful metaprogramming matically strict methodology description and a
tool. JavaScript language is also based on Core Language which will best fit this methodthe rewrites semantics, so it could be used for ology.
References
[1] Diomidis Spinellis. Reliable software implementation using domain specific languages. In
G. I. Schuëller and P. Kafka, editors, Proceedings ESREL ’99 — The Tenth European
Conference on Safety and Reliability, pages 627–631, Rotterdam, September 1999. ESRA,
VDI, TUM, A. A. Balkema //
[draft] http://www.dmst.aueb.gr/dds/pubs/conf/1999-ESREL-SoftRel/html/dsl.html
[2] The Boost project // http://www.boost.org/
[3] The Bigloo Practical Scheme implementation // http://www.bigloo.org/
[4] V. S. Lugovsky, DSLEngine project home // http://dslengine.sourceforge.net/
[5] P. Graham, The Hundred-Year Language // http://www.paulgraham.com/hundred.html
[6] P. Graham, The Python Paradox // http://www.paulgraham.com/pypar.html
[7] V. S. Lugovsky, publications list // http://ontil.ihep.su/˜vsl
[8] CMU Common Lisp // http://www.cons.org/cmucl/
[9] Steel Bank Common Lisp // http://sbcl.sourceforge.net/
[10] Tcl programming language resource // http://tcl.activestate.com/
[11] MetaOCaml project home // http://www.metaocaml.org/
[12] T. Sheard, S. P. Jones, Template metaprogramming for Haskell //
http://research.microsoft.com/˜simonpj/papers/meta-haskell/
[13] Tempo project home // http://compose.labri.fr/prototypes/tempo/
[14] CINT project home // http://root.cern.ch/root/Cint.html
[Figure 1 is a DSL hierarchy diagram; its nodes are: Core language, Parsing combinators, Lexer, Stack machine, Graph machine, Unification machine, Templates language, Parser generator, Forth-like, Regular expressions, Data acquisition regexps, Pascal-like, Rule engine, SQL templates.]
Figure 1: A sample DSLs hierarchy subset for the Web crawler project.
ACD Term Rewriting
arXiv:cs/0608016v1 [cs.PL] 3 Aug 2006
Gregory J. Duck, Peter J. Stuckey, and Sebastian Brand
NICTA Victoria Laboratory
Department of Computer Science & Software Engineering,
University of Melbourne, Australia
Abstract. In this paper we introduce Associative Commutative Distributive Term Rewriting (ACDTR), a rewriting language for rewriting
logical formulae. ACDTR extends AC term rewriting by adding distribution of conjunction over other operators. Conjunction is vital for expressive term rewriting systems since it allows us to require that multiple
conditions hold for a term rewriting rule to be used. ACDTR uses the
notion of a “conjunctive context”, which is the conjunction of constraints
that must hold in the context of a term, to enable the programmer to
write very expressive and targeted rewriting rules. ACDTR can be seen
as a general logic programming language that extends Constraint Handling Rules and AC term rewriting. In this paper we define the semantics
of ACDTR and describe our prototype implementation.
1
Introduction
Term rewriting is a powerful instrument to specify computational processes. It is
the basis of functional languages; it is used to define the semantics of languages
and it is applied in automated theorem proving, to name only a few application
areas.
One difficulty faced by users of term rewriting systems is that term rewrite
rules are local, that is, the term to be rewritten occurs in a single place. This
means in order to write precise rewrite rules we need to gather all relevant
information in a single place.
Example 1. Imagine we wish to “program” an overloaded ordering relation for
integer variables, real variables and pair variables. In order to write this, the
“type” of the variable must be encoded in the term¹ as in:
int(x) ≤ int(y) → intleq(int(x), int(y))
real(x) ≤ real(y) → realleq(real(x), real(y))
pair(x1 , x2 ) ≤ pair(y1 , y2 ) → x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2
In a more standard language, the type information for variables (and other
information) would be kept separate and “looked up” when required.
⊓⊔
¹Operator precedences used throughout this paper are: ∧ binds tighter than ∨, and all other operators, e.g. ¬, =, bind tighter than ∧.
Term rewriting systems such as constraint handling rules (CHRs) [5] and
associative commutative (AC) term rewriting [3] allow “look up” to be managed
straightforwardly for a single conjunction.
Example 2. In AC term rewriting the above example could be expressed as:
int (x) ∧ int (y) ∧ x ≤ y → int(x) ∧ int(y) ∧ intleq(x, y)
real (x) ∧ real (y) ∧ x ≤ y → real (x) ∧ real (y) ∧ realleq (x, y)
pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 ) ∧ x ≤ y → pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 )∧
(x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2 )
where each rule replaces the x ≤ y by an appropriate specialised version, in the
conjunction of constraints. The associativity and commutativity of ∧ is used to
easily collect the required type information from a conjunction.
⊓
⊔
One difficulty remains with both AC term rewriting and CHRs. The “look
up” is restricted to be over a single large conjunction.
Example 3. Given the term int(x1 ) ∧ int(y1 ) ∧ pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 ) ∧
x ≤ y. Then after rewriting x ≤ y to (x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2 ) we could not
rewrite x1 ≤ y1 since the types for x1 , y1 appear in a different level.
In order to push the type information inside the disjunction we need to
distribute conjunction over disjunction.
⊓
⊔
Simply adding distribution rules like
A ∧ (B ∨ C) → A ∧ B ∨ A ∧ C
A ∧ B ∨ A ∧ C → A ∧ (B ∨ C)
(1)
(2)
does not solve the problem. Rule (1) creates two copies of term A, which increases
the size of the term being rewritten. Adding Rule (2) to counter this effect results
in a non-terminating rewriting system.
1.1
Conjunctive context
We address the non-termination vs. size explosion problem due to distributivity
rewrite rules in a similar way to how commutativity is dealt with: by handling
distributivity on the language level. We restrict ourselves to dealing with expanding distributivity of conjunction ∧ over any other operator, and we account
for idempotence of conjunction.2 Thus we are concerned with distribution rules
of the form
P ∧ f (Q1 , . . . , Qn ) → P ∧ f (P ∧ Q1 , . . . , P ∧ Qn ).    (3)
²This means that conjunction is distributive over any function f in the presence of a redundant copy of P , i.e. P ∧ (P ∧ f (Q1 , . . . , Qn )) → P ∧ f (P ∧ Q1 , . . . , P ∧ Qn ). We use idempotence to simplify the RHS and derive (3).
Let us introduce the conjunctive context of a term and its use in rewrite
rules, informally for now. Consider a term T and the conjunction C ∧ T modulo
idempotence of ∧ that would result from exhaustive application of rule (3) to
the superterm of T . By the conjunctive context of T we mean the conjunction C.
Example 4. The conjunctive context of the boxed occurrence of x (the one inside x = 4) in the term
(x = 3) ∧ (x² > y ∨ (x = 4) ∧ U ∨ V ) ∧ W,
is (x = 3) ∧ U ∧ W .
⊓
⊔
We allow a rewrite rule P → T to refer to the conjunctive context C of the rule
head P . We use the following notation:
C \ P ⇐⇒ T.
This facility provides ∧-distributivity without the undesirable effects of rule (3)
on the term size.
Example 5. We can express that an equality can be used anywhere “in its scope”
by viewing the equality as a conjunctive context:
x = a \ x ⇐⇒ a.
Using this rule on the term of Example 4 results in
(x = 3) ∧ (3² > y ∨ (3 = 4) ∧ U ∨ V ) ∧ W
⊓
⊔
without dissolving the disjunction.
1.2
Motivation and Applications
Constraint Model Simplification. Our concrete motivation behind associative commutative distributive term rewriting (ACDTR) is constraint model
mapping as part of the G12 project [7]. A key aim of G12 is the mapping of solver
independent models to efficient solver dependent models. We see ACDTR as
the basis for writing these mappings. Since models are not flat conjunctions of
constraints we need to go beyond AC term rewriting or CHRs.
Example 6. Consider the following simple constraint model inspired by the Social Golfers problem. For two groups g1 and g2 playing in the same week there
can be no overlap in players: maxOverlap(g1 , g2 , 0). The aim is to maximise the
number of times the overlap between two groups is less than 2; in other words
minimise the number of times two players play together in a group.
constraint   ⋀ { maxOverlap(g1 , g2 , 0) | w ∈ Weeks, g1 , g2 ∈ weeks[w], g1 < g2 }
maximise     Σ { holds(maxOverlap(g1 , g2 , 1)) | w1 , w2 ∈ Weeks, g1 ∈ weeks[w1 ], g2 ∈ weeks[w2 ], g1 < g2 }
Consider the following ACDTR program for optimising this constraint model.
maxOverlap(a, b, c1 ) \ maxOverlap(a, b, c2 ) ⇐⇒ c2 ≥ c1 | true
holds(true) ⇐⇒ 1
holds (false) ⇐⇒ 0
The first rule removes redundant maxOverlap constraints. The next two rules
implement partial evaluation of the holds auxiliary function which coerces a
Boolean to an integer.
By representing the constraint model as a giant term, we can optimise the
model by applying the ACDTR program. For example, consider the trivial case
with one week and two groups G1 and G2 . The model becomes
maxOverlap(G1 , G2 , 0) ∧ maximise(holds (maxOverlap(G1 , G2 , 1))).
The subterm holds(maxOverlap(G1 , G2 , 1)) simplifies to 1 using the conjunctive
context maxOverlap(G1 , G2 , 0).
⊓
⊔
It is clear that pure CHRs are insufficient for constraint model mapping for
at least two reasons, namely
– a constraint model, e.g. Example 6, is typically not a flattened conjunction;
– some rules rewrite functions, e.g. rules (2) and (3) rewriting function holds,
which is outside the scope of CHRs (which rewrite constraints only).
Global Definitions. As we have seen conjunctive context matching provides
a natural mechanism for making global information available. In a constraint
model, structured data and constraint definitions are typically global, i.e. on the
top level, while access to the data and the use of a defined constraint is local, e.g.
the type information from Example 1. Another example is partial evaluation.
Example 7. The solver independent modelling language has support for arrays.
Take a model having an array a of given values. It could be represented as the
top-level term array (a, [3, 1, 4, 1, 5, 9, 2, 7]). Deeper inside the model, accesses to
the array a occur, such as in the constraint x > y + lookup(a, 3). The following
rules expand such an array lookup:
array(A, Array) \ lookup(A, Index ) ⇐⇒ list element(Array, Index )
list element([X|Xs], 0) ⇐⇒ X
list element ([X|Xs], N ) ⇐⇒ N > 0 | list element (Xs, N − 1)
Referring to the respective array of the lookup expression via its conjunctive
context allows us to ignore the direct context of the lookup, i.e. the concrete
constraint or expression in which it occurs.
⊓
⊔
Propagation rules. When processing a logical formula, it is often useful to be
able to specify that a new formula Q can be derived from an existing formula
P without consuming P . In basic term rewriting, the obvious rule P ⇐⇒ P ∧ Q
causes trivial non-termination. This issue is recognised in CHRs, which provide
support for inference or propagation rules. We account for this fact and use rules
of the form P =⇒ Q to express such circumstances.
Example 8. The following is the classic CHR leq program reimplemented for
ACD term rewriting (we omit the basic rules for logical connectives):
leq(X, X) ⇐⇒ true
leq(X, Y ) \ leq(Y, X) ⇐⇒ X = Y
leq(X, Y ) \ leq(X, Y ) ⇐⇒ true
leq(X, Y ) ∧ leq(Y, Z) =⇒ leq(X, Z)
(reflexivity)
(antisymmetry)
(idempotence)
(transitivity)
These rules are almost the same as the CHR version, with the exception of
the second and third rule (antisymmetry and idempotence) which generalise its
original by using conjunctive context matching.
⊓
⊔
Propagation rules are also used for adding redundant information during model
mapping.
The rest of the paper is organised as follows. Section 2 covers the standard
syntax and notation of term rewriting. Section 3 defines the declarative and operational semantics of ACDTR. Section 4 describes a prototype implementation
of ACDTR as part of the G12 project. Section 5 compares ACDTR with related
languages. Finally, in Section 6 we conclude.
2
Preliminaries
In this section we briefly introduce the notation and terminology used in this
paper. Much of this is borrowed from term rewriting [3].
We use T (Σ, X) to represent the set of all terms constructed from a set of
function symbols Σ and set of variables X (assumed to be countably infinite).
We use Σ (n) ⊆ Σ to represent the set of function symbols of arity n.
A position is a string (sequence) of integers that uniquely determines a subterm of a term T , where ǫ represents the empty string. We define function T |p ,
which returns the subterm of T at position p as
T |ǫ = T
f (T1 , . . . , Ti , . . . , Tn )|ip = Ti |p
We similarly define a function T [S]p which replaces the subterm of T at position
p with term S. We define the set Pos(T ) to represent the set of all positions of
subterms in T .
An identity is a pair (s, t) ∈ T (Σ, X) × T (Σ, X), which is usually written as
s ≈ t. Given a set of identities E, we define ≈E to be the set of identities closed
under the axioms of equational logic [3], i.e. symmetry, transitivity, etc.
We define the congruence class [T ]≈E = {S ∈ T (Σ, X)|S ≈E T } as the set
of terms equal to T with respect to E.
Finally, we define function vars(T ) to return the set of variables in T .
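As an illustration (a sketch over a simple nested-tuple term encoding, not part of the formal development), the operations defined in this section can be implemented directly; positions are modelled as tuples of 1-based argument indices, with the empty tuple playing the role of ǫ.

```python
# Sketch of T|p, T[S]p, Pos(T) and vars(T) over nested-tuple terms.
# A term is a variable (a bare string) or a tuple (f, t1, ..., tn).

def subterm(t, p):                       # T|p
    if not p:
        return t
    return subterm(t[p[0]], p[1:])

def replace(t, p, s):                    # T[S]p
    if not p:
        return s
    i = p[0]
    return t[:i] + (replace(t[i], p[1:], s),) + t[i + 1:]

def positions(t, prefix=()):             # Pos(T)
    yield prefix
    if isinstance(t, tuple):
        for i in range(1, len(t)):
            yield from positions(t[i], prefix + (i,))

def variables(t):                        # vars(T)
    if isinstance(t, str):
        return {t}
    return set().union(set(), *(variables(a) for a in t[1:]))

t = ('f', 'x', ('g', 'y'))               # the term f(x, g(y))
assert subterm(t, (2, 1)) == 'y'
assert replace(t, (2, 1), 'z') == ('f', 'x', ('g', 'z'))
assert set(positions(t)) == {(), (1,), (2,), (2, 1)}
assert variables(t) == {'x', 'y'}
```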
3
Syntax and Semantics
The syntax of ACDTR closely resembles that of CHRs. There are three types of
rules of the following form:
(simplification)    r @ H ⇐⇒ g | B
(propagation)       r @ H =⇒ g | B
(simpagation)       r @ C \ H ⇐⇒ g | B
where r is a rule identifier, and head H, conjunctive context C, guard g and body
B are arbitrary terms. The rule identifier is assumed to uniquely determine the
rule. A program P is a set of rules.
We assume that vars(g) ⊆ vars(H) or vars(g) ⊆ vars(H) ∪ vars(C) (for
simpagation rules). The rule identifier can be omitted. If g = true then the guard
can be omitted.
We present the declarative semantics of ACDTR based on equational logic.
First we define the set of operators that ACDTR treats specially.
Definition 1 (Operators). We define the set of associative commutative operators as AC. The set AC must satisfy AC ⊆ Σ(2) and (∧) ∈ AC.
For our examples we assume that AC = {∧, ∨, +, ×}. We also treat the operator
∧ as distributive as explained below.
ACDTR supports a simple form of guards.
Definition 2 (Guards). A guard is a term. We denote the set of all “true”
guards as G, i.e. a guard g is said to hold iff g ∈ G. We assume that true ∈ G
and false 6∈ G.
We can now define the declarative semantics for ACDTR. In order to do so
we employ a special binary operator where to explicitly attach a conjunctive
context to a term. Intuitively, the meaning of T where C is equivalent to that of
T provided C is true, otherwise the meaning of T where C is unconstrained. For
Boolean expressions, it is useful to interpret where as conjunction ∧, therefore
where-distribution, i.e. identity (6) below, becomes equivalent to ∧-distribution
(3). The advantage of distinguishing where and ∧ is that we are not forced to
extend the definition of ∧ to arbitrary (non-Boolean) functions.
We denote by B the following set of built-in identities:
A ◦ B ≈ B ◦ A    (1)
(A ◦ B) ◦ C ≈ A ◦ (B ◦ C)    (2)
T ≈ (T where true)    (3)
A ∧ B ≈ (A where B) ∧ B    (4)
T where (W1 ∧ W2 ) ≈ (T where W1 ) where W2    (5)
f (A1 , ..., Ai , ..., An ) where W ≈ f (A1 , ..., Ai where W, ..., An ) where W    (6)
for all ◦ ∈ AC, functions f ∈ Σ (n) , and i ∈ {1, . . . , n}.
Definition 3 (Declarative Semantics for ACDTR). The declarative semantics for an ACDTR program P (represented as a multiset of rules) is given
by the function ⟦·⟧ defined as follows:
⟦P ⟧ = {⟦θ(R)⟧ | ∀R, θ . R ∈ P ∧ θ(guard(R)) ∈ G} ∪ B
⟦H ⇐⇒ g | B⟧ = ∃vars(B)−vars(H) (H ≈ B)
⟦C \ H ⇐⇒ g | B⟧ = ∃vars(B)−vars(C,H) (H where C ≈ B where C)
⟦H =⇒ g | B⟧ = ∃vars(B)−vars(H) (H ≈ H ∧ B)
where function guard(R) returns the guard of a rule.
The function ⟦·⟧ maps ACDTR rules to identities between the head and the body terms, where body-only variables are existentially quantified.³ Note that
there is a new identity for each possible binding of guard(R) that holds in G.
A propagation rule is equivalent to a simplification rule that (re)introduces the
head H (in conjunction with the body B) in the RHS. This is analogous to
propagation rules under CHRs.
A simpagation rule is equivalent to a simplification rule provided the conjunctive context is satisfied.
The built-in rules B from Definition 3 contain identities for creating/destroying (3) and (4), combining/splitting (5), and distributing downwards/upwards (6) a conjunctive context in terms of the where operator.
The set B also contains identities (1) and (2) for the associative/commutative
properties of the AC operators.
Example 9. Consider the following ACDTR rule and the corresponding identity.
⟦X = Y \ X ⇐⇒ Y ⟧ = (Y where X = Y ) ≈ (X where X = Y )    (7)
Under this identity and using the rules in B, we can show that f (A)∧(A = B) ≈
f (B) ∧ (A = B), as follows.
f (A) ∧ (A = B)
  ≈(4)  (f (A) where (A = B)) ∧ (A = B)
  ≈(6)  (f (A where (A = B)) where (A = B)) ∧ (A = B)
  ≈(7)  (f (B where (A = B)) where (A = B)) ∧ (A = B)
  ≈(6)  (f (B) where (A = B)) ∧ (A = B)
  ≈(4)  f (B) ∧ (A = B)
⊓
⊔
3.1
Operational Semantics
In this section we describe the operational semantics of ACDTR. It is based
on the theoretical operational semantics of CHRs [1,4]. This includes support
for identifiers and propagation histories, and conjunctive context matching for
simpagation rules.
³All other variables are implicitly universally quantified, where the universal quantifiers appear outside the existential ones.
Propagation history. The CHR concept of a propagation history, which prevents trivial non-termination of propagation rules, needs to be generalised over
arbitrary terms for ACDTR. A propagation history is essentially a record of all
propagation rule applications, which is checked to ensure a propagation rule is
not applied twice to the same (sub)term.
In CHRs, each constraint is associated with a unique identifier. If multiple
copies of the same constraint appear in the CHR store, then each copy is assigned
a different identifier. We extend the notion of identifiers to arbitrary terms.
Definition 4 (Identifiers). An identifier is an integer associated with each
(sub)term. We use the notation T #i to indicate that term T has been associated
with identifier i. A term T is annotated if T and all subterms of T are associated
with an identifier. We also define function ids(T ) to return the set of identifiers
in T , and term(T ) to return the non-annotated version of T .
For example, T = f (a#1, b#2)#3 is an annotated term, where ids(T ) = {1, 2, 3}
and term(T ) = f (a, b).
Identifiers are considered separate from the term. We could be more precise
by separating the two, i.e. explicitly maintain a map between Pos(T ) and the
identifiers for T . We do not use this approach for space reasons. We extend
and overload all of the standard operations over terms (e.g. from Section 2) to
annotated terms in the obvious manner. For example, the subterm relation T |p
over annotated terms returns the annotated term at position p. The exception
are elements of the congruence class [T ]≈AC , formed by the AC relation ≈AC ,
which we assume satisfies the following constraints.
A#i ◦ B#j ≈AC B#j ◦ A#i
A#i ◦ (B#j ◦ C#k) ≈AC (A#i ◦ B#j) ◦ C#k
We have neglected to mention the identifiers over AC operators. These identifiers
will be ignored later, so we leave them unconstrained.
A propagation history is a set of entries defined as follows.
Definition 5 (Entries). A propagation history entry is of the form (r @ E),
where r is a propagation rule identifier, and E is a string of identifiers. We
define function entry(r, T ) to return the propagation history entry of rule r for
annotated term T as follows.
entry(r, T ) = (r @ entry(T ))
entry(T1 ◦ T2 ) = entry(T1 ) entry(T2 )    (◦ ∈ AC)
entry(f (T1 , ..., Tn )#i) = i entry(T1 ) ... entry(Tn )    (otherwise)
This definition means that propagation history entries are unaffected by associativity, but are affected by commutativity.
Example 10. Consider the annotated term T = f ((a#1 ∧ b#2)#3)#4. We have
that T ∈ [T ]≈AC and T ′ = f ((b#2 ∧ a#1)#3)#4 ∈ [T ]≈AC . Although T
and T ′ belong to [T ]≈AC they have different propagation history entries, e.g.
entry(r, T ) = (r @ (4 1 2)) while entry(r, T ′ ) = (r @ (4 2 1)).
⊓
⊔
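The entry construction can be made concrete with a small sketch (again over an illustrative tuple encoding of annotated terms, not the Cadmium data structures); it reproduces the two entries of Example 10.

```python
# Sketch of entry(r, T) from Definition 5.
# An annotated term is a tuple (symbol, identifier, arg1, ..., argn).
AC = {'∧', '∨', '+', '*'}

def entry_term(t):
    sym, ident, args = t[0], t[1], t[2:]
    if sym in AC and len(args) == 2:
        # identifiers of AC operators are ignored; concatenate argument strings
        return entry_term(args[0]) + entry_term(args[1])
    return (ident,) + tuple(i for a in args for i in entry_term(a))

def entry(r, t):
    return (r, entry_term(t))

T1 = ('f', 4, ('∧', 3, ('a', 1), ('b', 2)))    # f((a#1 ∧ b#2)#3)#4
T2 = ('f', 4, ('∧', 3, ('b', 2), ('a', 1)))    # its commuted variant
assert entry('r', T1) == ('r', (4, 1, 2))
assert entry('r', T2) == ('r', (4, 2, 1))
```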
When a (sub)term is rewritten into another, the new term is assigned a set
of new unique identifiers. We define the auxiliary function annotate(P, T ) = Ta
to map a set of identifiers P and un-annotated term T to an annotated term Ta
such that ids(Ta ) ∩ P = ∅ and |ids(Ta )| = |Pos(T )|. These conditions ensure that
all identifiers are new and unique.
When a rule is applied the propagation history must be updated accordingly
to reflect which terms are copied from the matching. For example, the rule
f (X) ⇐⇒ g(X, X) essentially clones the term matching X. The identifiers,
however, are not cloned. If a term is cloned, we expect that both copies will
inherit the propagation history of the original. Likewise, terms can be merged,
e.g. g(X, X) ⇐⇒ f (X) merges two instances of the term matching X. In this
case, the propagation histories of the copies are also merged.
To achieve this we duplicate entries in the propagation history for each occurrence of a variable in the body that also appeared in the head.
Definition 6 (Updating History). Define function
update(H, Ha , B, Ba , T0 ) = T1
where H and B are un-annotated terms, Ha and Ba are annotated terms, and T0
and T1 are propagation histories. T1 is a minimal propagation history satisfying
the following conditions:
– T0 ⊆ T1 ;
– ∀p ∈ Pos(H) such that H|p = V ∈ X (where X is the set of variables), and
∃q ∈ Pos(B) such that B|q = V , then define identifier renaming ρ such that
ρ(Ha |p ) and Ba |q are identical annotated terms. Then if E ∈ T0 we have
that ρ(E) ∈ T1 .
Example 11. Consider rewriting the term Ha = f ((a#1 ∧ b#2)#3)#4 with a
propagation history of T0 = {(r @ (1 2))} using the rule f (X) ⇐⇒ g(X, X).
The resulting term is Ba = g((a#5 ∧ b#6)#7, (a#8 ∧ b#9)#10)#11 and the new
propagation history is T1 = {(r @ (1 2)), (r @ (5 6)), (r @ (8 9))}.
⊓
⊔
Conjunctive context. According to the declarative semantics, a term T with
conjunctive context C is represented as (T where C). Operationally, we will
never explicitly build a term containing a where clause. Instead we use the
following function to compute the conjunctive context of a subterm on demand.
Definition 7 (Conjunctive Context). Given an (annotated) term T and a
position p ∈ Pos(T ), we define function cc(T, p) to return the conjunctive context
at position p as follows.
cc(T, ǫ) = true
cc(A ∧ B, 1p) = B ∧ cc(A, p)
cc(A ∧ B, 2p) = A ∧ cc(B, p)
cc(f (T1 , . . . , Ti , . . . , Tn ), ip) = cc(Ti , p)    (f ≠ ∧)
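A small sketch (illustrative tuple encoding, not the Cadmium implementation) of cc(T, p), checked against Example 4, where the context of the x occurring in x = 4 is x = 3 together with U and W:

```python
# Sketch of cc(T, p) from Definition 7 over nested-tuple terms.
# A term is a string or a tuple (symbol, arg1, ..., argn); positions are
# tuples of 1-based indices. T ∧ true is simplified to T for readability.

def cc(t, p):
    if not p:
        return 'true'
    i, rest = p[0], p[1:]
    if isinstance(t, tuple) and t[0] == '∧':
        sibling = t[2] if i == 1 else t[1]       # the other conjunct
        inner = cc(t[i], rest)
        return sibling if inner == 'true' else ('∧', sibling, inner)
    return cc(t[i], rest)                        # other operators add nothing

eq3  = ('=', 'x', '3')
eq4  = ('=', 'x', '4')
gt   = ('>', ('sq', 'x'), 'y')                   # stands for x² > y
disj = ('∨', ('∨', gt, ('∧', eq4, 'U')), 'V')
term = ('∧', eq3, ('∧', disj, 'W'))              # the term of Example 4

pos_of_x = (2, 1, 1, 2, 1, 1)                    # the x inside x = 4
assert cc(term, pos_of_x) == ('∧', eq3, ('∧', 'W', 'U'))
```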
States and transitions. The operational semantics are defined as a set of
transitions on execution states.
Definition 8 (Execution States). An execution state is a tuple of the form
⟨G, T, V, P⟩, where G is a term (the goal), T is the propagation history, V is
the set of variables appearing in the initial goal and P is a set of identifiers.
We also define initial and final states as follows.
Definition 9 (Initial and Final States). Given an initial goal G for program
P , the initial state of G is
hGa , ∅, vars(G), ids(Ga )i
where Ga = annotate(∅, G). A final state is a state where no more rules are
applicable to the goal G.
We can now define the operational semantics of ACDTR as follows.
Definition 10 (Operational Semantics).
⟨G0 , T0 , V, P0 ⟩ ↣ ⟨G1 , T1 , V, P1 ⟩
1. Simplify: There exists a (renamed) rule from P
H ⇐⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– θ(g) ∈ G
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [Ba ]p , P1 = P0 ∪ ids(G1 ) and T1 = update(H, G′0 |p , B, Ba , T0 ).
2. Propagate: There exists a (renamed) rule from P
r @ H =⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– θ(g) ∈ G
– entry(r, G′0 |p ) ∉ T0
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [G′0 |p ∧ Ba ]p , T1 = update(H, G′0 |p , B, Ba , T0 ) ∪ {entry(r, G′0 |p )}
and P1 = P0 ∪ ids(G1 ).
3. Simpagate: There exists a (renamed) rule from P
C \ H ⇐⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
⟨(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧8 ¬9 leq(X10 , Z11 )12 ), ∅⟩   ↣trans
⟨(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 ¬9 leq(X10 , Z11 )12 ), T ⟩   ↣idemp
⟨(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 ¬9 true17 ), T ⟩   ↣simplify
⟨(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 false18 ), T ⟩   ↣simplify
⟨(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 false19 ), T ⟩   ↣simplify
⟨(leq(X1 , Y2 )3 ∧4 false20 ), T ⟩   ↣simplify
⟨(false21 ), T ⟩
Fig. 1. Example derivation for the leq program.
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– ∃D . θ(C) ∧ D ≈AC cc(G′0 , p)
– θ(g) ∈ G
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [Ba ]p , T1 = update(H, G′0 |p , B, Ba , T0 ) and P1 = P0 ∪ ids(G1 ).
Example. Consider the leq program from Example 8 with the goal
leq(X, Y ) ∧ leq(Y, Z) ∧ ¬leq(X, Z)
Figure 1 shows one possible derivation of this goal to the final state representing
f alse. For brevity, we omit the V and P fields, and represent identifiers as subscripts, i.e. T #i = Ti . Also we substitute T = {transitivity @ (3 2 1 7 5 6)}.
We can state a soundness result for ACDTR.
Theorem 1 (Soundness). If ⟨G0 , T0 , V, P⟩ ↣∗ ⟨G′ , T ′ , V, P ′ ⟩ with respect to a program P , then ⟦P ⟧ |= ∃vars(G′ )−V G0 ≈ G′ .
This means that for all algebras A that satisfy ⟦P ⟧, G0 and G′ are equivalent for some assignment of the fresh variables in G′ .
4
Implementation
We have implemented a prototype version of ACDTR as part of the mapping
language of the G12 project, called Cadmium. In this section we give an overview
of the implementation details. In particular, we will focus on the implementation
of conjunctive context matching, which is the main contribution of this paper.
Cadmium constructs normalised terms from the bottom up. Here, a normalised term is one that cannot be reduced further by an application of a rule.
Given a goal f (t1 , ..., tn ), we first must recursively normalise all of t1 , ..., tn (to
say s1 , ..., sn ), and then attempt to find a rule that can be applied to the top-level
of f (s1 , ..., sn ). This is the standard execution algorithm used by many TRSs
implementations.
This approach of normalising terms bottom up is complicated by the consideration of conjunctive context matching. This is because the conjunctive context
of the current term appears “higher up” in the overall goal term. Thus conjunctive context must be passed top down, yet we are normalising bottom up. This
means there is no guarantee that the conjunctive context is normalised.
Example 12. Consider the following ACDTR program that uses conjunctive context matching.
X = V \ X ⇐⇒ var(X) ∧ nonvar(V ) | V.
one(X) ⇐⇒ X = 1.
not one(1) ⇐⇒ false.
Consider the goal not one(A) ∧ one(A), which we expect should be normalised to false. Assume that the sub-term not one(A) is selected for normalisation first. The conjunctive context for not one(A) (and its subterm A) is one(A). No rule is applicable, so not one(A) is not reduced.
Next the subterm one(A) is reduced. The second rule will fire, resulting in the new term A = 1. Now the conjunctive context for the first term not one(A) has changed to A = 1, so we expect that A should be rewritten to the number 1. However, not one(A) has already been considered for normalisation. ⊓⊔
The current Cadmium prototype solves this problem by re-normalising terms
when and if the conjunctive context “changes”. For example, when the conjunctive context one(A) changes to A = 1, the term not one(X) will be renormalised
to not one(1) by the first rule.
The general execution algorithm for Cadmium is shown in Figure 2. Function normalise takes a term T, a substitution θ, a conjunctive context CC and a Boolean value Ch which keeps track of whether the conjunctive context of the current subterm has changed. If Ch = false, then we can assume that the substitution θ maps variables to normalised terms. For the initial goal, we assume θ is empty; otherwise, if we are executing the body of a rule, then θ is the matching substitution.
Operationally, normalise splits into three cases depending on what T is. If T is a variable and the conjunctive context has changed (i.e. Ch = true), then θ(T) is no longer guaranteed to be normalised. In this case we return the result of renormalising θ(T) with respect to CC. Otherwise, if Ch = false, we simply return θ(T), which must already be normalised. If T is a conjunction T1 ∧ T2, we repeatedly call normalise on each conjunct with the other added to the conjunctive context. This is repeated until a fixed point is reached (i.e. further normalisation does not change either conjunct), and then we return the result of apply_rule, which we will discuss below. This fixed-point calculation accounts for the case where the conjunctive context of a term changes, as shown in Example 12. Otherwise, if T is any other term of the form f(T1, ..., Tn), we construct the new term T′ by normalising each argument. Finally we return the result of apply_rule applied to T′.
The function call apply_rule(T′, CC) will attempt to apply a rule to the normalised term T′ with respect to conjunctive context CC.
normalise(T, θ, CC, Ch)
    if is_var(T)
        if Ch
            return normalise(θ(T), θ, CC, false)
        else
            return θ(T)
    else if T = T1 ∧ T2
        do
            T1′ := T1
            T2′ := T2
            T1 := normalise(T1′, θ, T2′ ∧ CC, true)
            T2 := normalise(T2′, θ, T1′ ∧ CC, true)
        while T1 ≠ T1′ ∨ T2 ≠ T2′
        return apply_rule(T1′ ∧ T2′, CC)
    else
        T = f(T1, ..., Tn)
        T′ := f(normalise(T1, θ, CC, Ch), ..., normalise(Tn, θ, CC, Ch))
        return apply_rule(T′, CC)

Fig. 2. Pseudo-code of the Cadmium execution algorithm.
If a matching rule is found, then the result of normalise(B, θ, CC, false) is returned, where B is the (renamed) rule body and θ is the matching substitution. Otherwise, T′ is simply returned.
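To make the control flow of Figure 2 concrete, here is a small executable sketch of the algorithm in Python; the term encoding, the is_var and apply_rule helpers, and the representation of the conjunctive context as a list of terms are simplifying assumptions for illustration, not Cadmium's actual implementation.

# Sketch of Figure 2. Variables are strings starting with an uppercase letter,
# conjunctions are ('and', L, R), other compound terms are ('f', args...).
# `apply_rule(term, cc)` is a hook that returns a rewritten term, or the term
# itself if no rule matches with respect to the conjunctive context cc.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def normalise(t, theta, cc, changed, apply_rule):
    if is_var(t):
        bound = theta.get(t, t)
        # If the conjunctive context changed, the binding may be reducible again.
        if changed and bound != t:
            return normalise(bound, theta, cc, False, apply_rule)
        return bound
    if isinstance(t, tuple) and t[0] == 'and':
        _, t1, t2 = t
        while True:
            old1, old2 = t1, t2
            t1 = normalise(old1, theta, [old2] + cc, True, apply_rule)
            t2 = normalise(old2, theta, [old1] + cc, True, apply_rule)
            if t1 == old1 and t2 == old2:          # fixed point: neither changed
                return apply_rule(('and', t1, t2), cc)
    if isinstance(t, tuple):
        f, *args = t
        t_new = (f, *[normalise(a, theta, cc, changed, apply_rule) for a in args])
        return apply_rule(t_new, cc)
    return t                                       # constants are already normal

As in Figure 2, each conjunct is repeatedly re-normalised with the other conjunct added to its conjunctive context until neither changes; re-normalising rule bodies is left to the apply_rule hook.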
5 Related Work
ACDTR is closely related to both TRS and CHRs, and in this section we compare
the three languages.
5.1 AC Term Rewriting Systems
The problem of dealing with associative commutative operators in TRS is well studied. A popular solution is to perform rewriting modulo the AC equations, i.e. to match terms up to permutations of the arguments of AC operators. Although this complicates the matching algorithm, the problem of trivial non-termination (e.g. by continually rewriting with respect to commutativity) is solved.
ACDTR subsumes ACTRS (Associative Commutative TRS) in that we have
introduced distributivity (via simpagation rules), and added some “CHR-style”
concepts such as identifiers and propagation rules.
Given an ACTRS program, we can map it to an equivalent ACDTR program
by interpreting each ACTRS rule H → B as the ACDTR rule H ⇐⇒ B. We
can now state the theorem relating ACTRS and ACDTR.
Theorem 2. Let P be an ACTRS program and T a ground term. Then T →∗ S under P iff ⟨Ta, ∅, ∅, ids(Ta)⟩ ↣∗ ⟨Sa, ∅, ∅, P⟩ under α(P) (where Ta = annotate(∅, T)) for some P and term(Sa) = S.
5.2 CHRs and CHR∨
ACDTR has been deliberately designed to be an extension of CHRs. Several
CHR concepts, such as propagation rules, have been adapted.
There are differences between CHRs and ACDTR. The main difference is
that ACDTR does not have a “built-in” or “underlying” solver, i.e. ACDTR is
not a constraint programming language. However it is possible to encode solvers
directly as rules, e.g. the simple leq solver from Example 8. Another important
difference is that CHRs are based on predicate logic, where there is a distinction between predicate symbols (i.e. the names of the constraints) and functions
(used to construct terms). ACDTR is based on equational logic between terms,
hence there is no distinction between predicates and functions (a predicate is
just a Boolean function). To overcome this, we assume the existence of a set
Pred, which contains the set of function symbols that are Boolean functions.
We assume that AC ∩ Pred = {∧(2) }.
The mapping between a CHR program and an ACDTR program is simply α(P) = P ∪ {X ∧ true ⇐⇒ X}. (There is one slight difference in syntax: CHRs use ‘,’ to represent conjunction, whereas ACDTR uses ‘∧’.) However, we assume program P is restricted as follows:
as follows:
– rules have no guards apart from implicit equality guards; and
– the only built-in constraint is true
and the initial goal G is also restricted:
– G must be of the form G0 ∧ ... ∧ Gn for n > 0;
– Each Gi is of the form fi(A0, ..., Am) for m ≥ 0 and fi ∈ Pred;
– For all p ∈ Pos(Aj), 0 ≤ j ≤ m, we have that if Aj|p = g(B0, ..., Bq) then g(q) ∉ AC and g(q) ∉ Pred.
These conditions disallow predicate symbols from appearing as arguments in
CHR constraints.
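These restrictions are purely syntactic, and a checker for them is straightforward; the following Python sketch assumes a tuple encoding of terms and explicit Pred and AC sets, which are not part of the formal definition.

# Terms are tuples ('f', arg1, ...); constants and variables are non-tuples.
PRED = {'leq'}        # assumed set of predicate symbols
AC = {'and'}          # assumed set of AC function symbols

def subterms(t):
    """Yield t and all of its nested subterms."""
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def valid_chr_goal(goal):
    """Check that goal is a conjunction of Pred-atoms whose arguments contain
    no AC symbols and no predicate symbols."""
    conjuncts, stack = [], [goal]
    while stack:                                   # flatten G0 ∧ ... ∧ Gn
        t = stack.pop()
        if isinstance(t, tuple) and t[0] == 'and':
            stack.extend(t[1:])
        else:
            conjuncts.append(t)
    for g in conjuncts:
        if not (isinstance(g, tuple) and g[0] in PRED):
            return False
        for arg in g[1:]:
            for sub in subterms(arg):
                if isinstance(sub, tuple) and (sub[0] in AC or sub[0] in PRED):
                    return False
    return True

print(valid_chr_goal(('and', ('leq', 'X', 'Y'), ('leq', 'Y', 'Z'))))   # True
print(valid_chr_goal(('leq', ('leq', 'X', 'Y'), 'Z')))                 # False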
Theorem 3. Let P be a CHR program, and G an initial goal, both satisfying the above conditions. Then ⟨G, ∅, true, ∅⟩^V_1 ↣∗ ⟨∅, S, true, T⟩^V_i (for some T, i, and V = vars(G)) under the theoretical operational semantics [4] for CHRs iff ⟨Ga, ∅, V, ids(Ga)⟩ ↣∗ ⟨Sa, T′, V, P⟩ (for some T′, P) under ACDTR, where term(Sa) = S1 ∧ ... ∧ Sn and S = {S1#i1, ..., Sn#in} for some identifiers i1, ..., in.
We believe that Theorem 3 could be extended to include CHR programs that
extend an underlying solver, provided the rules for handling tell constraints are
added to the ACDTR program. For example, we can combine rules for rational
tree unification with the leq program from Example 8 to get a program equivalent
to the traditional leq program under CHRs.
ACDTR generalises CHRs by allowing other operators besides conjunction
inside the head or body of rules. One such extension of CHRs has been studied
before, namely CHR∨ [2] which allows disjunction in the body. Unlike ACDTR,
which manipulates disjunction syntactically, CHR∨ typically finds solutions using
backtracking search.
One notable implementation of CHR∨ is [6], which has an operational semantics described as an and/or (∧/∨) tree rewriting system. A limited form of
conjunctive context matching is used, similar to that used by ACDTR, based
on the knowledge that conjunction ∧ distributes over disjunction ∨. ACDTR
generalises this by distributing over all functions.
6 Future Work and Conclusions
We have presented a powerful new rule-based programming language, ACDTR,
that naturally extends both AC term rewriting and CHRs. The main contribution is the ability to match a rule against the conjunctive context of a (sub)term,
taking advantage of the distributive property of conjunction over all possible
functions. We have shown this is a natural way of expressing some problems,
and by building the distributive property into the matching algorithm, we avoid
non-termination issues that arise from naively implementing distribution (e.g.
as rewrite rules).
We intend that ACDTR will become the theoretical basis for the Cadmium
constraint mapping language as part of the G12 project [7]. Work on ACDTR
and Cadmium is ongoing, and there is a wide scope for future work, such as
confluence, termination and implementation/optimisation issues.
References

1. S. Abdennadher. Operational semantics and confluence of constraint propagation rules. In Gert Smolka, editor, Proceedings of the Third International Conference on Principles and Practice of Constraint Programming, LNCS 1330, pages 252–266. Springer-Verlag, 1997.
2. S. Abdennadher and H. Schütz. CHR∨: A flexible query language. In International Conference on Flexible Query Answering Systems, number 1495 in LNCS, pages 1–14, Roskilde, Denmark, 1998. Springer-Verlag.
3. F. Baader and T. Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
4. G. Duck, P. Stuckey, M. Garcia de la Banda, and C. Holzbaur. The refined operational semantics of constraint handling rules. In B. Demoen and V. Lifschitz, editors, Proceedings of the 20th International Conference on Logic Programming, LNCS 3132, pages 90–104. Springer-Verlag, September 2004.
5. T. Frühwirth. Theory and practice of constraint handling rules. Journal of Logic Programming, 37:95–138, 1998.
6. L. Menezes, J. Vitorino, and M. Aurelio. A high performance CHR∨ execution engine. In Second Workshop on Constraint Handling Rules, Sitges, Spain, 2005.
7. P.J. Stuckey, M. Garcia de la Banda, M. Maher, K. Marriott, J. Slaney, Z. Somogyi, M. Wallace, and T. Walsh. The G12 project: Mapping solver independent models to efficient solutions. In M. Gabrielli and G. Gupta, editors, Proceedings of the 21st International Conference on Logic Programming, number 3668 in LNCS, pages 9–13. Springer-Verlag, 2005.
A Examples

A.1 Further Motivating Examples
Example 13 (Conjunctive Normal Form). One of the roles of mapping models is to convert a model written in an expressive language into a restricted language which is easy to solve. Many standard approaches to solving propositional formulae require that the formulae are in conjunctive normal form (CNF). Disjunction ∨ distributes over ∧, which can be used to establish CNF in a direct way, using the oriented rule

P ∨ Q ∧ R → (P ∨ Q) ∧ (P ∨ R).

CNF conversion based on this rule can exponentially increase the size of the formula. In practice, therefore, CNF conversions that replace subformulae by new propositional atoms are preferred, since these increase the formula size at most linearly.
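Before the rewrite-rule formulation that follows, here is a small Python sketch of the two conversions just described, naive distribution and replacement by new atoms; the tuple encoding of formulas and the helper names are assumptions made for this illustration only.

import itertools

# Formulas: ('or', A, B), ('and', A, B), ('not', A), or an atom (a string).
fresh = itertools.count()

def size(f):
    """Number of symbols in the formula tree."""
    return 1 if isinstance(f, str) else 1 + sum(size(a) for a in f[1:])

def distribute(f):
    """Naive CNF step P ∨ (Q ∧ R) → (P ∨ Q) ∧ (P ∨ R), applied recursively.
    P is duplicated at each application, which is what causes exponential
    growth on formulas with many conjunctions below disjunctions."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    op, a, b = f[0], distribute(f[1]), distribute(f[2])
    if op == 'or' and isinstance(b, tuple) and b[0] == 'and':
        return ('and', distribute(('or', a, b[1])), distribute(('or', a, b[2])))
    if op == 'or' and isinstance(a, tuple) and a[0] == 'and':
        return ('and', distribute(('or', a[1], b)), distribute(('or', a[2], b)))
    return (op, a, b)

def replace(f):
    """Rule (8) below: P ∨ (Q ∧ R) → (P ∨ s) ∧ (¬s ∨ Q) ∧ (¬s ∨ R) with a fresh
    atom s, assuming a positive context. Each application adds only a constant
    number of symbols, so the formula grows at most linearly."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    op, a, b = f[0], replace(f[1]), replace(f[2])
    if op == 'or' and isinstance(b, tuple) and b[0] == 'and':
        s = 's%d' % next(fresh)
        return ('and', ('or', a, s),
                ('and', ('or', ('not', s), b[1]), ('or', ('not', s), b[2])))
    return (op, a, b)

# The subformula a ∨ (b ∧ (c ∨ d)) used in the worked example further below.
f = ('or', 'a', ('and', 'b', ('or', 'c', 'd')))
d, r = distribute(f), replace(f)
print(size(d), d)
print(size(r), r)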
Let us formulate this approach in rewrite rules. To keep this example simple, we assume that the non-CNF subformula P ∨ Q ∧ R occurs in a positive context (for example by preprocessing into negation normal form). We replace Q ∧ R by a new atom s defined by the logical implication s ⇒ (Q ∧ R). In rewrite rule form, we have

P ∨ Q ∧ R → (P ∨ s) ∧ (¬s ∨ Q) ∧ (¬s ∨ R).    (8)
Unit resolution and unit subsumption can be formalised in rewrite rules. Here are two versions, one using conjunctive context and a regular one:

with conj. context:
P \ P ⇐⇒ true
P \ ¬P ⇐⇒ false

regular:
P ∧ P → P
P ∧ (P ∨ Q) → P
P ∧ ¬P → false
P ∧ (¬P ∨ Q) → P ∧ Q
We furthermore assume rules eliminating the logical constants true and false from conjunctions and disjunctions in the obvious way. Let us contrast the two rule sets for the formula (a ∨ b ∧ (c ∨ d)) ∧ d. The following is a terminating rewrite history:

with conj. context:
(a ∨ b ∧ (c ∨ d)) ∧ d
(a ∨ b ∧ (c ∨ true)) ∧ d
(a ∨ b ∧ true) ∧ d
(a ∨ b) ∧ d

regular:
(a ∨ b ∧ (c ∨ d)) ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ (¬s ∨ c ∨ d) ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ true ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ d
To obtain the simple conjunct (a ∨ b) using the regular rule format, a rule expressing binary resolution, i.e. from (P ∨ S) ∧ (¬S ∨ Q) follows (P ∨ Q), would be required. However, such a rule is undesirable as it would create arbitrary binary resolvents, increasing formula size. Moreover, the superfluous atom s remains in the formula. ⊓⊔
Example 14 (Type remapping). One of the main model mappings we are interested in expressing is where the type of a variable is changed from a high-level type that is easy for modelling to a low-level type that is easy to solve. A prime example of this is mapping a set variable x ranging over finite subsets of some fixed set s to an array x′ of 0/1 variables indexed by s, so that for variable x we have e ∈ x ⇔ x′[e] = 1. For this example we use a more concrete modelling syntax: t : x indicates that variable x has type t; the types we are interested in are l..u, an integer in the range l to u; set of S, a set ranging over elements of S; and array[I] of E, an array indexed by set I with elements of type E. We use forall and sum looping constructs which iterate over sets. This is expressed in ACDTR as follows.
set of s : x ⇐⇒ array[s] of 0..1 : x′ ∧ map(x, x′)   (typec)
map(x, x′) \ x ⇐⇒ x′   (vsubs)
array[s] of 0..1 : x \ card(x) ⇐⇒ sum(e in s) x[e]   (card)
array[s] of 0..1 : x ∧ array[s] of 0..1 : y \ x ∩ y ⇐⇒ z :: (array[s] of 0..1 : z ∧ forall(e in s) z[e] = x[e] && y[e])   (cap)
array[s] of 0..1 : x ∧ array[s] of 0..1 : y \ x ∪ y ⇐⇒ z :: (array[s] of 0..1 : z ∧ forall(e in s) z[e] = x[e] || y[e])   (cup)
array[s] of 0..1 : x \ x = ∅ ⇐⇒ forall(e in s) x[e] = 0   (emptyset)
card(t :: c) ⇐⇒ card(t) :: c   (↑card)
(t1 :: c) ∪ t2 ⇐⇒ t1 ∪ t2 :: c   (↑cupl)
t1 ∪ (t2 :: c) ⇐⇒ t1 ∪ t2 :: c   (↑cupr)
(t1 :: c) ∩ t2 ⇐⇒ t1 ∩ t2 :: c   (↑capl)
t1 ∩ (t2 :: c) ⇐⇒ t1 ∩ t2 :: c   (↑capr)
(t1 :: c) = t2 ⇐⇒ t1 = t2 ∧ c   (↑eql)
t1 = (t2 :: c) ⇐⇒ t1 = t2 ∧ c   (↑eqr)
(t1 :: c) ≤ t2 ⇐⇒ t1 ≤ t2 ∧ c   (↑leql)
t1 ≤ (t2 :: c) ⇐⇒ t1 ≤ t2 ∧ c   (↑leqr)
(t :: c1) :: c2 ⇐⇒ t :: (c1 ∧ c2)   (↑cc)
maxOverlap(x, y, c) ⇐⇒ card(x ∩ y) ≤ c   (maxO)
The :: constructor adds some local conjunctive context to an arbitrary term (like where), and all but one of the last eleven rules (the ↑ rules) move this context outwards to the nearest predicate scope. The last rule defines the maxOverlap predicate. The :: terms are used to introduce the new variable z together with its type and the constraints upon it. As an example, consider the following derivation:
         set of 1..n : x ∧ set of 1..n : y ∧ maxOverlap(x, y, 1)
(maxO)   set of 1..n : x ∧ set of 1..n : y ∧ card(x ∩ y) ≤ 1
(typec)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ set of 1..n : y ∧ card(x ∩ y) ≤ 1
(vsubs)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ set of 1..n : y ∧ card(x′ ∩ y) ≤ 1
(typec)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(x′ ∩ y) ≤ 1
(vsubs)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(x′ ∩ y′) ≤ 1
(cap)    array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(z :: (array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e])) ≤ 1
(↑card)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(z) :: (array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e]) ≤ 1
(↑leql)  array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(z) ≤ 1 ∧ array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e]
The final goal is a flat conjunction of constraints and types. It can be similarly translated into a conjunction of pseudo-Boolean constraints that can be sent to a finite domain solver, by unrolling forall and replacing the arrays by sequences of n variables. ⊓⊔
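As a sanity check on what the remapping in Example 14 achieves, the following Python sketch models a set as its 0/1 characteristic array and confirms that card(x ∩ y) corresponds to the pseudo-Boolean sum produced by the (card) and (cap) rules; the encoding is an assumption made for illustration.

# A subset x of 1..n is modelled by a 0/1 array x1 (here a dict) indexed by
# 1..n, mirroring e ∈ x ⇔ x1[e] = 1.
def to_array(x, n):
    return {e: int(e in x) for e in range(1, n + 1)}

def card_cap(x1, y1, n):
    """card(x ∩ y) expressed over the 0/1 arrays: the sum of x1[e] && y1[e]."""
    return sum(x1[e] * y1[e] for e in range(1, n + 1))

n = 5
x, y = {1, 2, 4}, {2, 4, 5}
x1, y1 = to_array(x, n), to_array(y, n)
assert card_cap(x1, y1, n) == len(x & y)      # both equal 2
# maxOverlap(x, y, 1) therefore becomes the constraint card_cap(...) <= 1,
# which happens to be violated for this particular x and y.
print(card_cap(x1, y1, n) <= 1)               # False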
Example 15 (Rational Tree Unification). We can directly express the rational tree unification algorithm of Colmerauer (A. Colmerauer, Prolog and Infinite Trees, Logic Programming, APIC Studies in Data Processing (16), Academic Press, 1992) as an ACD term rewriting system.

f(s1, . . . , sn) = f(t1, . . . , tn) ⇐⇒ s1 = t1 ∧ · · · ∧ sn = tn   (split)
f(s1, . . . , sn) = g(t1, . . . , tm) ⇐⇒ false   (fail)

The (split) rule must be defined for each constructor f/n and the (fail) rule for each pair of different constructors f/n and g/m. The remaining rules are:
x = x ⇐⇒ var(x) | true   (id)
t = x ⇐⇒ var(x) ∧ nonvar(t) | x = t   (flip)
x = s \ x = t ⇐⇒ var(x) ∧ nonvar(s) ∧ size(s) ≤ size(t) | s = t   (tsubs)
x = y \ x ⇐⇒ var(x) ∧ var(y) ∧ x ≢ y | y   (vsubs)

where size(t) is the size of the term t in terms of number of symbols, and ≡ is syntactic identity. Even though goals are single conjunctions of constraints, ACD is used for succinctly expressing the (vsubs) rule, which replaces one variable by another in any other position.
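As a rough illustration of the term measure these guards rely on, here is a small Python sketch of size(t) and of the orientation choice made by the (tsubs) rule; the tuple encoding of terms is an assumption for the example.

# Variables are uppercase strings; compound terms are tuples ('f', args...).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def size(t):
    """Number of symbols in t, as used in the (tsubs) guard."""
    return 1 if not isinstance(t, tuple) else 1 + sum(size(a) for a in t[1:])

def tsubs(keep, rewrite):
    """Given x = s (kept) and x = t with size(s) <= size(t), the (tsubs) rule
    rewrites the second equation to s = t, keeping the smaller definition of x."""
    (x, s), (x2, t) = keep, rewrite
    assert x == x2 and is_var(x) and not is_var(s) and size(s) <= size(t)
    return (s, t)

# y = f(y) (size 2) and y = f(f(y)) (size 3): the larger equation is rewritten,
# as in the step from x = y ∧ y = f(f(y)) ∧ y = f(y) in the derivation below.
print(tsubs(('Y', ('f', 'Y')), ('Y', ('f', ('f', 'Y')))))
# (('f', 'Y'), ('f', ('f', 'Y')))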
The following derivation illustrates the unification process in action; at each step the matched subterms are rewritten according to the rule indicated on the left.

         x = y ∧ f(f(x)) = x ∧ y = f(f(f(y)))
(flip)   x = y ∧ x = f(f(x)) ∧ y = f(f(f(y)))
(vsubs)  x = y ∧ y = f(f(x)) ∧ y = f(f(f(y)))
(vsubs)  x = y ∧ y = f(f(y)) ∧ y = f(f(f(y)))
(tsubs)  x = y ∧ y = f(f(y)) ∧ f(f(y)) = f(f(f(y)))
(split)  x = y ∧ y = f(f(y)) ∧ f(y) = f(f(y))
(split)  x = y ∧ y = f(f(y)) ∧ y = f(y)
(tsubs)  x = y ∧ f(y) = f(f(y)) ∧ y = f(y)
(split)  x = y ∧ y = f(y) ∧ y = f(y)
(tsubs)  x = y ∧ y = f(y) ∧ f(y) = f(y)
(split)  x = y ∧ y = f(y) ∧ y = y
(id)     x = y ∧ y = f(y) ∧ true   ⊓⊔
A.2 Expanded Examples
The purpose of this section is to show some example derivations under the operational semantics of ACDTR, rather than high-level descriptions. We allow for
some shorthand, namely T #i = Ti .
Identifiers and conjunctive context. In this section we explain parts of the derivation from Example 15 in more detail. The initial goal is

x = y ∧ f(f(x)) = x ∧ y = f(f(f(y)))

which corresponds to the initial state:

⟨(((x1 = y2)3 ∧ (f(f(x4)5)6 = x7)8)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅, {x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}⟩

The initial state is a quadruple containing an annotated version of the goal, an empty propagation history, the set of variables in the goal and a set of “used” identifiers.
The first derivation step is a Simplify transition with the flip rule:

⟨(((x1 = y2)3 ∧ (f(f(x4)5)6 = x7)8)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅, {x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}⟩
↣ ⟨(((x1 = y2)3 ∧ (x17 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅, {x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}⟩

We have replaced the annotated subterm (f(f(x4)5)6 = x7)8 with (x17 = f(f(x18)19)20)21 (i.e. flipped the operands of the equality) and reannotated the new term with fresh identifiers. These were also added to the set of used identifiers. Since the propagation history is empty, it remains unchanged.
The next derivation step is a Simpagate transition with the vsubs rule:

⟨(((x1 = y2)3 ∧ (x17 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅, {x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}⟩
↣ ⟨(((x1 = y2)3 ∧ (y22 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅, {x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22}⟩

The conjunctive context for subterm x17 is

cc(Ga, p) = (x1 = y2)3 ∧ (y10 = f(f(f(y11)12)13)14)15 ∧ true

where Ga is the current goal and p is the position of x17. The first conjunct matches the conjunctive context of the vsubs rule, thus subterm x17 is replaced with y22. Identifier 22 is added to the set of used identifiers.
Execution proceeds until the final state

⟨(x = y ∧ y = f(y)) ∧ true, ∅, {x, y}, P⟩

is reached, for some annotation of the goal and some set of identifiers P. This is a final state because no more rules are applicable to it.
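The conjunctive context used in steps like the one above is easy to compute operationally; the following Python sketch collects cc for a position given as a path of argument indices, with the tuple term encoding again being an assumption for illustration.

# Terms: ('and', L, R) for conjunction, other tuples for compound terms, strings
# for variables and constants. A position is a path of 1-based child indices
# (children of ('f', a, b, ...) are a = t[1], b = t[2], ...).

def cc(term, path):
    """Collect the conjuncts hanging off the ∧ nodes passed on the way down to
    `path`: the conjunctive context of the subterm at that position."""
    context = []
    for index in path:
        if isinstance(term, tuple) and term[0] == 'and':
            # Sibling conjuncts of the branch we descend into are in context.
            context.extend(arg for i, arg in enumerate(term[1:], 1) if i != index)
        term = term[index]
    return context

# Goal ((x = y ∧ x = f(f(x))) ∧ y = f(f(f(y)))); the x on the left of the second
# equation sits at path (1, 2, 1).
goal = ('and',
        ('and', ('=', 'x', 'y'), ('=', 'x', ('f', ('f', 'x')))),
        ('=', 'y', ('f', ('f', ('f', 'y')))))
print(cc(goal, (1, 2, 1)))
# [('=', 'y', ('f', ('f', ('f', 'y')))), ('=', 'x', 'y')]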
AC matching and propagation histories. Consider the propagation rule from the leq program:

trans @ leq(X, Y) ∧ leq(Y, Z) =⇒ X ≢ Y ∧ Y ≢ Z | leq(X, Z)

and the initial state

⟨leq(A1, B2)3 ∧4 leq(B5, A6)7, ∅, {A, B}, {1, 2, 3, 4, 5, 6, 7}⟩.

We can apply Propagate directly (i.e. without permuting the conjunction) to arrive at the state:

⟨(leq(A1, B2)3 ∧4 leq(B5, A6)7) ∧8 leq(A9, A10)11, {trans @ (3 1 2 7 6 5)}, {A, B}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}⟩.
The propagation history prevents the rule from firing on the same terms again; however, we can permute the terms to find a new matching. Namely, we
can permute the annotated goal (which we call Ga )
(leq(A1 , B2 )3 ∧4 leq(B5 , A6 )7 ) ∧8 leq(A9 , A10 )11
to
(leq(B5 , A6 )7 ∧4 leq(A1 , B2 )3 ) ∧8 leq(A9 , A10 )11 .
The latter is an element of [Ga]AC, and the identifiers have been preserved in the correct way. The entry trans @ (7 6 5 3 1 2) is not in the propagation history, so we can apply Propagate again to arrive at:

⟨((leq(B5, A6)7 ∧4 leq(A1, B2)3) ∧12 leq(B13, B14)15) ∧8 leq(A9, A10)11, {trans @ (3 1 2 7 6 5), trans @ (7 6 5 3 1 2)}, {A, B}, {1...15}⟩.

Now the propagation history prevents the rule trans from being applied to the first two leq constraints. The guard also prevents the trans rule firing on either of the two new constraints (without the guard, neither ACDTR nor CHRs would be guaranteed to terminate), thus we have reached a final state.
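A small sketch of how the propagation history blocks the repeated application while allowing the permuted one; representing an entry as the rule name paired with the tuple of matched identifiers is an assumption that mirrors the notation above.

# A history entry pairs the rule name with the identifiers of the matched
# subterms, read off in match order.
def entry(rule, matched_ids):
    return (rule, tuple(matched_ids))

history = {entry('trans', (3, 1, 2, 7, 6, 5))}          # after the first Propagate

# Re-matching the same two constraints in the same order is blocked...
print(entry('trans', (3, 1, 2, 7, 6, 5)) in history)    # True  -> may not fire
# ...but the AC-permuted match leq(B5, A6)7 ∧ leq(A1, B2)3 yields a new entry.
print(entry('trans', (7, 6, 5, 3, 1, 2)) in history)    # False -> may fire
history.add(entry('trans', (7, 6, 5, 3, 1, 2)))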
Updating propagation histories. Consider a modified version of the previous
example, now with two rules:
X ∧ X ⇐⇒ X
trans @ leq(X, Y ) ∧ leq(Y, Z) =⇒ leq(X, Z)
The first rule enforces idempotence of conjunction.
Consider the initial state:
⟨leq(A1, A2)3 ∧4 leq(A5, A6)7 ∧8 leq(A9, A10)11, ∅, {A}, {1...11}⟩
We apply the trans rule to the first two copies of the leq constraint (with identifiers 3 and 7).
⟨leq(A1, A2)3 ∧4 leq(A5, A6)7 ∧8 leq(A9, A10)11 ∧12 leq(A13, A14)15, {trans @ (3 1 2 7 5 6)}, {A}, {1...15}⟩
Next we apply idempotence to leq constraints with identifiers 7 and 11.
⟨leq(A1, A2)3 ∧4 leq(A16, A17)18 ∧12 leq(A13, A14)15, {trans @ (3 1 2 7 5 6), trans @ (3 1 2 18 16 17)}, {A}, {1...18}⟩
An extra entry (trans @ (3 1 2 18 16 17)) is added to the propagation history
in order to satisfy the requirements of Definition 6. This is because we have
replaced the annotated constraint leq(A5 , A6 )7 with the newly annotated term
leq(A16 , A17 )18 , which defines an identifier renaming
ρ = {5 ↦ 16, 6 ↦ 17, 7 ↦ 18}.
Since E = (trans @ (3 1 2 7 5 6)) is an element of the propagation history, we
have that ρ(E) = (trans @ (3 1 2 18 16 17)) must also be an element, and hence
the history is expanded.
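The history update triggered by such a renaming can be sketched as follows: ρ is applied pointwise to the identifiers of every existing entry and the renamed copies are added, mirroring the requirement that ρ(E) be in the history whenever E is. The entry representation matches the sketch above and is an assumption for illustration.

# Close the propagation history under an identifier renaming rho.
def extend_history(history, rho):
    renamed = {(rule, tuple(rho.get(i, i) for i in ids)) for rule, ids in history}
    return history | renamed

history = {('trans', (3, 1, 2, 7, 5, 6))}
rho = {5: 16, 6: 17, 7: 18}
print(extend_history(history, rho))
# contains both ('trans', (3, 1, 2, 7, 5, 6)) and ('trans', (3, 1, 2, 18, 16, 17))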